There's more...

There is also a streaming implementation of KMeans in Spark that allows you to classify features on the fly. The streaming version of KMeans is covered in more detail in Chapter 13, Spark Streaming and Machine Learning Library.

There is also a class, KMeansDataGenerator in the org.apache.spark.mllib.util package, that helps you generate RDD data for KMeans. We found this to be very useful during our application development process:

def generateKMeansRDD(sc: SparkContext, numPoints: Int, k: Int, d: Int, r: Double, numPartitions: Int = 2): RDD[Array[Double]]

This call uses the Spark context to create an RDD while allowing you to specify the number of points (numPoints), clusters (k), dimensions (d), a scaling factor (r), and partitions (numPartitions).
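As a sketch of how this generator might be used, the following standalone program creates a local Spark context and generates a synthetic dataset; the specific parameter values and app name are illustrative choices, not from the original text:

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.mllib.util.KMeansDataGenerator

object KMeansRDDExample {
  def main(args: Array[String]): Unit = {
    // Local Spark context for experimentation (illustrative configuration)
    val conf = new SparkConf().setAppName("KMeansDataGen").setMaster("local[*]")
    val sc = new SparkContext(conf)

    // Generate 1,000 points in 3 dimensions, drawn around 5 cluster
    // centers, with scaling factor 2.0, across 2 partitions
    val data = KMeansDataGenerator.generateKMeansRDD(
      sc, numPoints = 1000, k = 5, d = 3, r = 2.0, numPartitions = 2)

    println(s"points generated: ${data.count()}")  // expected: 1000
    println(s"dimensions: ${data.first().length}") // expected: 3

    sc.stop()
  }
}
```

The resulting RDD[Array[Double]] can then be fed directly into KMeans training after converting each array to a Vector.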