Building a KMeans clustering system in Spark 2.0

In this recipe, we will load a set of features (for example, x, y, and z coordinates) from a LIBSVM file, instantiate a KMeans() object, set the number of desired clusters to three, and call kmeans.fit() to run the algorithm. Finally, we will print the centers of the three clusters that we found.
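Before turning to the Spark code, it may help to see what kmeans.fit() conceptually computes. The following is a minimal plain-Python sketch of Lloyd's k-means iteration on (x, y, z) points; everything here (the `kmeans`, `dist2`, and `farthest_point_init` names, the toy data) is illustrative and not part of Spark's API:

```python
import random

def dist2(a, b):
    # Squared Euclidean distance between two coordinate tuples.
    return sum((x - y) ** 2 for x, y in zip(a, b))

def mean(pts):
    # Component-wise mean of a non-empty list of coordinate tuples.
    return tuple(sum(c) / len(pts) for c in zip(*pts))

def farthest_point_init(points, k):
    # Greedy seeding: start from the first point, then repeatedly take
    # the point farthest from all centers chosen so far.
    centers = [points[0]]
    while len(centers) < k:
        centers.append(max(points, key=lambda p: min(dist2(p, c) for c in centers)))
    return centers

def kmeans(points, k, iters=15):
    centers = farthest_point_init(points, k)
    for _ in range(iters):
        # Assignment step: each point joins its nearest center's cluster.
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[min(range(k), key=lambda i: dist2(p, centers[i]))].append(p)
        # Update step: each center moves to its cluster's mean.
        centers = [mean(cl) if cl else centers[i] for i, cl in enumerate(clusters)]
    return centers

# Three well-separated blobs of (x, y, z) points standing in for the LIBSVM features.
rng = random.Random(0)
blobs = [(0.0, 0.0, 0.0), (10.0, 10.0, 10.0), (20.0, 0.0, 20.0)]
points = [tuple(c + rng.uniform(-1, 1) for c in b) for b in blobs for _ in range(50)]

centers = kmeans(points, k=3)
for c in centers:
    print(tuple(round(v, 2) for v in c))
```

The farthest-point seeding is used here purely so the toy example converges deterministically; how Spark itself seeds the centers is the subject of the note below.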

It is important to note that, contrary to much of the popular literature, Spark does not implement KMeans++; instead, it implements KMeans|| (pronounced KMeans Parallel). See the following recipe, and the sections after the code, for a complete explanation of the algorithm as it is implemented in Spark.
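The key idea of KMeans|| (Bahmani et al.) is that instead of KMeans++'s k strictly sequential seeding passes, it runs only a few rounds, each of which oversamples many candidate centers at once, and then reduces the weighted candidate set down to k. Below is a heavily simplified single-machine sketch of that idea, not Spark's distributed implementation: the function names are hypothetical, and a weighted farthest-point pass stands in for the weighted KMeans++ reclustering the real algorithm uses.

```python
import random

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def cost_to(p, centers):
    # Squared distance from p to its nearest current center.
    return min(dist2(p, c) for c in centers)

def kmeans_parallel_init(points, k, l=None, rounds=5, seed=7):
    rng = random.Random(seed)
    l = l if l is not None else 2 * k              # oversampling factor per round
    centers = [rng.choice(points)]                 # step 1: one uniform center
    for _ in range(rounds):                        # step 2: oversampling rounds
        total = sum(cost_to(p, centers) for p in points)
        # Each point is kept with probability proportional to its cost,
        # so many far-away candidates are picked per round, in parallel.
        centers += [p for p in points
                    if rng.random() < min(1.0, l * cost_to(p, centers) / total)]
    # Step 3: weight each candidate by the number of points it attracts.
    weights = [0] * len(centers)
    for p in points:
        weights[min(range(len(centers)), key=lambda i: dist2(p, centers[i]))] += 1
    # Step 4: recluster the small weighted candidate set down to k
    # (greedy weighted farthest-point as a stand-in for weighted KMeans++).
    order = [max(range(len(centers)), key=lambda i: weights[i])]
    while len(order) < k:
        chosen = [centers[j] for j in order]
        order.append(max((i for i in range(len(centers)) if i not in order),
                         key=lambda i: weights[i] * cost_to(centers[i], chosen)))
    return [centers[i] for i in order]

# Toy data: three well-separated blobs of (x, y, z) points.
rng = random.Random(0)
blobs = [(0.0, 0.0, 0.0), (10.0, 10.0, 10.0), (20.0, 0.0, 20.0)]
points = [tuple(c + rng.uniform(-1, 1) for c in b) for b in blobs for _ in range(50)]

seeds = kmeans_parallel_init(points, k=3)
print(seeds)
```

The returned seeds are actual data points, just as in KMeans++; the difference is that each round here samples many candidates independently, which is what makes the scheme parallelize well across a Spark cluster.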
