- Start a new project in IntelliJ or in an IDE of your choice. Make sure that the necessary JAR files are included.
- Set up the package location where the program will reside:
package spark.ml.cookbook.chapter4
- Import the necessary packages for SparkContext to get access to the cluster:
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.sql.SparkSession
import org.apache.spark.mllib.clustering.KMeans
- Create a SparkSession with the desired configuration:
val spark = SparkSession.builder
  .master("local[*]") // on a cluster, use master("spark://master:7077")
  .appName("myPMMLExport")
  .config("spark.sql.warehouse.dir", ".")
  .getOrCreate()
- We read the data from a text file; the data file contains a sample dataset for a KMeans ...
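The remaining steps can be sketched as follows. This is a minimal illustration using the RDD-based MLlib API implied by the imports above; the file path, cluster count, and iteration count are assumptions for the sketch, not values from the recipe:

```scala
// Read a whitespace-delimited numeric data file (hypothetical path).
val data = spark.sparkContext.textFile("data/kmeans_data.txt")

// Parse each line into an MLlib dense vector and cache for iterative use.
val parsedData = data
  .map(line => Vectors.dense(line.trim.split("\\s+").map(_.toDouble)))
  .cache()

// Train a KMeans model; k and maxIterations are illustrative values.
val numClusters = 2
val numIterations = 20
val model = KMeans.train(parsedData, numClusters, numIterations)

// Evaluate the clustering via the Within Set Sum of Squared Errors.
val WSSSE = model.computeCost(parsedData)
println(s"Within Set Sum of Squared Errors = $WSSSE")
```

`KMeans.train` returns a `KMeansModel`, which (per the recipe's name, `myPMMLExport`) can be exported to PMML via its `toPMML` methods.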