Now that we have looked at some properties of the data, the next task is to do some preprocessing, such as cleaning, before constructing the training set. For this part, use the Preprocessing.scala file, which requires the following imports:
import org.apache.spark.ml.feature.{StringIndexer, StringIndexerModel}
import org.apache.spark.ml.feature.VectorAssembler
Then we load both the training and the test sets, as shown in the following code:
var trainSample = 1.0
var testSample = 1.0
val train = "data/insurance_train.csv"
val test = "data/insurance_test.csv"

val spark = SparkSessionCreate.createSession()
import spark.implicits._

println("Reading data from " + train + " file")
val trainInput = spark.read
    .option("header", ...
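The read chain is truncated above. As a hedged sketch, a typical Spark CSV read for a file like this one might continue as follows; the specific option values and the `cache()` call are assumptions, not the book's exact code:

```scala
// Sketch of a typical Spark CSV read chain (assumed options; the
// original chain is truncated and may differ in detail).
val trainInput = spark.read
  .option("header", "true")       // first row holds column names
  .option("inferSchema", "true")  // let Spark infer column types
  .format("csv")
  .load(train)
  .cache()                        // cache if the DataFrame is reused repeatedly
```

Reading the test set would follow the same pattern with `load(test)`. Enabling `inferSchema` saves manual schema declaration at the cost of an extra pass over the file.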