Step 2 - Reading and parsing the dataset

Let's use Spark's textFile method to read a text file from your preferred storage, such as HDFS or the local filesystem. Note that textFile returns each line as a raw string, so it's up to us to specify how to split it into fields. While preparing the input dataset, we first apply a groupBy and then, after the join, transform the result with a flatMap operation to get the required fields:

val TRAIN_FILENAME = "data/ua.base"
val TEST_FILENAME = "data/ua.test"
val MOVIES_FILENAME = "data/u.item"

// get movie names keyed on id
val movies = spark.sparkContext.textFile(MOVIES_FILENAME)
  .map(line => {
    val fields = line.split("\\|") // u.item is pipe-delimited
    (fields(0).toInt, fields(1))   // (movie id, movie title)
  })
val movieNames = movies.collectAsMap()

// extract (userid, movieid, rating) from ratings data
val ratings = spark.sparkContext.textFile(TRAIN_FILENAME) ...
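
The ratings line is truncated here. As a minimal sketch, assuming the MovieLens 100k layout in which ua.base is tab-separated as user id, movie id, rating, and timestamp, the parsing might continue along these lines:

// a sketch, assuming ua.base is tab-separated: userid \t movieid \t rating \t timestamp
val ratings = spark.sparkContext.textFile(TRAIN_FILENAME)
  .map(line => {
    val fields = line.split("\t")
    (fields(0).toInt, fields(1).toInt, fields(2).toDouble) // (userid, movieid, rating)
  })

With movieNames collected as a local map, the movie id in any rating tuple can then be resolved to its title, for example movieNames(1).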
