How it works...

We began by loading the data file and parsing it into a typed Dataset of ratings, which we then converted to a DataFrame. The DataFrame was used to execute a Spark SQL query that grouped the ratings by user and computed the totals for each one.
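The steps above can be sketched in plain Scala. The actual file format, delimiter, and column names are not shown in this excerpt, so the snippet assumes MovieLens-style lines (`userId::movieId::rating::timestamp`); the grouping logic mirrors the SQL query on ordinary collections, with the corresponding Spark calls shown as comments:

```scala
// Assumed record shape - the book's real schema may differ.
case class Rating(userId: Int, movieId: Int, rating: Float)

// Parse one raw line into a Rating; malformed lines yield None and are skipped.
def parseRating(line: String): Option[Rating] =
  line.split("::") match {
    case Array(u, m, r, _) => Some(Rating(u.toInt, m.toInt, r.toFloat))
    case _                 => None
  }

// Equivalent of the Spark SQL group-by, expressed on plain collections:
//   ratings.createOrReplaceTempView("ratings")
//   spark.sql("SELECT userId, count(*) AS total FROM ratings GROUP BY userId")
def countsByUser(lines: Seq[String]): Map[Int, Int] =
  lines.flatMap(parseRating)
    .groupBy(_.userId)
    .map { case (user, rs) => (user, rs.size) }
```

In Spark itself, the parsing step would typically run inside a `flatMap` over the lines read from the file, producing the typed Dataset that is then registered as a temporary view for the SQL query.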

We explored Dataset/DataFrame in Chapter 3, Spark's Three Data Musketeers for Machine Learning - Perfect Together, but we encourage the reader to revisit and dig deeper into the Dataset/DataFrame API. A solid grasp of the API and its underlying concepts (lazy evaluation, staging, pipelining, and caching) is critical for every Spark developer.
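Lazy evaluation in particular trips up newcomers: DataFrame transformations build a plan but compute nothing until an action runs. A plain-Scala analogy (not the Spark API itself) shows the same pattern with lazy collection views:

```scala
// Transformations on a .view are lazy, like DataFrame transformations
// (select, filter, groupBy); traversing the view is the "action" that
// forces evaluation, like collect() or count() in Spark.
def lazyDemo(): (Int, Int, List[Int]) = {
  var evaluations = 0
  val doubled = (1 to 5).view.map { n =>
    evaluations += 1   // side effect lets us observe when work happens
    n * 2
  }
  val before = evaluations // still 0: no element has been computed yet
  val result = doubled.toList // forces evaluation of all five elements
  (before, evaluations, result)
}
```

Caching plays the complementary role: because a lazy plan is re-executed by every action, Spark lets you persist an intermediate result so repeated actions reuse it instead of recomputing the pipeline.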

Finally, we passed the result set of data to the ...

Get Apache Spark 2.x Machine Learning Cookbook now with the O’Reilly learning platform.
