How it works...

When the program started executing, we initialized a SparkContext in our driver program to begin the task of processing the data. This implies that the data must fit in the driver's memory (the user's workstation), which is not a server requirement in this case. For extremely large datasets, alternative divide-and-conquer strategies must be devised (for example, partial retrieval and assembly at the destination).
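As a minimal sketch of that driver-side setup (the object name, application name, and master setting here are illustrative assumptions, not the recipe's actual code):

```scala
import org.apache.spark.{SparkConf, SparkContext}

object MovieAnalysis {
  def main(args: Array[String]): Unit = {
    // The SparkContext lives in this driver process; any results collected
    // back to the driver (for example, via collect()) must fit in its memory.
    val conf = new SparkConf()
      .setAppName("MovieAnalysis")
      .setMaster("local[*]")   // run on the user's workstation, not a cluster
    val sc = new SparkContext(conf)

    // ... load, parse, and process the movie data here ...

    sc.stop()
  }
}
```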

We continued by loading and parsing the data file into a strongly typed dataset of Movie objects. The movie dataset was then grouped by year, yielding a map keyed by year in which each key holds the bucket of movies released that year.
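A sketch of the load-parse-group steps follows, using the Spark 2.x SparkSession API (which wraps the SparkContext shown above). The Movie fields, the file path, and the comma-separated layout are assumptions for illustration; the recipe's actual schema may differ:

```scala
import org.apache.spark.sql.{Dataset, SparkSession}

// Hypothetical schema for a movie record
case class Movie(year: Int, title: String, rating: Double)

object MoviesByYear {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder
      .master("local[*]")
      .appName("MoviesByYear")
      .getOrCreate()
    import spark.implicits._

    // Parse each raw line into the strongly typed Movie dataset
    val movies: Dataset[Movie] = spark.read
      .textFile("data/movies.csv")   // hypothetical path and CSV layout
      .map { line =>
        val f = line.split(",")
        Movie(f(0).trim.toInt, f(1).trim, f(2).trim.toDouble)
      }

    // Group by year: each key holds the bucket of movies released that year
    val moviesByYear: Dataset[(Int, Seq[Movie])] =
      movies.groupByKey(_.year)
            .mapGroups((year, ms) => (year, ms.toSeq))

    moviesByYear.show(truncate = false)
    spark.stop()
  }
}
```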

Next, we ...
