When the program started to execute, we initialized a SparkContext in our driver program to start the data-processing job. This implies that the data must fit in the driver's memory (the user's workstation), which is not a server-side requirement in this case. For extremely large datasets, alternative divide-and-conquer methods must be devised (for example, partial retrieval and assembly at the destination).
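A minimal sketch of this setup might look as follows; the application name and local master URL are assumptions for illustration, not taken from the recipe itself:

```scala
import org.apache.spark.{SparkConf, SparkContext}

object MovieDriver {
  def main(args: Array[String]): Unit = {
    // Running with a local master means every partition is processed on the
    // driver machine, so the whole dataset must fit in the driver's memory.
    val conf = new SparkConf()
      .setAppName("MovieAnalysis") // assumed name
      .setMaster("local[*]")       // use all cores on the user's workstation
    val sc = new SparkContext(conf)

    // ... load, parse, and process the data here ...

    sc.stop()
  }
}
```

On a real cluster, the master URL would point at a cluster manager instead of `local[*]`, and the memory constraint would shift from the driver to the executors.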
We continued by loading and parsing the data file into a dataset typed with the movie data type. The movie dataset was then grouped by year, yielding a map keyed by year, with a bucket of the associated movies attached to each key.
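The load-parse-group sequence can be sketched like this; the `Movie` case class, the file path, and the pipe delimiter are assumptions about the data layout, not details confirmed by the recipe:

```scala
// Hypothetical record type for one parsed line of the movie file.
case class Movie(id: Int, title: String, year: Int)

// Load the raw text file, split each line on the assumed delimiter,
// and map the fields into typed Movie records.
val movies = sc.textFile("movies.dat")
  .map(_.split("\\|"))
  .map(f => Movie(f(0).toInt, f(1), f(2).toInt))

// Group by year: each key (year) carries a bucket of its movies.
// Result type: RDD[(Int, Iterable[Movie])]
val moviesByYear = movies.groupBy(_.year)
```

`groupBy` shuffles all records sharing a year onto the same partition, which is what produces the "bucket of associated movies" per key described above.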
Next, we ...