How it works...

The data held in the client driver is parallelized and distributed across the cluster, using the number of partitions (the second parameter) as a guideline. The resulting RDD is the piece of Spark magic that started it all (refer to Matei Zaharia's original white paper).
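As a minimal sketch of this step, the following assumes a local SparkSession (the master setting, application name, and sample data are illustrative, not from the recipe) and shows how the second parameter to parallelize() controls the partitioning:

import org.apache.spark.sql.SparkSession

object ParallelizeSketch extends App {
  // Hypothetical local setup; the recipe's own configuration may differ
  val spark = SparkSession.builder()
    .master("local[*]")
    .appName("ParallelizeSketch")
    .getOrCreate()

  val data = 1 to 10

  // The second parameter requests the number of partitions for the resulting RDD
  val rdd = spark.sparkContext.parallelize(data, 4)

  println(rdd.getNumPartitions)  // prints 4
  spark.stop()
}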

The resulting RDDs are fully distributed data structures with fault tolerance and lineage that can be operated on in parallel using the Spark framework.

We read the text file A Tale of Two Cities by Charles Dickens from http://www.gutenberg.org/ into Spark RDDs. We then proceed to split and tokenize the data and print the total number of words using Spark's operators (for example, map() and flatMap()), as sketched below.
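The following sketch illustrates that flow under stated assumptions: the local file path is hypothetical (the Gutenberg text is assumed to have been downloaded beforehand), and the split-on-non-word-characters regex is one reasonable tokenization, not necessarily the recipe's exact one:

import org.apache.spark.sql.SparkSession

object WordCountSketch extends App {
  val spark = SparkSession.builder()
    .master("local[*]")
    .appName("WordCountSketch")
    .getOrCreate()

  // Hypothetical path to the downloaded Gutenberg text
  val book = spark.sparkContext.textFile("data/a_tale_of_two_cities.txt")

  // flatMap() tokenizes each line into words; filter() drops empty tokens
  val words = book.flatMap(_.split("\\W+")).filter(_.nonEmpty)

  println(s"Total words: ${words.count()}")
  spark.stop()
}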
