How it works...

We created an RDD using the SparkContext, which was the predominant approach in Spark 1.x. We also demonstrated how to create a Dataset in Spark 2.0 using the SparkSession object. Converting back and forth is necessary to deal with pre-Spark 2.0 code still in production today.

The technical message from this recipe is that while Dataset is the preferred method of data wrangling going forward, the API always lets us convert a Dataset to an RDD and vice versa.
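The round trip described above can be sketched as follows. This is a minimal, self-contained example (not the recipe's exact code): it creates an RDD the Spark 1.x way via the SparkContext, converts it to a Dataset with `toDS()` (which requires the implicit encoders imported from `spark.implicits._`), and then converts it back with the Dataset's `.rdd` method. The app name and sample data are illustrative.

```scala
import org.apache.spark.sql.SparkSession

// Spark 2.0 entry point: the SparkSession wraps the SparkContext
val spark = SparkSession.builder
  .master("local[*]")
  .appName("RddDatasetInterop") // illustrative name
  .getOrCreate()

// Implicit encoders needed for toDS() on common types
import spark.implicits._

// Spark 1.x style: create an RDD through the underlying SparkContext
val rdd = spark.sparkContext.parallelize(Seq(1, 2, 3, 4, 5))

// RDD -> Dataset
val ds = rdd.toDS()

// Dataset -> RDD
val backToRdd = ds.rdd

println(ds.reduce(_ + _))          // sum computed on the Dataset
println(backToRdd.collect().sum)   // same data, back as an RDD

spark.stop()
```

The same pattern applies to case classes: define the case class, import `spark.implicits._`, and `toDS()` gives you a strongly typed Dataset you can hand to Spark 2.0 APIs, while `.rdd` drops you back to the low-level API when legacy code expects it.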
