How it works...

The basic workflow for using SQL with a DataFrame is to first populate the DataFrame, either from internal Scala data structures or from external data sources, and then call createOrReplaceTempView() to register the DataFrame as a temporary view that can be queried with SQL.
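The following is a minimal sketch of that workflow; the application name, column names, and sample values are illustrative assumptions rather than part of the recipe:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("DataFrameSqlSketch")   // illustrative name
  .master("local[*]")
  .getOrCreate()

import spark.implicits._

// Populate a DataFrame from an internal Scala collection
val customersDF = Seq(
  (1, "Alice", 3500.0),
  (2, "Bob",   1200.0),
  (3, "Carol", 7800.0)
).toDF("id", "name", "balance")

// Register the DataFrame as a SQL-addressable temporary view
customersDF.createOrReplaceTempView("customers")

// Query the view with standard SQL; the result is itself a DataFrame
val highBalanceDF = spark.sql(
  "SELECT name, balance FROM customers WHERE balance > 2000 ORDER BY balance DESC")

highBalanceDF.show()
```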

When you use DataFrames, you gain the benefit of the additional metadata that Spark stores about them (whether you take the API or the SQL approach), which helps you both while coding and during execution.
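Continuing the sketch above, you can inspect that stored metadata directly, for example by printing the inferred schema and the plan Spark builds for a query:

```scala
// Schema metadata inferred from the Scala tuples (column names and types)
customersDF.printSchema()

// The plan Spark derives for the SQL query registered against the view
highBalanceDF.explain()
```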

While RDDs remain the workhorse of core Spark, the trend is toward the DataFrame approach, which has already proven its capabilities in environments such as Python/pandas and R.
