Introduction

The three workhorses of Spark for efficient processing of data at scale are the RDD, DataFrame, and Dataset APIs. While each can stand on its own merit, the current paradigm shift favors Dataset as the unifying data API that meets all data-wrangling needs in a single interface.
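To make the relationship between the three APIs concrete, the following is a minimal sketch showing the same domain data expressed as an RDD, a DataFrame, and a Dataset. The `Person` case class, the sample records, and the local `SparkSession` setup are illustrative assumptions, not code from this book.

```scala
import org.apache.spark.sql.SparkSession

object ThreeApisSketch {
  // Hypothetical domain object for illustration
  case class Person(name: String, age: Int)

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("ThreeApisSketch")
      .master("local[*]")   // local mode for a quick experiment
      .getOrCreate()
    import spark.implicits._

    // RDD: low-level distributed collection of JVM objects, no schema
    val rdd = spark.sparkContext
      .parallelize(Seq(Person("Ann", 34), Person("Bo", 21)))

    // DataFrame: rows with a schema, optimized by the Catalyst engine
    val df = rdd.toDF()

    // Dataset: typed domain objects plus relational optimization
    val ds = df.as[Person]

    ds.filter(_.age > 30).show()
    spark.stop()
  }
}
```

Note how the conversions (`toDF()`, `as[Person]`) are cheap view changes rather than data copies, which is what allows the three APIs to interoperate.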

The new Spark 2.0 Dataset API is a type-safe collection of domain objects that can be operated on in parallel via transformations (filter, map, flatMap, and so on, similar to RDDs) using functional or relational operations. For backward compatibility, a Dataset has an untyped view called a DataFrame, which is a collection of rows. In this chapter, we demonstrate all three API sets. The figure ahead summarizes the pros and cons of the key components of Spark ...
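The interplay of functional (typed) and relational (untyped) operations can be sketched as follows. This assumes a `SparkSession` named `spark` is already in scope; the `Car` case class and sample prices are made up for illustration.

```scala
import spark.implicits._   // enables toDS(), toDF(), and the $ column syntax

// Hypothetical domain object for illustration
case class Car(make: String, price: Double)

val cars = Seq(Car("Saab", 27000.0), Car("Kia", 12000.0)).toDS()

// Functional, type-safe operations: lambdas over domain objects,
// checked at compile time, just like on an RDD
val cheapMakes = cars.filter(_.price < 15000.0).map(_.make)

// Relational operations on the untyped DataFrame view
// (in Spark 2.x, DataFrame is simply an alias for Dataset[Row])
val df = cars.toDF()
df.select("make").where($"price" < 15000.0).show()
```

A typo in `_.price` would fail at compile time in the functional style, whereas a typo in the string `"price"` in the relational style only fails at runtime; that trade-off is the essence of the typed-versus-untyped split.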
