Creating SparkR DataFrames

A DataFrame is a distributed collection of data organized into named columns. It is conceptually equivalent to a table in a relational database or a data frame in R, but with richer optimizations under the hood. SparkR DataFrames scale to large datasets by leveraging Spark's distributed execution engine. In this recipe, we'll see how to create SparkR DataFrames from different sources, such as JSON files, CSV files, local R data frames, and Hive tables.
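As a preview of what this recipe covers, the sources listed above can each be turned into a SparkR DataFrame with a one-liner. The sketch below assumes a Spark 2.x-style SparkR session is already running; the file paths (`people.json`, `people.csv`) and the Hive table name (`employees`) are hypothetical placeholders.

```r
library(SparkR)

# From a JSON file (one JSON object per line) -- path is a placeholder
jsonDF <- read.df("people.json", source = "json")

# From a CSV file with a header row -- path is a placeholder
csvDF <- read.df("people.csv", source = "csv",
                 header = "true", inferSchema = "true")

# From a local R data frame (here, the built-in 'faithful' dataset)
localDF <- createDataFrame(faithful)

# From a Hive table -- requires a session with Hive support enabled,
# and assumes a table named 'employees' exists
hiveDF <- sql("SELECT * FROM employees")

# Inspect the result
head(localDF)
printSchema(localDF)
```

Each of these calls returns a SparkR DataFrame whose computation is distributed across the cluster; `head()` and `printSchema()` trigger only small local actions to inspect it.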

Getting ready

To step through this recipe, you will need a running Spark cluster, either in pseudo-distributed mode or in one of the distributed modes, that is, standalone, YARN, or Mesos. You will also need RStudio installed. Please refer to the Installing R recipe for details on the installation of R. Please refer ...
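Once R and RStudio are in place, SparkR is typically attached from the Spark installation itself rather than from CRAN. The following sketch shows one common way to initialize a SparkR session from RStudio; the `SPARK_HOME` path and the `master` URL are assumptions you should replace with your own values.

```r
# Point R at your Spark installation -- this path is an assumption
Sys.setenv(SPARK_HOME = "/opt/spark")

# Make the bundled SparkR package visible to this R session
.libPaths(c(file.path(Sys.getenv("SPARK_HOME"), "R", "lib"), .libPaths()))

library(SparkR)

# Start a session; swap the master URL for "yarn", "mesos://...",
# or your standalone master as appropriate
sparkR.session(master = "local[*]", appName = "SparkRDataFrames")
```

Calling `sparkR.session()` both starts the Spark driver and registers it as the default session, so subsequent calls such as `read.df()` and `createDataFrame()` need no explicit context argument.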
