Spark SQL how-to in a nutshell

Prior to Spark 2.0.0, the heart of Spark SQL was the SchemaRDD, which, as the name suggests, associates a schema with an RDD (it was renamed DataFrame in Spark 1.3 and unified with the Dataset API in 2.0). Of course, internally it does a lot of magic, leveraging Spark's ability to scale and distribute processing and to provide flexible storage.
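To make the idea concrete, here is a minimal sketch in the modern API of overlaying a schema (a StructType) on a plain RDD of rows to produce a DataFrame, the SchemaRDD's successor; the column names and sample data are hypothetical:

    import org.apache.spark.sql.{Row, SparkSession}
    import org.apache.spark.sql.types.{IntegerType, StringType, StructField, StructType}

    val spark = SparkSession.builder()
      .appName("SchemaOnRdd")
      .master("local[*]")
      .getOrCreate()

    // A plain RDD of rows; no schema attached yet.
    val rows = spark.sparkContext.parallelize(Seq(
      Row("Alice", 34),
      Row("Bob", 19)
    ))

    // The schema: hypothetical column names and types.
    val schema = StructType(Seq(
      StructField("name", StringType, nullable = false),
      StructField("age", IntegerType, nullable = false)
    ))

    // Associating the schema with the RDD yields a DataFrame.
    val people = spark.createDataFrame(rows, schema)
    people.printSchema()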

In many ways, data access via Spark SQL is deceptively simple; by this we mean the process of creating one or more appropriate RDDs, paying attention to the layout, data types, and so on, and then accessing them via SchemaRDDs. We get to use all the interesting features of Spark to create the RDDs: structured data from Hive or Parquet, unstructured data from any source, and the ability to apply RDD operations at scale. Then, we need to overlay the respective schemas to ...
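As a rough sketch of that workflow under Spark 2.x (where the DataFrame plays the SchemaRDD role): load structured data, whose schema comes along for free in the case of Parquet, register it under a name, and query it with ordinary SQL. The file path, view name, and columns below are hypothetical:

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder()
      .appName("SparkSqlWorkflow")
      .master("local[*]")
      .getOrCreate()

    // Structured source: Parquet files embed their own schema,
    // so no explicit overlay is needed here.
    val people = spark.read.parquet("data/people.parquet")

    // Register the DataFrame under a name so it is visible to SQL.
    people.createOrReplaceTempView("people")

    // Query at scale with ordinary SQL; the result is itself a DataFrame.
    val adults = spark.sql("SELECT name, age FROM people WHERE age >= 18")
    adults.show()

For a Hive table, enabling Hive support on the builder (enableHiveSupport()) and reading with spark.table would take the place of the Parquet read.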
