Summary

In this chapter, we explained the motivation behind the development of the DataFrame API in Spark and how it has made development in Spark easier than ever. We briefly covered the design of the DataFrame API and how it is built on top of Spark SQL. We discussed various ways of creating DataFrames from different data sources such as RDDs, JSON, Parquet, and JDBC. At the end of the chapter, we gave you a brief introduction to performing operations on DataFrames. We will discuss DataFrame operations in the context of data science and machine learning in more detail in the upcoming chapters.

In the next chapter, we will learn how Spark supports unified data access and discuss the Dataset and Structured Streaming components in detail.