Chapter 6. Working with Key/Value Data

Like any good distributed computing tool, Spark relies heavily on the key/value pair paradigm to define and parallelize operations, particularly wide transformations that require the data to be redistributed between machines. Any time we want to perform grouped operations in parallel or change the ordering of records among machines—be it computing an aggregation statistic or merging customer records—the key/value functionality of Spark is useful because it allows us to easily parallelize our work. Spark has its own PairRDDFunctions class containing operations defined on RDDs of tuples. The PairRDDFunctions class, made available through implicit conversion, contains most of Spark’s methods for joins and custom aggregations. The OrderedRDDFunctions class contains the methods for sorting; it is available on RDDs of tuples in which the first element (the key) has an implicit ordering.
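As a small sketch of how these implicit conversions surface in practice, the following example (with hypothetical customer/purchase data, run in local mode) uses reduceByKey, which comes from PairRDDFunctions, and sortByKey, which comes from OrderedRDDFunctions because String has an implicit Ordering:

```scala
import org.apache.spark.{SparkConf, SparkContext}

object PairRddSketch {
  // Returns per-customer purchase totals, sorted by customer ID.
  def run(): Seq[(String, Double)] = {
    val sc = new SparkContext(
      new SparkConf().setMaster("local[2]").setAppName("pair-rdd-sketch"))
    try {
      // Hypothetical (customerId, purchaseAmount) pairs.
      val purchases = sc.parallelize(
        Seq(("ann", 10.0), ("bob", 2.5), ("ann", 4.0)))

      // reduceByKey is defined on PairRDDFunctions; the implicit
      // conversion applies because this is an RDD of tuples.
      val totals = purchases.reduceByKey(_ + _)

      // sortByKey is defined on OrderedRDDFunctions; it applies because
      // the key type (String) has an implicit Ordering.
      totals.sortByKey().collect().toSeq
    } finally {
      sc.stop()
    }
  }

  def main(args: Array[String]): Unit = run().foreach(println)
}
```

Neither method is imported explicitly; the compiler inserts the conversions from the RDD companion object, which is why these operations appear to be defined on the RDD itself.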

Note

Similar operations are available on Datasets as discussed in “Grouped Operations on Datasets”.

Despite their utility, key/value operations can lead to a number of performance issues. In fact, most expensive operations in Spark fit into the key/value pair paradigm, because most wide transformations are key/value transformations, and most of these require some fine-tuning and care to be performant. These performance considerations will be the focus of this chapter. We hope to provide not just a guide to using the functions in the PairRDDFunctions and ...
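One well-known instance of these performance considerations is the choice between groupByKey and reduceByKey for a per-key aggregation. The sketch below (illustrative data, local mode) computes the same per-key sums both ways; reduceByKey combines values map-side before the shuffle, while groupByKey ships every record across the network and can overwhelm a single executor when one key dominates:

```scala
import org.apache.spark.{SparkConf, SparkContext}

object AggregationSketch {
  // Computes per-key sums two ways and returns both results so they
  // can be compared; they agree, but differ in shuffle cost.
  def run(): (Seq[(String, Int)], Seq[(String, Int)]) = {
    val sc = new SparkContext(
      new SparkConf().setMaster("local[2]").setAppName("agg-sketch"))
    try {
      val events = sc.parallelize(Seq(("a", 1), ("b", 2), ("a", 3)))

      // Preferred: partial sums are computed on each partition before
      // the shuffle, so only one record per key per partition moves.
      val viaReduce = events.reduceByKey(_ + _).collect().toSeq.sorted

      // Works, but every value for each key is shuffled to one task.
      val viaGroup =
        events.groupByKey().mapValues(_.sum).collect().toSeq.sorted

      (viaReduce, viaGroup)
    } finally {
      sc.stop()
    }
  }
}
```

On a three-element RDD the difference is invisible, but on skewed production data the groupByKey version can spill to disk or fail outright, which is exactly the kind of tuning this chapter addresses.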
