Job

When an action is called in a Spark application, Spark uses the dependency graph to determine which datasets that action depends on and then formulates an execution plan. An execution plan is essentially a chain of datasets, from the earliest dataset in the lineage through to the final one, that must be computed in order to produce the value returned to the driver program as the result of that action. This process is called a Spark job, and each job corresponds to exactly one action.
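
The following is a minimal PySpark sketch illustrating this behaviour; the application name and the datasets are purely illustrative assumptions. The transformations only describe the dependency graph, and no computation happens until the action at the end is called, at which point Spark plans and runs the whole chain as a single job.

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("job-example").getOrCreate()

# Transformations: these only record dependencies, no job is run yet.
numbers = spark.sparkContext.parallelize(range(1, 1001))
squares = numbers.map(lambda x: x * x)
evens = squares.filter(lambda x: x % 2 == 0)

# Action: count() makes Spark trace back through evens -> squares -> numbers,
# build an execution plan over that chain, and run it as one job.
# The result is returned to the driver program.
total = evens.count()
print(total)

spark.stop()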
