Apache Spark 2.3

As described in Chapter 1, The Big Data Ecosystem, Apache Spark is a general-purpose distributed processing engine capable of performing data transformations, advanced analytics, machine learning, and graph analytics at scale over petabytes of data. Apache Spark can be deployed either in standalone mode (meaning that we utilize its built-in cluster manager) or integrated with a third-party cluster manager such as Apache YARN or Apache Mesos.

In the case of our single-node development cluster, we will deploy Apache Spark in standalone mode, where our single node will host both the Apache Spark Standalone Master server and a single worker node instance. Since Spark software services are designed to run in a JVM, ...
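The standalone deployment described above can be sketched with the launch scripts that ship in Spark's sbin directory. This is a minimal sketch, assuming a Spark 2.3 distribution unpacked at $SPARK_HOME and a master reachable on the standalone default port 7077; the hostname localhost reflects our single-node setup. (In Spark 2.3 the worker launch script is named start-slave.sh; later releases renamed it start-worker.sh.)

```shell
# Start the Standalone Master on this node
# (its web UI listens on port 8080 by default)
$SPARK_HOME/sbin/start-master.sh

# Start a single worker instance on the same node,
# registering it with the Master's standalone URL
$SPARK_HOME/sbin/start-slave.sh spark://localhost:7077
```

Once both daemons are running, the Master's web UI at http://localhost:8080 should list the worker as ALIVE.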
