Perform real-time analytics using Spark in a fast, distributed, and scalable way
Spark is a framework for writing fast, distributed programs. It solves problems similar to those Hadoop MapReduce addresses, but with a fast in-memory approach and a clean, functional-style API. With its ability to integrate with Hadoop, and its built-in tools for interactive query analysis (Spark SQL), large-scale graph processing and analysis (GraphX), and real-time analysis (Spark Streaming), Spark can be used interactively to process and query big datasets quickly.
Fast Data Processing with Spark - Second Edition covers how to write distributed programs with Spark. The book guides you through every step required to write effective distributed programs, from setting up your cluster and interactively exploring the API, to developing analytics applications and tuning them for your purposes.
What You Will Learn
Install and set up Spark on your cluster
Prototype distributed applications with Spark's interactive shell
Interact in different ways with Spark's distributed representation of data, resilient distributed datasets (RDDs)
Query Spark with a SQL-like syntax
Effectively test your distributed software
Recognize how Spark works with big data
Implement machine learning systems with highly scalable algorithms
Downloading the example code for this book
You can download the example code files for all Packt books you have purchased from your account at http://www.PacktPub.com. If you purchased this book elsewhere, you can visit http://www.PacktPub.com/support and register to have the files e-mailed directly to you.