You're a software developer somewhat familiar with Apache Spark and how it's used to analyze Big Data. You've been tasked with a Big Data analysis job and you want to rent space on a cluster to do it. But where to begin?
This is a hands-on course in which Amazon Web Services pro Frank Kane shows you how to rent Amazon's Elastic MapReduce (EMR) service at minimal cost and use it to run Spark scripts on top of a real Hadoop cluster. Kane's approach is fun: you'll learn the Big Data analysis process by actually deploying Spark on EMR to build a working movie recommendation engine from real movie ratings data.
- Learn Amazon EMR's undocumented "gotchas," so they don't take you by surprise
- Save money on EMR costs by learning to stage scripts, data, and actions ahead of time
- Understand how to provision an EMR cluster configured for Apache Spark
- Explore two different ways to run Spark scripts on EMR
- Learn how to set up security and monitor a Spark cluster through a web UI
- Understand how to interactively develop Spark code on EMR with Apache Zeppelin
- Gain experience with Spark and AWS, two skills that are highly valued by employers
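To give a flavor of what the course covers, here is a minimal sketch of provisioning a Spark-ready EMR cluster and submitting a Spark script as a step with the AWS CLI. The cluster name, region, instance types, log bucket, and script path are all placeholders you would replace with your own values, and running these commands requires a configured AWS account (and will incur charges):

```shell
# Provision a small EMR cluster with Spark installed.
# Placeholder values: adjust region, release label, instance
# types/counts, and the S3 log bucket for your own account.
aws emr create-cluster \
    --name "SparkCourseCluster" \
    --release-label emr-6.15.0 \
    --applications Name=Spark \
    --instance-type m5.xlarge \
    --instance-count 3 \
    --use-default-roles \
    --log-uri s3://your-bucket/emr-logs/

# Submit a Spark script to the cluster as a step.
# Replace j-XXXXXXXXXXXXX with the cluster ID returned above,
# and the S3 path with where you staged your script.
aws emr add-steps \
    --cluster-id j-XXXXXXXXXXXXX \
    --steps Type=Spark,Name="MovieRecommendations",ActionOnFailure=CONTINUE,Args=[s3://your-bucket/scripts/movie_recommendations.py]
```

Staging scripts and data in S3 ahead of time, as the course recommends, means the cluster only needs to run while the job does, which keeps EMR costs down.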
Frank Kane spent 9 years at Amazon and IMDb developing and managing the technology that delivers product recommendations to hundreds of millions of customers. Frank holds 17 patents in the fields of distributed computing, data mining, and machine learning. He now runs Sundog Software, a software company focused on virtual reality technology and on Big Data analysis training. He is the author of multiple titles on Spark, MapReduce, Spark Streaming, and Python.