Part II. Workflows and Tools for Big Data Science

The second part of Data Analytics with Hadoop explores higher-level tools and workflows for practicing data scientists. Although a foundational knowledge of Hadoop, MapReduce, and Spark is required to understand what kinds of analyses can be conducted at scale, the day-to-day work of a data scientist dealing with big data generally revolves around the ecosystem of tools built on top of Hadoop. We have organized these final chapters around the data product pipeline presented in Chapter 1.

Chapter 6 discusses data warehousing and data mining, introducing Hive for relational, SQL-like queries and HBase for columnar data storage. Chapter 7 addresses the need for ingestion utilities to move data into HDFS, covering Sqoop for structured imports from relational databases and Flume for less structured sources such as log data. Chapter 8 explores higher-level APIs for analytics: Apache Pig and Spark DataFrames, with a small preview sketch below. Chapter 9 covers machine learning and computational methods with Spark MLlib. Finally, Chapter 10 ties the material together, presenting a complete view of doing data science by integrating the workflows discussed throughout this part.
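As a taste of the higher-level APIs ahead, the following is a minimal sketch of the kind of aggregation Chapter 8 expresses with Spark DataFrames (and Chapter 6 with HiveQL). It assumes Spark 2.x or later with PySpark installed; the HDFS path and the column names (flights.csv, airline, dep_delay) are hypothetical, chosen purely for illustration.

    # Minimal PySpark DataFrames sketch: average departure delay per airline.
    # Assumes Spark 2.x+; the input path and column names are hypothetical.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("part2-preview").getOrCreate()

    # Load a CSV from HDFS into a DataFrame, inferring column types.
    flights = spark.read.csv("hdfs:///data/flights.csv",
                             header=True, inferSchema=True)

    # Group, aggregate, and sort -- the same query Chapter 6 would write
    # in HiveQL and Chapter 8 expresses through the DataFrame API.
    (flights.groupBy("airline")
            .agg(F.avg("dep_delay").alias("avg_delay"))
            .orderBy(F.desc("avg_delay"))
            .show(10))

The point of the preview is not the syntax but the level of abstraction: a handful of declarative operations replaces the hand-written MapReduce jobs of Part I.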
