Batch processing / historical analysis

Now, let's turn our attention to the batch processing mechanism. For this, we will use Hadoop. Although a complete description of Hadoop is well beyond the scope of this section, we will give a brief overview of Hadoop alongside the Druid-specific setup.

Hadoop provides two major components: a distributed file system and a distributed processing framework. The distributed file system is aptly named the Hadoop Distributed File System (HDFS). The distributed processing framework is known as MapReduce. Since we chose to leverage Cassandra as our storage mechanism in the hypothetical system architecture, we will not need HDFS. We will, however, use the MapReduce portion of Hadoop to distribute the processing across ...
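To make the MapReduce model concrete, here is a minimal, in-process sketch of its three phases: map, shuffle, and reduce. This is an illustrative toy (a hypothetical page-view count, not the actual Druid indexing job), and in a real Hadoop job the framework itself distributes map tasks across the cluster, shuffles intermediate key/value pairs by key, and runs the reducers in parallel:

```python
from collections import defaultdict

def map_phase(records):
    # Map: emit an intermediate (key, 1) pair for each input record.
    for record in records:
        yield (record["page"], 1)

def shuffle(pairs):
    # Shuffle: group intermediate values by key.
    # In Hadoop, the framework performs this step between map and reduce.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: aggregate each key's values into a final result.
    return {key: sum(values) for key, values in groups.items()}

# Hypothetical input records for illustration.
events = [{"page": "/home"}, {"page": "/about"}, {"page": "/home"}]
counts = reduce_phase(shuffle(map_phase(events)))
print(counts)  # {'/home': 2, '/about': 1}
```

The value of the model is that map and reduce are pure, per-key functions, which is what lets Hadoop parallelize them across many machines without the job author writing any distribution logic.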
