Redundancy

When we think about redundant Hadoop clusters, we should first decide how much redundancy we actually need. As we already know, the Hadoop Distributed File System (HDFS) has data redundancy built into it: every block is replicated across multiple DataNodes (three copies by default).
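As a quick illustration, the replication factor of existing data can be inspected or changed from the command line. This is a minimal sketch; the path /data/important is a placeholder and assumes the hdfs client is already configured against the cluster:

    # Raise the replication factor of a directory to 3 and wait for it to complete
    hdfs dfs -setrep -w 3 /data/important

    # Verify how many replicas each block actually has
    hdfs fsck /data/important -files -blocks -locations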

Given that a Hadoop cluster has a large ecosystem built around it (services such as YARN, Kafka, and so on), we should plan carefully whether to make the entire ecosystem redundant or to make only the data redundant by keeping a copy of it in a different cluster.

It's easier to make the HDFS portion of the Hadoop stack redundant, as there are tools, such as DistCp, that can copy data from one HDFS cluster to another, as the following example shows.
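As a rough sketch, a DistCp run between two clusters might look like the following; the NameNode hosts (nn1, nn2) and the /data path are placeholders for your own cluster addresses:

    # Copy /data from the main cluster to the backup cluster,
    # transferring only files that are missing or have changed on the target
    hadoop distcp -update -p hdfs://nn1:8020/data hdfs://nn2:8020/data

The -update flag skips files that already match on the target, which keeps repeated synchronization runs cheap, and -p preserves attributes such as ownership, permissions, and block size.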

Let's take a look at the possible ways to achieve this in the following diagram:

As we can see here, the main Hadoop cluster runs a full stack ...
