Decommissioning DataNodes

There are several situations in which you may want to decommission one or more DataNodes from an HDFS cluster. This recipe shows how to decommission DataNodes gracefully, without incurring data loss.

How to do it...

The following steps show you how to decommission DataNodes gracefully:

  1. If your cluster does not already have one, add an exclude file to the cluster. Create an empty file on the NameNode and point to it from the $HADOOP_HOME/etc/hadoop/hdfs-site.xml file by adding the following property, then restart the NameNode (a shell sketch of this step follows the property listing):
    <property>
      <name>dfs.hosts.exclude</name>
      <value>FULL_PATH_TO_THE_EXCLUDE_FILE</value>
      <description>Names a file that contains a list of hosts that
      are not permitted to connect to the namenode. The full pathname
      of the file must be specified. If the value is empty, no hosts
      are excluded.</description>
    </property>
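
     The following shell sketch illustrates this step. The exclude file name dfs.exclude and its location under $HADOOP_HOME/etc/hadoop are assumptions used for illustration; any path readable by the NameNode process will work, as long as the same path is used in the dfs.hosts.exclude property:

        # Create an empty exclude file on the NameNode host.
        # (The name dfs.exclude and this location are illustrative;
        # use the same path you set in dfs.hosts.exclude above.)
        touch $HADOOP_HOME/etc/hadoop/dfs.exclude

        # After adding the dfs.hosts.exclude property to hdfs-site.xml,
        # restart the NameNode so that it picks up the new configuration.
        $HADOOP_HOME/sbin/hadoop-daemon.sh stop namenode
        $HADOOP_HOME/sbin/hadoop-daemon.sh start namenode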
