Handling the "too many open files" error

In this recipe, we will describe how to troubleshoot the error shown in the following DataNode logs:

2012-02-18 17:43:18,009 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(10.166.111.191:50010, storageID=DS-2072496811-10.168.130.82-50010-1321345166369, infoPort=50075, ipcPort=50020):DataXceiver
java.io.FileNotFoundException: /usr/local/hadoop/var/dfs/data/current/subdir6/blk_-8839555124496884481 (Too many open files)
        at java.io.RandomAccessFile.open(Native Method)
        at java.io.RandomAccessFile.<init>(RandomAccessFile.java:216)
        at org.apache.hadoop.hdfs.server.datanode.FSDataset.getBlockInputStream(FSDataset.java:1068)

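This message means that the DataNode process has run out of open file descriptors: HDFS keeps many block files and sockets open at once, and the default per-user limit on Linux (often 1024) is easily exhausted under HBase workloads. Before changing anything, it is worth confirming the limit and current usage of the running DataNode. The commands below are a minimal sketch; the hadoop user name and the pgrep match string are assumptions about your setup:

$ su - hadoop -c 'ulimit -n'                        # limit for the user that runs the DataNode
$ DN_PID=$(pgrep -f 'datanode.DataNode' | head -1)  # find the DataNode JVM (assumed match string)
$ grep 'open files' /proc/$DN_PID/limits            # limit actually applied to the running process
$ ls /proc/$DN_PID/fd | wc -l                       # number of descriptors currently open

If the count reported by the last command is close to the limit shown by the previous one, the error in the log is the expected result.
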
Getting ready

To fix this issue, you will need root privileges on the affected DataNode hosts, because raising the operating system's open file limit involves editing system-level configuration.
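
The fix usually consists of raising the nofile limit for the account that runs the DataNode (and HBase) and then restarting the daemons. As a minimal sketch, assuming the daemons run as the hadoop user and that your distribution applies /etc/security/limits.conf through pam_limits, the change looks like this:

# /etc/security/limits.conf
# raise the soft and hard open file limits for the DataNode/HBase user
hadoop  soft  nofile  32768
hadoop  hard  nofile  32768

After logging in again as that user, su - hadoop -c 'ulimit -n' should report the new value, and the DataNode has to be restarted before the higher limit takes effect for the running process.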
