
Sunday, March 15, 2015

java.io.IOException: Incompatible clusterIDs in /home/user/hadoop/data

I ran into this issue after adding a new data node to an already running Hadoop cluster.

Problem:
FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for Block pool (Datanode Uuid unassigned) service to master/33.33.33.10:9000. Exiting.
java.io.IOException: Incompatible clusterIDs in /home/huser/hadoop/data: namenode clusterID = CID-8019e6e9-73d7-409c-a241-b57e9534e6fe; datanode clusterID = CID-bcc9c537-54dc-4329-bf63-448037976f75
    at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:646)
    at org.apache.hadoop.hdfs.server.datanode.DataStorage.addStorageLocations(DataStorage.java:320)
    at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:403)
    at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:422)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1311)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1276)
    at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:314)
    at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:220)
    at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:828)

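Before changing anything, it is worth confirming the mismatch yourself. HDFS records the clusterID in a `current/VERSION` file under each storage directory. The sketch below compares the two files; the helper name and the example paths are my own (take the real paths from `dfs.namenode.name.dir` and `dfs.datanode.data.dir` in hdfs-site.xml):

```shell
# Compare the clusterID recorded by the namenode and the datanode.
# Arguments are the two VERSION files, e.g.
#   <dfs.namenode.name.dir>/current/VERSION
#   <dfs.datanode.data.dir>/current/VERSION
compare_cluster_ids() {
  nn_version="$1"
  dn_version="$2"
  nn_id=$(grep '^clusterID=' "$nn_version" | cut -d= -f2)
  dn_id=$(grep '^clusterID=' "$dn_version" | cut -d= -f2)
  if [ "$nn_id" = "$dn_id" ]; then
    echo "clusterIDs match: $nn_id"
  else
    echo "MISMATCH: namenode=$nn_id datanode=$dn_id"
  fi
}
```

If the two IDs differ, you will see the same pair of CIDs that appear in the exception above.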

Solution:
The error occurs because the clusterID stored in the data node's metadata (its current/VERSION file) does not match the clusterID the name node expects. I followed the steps below to resolve it:
  1. Delete the directories configured as dfs.datanode.data.dir and dfs.namenode.name.dir in hdfs-site.xml (note: this deletes any HDFS data stored there)
  2. Delete the /tmp/hadoop-huser directory
  3. Re-format the name node using the following command (from $HADOOP_HOME/bin):
./hdfs namenode -format
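The three steps above can be sketched as a small helper. This is only an illustration under assumed paths; the function name is mine, and the directory arguments must come from your own hdfs-site.xml:

```shell
# Reset HDFS storage so a fresh, consistent clusterID is generated.
# WARNING: steps 1-2 delete the HDFS data stored in these directories.
reset_hdfs_storage() {
  datanode_dir="$1"   # value of dfs.datanode.data.dir
  namenode_dir="$2"   # value of dfs.namenode.name.dir
  tmp_dir="$3"        # e.g. /tmp/hadoop-huser
  hdfs_bin="$4"       # e.g. $HADOOP_HOME/bin/hdfs
  # Steps 1-2: remove the stale storage directories.
  rm -rf "$datanode_dir" "$namenode_dir" "$tmp_dir"
  # Step 3: re-format the namenode, which writes a new VERSION file.
  "$hdfs_bin" namenode -format
}
```

For the paths in the log above this would be called as, e.g., `reset_hdfs_storage /home/huser/hadoop/data /home/huser/hadoop/name /tmp/hadoop-huser "$HADOOP_HOME/bin/hdfs"` (the name-directory path here is an assumption).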


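As an aside: if wiping the storage directories is not acceptable (for example, the name node already holds data you want to keep), a commonly used non-destructive alternative is to copy the name node's clusterID into the data node's VERSION file and restart the data node. A minimal sketch, assuming GNU sed and the VERSION file paths from your hdfs-site.xml directories:

```shell
# Overwrite the datanode's clusterID with the namenode's clusterID.
# Arguments: namenode VERSION file, datanode VERSION file.
sync_cluster_id() {
  nn_version="$1"
  dn_version="$2"
  nn_id=$(grep '^clusterID=' "$nn_version" | cut -d= -f2)
  # Rewrite only the clusterID line; other VERSION fields are untouched.
  sed -i "s/^clusterID=.*/clusterID=$nn_id/" "$dn_version"
}
```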

