RE: High Availability - second namenode (master2) issue: Incompatible namespaceIDs
If you are moving from non-HA (single master) to HA, then follow the steps below:
1. Add the second NameNode's configuration to the running NameNode's and
all DataNodes' configurations, and configure the logical nameservice.
2. Configure the shared-storage-related configuration.
3. Stop the running NameNode and all datanodes.
4. Execute 'hdfs namenode -initializeSharedEdits' from the existing
namenode installation, to transfer the edits to shared storage.
5. Now format zkfc using 'hdfs zkfc -formatZK' and start zkfc using
'hadoop-daemon.sh start zkfc'
6. Now restart the NameNode from the existing installation. If all
configurations are fine, the NameNode should start successfully as STANDBY,
and zkfc will then make it ACTIVE.
7. Now install the NameNode on another machine (master2) with the same
configuration, except 'dfs.ha.namenode.id'.
8. Now, instead of formatting, you need to copy the name dir contents from the
other namenode (master1) to master2's name dir. For this you have 2 options:
a. Execute 'hdfs namenode -bootstrapStandby' from the master2
b. Using 'scp' copy entire contents of name dir from master1 to
master2's name dir.
9. Now start the zkfc for the second namenode (no need to do the zkfc format
again). Also start the namenode (master2).
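The HA-related properties in steps 1-2 and 7 typically go in hdfs-site.xml. A minimal sketch, in which the nameservice name "mycluster", the node IDs nn1/nn2, the port 8020, and the shared edits path are all placeholder assumptions, not values from this thread:

```xml
<!-- hdfs-site.xml: minimal HA sketch (illustrative values only) -->
<property>
  <name>dfs.nameservices</name>
  <value>mycluster</value>          <!-- the logical nameservice (step 1) -->
</property>
<property>
  <name>dfs.ha.namenodes.mycluster</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn1</name>
  <value>master1:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn2</name>
  <value>master2:8020</value>
</property>
<!-- shared storage for edits (step 2); an NFS-style path is assumed -->
<property>
  <name>dfs.namenode.shared.edits.dir</name>
  <value>file:///mnt/shared/hdfs/edits</value>
</property>
<property>
  <name>dfs.client.failover.proxy.provider.mycluster</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<!-- lets zkfc (step 5) drive failover; the ZK quorum goes in core-site.xml -->
<property>
  <name>dfs.ha.automatic-failover.enabled</name>
  <value>true</value>
</property>
```

Per step 7, 'dfs.ha.namenode.id' (nn1 on master1, nn2 on master2) is the one value that differs between the two machines.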
From: Uma Maheswara Rao G [mailto:[EMAIL PROTECTED]]
Sent: Friday, November 16, 2012 1:26 PM
To: [EMAIL PROTECTED]
Subject: RE: High Availability - second namenode (master2) issue:
If you format the NameNode, you need to clean up the DataNode storage
directories as well, if they already contain data. The DN also saves a
namespaceID and compares it with the NN's namespaceID. If you format the NN,
its namespaceID changes while the DN may still have the older namespaceID.
So just cleaning the data in the DN would be fine.
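The saved namespaceID lives in the VERSION file under each storage directory (e.g. <dfs.data.dir>/current/VERSION on a DataNode). A rough sketch of its contents, with illustrative values:

```
namespaceID=1151604993
storageID=DS-...
cTime=0
storageType=DATA_NODE
layoutVersion=-32
```

A matching file with storageType=NAME_NODE sits under the NameNode's name dir; after an NN format the two namespaceIDs diverge, which is exactly the error quoted in the log below.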
From: hadoop hive [[EMAIL PROTECTED]]
Sent: Friday, November 16, 2012 1:15 PM
To: [EMAIL PROTECTED]
Subject: Re: High Availability - second namenode (master2) issue:
Seems like you haven't formatted your cluster (if it was made for the first time).
On Fri, Nov 16, 2012 at 9:58 AM, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
I have installed a Hadoop cluster with a single master (master1) and have
HBase running on HDFS. Now I am setting up the second master (master2)
in order to form HA. When I used jps to check the cluster, I found that
the DataNode on this server could not be started.
In the log file, I found:
2012-11-16 10:28:44,851 ERROR
Incompatible namespaceIDs in /app/hadoop/tmp/dfs/data: namenode namespaceID
= 1356148070; datanode namespaceID = 1151604993
One of the possible solutions to fix this issue is to: stop the cluster,
reformat the NameNode, restart the cluster.
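On a Hadoop 1.x install, that reformat route would look roughly like the following. This is only a sketch: the script names assume the default bin/ layout, the data dir path is taken from the log above, and it wipes all HDFS data (the DataNode dir must be cleared on every node):

```
stop-all.sh                         # stop NameNode, DataNodes, etc.
rm -rf /app/hadoop/tmp/dfs/data     # on each DataNode: drop the old namespaceID
hadoop namenode -format             # on master1: generates a new namespaceID
start-all.sh                        # restart; DNs re-register with the new ID
```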
QUESTION: As I already have HBase running on the cluster, if I reformat the
NameNode, do I need to reinstall HBase entirely? I don't mind losing all the
data, as I don't have much data in HBase and HDFS; however, I don't want
to re-install HBase again.
On the other hand, I have tried another solution: stop the DataNode, edit
the namespaceID in current/VERSION (i.e. set namespaceID=1151604993), and
restart the DataNode, but it doesn't work:
Warning: $HADOOP_HOME is deprecated.
starting master2, logging to
Exception in thread "main" java.lang.NoClassDefFoundError: master2
Caused by: java.lang.ClassNotFoundException: master2
at java.security.AccessController.doPrivileged(Native Method)
Could not find the main class: master2. Program will exit.
QUESTION: Any other solutions?