Have you reformatted the NN (unsuccessfully)? Was your NN serving some
other cluster earlier, or were your DNs part of some other cluster? Datanodes
bind themselves to the namenode through the namespaceID, and in your case the IDs of
the DNs and the NN seem to be different. As a workaround you could do this
(a sketch of the actual commands follows the steps below):
1- Stop all the daemons.
2- Go to the directory which you have specified as the value of the "dfs.name.dir"
property in your hdfs-site.xml file.
3- You'll find a directory called "current" inside this directory where a
file named "VERSION" will be present. Open this file and copy the value of
"namespaceID" from here.
4- Now go to the directory which you have specified as the value of the
"dfs.data.dir" property in your hdfs-site.xml file.
5- Move inside the "current" directory and open the "VERSION" file here as
well. Now replace the value of "namespaceID" present here with the one you
had copied earlier.
6- Restart all the daemons.
Note: If you have not set dfs.name.dir and dfs.data.dir explicitly,
you can find all of this inside your temp directory.
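For example, here is a rough sketch of those steps as shell commands. I'm assuming
dfs.name.dir is /home/hadoop/dfs/name and dfs.data.dir is /home/hadoop/dfs/data
(substitute whatever your hdfs-site.xml actually points to), and the ID 717658700 is
the namenode namespaceID shown in your log:

  # stop all HDFS and MapReduce daemons (scripts live in $HADOOP_HOME/bin)
  stop-all.sh

  # see which namespaceID the namenode is currently using
  grep namespaceID /home/hadoop/dfs/name/current/VERSION
  # e.g. namespaceID=717658700

  # overwrite the datanode's namespaceID with the namenode's value
  sed -i 's/^namespaceID=.*/namespaceID=717658700/' /home/hadoop/dfs/data/current/VERSION

  # start everything again
  start-all.sh

This mismatch usually appears when the namenode has been reformatted while the datanode
kept its old "current" directory, which is also why running "hadoop namenode -format"
again on its own does not fix it: the format only generates a new ID on the namenode side.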
On Thu, May 2, 2013 at 12:13 AM, shashwat shriparv <
[EMAIL PROTECTED]> wrote:
> Format your namenode and start again
> *Thanks & Regards *
> Shashwat Shriparv
> On Wed, May 1, 2013 at 8:40 PM, 姚吉龙 <[EMAIL PROTECTED]> wrote:
>> The ID is different in the namenode and the datanode; you can modify the ID. I met the
>> same issue and I completely removed all files under hadoop.
>> On Wed, May 1, 2013 at 8:32 PM, Mohsen B.Sarmadi <
>> [EMAIL PROTECTED]> wrote:
>>> Dear Sirs/Madams,
>>> I am trying to run Hadoop 1.0.4 in pseudo-distributed mode, but I am facing
>>> the following errors.
>>> Datanode log:
>>> 01/05/2013 13:16:54 2013-05-01 13:16:54,206 ERROR
>>> org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException:
>>> Incompatible namespaceIDs in /home/mohs/hadoop/dfsdirdata: namenode
>>> namespaceID = 717658700; datanode namespaceID = 1318489331
>>> job-tracker log:
>>> 01/05/2013 13:24:40 org.apache.hadoop.ipc.RemoteException:
>>> java.io.IOException: File /home/mohs/hadoop/tmp/mapred/system/
>>> jobtracker.info could only be replicated to 0 nodes, instead of 1
>>> Datanode log:
>>> 01/05/2013 13:26:10 2013-05-01 13:26:09,711 WARN
>>> org.apache.hadoop.hdfs.DFSClient: DataStreamer Exception:
>>> org.apache.hadoop.ipc.RemoteException: java.io.IOException: File
>>> /home/mohs/hadoop/tmp/mapred/system/jobtracker.info could only be
>>> replicated to 0 nodes, instead of 1
>>> I have tried to solve this by removing the files in /tmp/hadoop/* and
>>> running hadoop namenode -format, but I am facing the same error.
>>> Do you have any solution for this?