MapReduce >> mail # user >> Need Help on Hadoop cluster Setup


Munnavar Sk 2013-03-22, 12:58
MShaik 2013-03-22, 14:34
Mohammad Tariq 2013-03-22, 14:57
Mohammad Tariq 2013-03-22, 14:58
MShaik 2013-03-22, 15:18
Re: Need Help on Hadoop cluster Setup
You are welcome.

Warm Regards,
Tariq
https://mtariq.jux.com/
cloudfront.blogspot.com
On Fri, Mar 22, 2013 at 8:48 PM, MShaik <[EMAIL PROTECTED]> wrote:

>
> Thank you, Tariq.
> After changing the namespaceID on the datanodes, all datanodes started.
>
>  Thank you once again...!
>
> -----Original Message-----
> From: Mohammad Tariq <[EMAIL PROTECTED]>
> To: user <[EMAIL PROTECTED]>
> Sent: Fri, Mar 22, 2013 8:29 pm
> Subject: Re: Need Help on Hadoop cluster Setup
>
>  Sorry for the typo in the second line of the 2nd point. The path will be
> "/dfs.data.dir/current/VERSION".
>
>  Warm Regards,
> Tariq
> https://mtariq.jux.com/
>  cloudfront.blogspot.com
>
>
> On Fri, Mar 22, 2013 at 8:27 PM, Mohammad Tariq <[EMAIL PROTECTED]>wrote:
>
>> Have you reformatted the HDFS? If that is the case, it was, I think, not
>> done properly.
>> Were the nodes which you attached serving some other cluster earlier? Your
>> logs show that you are facing problems because of a mismatch between the
>> namespaceID of the NN and the namespaceIDs which the DNs have. To overcome
>> this problem you can follow these steps:
>>
>>  1 - Stop all the DNs.
>> 2 - Go to the directory which is serving as your dfs.data.dir. Inside
>> this directory you'll find a subdirectory named "current". There will be
>> a file named "VERSION" in this directory. In this file you can see the
>> namespaceID (probably the second line). Change it to match the
>> namespaceID which is there in the "dfs.name.dir/current/VERSION" file.
>> 3 - Restart the processes.
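
[Editor's note: step 2 above can be sketched as a small shell helper. The function name and the paths are placeholders (assumptions), not part of Hadoop itself; substitute your real dfs.name.dir and dfs.data.dir values from hdfs-site.xml. VERSION is a plain key=value properties file, which is what makes this edit scriptable.]

```shell
# Hypothetical helper sketching step 2: copy the namespaceID from the
# NameNode's VERSION file into a DataNode's VERSION file so they match.
sync_namespace_id() {
  nn_version="$1"    # e.g. <dfs.name.dir>/current/VERSION on the NameNode
  dn_version="$2"    # e.g. <dfs.data.dir>/current/VERSION on this DataNode

  # Read the authoritative namespaceID from the NameNode's VERSION file.
  ns_id=$(grep '^namespaceID=' "$nn_version" | cut -d= -f2)

  # Rewrite the DataNode's namespaceID line in place to match.
  sed -i "s/^namespaceID=.*/namespaceID=${ns_id}/" "$dn_version"
}
```

Run this on each DataNode after stopping it (step 1), then restart the daemons (step 3). Note that `sed -i` as written assumes GNU sed.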
>>
>>  HTH
>>
>>
>>  Warm Regards,
>> Tariq
>> https://mtariq.jux.com/
>>  cloudfront.blogspot.com
>>
>>
>>   On Fri, Mar 22, 2013 at 8:04 PM, MShaik <[EMAIL PROTECTED]> wrote:
>>
>>>  Hi,
>>>
>>>  The DataNode is not started on any of the nodes, while the TaskTracker is
>>> started on all the nodes.
>>>
>>>  Please find the datanode log below, and please let me know the solution.
>>>
>>>  2013-03-22 19:52:27,380 INFO org.apache.hadoop.ipc.RPC: Server at
>>> n1.hc.com/192.168.1.110:54310 not available yet, Zzzzz...
>>> 2013-03-22 19:52:29,386 INFO org.apache.hadoop.ipc.Client: Retrying
>>> connect to server: n1.hc.com/192.168.1.110:54310. Already tried 0
>>> time(s).
>>> 2013-03-22 19:52:30,411 INFO org.apache.hadoop.ipc.Client: Retrying
>>> connect to server: n1.hc.com/192.168.1.110:54310. Already tried 1
>>> time(s).
>>> 2013-03-22 19:52:31,416 INFO org.apache.hadoop.ipc.Client: Retrying
>>> connect to server: n1.hc.com/192.168.1.110:54310. Already tried 2
>>> time(s).
>>> 2013-03-22 19:52:32,420 INFO org.apache.hadoop.ipc.Client: Retrying
>>> connect to server: n1.hc.com/192.168.1.110:54310. Already tried 3
>>> time(s).
>>> 2013-03-22 19:52:33,426 INFO org.apache.hadoop.ipc.Client: Retrying
>>> connect to server: n1.hc.com/192.168.1.110:54310. Already tried 4
>>> time(s).
>>> 2013-03-22 19:52:49,162 ERROR
>>> org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException:
>>> Incompatible namespaceIDs in /home/hduser/hadoopdata: namenode namespaceID
>>> = 2050588793; datanode namespaceID = 503772406
>>>  at
>>> org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:232)
>>>  at
>>> org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:147)
>>>  at
>>> org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:385)
>>>  at
>>> org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:299)
>>>  at
>>> org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1582)
>>>  at
>>> org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1521)
>>>  at
>>> org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1539)
>>>  at
>>> org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1665)
>>>  at
>>> org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1682)
>>>
>>>  2013-03-22 19:52:49,168 INFO
>>> org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
>>> /************************************************************