MapReduce, mail # user - Need Help on Hadoop cluster Setup


Munnavar Sk 2013-03-22, 12:58
MShaik 2013-03-22, 14:34
Re: Need Help on Hadoop cluster Setup
Mohammad Tariq 2013-03-22, 14:57
Have you reformatted the HDFS? If so, I think it was not done properly.
Were the nodes you attached serving some other cluster earlier? Your
logs show that you are facing problems because of a mismatch between the
namespaceID of the NN and the namespaceID which the DNs have. To
overcome this problem you can follow these steps:

1 - Stop all the DNs.
2 - Go to the directory which is serving as your dfs.data.dir. Inside
this directory you'll find a subdirectory "current"; there will be a
file named "VERSION" in that directory. In this file you can see the
namespaceID (probably the second line). Change it to match the
namespaceID in the "dfs.name.dir/current/VERSION" file.
3 - Restart the processes.
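The steps above can be sketched as a small script. This is only a
sketch: `fix_namespace_id` is a hypothetical helper, and the
dfs.name.dir / dfs.data.dir paths in the usage comments are assumptions
(the data dir is taken from the log below); substitute the real values
from your own hdfs-site.xml.

```shell
#!/bin/sh
# Copy the NameNode's namespaceID into a DataNode's VERSION file.
# Hypothetical helper -- $1 = dfs.name.dir, $2 = dfs.data.dir.
fix_namespace_id() {
  # Read the namespaceID the NameNode recorded at format time.
  ns_id=$(grep '^namespaceID=' "$1/current/VERSION" | cut -d= -f2)
  # Overwrite the DataNode's namespaceID line to match it.
  sed -i "s/^namespaceID=.*/namespaceID=${ns_id}/" "$2/current/VERSION"
}

# Usage on each DataNode (paths are assumptions; daemon commands
# left commented so this stays a sketch):
# bin/hadoop-daemon.sh stop datanode                                  # 1
# fix_namespace_id /home/hduser/hadoopname /home/hduser/hadoopdata    # 2
# bin/hadoop-daemon.sh start datanode                                 # 3
```

Note that this keeps the data blocks intact; the alternative of wiping
dfs.data.dir and letting the DN re-register would discard them.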

HTH
Warm Regards,
Tariq
https://mtariq.jux.com/
cloudfront.blogspot.com
On Fri, Mar 22, 2013 at 8:04 PM, MShaik <[EMAIL PROTECTED]> wrote:

>  Hi,
>
>  The DataNode is not started on all the nodes, although the TaskTracker
> is started on all the nodes.
>
>  Please find the DataNode log below, and please let me know the solution.
>
>  2013-03-22 19:52:27,380 INFO org.apache.hadoop.ipc.RPC: Server at
> n1.hc.com/192.168.1.110:54310 not available yet, Zzzzz...
> 2013-03-22 19:52:29,386 INFO org.apache.hadoop.ipc.Client: Retrying
> connect to server: n1.hc.com/192.168.1.110:54310. Already tried 0 time(s).
> 2013-03-22 19:52:30,411 INFO org.apache.hadoop.ipc.Client: Retrying
> connect to server: n1.hc.com/192.168.1.110:54310. Already tried 1 time(s).
> 2013-03-22 19:52:31,416 INFO org.apache.hadoop.ipc.Client: Retrying
> connect to server: n1.hc.com/192.168.1.110:54310. Already tried 2 time(s).
> 2013-03-22 19:52:32,420 INFO org.apache.hadoop.ipc.Client: Retrying
> connect to server: n1.hc.com/192.168.1.110:54310. Already tried 3 time(s).
> 2013-03-22 19:52:33,426 INFO org.apache.hadoop.ipc.Client: Retrying
> connect to server: n1.hc.com/192.168.1.110:54310. Already tried 4 time(s).
> 2013-03-22 19:52:49,162 ERROR
> org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException:
> Incompatible namespaceIDs in /home/hduser/hadoopdata: namenode namespaceID
> = 2050588793; datanode namespaceID = 503772406
>  at
> org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:232)
>  at
> org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:147)
>  at
> org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:385)
>  at
> org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:299)
>  at
> org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1582)
>  at
> org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1521)
>  at
> org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1539)
>  at
> org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1665)
>  at
> org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1682)
>
>  2013-03-22 19:52:49,168 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
> /************************************************************
> SHUTDOWN_MSG: Shutting down DataNode at n4.hc.com/192.168.1.113
> ************************************************************/
>
>
> Thanks
>
> -----Original Message-----
> From: Mohammad Tariq <[EMAIL PROTECTED]>
> To: user <[EMAIL PROTECTED]>
> Sent: Fri, Mar 22, 2013 7:07 pm
> Subject: Re: Need Help on Hadoop cluster Setup
>
>  Hello Munavvar,
>
>        It depends on your configuration where your DNs and TTs will run.
> If you have configured all your slaves to run both processes, then they
> should. If they are not running, then there is definitely some problem.
> Could you please check your DN logs and see if you find anything unusual
> there? And you have to copy the files across all the machines.
>
>  You can do one more thing just to cross-check. Point your web browser
> to the HDFS web UI (master_machine:50070) to do that; 9000 is the
> NameNode's RPC port, not the web UI.
>
>  Warm Regards,
> Tariq
> https://mtariq.jux.com/
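The cross-check suggested above (seeing which DataNodes the NameNode
considers live) can also be done from the command line with
`bin/hadoop dfsadmin -report`, whose Hadoop 1.x output lists one
"Name: host:port" block per registered DataNode. A minimal sketch, with
`count_live_datanodes` as a hypothetical helper that assumes that output
format:

```shell
#!/bin/sh
# Count the DataNodes in a dfsadmin -report dump read from stdin.
# Assumes each registered DN is introduced by a "Name: host:port" line.
count_live_datanodes() {
  grep -c '^Name:'
}

# Usage (commented; requires a running cluster):
# bin/hadoop dfsadmin -report | count_live_datanodes
```

If the count is lower than the number of slaves, the missing nodes'
DataNode logs are the place to look, as in the thread above.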
Mohammad Tariq 2013-03-22, 14:58
MShaik 2013-03-22, 15:18
Mohammad Tariq 2013-03-22, 15:45