HDFS >> mail # user >> Second node hdfs


Cyril Bogus 2013-03-13, 14:27
Mohammad Tariq 2013-03-13, 14:34
Cyril Bogus 2013-03-13, 15:43
Mohammad Tariq 2013-03-13, 15:47

Re: Second node hdfs
You don't need to format the cluster each time you add a new node.

The previous node addition failed because the node you added had already
served in another Hadoop cluster, so when you added it to the new one its
namespaceID showed a mismatch.
To add a new node, just make sure that the datanode is clean (with the
data directory newly created).
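The "clean datanode" advice can be made concrete: at startup, a DataNode compares the namespaceID recorded in its storage directory against the NameNode's, and refuses to start on a mismatch. A minimal sketch of that comparison, using mock VERSION files in a throwaway temp directory (the paths here are illustrative only; on a real Hadoop 1.x cluster the files live under dfs.name.dir/current/VERSION and dfs.data.dir/current/VERSION):

```shell
# Reproduce the namespaceID check a DataNode performs at startup,
# using mock VERSION files so the sketch runs anywhere.
tmp=$(mktemp -d)
mkdir -p "$tmp/namenode/current" "$tmp/datanode/current"
printf 'namespaceID=1683708441\n' > "$tmp/namenode/current/VERSION"
printf 'namespaceID=606666501\n'  > "$tmp/datanode/current/VERSION"

# Extract the IDs the same way you would inspect them on a live node.
nn_id=$(grep '^namespaceID=' "$tmp/namenode/current/VERSION" | cut -d= -f2)
dn_id=$(grep '^namespaceID=' "$tmp/datanode/current/VERSION" | cut -d= -f2)

if [ "$nn_id" != "$dn_id" ]; then
  # prints: Incompatible namespaceIDs: namenode=1683708441 datanode=606666501
  echo "Incompatible namespaceIDs: namenode=$nn_id datanode=$dn_id"
fi
rm -rf "$tmp"
```

Wiping the datanode's data directory removes its stale VERSION file, so the freshly started DataNode simply adopts whatever namespaceID the NameNode currently has.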
On Wed, Mar 13, 2013 at 9:13 PM, Cyril Bogus <[EMAIL PROTECTED]> wrote:

> Thank you both.
>
> So what both of you were saying is that, in order to start and synchronize
> the cluster, I will have to format both nodes at the same time, OK.
>
> I was working on the master node without the second node and did not
> format before trying to start the second one.
>
> I reformatted the cluster with both nodes connected and it worked. But I
> have a question.
>
> If I want to add a third node and my current cluster is populated with
> some tables, will I have to format it again in order to add the node?
>
>
> On Wed, Mar 13, 2013 at 10:34 AM, Mohammad Tariq <[EMAIL PROTECTED]> wrote:
>
>> Hello Cyril,
>>
>>       This is because your datanode has a different namespaceID from the
>> one the master (namenode) actually has. Have you formatted HDFS
>> recently? Were you able to format it properly? Every time you format HDFS,
>> the NameNode generates a new namespaceID, which must be the same on the
>> NameNode and the DataNodes; otherwise a DataNode won't be able to reach
>> the NameNode.
>>
>> Warm Regards,
>> Tariq
>> https://mtariq.jux.com/
>> cloudfront.blogspot.com
>>
>>
>> On Wed, Mar 13, 2013 at 7:57 PM, Cyril Bogus <[EMAIL PROTECTED]> wrote:
>>
>>> I am trying to start the datanode on the slave node but when I check the
>>> dfs I only have one node.
>>>
>>> When I check the logs on the slave node I find the following output.
>>>
>>> 2013-03-13 10:22:14,608 INFO
>>> org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
>>> /************************************************************
>>> STARTUP_MSG: Starting DataNode
>>> STARTUP_MSG:   host = Owner-5/127.0.1.1
>>> STARTUP_MSG:   args = []
>>> STARTUP_MSG:   version = 1.0.4
>>> STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r
>>> 1393290; compiled by 'hortonfo' on Wed Oct  3 05:13:58 UTC 2012
>>> ************************************************************/
>>> 2013-03-13 10:22:15,086 INFO
>>> org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from
>>> hadoop-metrics2.properties
>>> 2013-03-13 10:22:15,121 INFO
>>> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
>>> MetricsSystem,sub=Stats registered.
>>> 2013-03-13 10:22:15,123 INFO
>>> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot
>>> period at 10 second(s).
>>> 2013-03-13 10:22:15,123 INFO
>>> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system
>>> started
>>> 2013-03-13 10:22:15,662 INFO
>>> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi
>>> registered.
>>> 2013-03-13 10:22:15,686 WARN
>>> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi already
>>> exists!
>>> 2013-03-13 10:22:19,730 ERROR
>>> org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException:
>>> Incompatible namespaceIDs in /home/hadoop/hdfs/data: namenode namespaceID = 1683708441; datanode namespaceID = 606666501
>>>     at
>>> org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:232)
>>>     at
>>> org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:147)
>>>     at
>>> org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:385)
>>>     at
>>> org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:299)
>>>     at
>>> org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1582)
>>>     at
>>> org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1521)
>>>     at
>>> org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1539)
Nitin Pawar
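For readers hitting the same "Incompatible namespaceIDs" error: the usual fix on the affected slave is to stop the DataNode, wipe its data directory, and start it again so it re-registers with the NameNode's current namespaceID. A hedged sketch of that procedure (the hadoop-daemon.sh calls are the Hadoop 1.x daemon scripts and are commented out here so the sketch runs anywhere; DATA_DIR is a throwaway stand-in for dfs.data.dir, which is /home/hadoop/hdfs/data in the log above):

```shell
# Stand-in for dfs.data.dir so the sketch runs anywhere; on the real slave
# substitute the actual configured path instead of a temp directory.
DATA_DIR=$(mktemp -d)
mkdir -p "$DATA_DIR/current"
printf 'namespaceID=606666501\n' > "$DATA_DIR/current/VERSION"   # stale ID

# bin/hadoop-daemon.sh stop datanode    # 1. stop the DataNode first
rm -rf "$DATA_DIR"/*                    # 2. wipe the stale storage
# bin/hadoop-daemon.sh start datanode   # 3. restart; the DataNode recreates
#                                       #    the directory and adopts the
#                                       #    NameNode's current namespaceID
```

Note that reformatting with `bin/hadoop namenode -format` is only for creating a brand-new filesystem; it erases the HDFS metadata, which is exactly why adding a node should be done by cleaning the datanode, not by reformatting a populated cluster.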
Cyril Bogus 2013-03-13, 15:46
Nitin Pawar 2013-03-13, 14:33