RE: Error while configuring HDFS federation
Manickam P 2013-09-23, 15:49
Hi,

Thanks for your inputs. I fixed the issue.
Thanks,
Manickam P

From: [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Subject: RE: Error while configuring HDFS federation
Date: Mon, 23 Sep 2013 14:05:47 +0000

Ports in use may result from actual processes using them or from leftover "ghost" processes. The second error may be caused by inconsistent permissions on different nodes, and/or may mean the DFS needs to be formatted.
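For the port-in-use error, a quick way to check whether anything is still listening on 50070 (just a generic check I'd suggest; adjust host/port as needed) is:

    sudo netstat -tlnp | grep 50070
    # or, where lsof is installed:
    sudo lsof -i :50070

Either command shows the PID holding the port, so you can tell whether it is a live daemon or a leftover process.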
 
I suggest the following:
 
1. sbin/stop-dfs.sh && sbin/stop-yarn.sh
2. sudo killall java (on all nodes)
3. sudo chmod -R 755 /home/lab/hadoop-2.1.0-beta/tmp/dfs (on all nodes)
4. sudo rm -rf /home/lab/hadoop-2.1.0-beta/tmp/dfs/* (on all nodes)
5. bin/hdfs namenode -format -force
6. sbin/start-dfs.sh && sbin/start-yarn.sh
 
Then see if you get that error again.
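As an extra sanity check after step 6 (not strictly necessary), jps on each node should list the expected daemons, something like:

    jps
    # on a name node machine you would expect to see NameNode (plus any YARN daemons started there)
    # on the data node machine you would expect to see DataNode and NodeManager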
 
From: Manickam P [mailto:[EMAIL PROTECTED]]
Sent: Monday, September 23, 2013 4:44 PM

To: [EMAIL PROTECTED]

Subject: Error while configuring HDFS federation
 

Guys,

I'm trying to configure HDFS federation with the 2.1.0-beta version. I have 3 machines; I want two name nodes and one data node.
I have set up the other prerequisites, such as password-less SSH and host entries, properly. When I start the cluster I get the errors below.
On node one I'm getting this error:

java.net.BindException: Port in use: lab-hadoop.eng.com:50070

On the other node I'm getting this error:
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /home/lab/hadoop-2.1.0-beta/tmp/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.

My core-site.xml has the following:

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://10.101.89.68:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/lab/hadoop-2.1.0-beta/tmp</value>
  </property>
</configuration>
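For reference, a simple way to confirm that the directory named in the second error exists on every node and is readable by the account that starts the daemons (paths taken from the config above) is:

    ls -ld /home/lab/hadoop-2.1.0-beta/tmp /home/lab/hadoop-2.1.0-beta/tmp/dfs/name
    # the owner should be the user running the namenode, with at least rwx permissions for that owner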

My hdfs-site.xml has the following:

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <property>
    <name>dfs.permissions</name>
    <value>false</value>
  </property>
  <property>
    <name>dfs.federation.nameservices</name>
    <value>ns1,ns2</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.ns1</name>
    <value>10.101.89.68:9001</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.ns1</name>
    <value>10.101.89.68:50070</value>
  </property>
  <property>
    <name>dfs.namenode.secondary.http-address.ns1</name>
    <value>10.101.89.68:50090</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.ns2</name>
    <value>10.101.89.69:9001</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.ns2</name>
    <value>10.101.89.69:50070</value>
  </property>
  <property>
    <name>dfs.namenode.secondary.http-address.ns2</name>
    <value>10.101.89.69:50090</value>
  </property>
</configuration>
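For a federated setup like the one above, the HDFS Federation guide has each name node formatted with a shared cluster ID so that both nameservices join the same cluster; a sketch of that (the cluster ID value here is only a placeholder) would be:

    # on the ns1 name node (10.101.89.68)
    bin/hdfs namenode -format -clusterId myCluster
    # on the ns2 name node (10.101.89.69), reusing the same cluster ID
    bin/hdfs namenode -format -clusterId myCluster

    # afterwards, this should list both name node addresses from the config
    bin/hdfs getconf -namenodes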

Please help me fix these errors.

Thanks,

Manickam P