Re: Datanode doesn't connect to Namenode
If you have removed this property (hadoop.tmp.dir) from the slave machines, then your DN information will be created under the /tmp folder, and once you reboot your datanode machines, the information will be lost.

Sorry, I had not seen the logs. But you don't have to play around with the
properties.
See, the datanode will not come up in a scenario where it is not able to send
the heartbeat signal to the namenode at port 54310.
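
As a quick sanity check from a slave machine, you can test whether the
namenode port is reachable at all (a sketch; replace <namenode-host> with
your master's hostname):

telnet <namenode-host> 54310

If the connection is refused or times out, the datanode has no way to
deliver its heartbeat.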
Do it step by step:

Check whether you can ping every machine and whether you can SSH to it in a
passwordless manner, as in the sketch below.
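
For example, from the master (a sketch; Slave0 and Slave1 are the hostnames
used below):

ping -c 1 Slave0
ping -c 1 Slave1
ssh Slave0 hostname    # should print Slave0 without asking for a password
ssh Slave1 hostname    # should print Slave1 without asking for a password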

Let's say I have one master machine whose hostname is *Master* and two
slave machines, *Slave0* and *Slave1* (I am assuming the OS used is CentOS).

In the *Master* machine, do the following things:

*First, disable the firewall.*
As the root user, run the following commands:
service iptables save
service iptables stop
chkconfig iptables off

Specify the following properties in the corresponding files:

*mapred-site.xml*

   - mapred.job.tracker (Master:54311)
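
As an XML sketch, the entry inside <configuration> would be:

<property>
  <name>mapred.job.tracker</name>
  <value>Master:54311</value>
</property>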

*core-site.xml*

   - fs.default.name (hdfs://Master:54310)
   - hadoop.tmp.dir (choose some persistent directory)
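
A sketch of the corresponding entries inside <configuration> (the value of
hadoop.tmp.dir below, /app/hadoop/tmp, is just one example of a persistent
directory):

<property>
  <name>fs.default.name</name>
  <value>hdfs://Master:54310</value>
</property>
<property>
  <name>hadoop.tmp.dir</name>
  <value>/app/hadoop/tmp</value>
</property>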

*hdfs-site.xml*

   - dfs.replication (3)
   - dfs.block.size (64 MB)
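
A sketch of the corresponding entries inside <configuration> (dfs.block.size
takes bytes, so 64 MB is 67108864):

<property>
  <name>dfs.replication</name>
  <value>3</value>
</property>
<property>
  <name>dfs.block.size</name>
  <value>67108864</value>
</property>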

*Masters file*

   - Master

*Slaves file*

   - Slave0
   - Slave1

*hadoop-env.sh*

   - export JAVA_HOME=<Your java home directory>
In *Slave0* machine:

   - Disable the firewall
   - Set the same properties as you did on the Master machine

In *Slave1* machine:

   - Disable the firewall
   - Set the same properties as you did on the Master machine
Once you start the cluster by running the command start-all.sh, check that
ports 54310 and 54311 have been opened by running the command "netstat
-tuplen"; it will show whether the ports are open or not. A sketch of such a
check follows.

Regards,
Som Shekhar Sharma
+91-8197243810
On Thu, Aug 8, 2013 at 4:57 PM, Felipe Gutierrez <
[EMAIL PROTECTED]> wrote:

> Thanks,
> In all the files I changed the entries to the master (cloud6), and I
> removed this property: <name>hadoop.tmp.dir</name>.
>
> Felipe
>
>
> On Wed, Aug 7, 2013 at 3:20 PM, Shekhar Sharma <[EMAIL PROTECTED]> wrote:
>
>> Disable the firewall on data node and namenode machines..
>> Regards,
>> Som Shekhar Sharma
>> +91-8197243810
>>
>>
>> On Wed, Aug 7, 2013 at 11:33 PM, Jitendra Yadav <
>> [EMAIL PROTECTED]> wrote:
>>
>>> Your HDFS name entry should be the same on the master and the datanodes:
>>>
>>> * <name>fs.default.name</name>*
>>> *<value>hdfs://cloud6:54310</value>*
>>>
>>> Thanks
>>> On Wed, Aug 7, 2013 at 11:05 PM, Felipe Gutierrez <
>>> [EMAIL PROTECTED]> wrote:
>>>
>>>> on my slave the process is running:
>>>> hduser@cloud15:/usr/local/hadoop$ jps
>>>> 19025 DataNode
>>>> 19092 Jps
>>>>
>>>>
>>>> On Wed, Aug 7, 2013 at 2:26 PM, Jitendra Yadav <
>>>> [EMAIL PROTECTED]> wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> Your logs show that the process is making an IPC call not to the
>>>>> namenode; it is hitting the datanode itself.
>>>>>
>>>>> Could you please check your datanode process status?
>>>>>
>>>>> Regards
>>>>> Jitendra
>>>>>
>>>>> On Wed, Aug 7, 2013 at 10:29 PM, Felipe Gutierrez <
>>>>> [EMAIL PROTECTED]> wrote:
>>>>>
>>>>>> Hi everyone,
>>>>>>
>>>>>> On my slave machine (cloud15) the datanode shows the log below. It
>>>>>> doesn't connect to the master (cloud6).
>>>>>>
>>>>>>  2013-08-07 13:44:03,110 INFO org.apache.hadoop.ipc.Client: Retrying
>>>>>> connect to server: cloud15/192.168.188.15:54310. Already tried 9
>>>>>> time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10,
>>>>>> sleepTime=1 SECONDS)
>>>>>> 2013-08-07 13:44:03,110 INFO org.apache.hadoop.ipc.RPC: Server at
>>>>>> cloud15/192.168.188.15:54310 not available yet, Zzzzz...
>>>>>>
>>>>>> But when I type the jps command on the slave machine, DataNode is running.
>>>>>> This is my core-site.xml file on the slave machine (cloud15):
>>>>>>  <configuration>
>>>>>> <property>
>>>>>>   <name>hadoop.tmp.dir</name>
>>>>>>   <value>/app/hadoop/tmp</value>
>>>>>>   <description>A base for other temporary directories.</description>
>>