Re: hadoop cares about /etc/hosts ?
Vinayakumar B 2013-09-10, 15:13
Ensure that for each IP there is only one hostname configured in the /etc/hosts
file.

If you configure multiple different hostnames for the same IP, the OS will
choose the first one when resolving a hostname from the IP. Similarly, the
first match wins when resolving an IP from a hostname.
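For example, an /etc/hosts laid out like this (a minimal sketch, reusing the
host names and addresses that appear later in this thread) keeps exactly one
set of names per IP, canonical name first:

127.0.0.1       localhost
192.168.6.10    tulip master
192.168.6.5     violet slave

You can check what the resolver actually returns with "getent hosts master"
and "getent hosts 192.168.6.10"; both should point at the same entry.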

Regards,
Vinayakumar B
On Sep 10, 2013 9:27 AM, "Chris Embree" <[EMAIL PROTECTED]> wrote:

> This sounds entirely like an OS-level problem and is slightly outside of
> the scope of this list. However, I'd suggest you look at your
> /etc/nsswitch.conf file and ensure that the hosts: line says
> hosts: files dns
>
> This will ensure that names are resolved first by /etc/hosts, then by DNS.
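> For example (just a sketch, using the "master" name from this thread), you
> can check the effective lookup order on each node with:
>
> getent hosts master
>
> getent goes through nsswitch.conf, so with "files dns" it should return the
> /etc/hosts entry before consulting DNS.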
>
> Please also ensure that all of your systems have the same configuration
> and that your NN, JT, SNN, etc. are all using the correct/same hostname.
>
> This is basic name resolution; please do not confuse it with a Hadoop
> issue, IMHO.
>
>
> On Mon, Sep 9, 2013 at 10:05 PM, Cipher Chen <[EMAIL PROTECTED]> wrote:
>
>> Sorry, I didn't express it well.
>>
>> conf/masters:
>> master
>>
>> conf/slaves:
>> master
>> slave
>>
>> The /etc/hosts file which caused the problem (start-dfs.sh failed):
>> 127.0.0.1       localhost
>> 192.168.6.10    localhost
>> ###
>>
>> 192.168.6.10    tulip master
>> 192.168.6.5     violet slave
>>
>> But when I commented out that line with a hash,
>> 127.0.0.1       localhost
>> #
>> 192.168.6.10    localhost
>> ###
>>
>> 192.168.6.10    tulip master
>> 192.168.6.5     violet slave
>>
>> The namenode starts successfully.
>> I can't figure out *why*.
>> How does Hadoop decide which host/hostname/IP to use as the namenode?
>>
>> BTW: Why would the namenode care about conf/masters and conf/slaves,
>> since the host that runs start-dfs.sh becomes the namenode?
>> The namenode doesn't need to check those confs.
>> Nodes listed in conf/masters would be the SecondaryNameNode, wouldn't they?
>> I
>>
>>
>> On Mon, Sep 9, 2013 at 10:39 PM, Jitendra Yadav <
>> [EMAIL PROTECTED]> wrote:
>>
>>> I mean your $HADOOP_HOME/conf/masters file content.
>>>
>>>
>>> On Mon, Sep 9, 2013 at 7:52 PM, Jay Vyas <[EMAIL PROTECTED]> wrote:
>>>
>>>> Jitendra: When you say "check your masters file content", what are
>>>> you referring to?
>>>>
>>>>
>>>> On Mon, Sep 9, 2013 at 8:31 AM, Jitendra Yadav <
>>>> [EMAIL PROTECTED]> wrote:
>>>>
>>>>> Also, can you please check your masters file content in the hadoop conf
>>>>> directory?
>>>>>
>>>>> Regards
>>>>> Jitendra
>>>>>
>>>>> On Mon, Sep 9, 2013 at 5:11 PM, Olivier Renault <
>>>>> [EMAIL PROTECTED]> wrote:
>>>>>
>>>>>> Could you confirm that you put the hash in front of 192.168.6.10
>>>>>> localhost?
>>>>>>
>>>>>> It should look like:
>>>>>>
>>>>>> # 192.168.6.10    localhost
>>>>>>
>>>>>> Thanks
>>>>>> Olivier
>>>>>>  On 9 Sep 2013 12:31, "Cipher Chen" <[EMAIL PROTECTED]>
>>>>>> wrote:
>>>>>>
>>>>>>> Hi everyone,
>>>>>>> I have solved a configuration problem of my own making in Hadoop
>>>>>>> cluster mode.
>>>>>>>
>>>>>>> I have configuration as below:
>>>>>>>
>>>>>>>   <property>
>>>>>>>     <name>fs.default.name</name>
>>>>>>>     <value>hdfs://master:54310</value>
>>>>>>>   </property>
>>>>>>>
>>>>>>> and the hosts file:
>>>>>>>
>>>>>>>
>>>>>>> /etc/hosts:
>>>>>>> 127.0.0.1       localhost
>>>>>>> 192.168.6.10    localhost
>>>>>>> ###
>>>>>>>
>>>>>>> 192.168.6.10    tulip master
>>>>>>> 192.168.6.5     violet slave
>>>>>>>
>>>>>>> and when I was trying to run start-dfs.sh, the namenode failed to
>>>>>>> start.
>>>>>>>
>>>>>>>
>>>>>>> namenode log hinted that:
>>>>>>> 13/09/09 17:09:02 INFO namenode.NameNode: Namenode up at: localhost/
>>>>>>> 192.168.6.10:54310
>>>>>>> ...
>>>>>>> 13/09/09 17:09:10 INFO ipc.Client: Retrying connect to server:
>>>>>>> localhost/127.0.0.1:54310. Already tried 0 time(s); retry policy is
>>>>>>> RetryUpToMaximumCountWithF>
>>>>>>> 13/09/09 17:09:11 INFO ipc.Client: Retrying connect to server:
>>>>>>> localhost/127.0.0.1:54310. Already tried 1 time(s); retry policy is
>>>>>>> RetryUpToMaximumCountWithF>
>>
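A footnote on the "Namenode up at: localhost/192.168.6.10" line above: the
following minimal Java sketch (not Hadoop code, just the standard resolver
calls, using the names and addresses from this thread) shows the forward and
reverse lookups involved, and why the duplicate localhost entry makes the
reverse lookup come back as "localhost".

import java.net.InetAddress;

public class HostsCheck {
    public static void main(String[] args) throws Exception {
        // Forward lookup: the host named in fs.default.name (hdfs://master:54310).
        InetAddress master = InetAddress.getByName("master");
        System.out.println("master -> " + master.getHostAddress());

        // Reverse lookup on that IP. With both "localhost" and "tulip master"
        // mapped to 192.168.6.10 in /etc/hosts, the first matching entry wins,
        // so this prints "localhost" -- matching the
        // "Namenode up at: localhost/192.168.6.10:54310" log line.
        InetAddress byIp = InetAddress.getByName(master.getHostAddress());
        System.out.println(master.getHostAddress() + " -> " + byIp.getCanonicalHostName());
    }
}

Anything then handed the name "localhost" resolves it to 127.0.0.1 first,
which would explain the "Retrying connect to server: localhost/127.0.0.1:54310"
retries in the log.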