Re: hadoop cares about /etc/hosts ?
Hi all,
  Thanks for all your replies and guidance, although I haven't figured out
why yet. :)
On Wed, Sep 11, 2013 at 4:03 PM, Jitendra Yadav
<[EMAIL PROTECTED]> wrote:

> Hi,
>
> So what were you expecting while pinging master?
>
> As per my understanding, it is working fine. There is no sense in mapping
> both localhost and a real hostname to the same IP; for localhost it's
> always preferred to use the loopback address, i.e. 127.0.0.1.
>
> Hope this helps.
>
> Regards
> Jitendra
> On Wed, Sep 11, 2013 at 7:05 AM, Cipher Chen <[EMAIL PROTECTED]> wrote:
>
>>  So for the first *wrong* /etc/hosts file, the sequence would be:
>> find hdfs://master:54310
>> find master -> 192.168.6.10 (*but it already has the IP here*)
>> find 192.168.6.10 -> localhost
>> find localhost -> 127.0.0.1
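>>
>> A minimal sketch (plain Java, nothing Hadoop-specific; "master" and the
>> address are just the ones from my /etc/hosts) to see what the JVM's
>> resolver returns for the forward and reverse lookups:
>>
>> import java.net.InetAddress;
>>
>> public class LookupProbe {
>>     public static void main(String[] args) throws Exception {
>>         // forward lookup: hostname -> IP
>>         InetAddress fwd = InetAddress.getByName("master");
>>         System.out.println("master -> " + fwd.getHostAddress());
>>
>>         // reverse lookup: IP -> canonical hostname
>>         InetAddress rev = InetAddress.getByName("192.168.6.10");
>>         System.out.println("192.168.6.10 -> " + rev.getCanonicalHostName());
>>     }
>> }
>>
>> With the *wrong* hosts file above, I'd expect the reverse lookup to print
>> "localhost", which then resolves back to 127.0.0.1.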
>>
>>
>> The other thing: when I 'ping master', I get a reply from
>> '192.168.6.10' instead of 127.0.0.1.
>> So it's not simply name resolution at the OS level. Or am I totally
>> wrong?
>>
>>
>>
>> On Tue, Sep 10, 2013 at 11:13 PM, Vinayakumar B <
>> [EMAIL PROTECTED]> wrote:
>>
>>> Ensure that for each IP there is only one hostname configured in the
>>> /etc/hosts file.
>>>
>>> If you configure multiple different hostnames for the same IP, the OS
>>> will choose the first one when finding the hostname for an IP, and
>>> similarly when finding the IP for a hostname.
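>>>
>>> For example, one line per IP, with any extra names as aliases on the
>>> same line (a sketch using the addresses from your mail):
>>>
>>> 127.0.0.1       localhost
>>> 192.168.6.10    tulip    master
>>> 192.168.6.5     violet   slave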
>>>
>>> Regards,
>>> Vinayakumar B
>>>  On Sep 10, 2013 9:27 AM, "Chris Embree" <[EMAIL PROTECTED]> wrote:
>>>
>>>> This sounds entirely like an OS-level problem and is slightly outside
>>>> the scope of this list. However, I'd suggest you look at your
>>>> /etc/nsswitch.conf file and ensure that the hosts: line says
>>>> hosts: files dns
>>>>
>>>> This will ensure that names are resolved first by /etc/hosts, then by
>>>> DNS.
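>>>>
>>>> You can check what the resolver actually returns, honoring the
>>>> nsswitch.conf order, with getent, e.g.:
>>>>
>>>> getent hosts master
>>>> getent hosts 192.168.6.10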
>>>>
>>>> Please also ensure that all of your systems have the same configuration
>>>> and that your NN, JT, SNN, etc. are all using the correct/same hostname.
>>>>
>>>> This is basic name resolution; please do not confuse it with a Hadoop
>>>> issue, IMHO.
>>>>
>>>>
>>>> On Mon, Sep 9, 2013 at 10:05 PM, Cipher Chen <[EMAIL PROTECTED]
>>>> > wrote:
>>>>
>>>>>  Sorry I didn't express it well.
>>>>>
>>>>> conf/masters:
>>>>> master
>>>>>
>>>>> conf/slaves:
>>>>> master
>>>>> slave
>>>>>
>>>>> The /etc/hosts file that caused the problem (start-dfs.sh failed):
>>>>>  127.0.0.1       localhost
>>>>>  192.168.6.10    localhost
>>>>> ###
>>>>>
>>>>> 192.168.6.10    tulip master
>>>>> 192.168.6.5     violet slave
>>>>>
>>>>> But when I commented out that line with a hash:
>>>>> 127.0.0.1       localhost
>>>>> #
>>>>> 192.168.6.10    localhost
>>>>> ###
>>>>>
>>>>> 192.168.6.10    tulip master
>>>>> 192.168.6.5     violet slave
>>>>>
>>>>> The namenode starts successfully.
>>>>> I can't figure out *why*.
>>>>> How does hadoop decide which host/hostname/IP will be the namenode?
>>>>>
>>>>> BTW: how could the namenode care about conf/masters and conf/slaves,
>>>>> since it's the host that runs start-dfs.sh that becomes the namenode?
>>>>> The namenode doesn't need to check those confs.
>>>>> Nodes listed in conf/masters would be the SecondaryNameNode, wouldn't
>>>>> they?
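>>>>>
>>>>> For reference, my conf/core-site.xml points fs.default.name at the
>>>>> URI mentioned above (so, as I understand it, this is what should
>>>>> decide the namenode address, not conf/masters):
>>>>>
>>>>> <property>
>>>>>   <name>fs.default.name</name>
>>>>>   <value>hdfs://master:54310</value>
>>>>> </property>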
>>>>>
>>>>>
>>>>> On Mon, Sep 9, 2013 at 10:39 PM, Jitendra Yadav <
>>>>> [EMAIL PROTECTED]> wrote:
>>>>>
>>>>>> I mean your $HADOOP_HOME/conf/masters file content.
>>>>>>
>>>>>>
>>>>>> On Mon, Sep 9, 2013 at 7:52 PM, Jay Vyas <[EMAIL PROTECTED]> wrote:
>>>>>>
>>>>>>> Jitendra: when you say "check your masters file content", what are
>>>>>>> you referring to?
>>>>>>>
>>>>>>>
>>>>>>> On Mon, Sep 9, 2013 at 8:31 AM, Jitendra Yadav <
>>>>>>> [EMAIL PROTECTED]> wrote:
>>>>>>>
>>>>>>>> Also, can you please check your masters file content in the hadoop
>>>>>>>> conf directory?
>>>>>>>>
>>>>>>>> Regards
>>>>>>>> Jitendra
>>>>>>>>
>>>>>>>> On Mon, Sep 9, 2013 at 5:11 PM, Olivier Renault <
>>>>>>>> [EMAIL PROTECTED]> wrote:
>>>>>>>>
>>>>>>>>> Could you confirm that you put the hash in front of
>>>>>>>>> 192.168.6.10    localhost
Cipher Chen