

Olivier Renault 2013-09-09, 11:41
Jitendra Yadav 2013-09-09, 12:31
Jay Vyas 2013-09-09, 14:22
Jitendra Yadav 2013-09-09, 14:39
Shahab Yunus 2013-09-09, 14:26
Chris Embree 2013-09-10, 03:56
Vinayakumar B 2013-09-10, 15:13

Re: hadoop cares about /etc/hosts ?
So for the first *wrong* /etc/hosts file, the sequence would be:
find hdfs://master:54310
find master -> 192.168.6.10 (*but it already got the ip here*)
find 192.168.6.10 -> localhost
find localhost -> 127.0.0.1
The other thing: when I 'ping master', I get a reply from '192.168.6.10'
instead of from 127.0.0.1.
So it's not simply name resolution at the OS level. Or am I totally wrong?
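
A quick way to see both lookups from the JVM's side (roughly what
Hadoop's RPC layer ends up doing) is java.net.InetAddress; a minimal
sketch, with the class name made up for illustration:

import java.net.InetAddress;
import java.net.URI;

public class ResolveCheck {
    public static void main(String[] args) throws Exception {
        // Host part of fs.default.name, e.g. hdfs://master:54310
        String host = URI.create("hdfs://master:54310").getHost();

        // Forward lookup: hostname -> ip; with "hosts: files dns" the
        // first matching line in /etc/hosts wins
        InetAddress addr = InetAddress.getByName(host);
        System.out.println(host + " -> " + addr.getHostAddress());

        // Reverse lookup: ip -> hostname; again the first match wins
        System.out.println(addr.getHostAddress() + " -> "
                + addr.getCanonicalHostName());
    }
}

With the *wrong* hosts file, the reverse lookup should print localhost,
which matches the sequence above.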

On Tue, Sep 10, 2013 at 11:13 PM, Vinayakumar B
<[EMAIL PROTECTED]> wrote:

> Ensure that for each ip there is only one hostname configured in
> /etc/hosts file.
>
> If you configure multiple different hostnames for the same ip, then the
> os will choose the first one when finding the hostname for an ip, and
> similarly when finding the ip for a hostname.
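>
> For example (a sketch using the addresses from this thread), each ip on
> exactly one line, canonical name first, aliases after:
>
> 127.0.0.1       localhost
> 192.168.6.10    tulip master
> 192.168.6.5     violet slave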
>
> Regards,
> Vinayakumar B
> On Sep 10, 2013 9:27 AM, "Chris Embree" <[EMAIL PROTECTED]> wrote:
>
>> This sounds entirely like an OS-level problem and is slightly outside
>> the scope of this list. However, I'd suggest you look at your
>> /etc/nsswitch.conf file and ensure that the hosts: line says
>> hosts: files dns
>>
>> This will ensure that names are resolved first by /etc/hosts, then by DNS.
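>>
>> On glibc-based systems you can check what that lookup order actually
>> returns with getent, which consults /etc/nsswitch.conf the same way
>> the resolver does:
>>
>> getent hosts master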
>>
>> Please also ensure that all of your systems have the same configuration
>> and that your NN, JT, SNN, etc. are all using the correct/same hostname.
>>
>> This is basic name resolution; please do not confuse it with a Hadoop
>> issue, IMHO.
>>
>>
>> On Mon, Sep 9, 2013 at 10:05 PM, Cipher Chen <[EMAIL PROTECTED]> wrote:
>>
>>> Sorry, I didn't express it well.
>>>
>>> conf/masters:
>>> master
>>>
>>> conf/slaves:
>>> master
>>> slave
>>>
>>> The /etc/hosts file which caused the problem (start-dfs.sh failed):
>>> 127.0.0.1       localhost
>>> 192.168.6.10    localhost
>>> ###
>>>
>>> 192.168.6.10    tulip master
>>> 192.168.6.5     violet slave
>>>
>>> But when I commented out that line with a hash:
>>> 127.0.0.1       localhost
>>> # 192.168.6.10    localhost
>>> ###
>>>
>>> 192.168.6.10    tulip master
>>> 192.168.6.5     violet slave
>>>
>>> The namenode starts successfully.
>>> I can't figure out *why*.
>>> How does hadoop decide which host/hostname/ip to be the namenode?
>>>
>>> BTW: how could the namenode care about conf/masters and conf/slaves,
>>> since it's the host that runs start-dfs.sh that becomes the namenode?
>>> The namenode doesn't need to check those confs.
>>> Nodes listed in conf/masters would be the SecondaryNameNode, wouldn't
>>> they?
>>>
>>>
>>> On Mon, Sep 9, 2013 at 10:39 PM, Jitendra Yadav <
>>> [EMAIL PROTECTED]> wrote:
>>>
>>>> Means your $HADOOP_HOME/conf/masters file content.
>>>>
>>>>
>>>> On Mon, Sep 9, 2013 at 7:52 PM, Jay Vyas <[EMAIL PROTECTED]> wrote:
>>>>
>>>>> Jitendra:  When you say " check your masters file content"  what are
>>>>> you referring to?
>>>>>
>>>>>
>>>>> On Mon, Sep 9, 2013 at 8:31 AM, Jitendra Yadav <
>>>>> [EMAIL PROTECTED]> wrote:
>>>>>
>>>>>> Also can you please check your masters file content in hadoop conf
>>>>>> directory?
>>>>>>
>>>>>> Regards
>>>>>> Jitendra
>>>>>>
>>>>>> On Mon, Sep 9, 2013 at 5:11 PM, Olivier Renault <
>>>>>> [EMAIL PROTECTED]> wrote:
>>>>>>
>>>>>>> Could you confirm that you put the hash in front of 192.168.6.10
>>>>>>> localhost?
>>>>>>>
>>>>>>> It should look like
>>>>>>>
>>>>>>> # 192.168.6.10    localhost
>>>>>>>
>>>>>>> Thanks
>>>>>>> Olivier
>>>>>>>  On 9 Sep 2013 12:31, "Cipher Chen" <[EMAIL PROTECTED]>
>>>>>>> wrote:
>>>>>>>
>>>>>>>>   Hi everyone,
>>>>>>>>   I have just solved a configuration problem of my own making in
>>>>>>>> Hadoop cluster mode.
>>>>>>>>
>>>>>>>> I have configuration as below:
>>>>>>>>
>>>>>>>>   <property>
>>>>>>>>     <name>fs.default.name</name>
>>>>>>>>     <value>hdfs://master:54310</value>
>>>>>>>>   </property>
>>>>>>>>
>>>>>>>> and the hosts file:
>>>>>>>>
>>>>>>>>
>>>>>>>> /etc/hosts:
>>>>>>>> 127.0.0.1       localhost
>>>>>>>>  192.168.6.10    localhost
>>>>>>>> ###
>>>>>>>>
>>>>>>>> 192.168.6.10    tulip master
>>>>>>>> 192.168.6.5     violet slave
>>>>>>>>
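>>>>>>>> A minimal sketch of how a client picks this value up (assuming
>>>>>>>> the stock Hadoop 1.x client API; the class name FsCheck is made
>>>>>>>> up for illustration):
>>>>>>>>
>>>>>>>> import org.apache.hadoop.conf.Configuration;
>>>>>>>> import org.apache.hadoop.fs.FileSystem;
>>>>>>>>
>>>>>>>> public class FsCheck {
>>>>>>>>     public static void main(String[] args) throws Exception {
>>>>>>>>         // Loads core-site.xml (and hence fs.default.name)
>>>>>>>>         // from the classpath
>>>>>>>>         Configuration conf = new Configuration();
>>>>>>>>         // Parses fs.default.name as a URI and resolves its host
>>>>>>>>         // part ("master") through normal OS name resolution,
>>>>>>>>         // i.e. through /etc/hosts here
>>>>>>>>         FileSystem fs = FileSystem.get(conf);
>>>>>>>>         System.out.println("connected to: " + fs.getUri());
>>>>>>>>     }
>>>>>>>> }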

Cipher Chen

Jitendra Yadav 2013-09-11, 08:03
Cipher Chen 2013-09-12, 01:41