MapReduce, mail # user - Re: hadoop cares about /etc/hosts ?


Re: hadoop cares about /etc/hosts ?
Jitendra Yadav 2013-09-09, 12:31
Also, can you please check the content of your masters file in the Hadoop conf
directory?

Regards
Jitendra
On Mon, Sep 9, 2013 at 5:11 PM, Olivier Renault <[EMAIL PROTECTED]> wrote:

> Could you confirm that you put the hash in front of the 192.168.6.10
> localhost line?
>
> It should look like
>
> # 192.168.6.10    localhost
>
> Thanks
> Olivier
> On 9 Sep 2013 12:31, "Cipher Chen" <[EMAIL PROTECTED]> wrote:
>
>>   Hi everyone,
>>   I have just solved a configuration problem of my own making in Hadoop
>> cluster mode.
>>
>> I have the following configuration:
>>
>>   <property>
>>     <name>fs.default.name</name>
>>     <value>hdfs://master:54310</value>
>>   </property>
>>
>> and the hosts file:
>>
>>
>> /etc/hosts:
>> 127.0.0.1       localhost
>> 192.168.6.10    localhost  ###
>>
>> 192.168.6.10    tulip master
>> 192.168.6.5     violet slave
>>
>> and when I was trying to run start-dfs.sh, the namenode failed to start.
>>
>>
>> The namenode log hinted:
>> 13/09/09 17:09:02 INFO namenode.NameNode: Namenode up at: localhost/
>> 192.168.6.10:54310
>> ...
>> 13/09/09 17:09:10 INFO ipc.Client: Retrying connect to server: localhost/
>> 127.0.0.1:54310. Already tried 0 time(s); retry policy is
>> RetryUpToMaximumCountWithF>
>> 13/09/09 17:09:11 INFO ipc.Client: Retrying connect to server: localhost/
>> 127.0.0.1:54310. Already tried 1 time(s); retry policy is
>> RetryUpToMaximumCountWithF>
>> 13/09/09 17:09:12 INFO ipc.Client: Retrying connect to server: localhost/
>> 127.0.0.1:54310. Already tried 2 time(s); retry policy is
>> RetryUpToMaximumCountWithF>
>> 13/09/09 17:09:13 INFO ipc.Client: Retrying connect to server: localhost/
>> 127.0.0.1:54310. Already tried 3 time(s); retry policy is
>> RetryUpToMaximumCountWithF>
>> 13/09/09 17:09:14 INFO ipc.Client: Retrying connect to server: localhost/
>> 127.0.0.1:54310. Already tried 4 time(s); retry policy is
>> RetryUpToMaximumCountWithF>
>> 13/09/09 17:09:15 INFO ipc.Client: Retrying connect to server: localhost/
>> 127.0.0.1:54310. Already tried 5 time(s); retry policy is
>> RetryUpToMaximumCountWithF>
>> 13/09/09 17:09:16 INFO ipc.Client: Retrying connect to server: localhost/
>> 127.0.0.1:54310. Already tried 6 time(s); retry policy is
>> RetryUpToMaximumCountWithF>
>> 13/09/09 17:09:17 INFO ipc.Client: Retrying connect to server: localhost/
>> 127.0.0.1:54310. Already tried 7 time(s); retry policy is
>> RetryUpToMaximumCountWithF>
>> 13/09/09 17:09:18 INFO ipc.Client: Retrying connect to server: localhost/
>> 127.0.0.1:54310. Already tried 8 time(s); retry policy is
>> RetryUpToMaximumCountWithF>
>> 13/09/09 17:09:19 INFO ipc.Client: Retrying connect to server: localhost/
>> 127.0.0.1:54310. Already tried 9 time(s); retry policy is
>> RetryUpToMaximumCountWithF>
>> ...
>>
>> Now I know that deleting the line "192.168.6.10    localhost  ###"
>> would fix this.
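
[For reference, the working /etc/hosts implied by the thread (the stray
line commented out, as Olivier suggested, with the same addresses) would
look like:]

```
127.0.0.1       localhost
# 192.168.6.10    localhost

192.168.6.10    tulip master
192.168.6.5     violet slave
```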
>> But I still don't know why Hadoop would resolve "master" to
>> "localhost/127.0.0.1".
>>
>>
>> It seems http://blog.devving.com/why-does-hbase-care-about-etchosts/
>> explains this, but I'm not quite sure.
>> Is there any other explanation for this?
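
[The behaviour asked about above comes down to /etc/hosts being scanned
top to bottom, with the first matching entry winning for both forward
(name to address) and reverse (address to name) lookups. Below is a toy
sketch of that rule in Python, using the hosts file from the thread; the
parser and helper functions are illustrative assumptions, not the real
glibc resolver or Hadoop code.]

```python
# Toy model of first-match /etc/hosts lookup semantics.
# Not the real resolver: just enough to show why the stray
# "192.168.6.10  localhost" line flips the name seen by Hadoop.

HOSTS = """\
127.0.0.1       localhost
192.168.6.10    localhost
192.168.6.10    tulip master
192.168.6.5     violet slave
"""

def parse(hosts_text):
    """Parse hosts-file text into an ordered list of (addr, [names])."""
    entries = []
    for line in hosts_text.splitlines():
        line = line.split("#")[0].strip()  # drop comments and blanks
        if not line:
            continue
        addr, *names = line.split()
        entries.append((addr, names))
    return entries

def forward(entries, name):
    """First address whose entry lists `name` (like a forward lookup)."""
    for addr, names in entries:
        if name in names:
            return addr
    return None

def reverse(entries, addr):
    """First name listed for `addr` (like a reverse lookup)."""
    for a, names in entries:
        if a == addr:
            return names[0]
    return None

entries = parse(HOSTS)
# "master" still resolves to the right address...
print(forward(entries, "master"))        # 192.168.6.10
# ...but reverse-resolving that address hits the stray line first,
print(reverse(entries, "192.168.6.10"))  # localhost
# and "localhost" then resolves back to the loopback address.
print(forward(entries, "localhost"))     # 127.0.0.1
```

[That chain would match the log above: the namenode reports itself as
localhost/192.168.6.10, while clients resolving "localhost" end up
retrying 127.0.0.1:54310.]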
>>
>> Thanks.
>>
>>
>>  --
>> Cipher Chen
>>
>