Problem with Hadoop and /etc/hosts file (HBase user mailing list)


Earlier messages in this thread:
  Alberto Cordioli 2012-09-14, 13:18
  Shumin Wu 2012-09-14, 16:19
  Alberto Cordioli 2012-09-16, 01:04
Re: Problem with Hadoop and /etc/hosts file
Just a hunch: with DNS, do you have your rDNS (reverse DNS lookup) set up correctly?
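
One quick way to check is to issue essentially the same PTR query that the failing lookup below makes; the error shows it goes through javax.naming. A minimal, untested sketch in Java (the reverse-map name is copied from the error message in this thread):

import java.util.Hashtable;
import javax.naming.directory.Attributes;
import javax.naming.directory.InitialDirContext;

public class ReverseDnsCheck {
    public static void main(String[] args) throws Exception {
        Hashtable<String, String> env = new Hashtable<String, String>();
        env.put("java.naming.factory.initial",
                "com.sun.jndi.dns.DnsContextFactory");
        // "dns://" with no host means: use the system's default name servers.
        env.put("java.naming.provider.url", "dns://");
        InitialDirContext ctx = new InitialDirContext(env);
        // Reverse-map name copied from the error message in this thread.
        Attributes attrs = ctx.getAttributes("41.55.220.10.in-addr.arpa",
                                             new String[] { "PTR" });
        System.out.println(attrs.get("PTR"));
    }
}

If rDNS is not set up, this should fail with a javax.naming exception much like the one in the error below.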

Sent from a remote device. Please excuse any typos...

Mike Segel

On Sep 15, 2012, at 8:04 PM, Alberto Cordioli <[EMAIL PROTECTED]> wrote:

> This is the configuration I used till now... It works, but gives the
> mentioned error (although the procedure seems to return correct
> results anyway).
> I think /etc/hosts should also contain the line
> 127.0.0.1 hostname
>
> but in that case Hadoop does not start.
>
> Alberto
>
> On 14 September 2012 18:19, Shumin Wu <[EMAIL PROTECTED]> wrote:
>> Would that work for you?
>>
>> 127.0.0.1     localhost
>> 10.220.55.41  hostname
>>
>> -Shumin
>>
>> On Fri, Sep 14, 2012 at 6:18 AM, Alberto Cordioli <
>> [EMAIL PROTECTED]> wrote:
>>
>>> Hi,
>>>
>>> I've successfully installed Apache HBase on a cluster with Hadoop.
>>> It works fine, but when I try to use Pig to load some data from an
>>> HBase table I get this error:
>>>
>>> ERROR org.apache.hadoop.hbase.mapreduce.TableInputFormatBase - Cannot
>>> resolve the host name for /10.220.55.41 because of
>>> javax.naming.OperationNotSupportedException: DNS service refused
>>> [response code 5]; remaining name '41.55.220.10.in-addr.arpa'
>>>
>>> In any case Pig returns the correct results (actually I don't know
>>> how), but I'd like to solve this issue.
>>>
>>> I discovered that this error is due to a mistake in the /etc/hosts
>>> configuration file. In fact, as reported in the documentation, I
>>> should add the line
>>> 127.0.0.1    hostname
>>> (http://hbase.apache.org/book.html#os).
>>>
>>> But if I add this entry my Hadoop cluster does not start, since the
>>> datanode binds to the local address instead of to the hostname/IP
>>> address. For this reason many tutorials suggest removing such an
>>> entry (e.g.
>>>
>>> http://stackoverflow.com/questions/8872807/hadoop-datanodes-cannot-find-namenode
>>> ).
>>>
>>> Basically if I add that line Hadoop won't work, but if I keep the file
>>> without the loopback address I get the above error.
>>> What can I do? Which is the right configuration?
>>>
>>>
>>> Thanks,
>>> Alberto
>>>
>>>
>>>
>>>
>>> --
>>> Alberto Cordioli
>>>
>
>
>
> --
> Alberto Cordioli
>
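
For reference, a minimal /etc/hosts along the lines Shumin suggested above (hostname and IP taken from this thread; adjust for your own nodes), keeping the loopback address mapped to localhost only and the machine's hostname mapped to its routable IP:

127.0.0.1     localhost
10.220.55.41  hostname

With that layout the DataNode binds to the routable address rather than to loopback. The remaining "DNS service refused" error comes from a PTR query against the configured name server, so clearing it likely means adding a reverse (PTR) record for 10.220.55.41 on the DNS side; an /etc/hosts entry alone may not satisfy that lookup.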
Later messages in this thread:
  shashwat shriparv 2012-09-17, 05:47
  Alberto Cordioli 2012-09-17, 07:39
  shashwat shriparv 2012-09-17, 09:54
  Alberto Cordioli 2012-09-17, 10:09
  Stack 2012-09-17, 16:17