Re: Problem with Hadoop and /etc/hosts file
Would this work for you?

127.0.0.1       localhost
10.220.55.41    hostname
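
A quick way to sanity-check the mapping in both directions is getent,
which reads /etc/hosts through the normal resolver (the IP and hostname
above are of course placeholders for your own values):

getent hosts hostname        # expect: 10.220.55.41   hostname
getent hosts 10.220.55.41    # expect: 10.220.55.41   hostname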

-Shumin

On Fri, Sep 14, 2012 at 6:18 AM, Alberto Cordioli <[EMAIL PROTECTED]> wrote:

> Hi,
>
> I've successfully installed Apache HBase on a cluster with Hadoop.
> It works fine, but when I try to use Pig to load some data from an
> HBase table I get this error:
>
> ERROR org.apache.hadoop.hbase.mapreduce.TableInputFormatBase - Cannot
> resolve the host name for /10.220.55.41 because of
> javax.naming.OperationNotSupportedException: DNS service refused
> [response code 5]; remaining name '41.55.220.10.in-addr.arpa'
>
> Pig nevertheless returns the correct results (I actually don't know
> how), but I'd like to solve this issue.
>
> I discovered that this error is due to a mistake in the /etc/hosts
> configuration file. As reported in the HBase documentation
> (http://hbase.apache.org/book.html#os), I should add the line:
>
> 127.0.0.1    hostname
>
> But if I add this entry, my Hadoop cluster does not start, since the
> datanode binds to the loopback address instead of to the hostname/IP
> address. For this reason many tutorials suggest removing such an
> entry (e.g.
>
> http://stackoverflow.com/questions/8872807/hadoop-datanodes-cannot-find-namenode
> ).
>
> Basically, if I add that line Hadoop won't work, but if I leave it out
> I get the above error.
> What can I do? Which is the right configuration?
>
>
> Thanks,
> Alberto
>
> --
> Alberto Cordioli
>
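
As a side note, a quick way to see the loopback-binding problem described
above is to check which address the NameNode actually listens on (a rough
sketch; 9000 stands in for whatever port your fs.default.name uses):

netstat -tlnp | grep 9000
# With "127.0.0.1  hostname" in /etc/hosts, the NameNode resolves its own
# name to loopback and this shows 127.0.0.1:9000, which remote datanodes
# cannot reach; with the mapping suggested above it should show
# 10.220.55.41:9000 instead.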