Re: Problem with Hadoop and /etc/hosts file
Michel Segel 2012-09-17, 03:08
Just a hunch: with DNS, do you have your rDNS (reverse DNS lookup) set up correctly?
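If you want to test it outside the cluster, here is a minimal sketch
(class name and the hard-coded IP are just illustrative) that performs
essentially the same JNDI PTR lookup as Hadoop's DNS.reverseDns:

import java.util.Hashtable;
import javax.naming.directory.Attributes;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;

public class ReverseDnsCheck {
    public static void main(String[] args) throws Exception {
        // Build the in-addr.arpa name for the node's IP (octets reversed).
        String ip = "10.220.55.41";
        String[] o = ip.split("\\.");
        String ptr = o[3] + "." + o[2] + "." + o[1] + "." + o[0] + ".in-addr.arpa";

        // Query the PTR record through the JNDI DNS provider; a server
        // that refuses the query (response code 5) shows up here as a
        // javax.naming.OperationNotSupportedException.
        Hashtable<String, String> env = new Hashtable<String, String>();
        env.put("java.naming.factory.initial", "com.sun.jndi.dns.DnsContextFactory");
        DirContext ctx = new InitialDirContext(env);
        Attributes attrs = ctx.getAttributes(ptr, new String[] { "PTR" });
        System.out.println(ptr + " -> " + attrs.get("PTR"));
        ctx.close();
    }
}

If this fails with the same exception, the problem is on the DNS server
side (a missing or refused PTR zone for your subnet) rather than in
Hadoop or HBase.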
On Sep 15, 2012, at 8:04 PM, Alberto Cordioli <[EMAIL PROTECTED]> wrote:
> This is the configuration I used till now... It works, but gives the
> mentioned error (although the procedure seems to return correct
> results anyway).
> I think /etc/hosts should also contain the line
> 127.0.0.1 hostname
> but in that case Hadoop does not start.
> On 14 September 2012 18:19, Shumin Wu <[EMAIL PROTECTED]> wrote:
>> Would that work for you?
>> 127.0.0.1 localhost
>> 10.220.55.41 hostname
>> On Fri, Sep 14, 2012 at 6:18 AM, Alberto Cordioli <[EMAIL PROTECTED]> wrote:
>>> I've successfully installed Apache HBase on a cluster with Hadoop.
>>> It works fine, but when I try to use Pig to load some data from an
>>> HBase table I get this error:
>>> ERROR org.apache.hadoop.hbase.mapreduce.TableInputFormatBase - Cannot
>>> resolve the host name for /10.220.55.41 because of
>>> javax.naming.OperationNotSupportedException: DNS service refused
>>> [response code 5]; remaining name '41.55.220.10.in-addr.arpa'
>>> Pig returns the correct results anyway (actually I don't know
>>> how), but I'd like to solve this issue.
>>> I discovered that this error is due to a mistake in the /etc/hosts
>>> configuration file. In fact, as reported in the documentation, I
>>> should add the line
>>> 127.0.0.1 hostname
>>> But if I add this entry my Hadoop cluster does not start, since the
>>> datanode binds to the local address instead of to the hostname/IP
>>> address. For this reason many tutorials suggest removing such an
>>> entry.
>>> Basically, if I add that line Hadoop won't work, but if I keep the
>>> file without that loopback entry I get the above error.
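>>> As a quick check, here is a minimal sketch (class name is just
>>> illustrative) of what the JVM resolves the local hostname to, which
>>> is effectively the address the datanode ends up binding:
>>>
>>> import java.net.InetAddress;
>>>
>>> public class BindAddressCheck {
>>>     public static void main(String[] args) throws Exception {
>>>         // With "127.0.0.1 hostname" in /etc/hosts this prints the
>>>         // loopback address; with "10.220.55.41 hostname" it prints
>>>         // the real interface address.
>>>         InetAddress local = InetAddress.getLocalHost();
>>>         System.out.println(local.getHostName() + " -> " + local.getHostAddress());
>>>     }
>>> }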
>>> What can I do? Which is the right configuration?
>>> Alberto Cordioli
> Alberto Cordioli