武泽胜 2013-07-29, 13:21
One reason is that the lists used to accept or reject DataNodes contain hostnames. If DNS temporarily can't resolve an IP, an unauthorized DataNode might slip back into the cluster, or a decommissioning node might go back into service.
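As a minimal sketch of the lists the reply refers to: stock HDFS reads an include file and an exclude file of hostnames via the `dfs.hosts` and `dfs.hosts.exclude` properties (the file paths below are assumptions, not anything from this thread):

```xml
<!-- hdfs-site.xml: hostname-based DataNode admission lists -->
<property>
  <name>dfs.hosts</name>
  <!-- assumed path; file lists hostnames allowed to register -->
  <value>/etc/hadoop/conf/dfs.include</value>
</property>
<property>
  <name>dfs.hosts.exclude</name>
  <!-- assumed path; file lists hostnames being decommissioned -->
  <value>/etc/hadoop/conf/dfs.exclude</value>
</property>
```

Because these files hold names rather than addresses, the namenode has to resolve each connecting node to a hostname before it can check membership, which is where a DNS hiccup could let the wrong node through.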
On Jul 29, 2013, at 8:21 AM, 武泽胜 wrote:
I have the same confusion; any reply would be much appreciated.
From: Elazar Leibovich <[EMAIL PROTECTED]>
Reply-To: "[EMAIL PROTECTED]" <[EMAIL PROTECTED]>
Date: Thursday, July 25, 2013 3:51 AM
To: user <[EMAIL PROTECTED]>
Subject: Why Hadoop force using DNS?
Looking at the Hadoop source, you can see that Hadoop relies on the fact that each node has a resolvable name.
For example, the Hadoop 2 namenode does a reverse lookup of each node that connects to it. Also, there's no way to tell a datanode to advertise an IP as its address. Setting datanode.network.interface to, say, eth1, would cause Hadoop to reverse-look-up the IPs on eth1 and advertise the result.
Why is that? Using plain IPs is simple to set up, and I can't see a reason not to support them.
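To make the reverse-lookup step concrete, here is a small sketch in Python (not Hadoop's actual code, which is Java) of roughly what the namenode does when a node connects: map the peer's IP back to a hostname via a reverse DNS query. The function name is mine, not anything from Hadoop:

```python
import socket

def reverse_lookup(ip):
    """Roughly mimic the namenode's step of resolving a connecting
    node's IP back to a hostname via reverse DNS (a PTR lookup)."""
    try:
        hostname, _aliases, _addrs = socket.gethostbyaddr(ip)
        return hostname
    except socket.herror:
        # No reverse record exists: this is exactly the case where
        # a plain-IP setup with no DNS (or /etc/hosts) entries breaks.
        return None

# 127.0.0.1 usually resolves via /etc/hosts; an unmapped IP returns None.
print(reverse_lookup("127.0.0.1"))
```

If the lookup fails, Hadoop has no stable name to match against its hostname-based configuration, which is the behavior the question is asking about.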