Thank you all for the comments.
Setting dfs.datanode.dns.interface and putting private IPs in the slaves and
masters files didn't work.
So, as Alex said, I changed all public IP mappings to hostnames in the /etc/hosts
file, and all datanodes now communicate over the private network.
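For reference, the /etc/hosts entries on each node now look roughly like this (the hostnames and addresses below are made up for illustration):

```
# /etc/hosts on each node -- map cluster hostnames to *private* (eth1) addresses
10.0.0.11   hadoop-master
10.0.0.21   hadoop-slave1
10.0.0.22   hadoop-slave2
```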
But I'm not fully satisfied, since in some situations I would want hostnames to
be mapped to public IPs while Hadoop still communicates over the private
network. I don't understand why dfs.datanode.dns.interface has no effect.
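To be concrete, this is the kind of setting I tried in hdfs-site.xml (eth1 is the private interface on my nodes; I'd expect datanodes to report their eth1 address with this in place):

```xml
<!-- hdfs-site.xml: ask datanodes to determine their hostname
     via the private interface (eth1 on my nodes) -->
<property>
  <name>dfs.datanode.dns.interface</name>
  <value>eth1</value>
</property>
```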
One interesting thing I found is that if I change dfs.default.name from the
private IP to the public one, all datanodes then report themselves with public
IPs. So confusing. Why?
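(For clarity, I mean this property; in Hadoop 1.x it normally lives in core-site.xml under the name fs.default.name, and the address below is just an example:)

```xml
<!-- core-site.xml: the NameNode URI; switching this between the private
     and public address changes how datanodes report themselves -->
<property>
  <name>fs.default.name</name>
  <value>hdfs://10.0.0.11:9000</value>
</property>
```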
BTW, I'm using Hadoop 1.0.3, with no nameserver and no firewalls.
On Fri, Jul 12, 2013 at 12:29 PM, Alex Levin <[EMAIL PROTECTED]> wrote:
> Make sure that your hostnames resolve (via DNS and/or hosts files) to
> private IPs.
> If you have records in the nodes' hosts files like
> "public IP" hostname
> remove (or comment out) them.
> On Jul 11, 2013 2:21 AM, "Ben Kim" <[EMAIL PROTECTED]> wrote:
>> Hello Hadoop Community!
>> I've set up datanodes on a private network by adding private hostnames to
>> the slaves file.
>> But when I look at the web UI, it seems the datanodes are registered
>> with public hostnames.
>> Are they actually communicating over the public network?
>> All datanodes have eth0 with a public address and eth1 with a private address.
>> What am I missing?
>> Thanks a whole lot
>> *Benjamin Kim*
>> *benkimkimben at gmail*