Re: Does Hadoop Require Public IP address to create a cluster.
On Wed, Oct 17, 2012 at 2:04 PM, Sundeep Kambhmapati
<[EMAIL PROTECTED]> wrote:
> I am trying to install Hadoop 0.20.2 on a cluster of two virtual machines,
> one acting as master, the other as slave.
> I am able to ssh from master to slave and vice versa. But when I run
> start-dfs.sh the namenode does not start.
> I checked the namenode log; it says:
> org.apache.hadoop.hdfs.server.namenode.NameNode: java.net.BindException:
> Problem binding to sk.r252.0/10.2.252.0:54310 : Cannot assign requested
> address
>
> 10.2.252.0 is the private IP address of the virtual machine in the cluster
> (the master, sk.r252.0).
> Does Hadoop require all the nodes in the cluster to have separate public
> IP addresses to set up a Hadoop cluster?

Hadoop does not require public IP addresses. I routinely run a
multinode cluster on 192.168.x.y using several VMs in a "NAT"
configuration.
This configuration works best if you have consistent hostnames and IP
addresses.  I use <host0>.local, <host1>.local, etc., with Avahi providing
name resolution for the .local TLD, but you can also hardcode IP
addresses in /etc/hosts on each node or use DNS.  Try "ping
<hostname>" to see if name lookup is working.

Your hostnames look very odd; having ".0" as the top-level domain is
likely to confuse things.  Try changing it to .internal or something
along those lines.
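
For example, on the master you could do something like this (the new name
is only a suggestion):

  $ hostname                       # probably prints sk.r252.0 right now
  $ sudo hostname master.internal  # avoid a numeric top-level domain

and make it permanent (/etc/hostname or /etc/sysconfig/network, depending
on the distro), then update /etc/hosts and your Hadoop config to match.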

The "cannot assign requested address" looks like your namenode
configuration does not match the IP addresses on the node you're
trying to start the namenode on.  Double-check your master file,
/etc/hosts and "ifconfig -a" settings, and hdfs-site.xml settings.
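
For reference, the 54310 in your log is the namenode RPC port; in 0.20.x
that address is set by fs.default.name in core-site.xml, so check that
file as well.  A minimal sketch, assuming the placeholder hostname above
(use whatever name resolves to an address "ifconfig -a" actually shows on
the namenode box):

  <?xml version="1.0"?>
  <!-- conf/core-site.xml, same on every node -->
  <configuration>
    <property>
      <name>fs.default.name</name>
      <value>hdfs://master.internal:54310</value>
    </property>
  </configuration>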

-andy