Hadoop, mail # user - Namenode service not running on the Configured IP address


Re: Namenode service not running on the Configured IP address
anil gupta 2012-01-31, 22:37
Hi Harsh/Praveenesh,

Thanks for the reply, guys.

I forgot to mention that /etc/hosts already has the IP-to-hostname
mappings. And yes, I deliberately used the IP address in the configuration:
my initial configuration used the hostname only, but I thought that if I
used a hostname, Hadoop would have no way of knowing which network
interface it is supposed to run on.

Here is the content of the /etc/hosts file:
172.18.164.52  ihub-namenode1  # Added by NetworkManager
127.0.0.1       localhost.localdomain   localhost
::1     ihub-namenode1  localhost6.localdomain6 localhost6
172.18.164.52   ihub-dhcp-namenode1
192.168.1.98    ihub-jobtracker1
192.168.1.99    ihub-namenode1
192.168.1.100   ihub-dn-b1
192.168.1.101   ihub-dn-b2
192.168.1.102   ihub-dn-b3
192.168.1.103   ihub-dn-b4
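
Note that ihub-namenode1 appears twice above: once on the line added by
NetworkManager (172.18.164.52) and again on the intended line
(192.168.1.99). The resolver takes the first match in /etc/hosts, so the
hostname ends up resolving to the NetworkManager address. A quick way to
check this (a sketch; the output shown assumes the file above):

$ getent hosts ihub-namenode1
172.18.164.52   ihub-namenode1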

I *fixed this problem* by commenting out the first line above
(172.18.164.52  ihub-namenode1, the one added by NetworkManager) in the
/etc/hosts file. This is only a workaround, though: I will have to comment
that line out every time the system reboots or the network service is
restarted. Does anyone know a standard way to stop the system from
updating the /etc/hosts file?
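
On RHEL/CentOS-style systems that line is usually written by NetworkManager
when it brings up a DHCP-managed interface. A sketch of two possible fixes
(the interface name eth1 and its ifcfg path are assumptions, not something
from this thread):

# in /etc/sysconfig/network-scripts/ifcfg-eth1:
NM_CONTROLLED=no    # keep NetworkManager away from this interface

# or disable NetworkManager entirely in favor of the classic network service:
chkconfig NetworkManager off
chkconfig network on
service NetworkManager stop
service network start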

Thanks,
Anil

On Mon, Jan 30, 2012 at 9:21 PM, Harsh J <[EMAIL PROTECTED]> wrote:

> What does "host 192.168.1.99" output?
>
> (Also, slightly OT, but you need to fix this:)
>
> Do not use IPs in your fs location. Do the following instead:
>
> 1. Append an entry to /etc/hosts, across all nodes:
>
> 192.168.1.99 nn-host.remote nn-host
>
> 2. Set fs.default.name to "hdfs://nn-host.remote"
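>
> A minimal core-site.xml sketch of step 2 (the :8020 port is carried over
> from your existing config):
>
> <property>
>   <name>fs.default.name</name>
>   <value>hdfs://nn-host.remote:8020</value>
> </property>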
>
> On Tue, Jan 31, 2012 at 3:18 AM, anil gupta <[EMAIL PROTECTED]> wrote:
> > Hi All,
> >
> > I am using hadoop-0.20.2 and doing a fresh installation of a distributed
> > Hadoop cluster along with HBase. The nodes are virtualized, running on
> > top of a VMware ESXi 5.0 server.
> >
> > The VM on which namenode is running has two network interfaces.
> >  1.  HWaddr 00:0C:29:F8:59:5C
> >      IP address:192.168.1.99
> >
> > 2.  HWaddr: 00:0C:29:F8:59:52
> >     IP address:172.18.164.52
> >
> > Here is the core-site.xml file:
> > <property>
> > <name>fs.default.name</name>
> > <value>hdfs://192.168.1.99:8020</value>
> > </property>
> >
> > As per the above configuration, the namenode service should be running
> > on 192.168.1.99, but it keeps binding to the IP address 172.18.164.52.
> >
> > Am I missing any configuration parameters here?
> > Is there any way to bind Hadoop services to a specific ethernet card?
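> >
> > (A quick way to confirm which address the namenode RPC actually bound
> > to; a sketch assuming the default 8020 port, with a placeholder PID:)
> >
> > $ netstat -tlnp | grep 8020
> > tcp   0   0 172.18.164.52:8020   0.0.0.0:*   LISTEN   <pid>/java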
> >
> > Thanks in advance for help.
> > -Anil Gupta
>
>
>
> --
> Harsh J
> Customer Ops. Engineer, Cloudera
>

--
Thanks & Regards,
Anil Gupta