Re: Namenode trying to connect to localhost instead of the name and dying
Thank you, Eric, thank you, Bibek.

/etc/hosts was part of the problem, and then after some re-install commands
it just started working :)
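
For anyone hitting the same thing: the usual suspect on Ubuntu/Debian is the
installer-added /etc/hosts line that maps the hostname to 127.0.1.1. A minimal
sketch of the change, assuming the hostname "ubuntu" and the LAN address
192.168.1.150 from the log quoted below:

    # /etc/hosts before -- the hostname resolves to a loopback-style address,
    # so Hadoop ends up talking to 127.0.1.1
    127.0.0.1      localhost
    127.0.1.1      ubuntu

    # /etc/hosts after -- map the hostname to the machine's real LAN address
    127.0.0.1      localhost
    192.168.1.150  ubuntu

Restart the Hadoop daemons after the edit so they re-resolve the hostname.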

Pleasure == working Hadoop cluster (even if it is pseudo-pleasure)

Sincerely,
Mark

On Wed, Mar 2, 2011 at 5:09 PM, Bibek Paudel <[EMAIL PROTECTED]> wrote:

> On Thu, Mar 3, 2011 at 12:08 AM, Eric Sammer <[EMAIL PROTECTED]> wrote:
> > Check your /etc/hosts file and make sure the hostname of the machine is
> > not on the loopback device. This is almost always the cause of this.
> >
>
> +1
>
> -b
>
> > On Wed, Mar 2, 2011 at 5:57 PM, Mark Kerzner <[EMAIL PROTECTED]> wrote:
> >
> >> Hi,
> >>
> >> I am running in pseudo-distributed mode on my laptop, following the same
> >> steps I used for all configurations on my regular cluster, but I get this
> >> error:
> >>
> >> 2011-03-02 16:45:13,651 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=mapred ip=/192.168.1.150 cmd=delete src=/var/lib/hadoop-0.20/cache/mapred/mapred/system/jobtracker.info dst=null perm=null
> >> 2011-03-02 16:45:14,524 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: ubuntu/127.0.1.1:8020. Already tried 0 time(s).
> >>
> >> so it should be connecting to 192.168.1.150, but it is instead connecting
> >> to 127.0.1.1 - where does this IP come from?
> >>
> >> Thank you,
> >> Mark
> >>
> >
> >
> >
> > --
> > Eric Sammer
> > twitter: esammer
> > data: www.cloudera.com
> >
>
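
To the "where does this IP come from?" question above: Debian/Ubuntu installers
add a "127.0.1.1 <hostname>" line to /etc/hosts, and Hadoop resolves the
NameNode host through the standard Java resolver, so it picks up that entry. A
quick way to see what a hostname resolves to is a small check like the sketch
below (the class name and defaults are illustrative, not from this thread):

    import java.net.InetAddress;
    import java.net.UnknownHostException;

    // Prints what a hostname resolves to on this machine. With the stock
    // Ubuntu /etc/hosts, "ubuntu" comes back as 127.0.1.1; after the fix it
    // should come back as the LAN address (e.g. 192.168.1.150).
    public class ResolveCheck {
        public static void main(String[] args) throws UnknownHostException {
            String host = args.length > 0 ? args[0] : "ubuntu";
            InetAddress addr = InetAddress.getByName(host);
            System.out.println(host + " -> " + addr.getHostAddress());
            System.out.println("local host -> "
                    + InetAddress.getLocalHost().getHostAddress());
        }
    }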