Re: Namenode trying to connect to localhost instead of the name and dying
Thank you, Eric, thank you, Bibek.
/etc/hosts was part of the problem, and then after some re-install commands
it just started working :)
Pleasure == working Hadoop cluster (even if it is pseudo-pleasure)
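For anyone who hits the same thing: on Ubuntu the stock /etc/hosts maps the machine's hostname to 127.0.1.1, which matches the address in the namenode log below. A sketch of the before/after, assuming the hostname is `ubuntu` and the LAN address is 192.168.1.150 as in the log (your file may differ):

```
# /etc/hosts -- Ubuntu default; makes the hostname resolve to loopback:
127.0.0.1   localhost
127.0.1.1   ubuntu

# Fixed: map the hostname to the real interface instead:
127.0.0.1   localhost
192.168.1.150   ubuntu
```

After editing, restart the Hadoop daemons so they re-resolve the hostname.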
On Wed, Mar 2, 2011 at 5:09 PM, Bibek Paudel <[EMAIL PROTECTED]> wrote:
> On Thu, Mar 3, 2011 at 12:08 AM, Eric Sammer <[EMAIL PROTECTED]> wrote:
> > Check your /etc/hosts file and make sure the hostname of the machine is
> > not mapped to the loopback address. This is almost always the cause of this.
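To see why that matters: the resolver scans /etc/hosts top-down and the first matching entry wins, so a `127.0.1.1 ubuntu` line shadows any later entry with the real IP. A minimal sketch of that lookup (the file contents and the `resolve` helper are illustrative, not the actual glibc resolver):

```python
# Hypothetical /etc/hosts contents, based on Ubuntu's default layout
# plus a correct entry added further down:
SAMPLE_HOSTS = """\
127.0.0.1   localhost
127.0.1.1   ubuntu
192.168.1.150   ubuntu
"""

def resolve(hostname, hosts_text):
    """Return the first address mapped to hostname, as a simplified
    model of how the hosts-file lookup behaves (first match wins)."""
    for line in hosts_text.splitlines():
        fields = line.split()
        if len(fields) >= 2 and hostname in fields[1:]:
            return fields[0]
    return None

print(resolve("ubuntu", SAMPLE_HOSTS))  # → 127.0.1.1, shadowing 192.168.1.150
```

This is why simply appending the correct line is not enough; the loopback mapping for the hostname has to be removed or changed.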
> > On Wed, Mar 2, 2011 at 5:57 PM, Mark Kerzner <[EMAIL PROTECTED]> wrote:
> >> Hi,
> >> I am doing a pseudo-distributed mode on my laptop, following the same
> >> steps I used for all configurations on my regular cluster, but I get this:
> >> 2011-03-02 16:45:13,651 INFO
> >> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=mapred
> >> 192.168.1.150 cmd=delete
> >> perm=null
> >> 2011-03-02 16:45:14,524 INFO org.apache.hadoop.ipc.Client: Retrying connect
> >> to server: ubuntu/127.0.1.1:8020. Already tried 0 time(s).
> >> So it should be connecting to 192.168.1.150, but instead it connects to
> >> 127.0.1.1 - where does this IP come from?
> >> Thank you,
> >> Mark
> > --
> > Eric Sammer
> > twitter: esammer
> > data: www.cloudera.com