Keith Wiley 2012-07-27, 18:22
anil gupta 2012-07-27, 18:30
Your NameNode is still not up. What do the NN logs say?
Sent from handheld, please excuse typos.
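To go with the suggestion above, a sketch of how one might dump the tail of the NameNode log. The log directory is an assumption (pseudo-distributed installs usually write under `$HADOOP_HOME/logs`); adjust for your layout:

```shell
# Sketch: show the last lines of the NameNode log.
# HADOOP_LOG_DIR / HADOOP_HOME defaults below are assumptions, not
# verified against this particular install.
show_nn_log() {
  local dir="${HADOOP_LOG_DIR:-${HADOOP_HOME:-/usr/local/hadoop}/logs}"
  local f
  for f in "$dir"/hadoop-*-namenode-*.log; do
    # the glob stays literal when nothing matches, so guard with -e
    [ -e "$f" ] && tail -n 50 "$f"
  done
  return 0
}
show_nn_log
```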
From: anil gupta <[EMAIL PROTECTED]>
Date: Fri, 27 Jul 2012 11:30:57
To: <[EMAIL PROTECTED]>
Reply-To: [EMAIL PROTECTED]
Subject: Re: Retrying connect to server: localhost/127.0.0.1:9000.
Does a ping to localhost return a reply? Try telnetting to localhost 9000.
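The port check above can also be sketched without telnet installed, using bash's /dev/tcp pseudo-device (bash-only, not POSIX sh; just a quick diagnostic, not part of any Hadoop tooling):

```shell
# Is anything listening on the NameNode port?
# (exec in a subshell opens a TCP connection; the fd closes when the
# subshell exits, and the exit status tells us if the connect worked)
port_open() {
  (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null
}

if port_open localhost 9000; then
  echo "something is listening on localhost:9000"
else
  echo "nothing is listening on localhost:9000 -- the NameNode is not up"
fi
```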
On Fri, Jul 27, 2012 at 11:22 AM, Keith Wiley <[EMAIL PROTECTED]> wrote:
> I'm plagued with this error:
> Retrying connect to server: localhost/127.0.0.1:9000.
> I'm trying to set up hadoop on a new machine, just a basic
> pseudo-distributed setup. I've done this quite a few times on other
> machines, but this time I'm kinda stuck. I formatted the namenode without
> obvious errors and ran start-all.sh with no errors to stdout. However, the
> logs are full of that error above, and if I attempt to access HDFS (e.g.
> "hadoop fs -ls /") I get that error again. Obviously, my core-site.xml
> sets fs.default.name to "hdfs://localhost:9000".
> I assume something is wrong with /etc/hosts, but I'm not sure how to fix
> it. If "hostname" returns X and "hostname -f" returns Y, then what are the
> corresponding entries in /etc/hosts?
> Thanks for any help.
> Keith Wiley [EMAIL PROTECTED] keithwiley.com
> "I used to be with it, but then they changed what it was. Now, what I'm
> with isn't it, and what's it seems weird and scary to me."
> -- Abe (Grandpa) Simpson
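Regarding the /etc/hosts question above: one common layout for a pseudo-distributed box looks like the following, where X and Y stand in for the short hostname and the fully-qualified name from the original message (placeholders, not verified against this particular machine):

```
127.0.0.1   localhost
127.0.1.1   Y X    # fully-qualified name first, then the short hostname
```

With fs.default.name set to hdfs://localhost:9000, the critical entry is the first one: localhost must resolve to 127.0.0.1 for both the daemons and the client.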
Thanks & Regards,
Keith Wiley 2012-07-27, 20:53