Re: When I connect to Hadoop it always 'retrying connect', something wrong with my configurations?
Or maybe it's a flaky network connection? Perhaps you can run a ping and
check that the network link is reliable?
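A minimal sketch of such a check, assuming GNU/iputils ping output (the `packet_loss` helper is just an illustration; the 10.10.10.51 address is taken from the log line quoted below):

```shell
# Extract the packet-loss percentage from ping's summary line; any loss
# above 0% points at the network rather than the Hadoop configuration.
packet_loss() {
  grep -o '[0-9.]*% packet loss' | head -n 1
}

# Run against the NameNode, e.g.:
#   ping -c 20 10.10.10.51 | packet_loss
```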
The only daemon that needs to be up is the NameNode, and unless you are taking
it down and bringing it back up often (please don't), you should not see
these retries.
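If the retries themselves are the problem, the IPC client's retry count is configurable. A hedged sketch for core-site.xml, using the stock `ipc.client.connect.max.retries` property (default 10; the value below is only an illustration, not a recommendation):

```xml
<!-- Lower the IPC connect retry count so an unreachable NameNode
     fails fast while you debug; 3 is an illustrative value. -->
<property>
  <name>ipc.client.connect.max.retries</name>
  <value>3</value>
</property>
```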
2012/4/25 Lukáš Kryške <[EMAIL PROTECTED]>
> I am getting this message if I try to process some data in HDFS but the
> necessary daemons are not started prior to my HDFS request (I am using
> Hadoop V0.20.2, so I am using the /start-all.sh script). You write that it
> works after some time, so I guess the problem is in the communication
> between the HDFS daemons in your Hadoop cluster.
> Is your HDFS formatted for all machines, or have you already added some
> nodes to the formatted cluster?
> From: [EMAIL PROTECTED]
> To: [EMAIL PROTECTED]
> Subject: When I connect to Hadoop it always 'retrying connect', something
> wrong with my configurations?
> Date: Wed, 25 Apr 2012 16:04:58 +0800
> I am running a Hadoop cluster of about 40 machines, and I got some problem
> with the HDFS.
> When I try to connect to HDFS, say with the 'hadoop fs -ls /' command, it
> sometimes has to retry the connection:
> 12/04/25 11:22:01 INFO ipc.Client: Retrying connect to server: master/10.10.10.51:8020. Already tried 0 time(s).
> After that it connects to HDFS and returns the result, but the retries cost
> time, and I am looking for a way to fix this.
> Is there something wrong with my Hadoop configuration? Or something wrong
> with the networking?