HDFS, mail # user - Problem accessing HDFS from a remote machine


Re: Problem accessing HDFS from a remote machine
Azuryy Yu 2013-04-09, 01:57
Can you run the "jps" command on your NameNode host to see whether a NameNode
process is running?
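
If the daemons are up, jps lists them by class name -- something like this
(the PIDs are illustrative):

$ jps
12005 NameNode
12211 DataNode
12419 SecondaryNameNode
12544 JobTracker
12681 TaskTracker
12755 Jps
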
On Tue, Apr 9, 2013 at 2:27 AM, Bjorn Jonsson <[EMAIL PROTECTED]> wrote:

> Yes, the namenode port is not open for your cluster. I had this problem
> too. First, log into your namenode and run netstat -nap to see which ports
> are listening. You can run service --status-all to see whether the namenode
> service is running. Basically, you need Hadoop to bind to the correct IP (an
> external one, or at least one reachable from your remote machine), so
> listening on 127.0.0.1, localhost, or an IP on a private network will not be
> sufficient. Check your /etc/hosts file and /etc/hadoop/conf/*-site.xml
> files to configure the correct IP/ports.
>
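> A minimal sketch of what I mean for core-site.xml on the namenode -- the
> hostname is a placeholder, use one that resolves to an address your remote
> machine can reach:
>
> <property>
>   <name>fs.default.name</name>
>   <!-- not localhost: an address the remote client can reach -->
>   <value>hdfs://namenode.example.com:54310</value>
> </property>
>
> After a restart, netstat -nap | grep 54310 should show the port bound to
> that address (or 0.0.0.0) rather than 127.0.0.1.
>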
> I'm no expert, so my understanding might be limited/wrong... but I hope
> this helps :)
>
> Best,
> B
>
>
> On Mon, Apr 8, 2013 at 7:29 AM, Saurabh Jain <[EMAIL PROTECTED]> wrote:
>
>> Hi All,
>>
>> I have set up a single-node cluster (release hadoop-1.0.4). The following
>> configuration is used:
>>
>> core-site.xml:
>>
>> <property>
>>      <name>fs.default.name</name>
>>      <value>hdfs://localhost:54310</value>
>> </property>
>>
>> masters:
>> localhost
>>
>> slaves:
>> localhost
>>
>> I am able to successfully format the NameNode and perform file system
>> operations by running the CLIs on the NameNode.
>>
>> But I receive the following error when I try to access HDFS from a remote
>> machine:
>>
>> $ bin/hadoop fs -ls /
>> Warning: $HADOOP_HOME is deprecated.
>>
>> 13/04/08 07:13:56 INFO ipc.Client: Retrying connect to server: 10.209.10.206/10.209.10.206:54310. Already tried 0 time(s).
>> 13/04/08 07:13:57 INFO ipc.Client: Retrying connect to server: 10.209.10.206/10.209.10.206:54310. Already tried 1 time(s).
>> 13/04/08 07:13:58 INFO ipc.Client: Retrying connect to server: 10.209.10.206/10.209.10.206:54310. Already tried 2 time(s).
>> 13/04/08 07:13:59 INFO ipc.Client: Retrying connect to server: 10.209.10.206/10.209.10.206:54310. Already tried 3 time(s).
>> 13/04/08 07:14:00 INFO ipc.Client: Retrying connect to server: 10.209.10.206/10.209.10.206:54310. Already tried 4 time(s).
>> 13/04/08 07:14:01 INFO ipc.Client: Retrying connect to server: 10.209.10.206/10.209.10.206:54310. Already tried 5 time(s).
>> 13/04/08 07:14:02 INFO ipc.Client: Retrying connect to server: 10.209.10.206/10.209.10.206:54310. Already tried 6 time(s).
>> 13/04/08 07:14:03 INFO ipc.Client: Retrying connect to server: 10.209.10.206/10.209.10.206:54310. Already tried 7 time(s).
>> 13/04/08 07:14:04 INFO ipc.Client: Retrying connect to server: 10.209.10.206/10.209.10.206:54310. Already tried 8 time(s).
>> 13/04/08 07:14:05 INFO ipc.Client: Retrying connect to server: 10.209.10.206/10.209.10.206:54310. Already tried 9 time(s).
>> Bad connection to FS. command aborted. exception: Call to 10.209.10.206/10.209.10.206:54310 failed on connection exception: java.net.ConnectException: Connection refused
>>
>> Here 10.209.10.206 is the IP of the server hosting the NameNode, and it is
>> also the configured value for "fs.default.name" in the core-site.xml file
>> on the remote machine.
>>
>> Executing 'bin/hadoop fs -fs hdfs://10.209.10.206:54310 -ls /' also
>> results in the same output.
>>
>> Also, I am writing a C application using libhdfs to communicate with
>> HDFS. How do we provide credentials while connecting to HDFS?
>>
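>> From hdfs.h it looks like hdfsConnectAsUser() is the relevant call for
>> asserting a username (with simple, non-Kerberos auth the server just
>> trusts whatever name the client sends). A minimal sketch of what I have
>> in mind -- the username is a placeholder, and the signature seems to have
>> changed across releases, so check the hdfs.h shipped with your release:
>>
>> #include "hdfs.h"   /* from $HADOOP_HOME/src/c++/libhdfs */
>> #include <stdio.h>
>>
>> int main(void)
>> {
>>     /* Connect to the NameNode as a specific user;
>>      * "hadoopuser" is a placeholder. */
>>     hdfsFS fs = hdfsConnectAsUser("10.209.10.206", 54310, "hadoopuser");
>>     if (fs == NULL) {
>>         fprintf(stderr, "hdfsConnectAsUser failed\n");
>>         return 1;
>>     }
>>
>>     /* List "/" just to prove the connection works. */
>>     int n = 0;
>>     hdfsFileInfo *info = hdfsListDirectory(fs, "/", &n);
>>     if (info != NULL) {
>>         for (int i = 0; i < n; i++)
>>             printf("%s\n", info[i].mName);
>>         hdfsFreeFileInfo(info, n);
>>     }
>>
>>     hdfsDisconnect(fs);
>>     return 0;
>> }
>>
>> Is that the right way to do it?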
>>
>> Thanks,
>> Saurabh
>
>