MapReduce >> mail # user >> Error in : hadoop fsck /


yogesh dhari 2012-09-11, 15:55
Hemanth Yamijala 2012-09-11, 15:59
yogesh dhari 2012-09-11, 16:03
Re: Error in : hadoop fsck /
Yogesh,

Try this:

hadoop fsck -Ddfs.http.address=localhost:50070 /

50070 is the default HTTP port that the NameNode runs on. The property dfs.http.address should be set in your hdfs-site.xml.
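For reference, the property can be pinned in hdfs-site.xml instead of being passed with -D on every invocation. A minimal sketch; the host and port below are assumptions matching the single-node setup described in this thread:

```xml
<!-- hdfs-site.xml: address the NameNode's HTTP server (web UI, fsck servlet)
     listens on. localhost:50070 is assumed for a single-node cluster;
     adjust the host and port to your deployment. -->
<property>
  <name>dfs.http.address</name>
  <value>localhost:50070</value>
</property>
```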

--
Arpit Gupta
Hortonworks Inc.
http://hortonworks.com/
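As a rough sketch of why that property matters: in 0.20-era Hadoop the fsck client is a thin HTTP client that builds its request URL from dfs.http.address and contacts the NameNode's fsck servlet. The exact servlet path below is an assumption based on that era's code; the point is that a missing or malformed address yields a bogus hostname, which is consistent with the UnknownHostException in the trace below.

```shell
# Sketch: the URL the fsck client contacts, derived from dfs.http.address.
ADDR="localhost:50070"                # assumed value of dfs.http.address
URL="http://${ADDR}/fsck?path=%2F"    # assumed /fsck servlet, path=/ URL-encoded
echo "$URL"                           # → http://localhost:50070/fsck?path=%2F
```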

On Sep 11, 2012, at 9:03 AM, yogesh dhari <[EMAIL PROTECTED]> wrote:

> Hi Hemanth,
>
> Its the content of core-site.xml
>
> <configuration>
> <property>
>          <name>fs.default.name</name>
>          <value>hdfs://localhost:9000</value>
>      </property>
>    <property>
>     <name>hadoop.tmp.dir</name>
>     <value>/opt/hadoop-0.20.2/hadoop_temporary_dirr</value>
>     <description>A base for other temporary directories.</description>
> </property>
>
> </configuration>
>
> Regards
> Yogesh Kumar
>
> Date: Tue, 11 Sep 2012 21:29:36 +0530
> Subject: Re: Error in : hadoop fsck /
> From: [EMAIL PROTECTED]
> To: [EMAIL PROTECTED]
>
> Could you please review your configuration to see if you are pointing to the right namenode address? (This will be in core-site.xml.)
> Please paste it here so we can look for clues.
>
> Thanks
> hemanth
>
> On Tue, Sep 11, 2012 at 9:25 PM, yogesh dhari <[EMAIL PROTECTED]> wrote:
> Hi all,
>
> I am running hadoop-0.20.2 on a single-node cluster.
>
> I run the command
>
> hadoop fsck /
>
> it shows this error:
>
> Exception in thread "main" java.net.UnknownHostException: http
>     at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:178)
>     at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:391)
>     at java.net.Socket.connect(Socket.java:579)
>     at java.net.Socket.connect(Socket.java:528)
>     at sun.net.NetworkClient.doConnect(NetworkClient.java:180)
>     at sun.net.www.http.HttpClient.openServer(HttpClient.java:378)
>     at sun.net.www.http.HttpClient.openServer(HttpClient.java:473)
>     at sun.net.www.http.HttpClient.<init>(HttpClient.java:203)
>     at sun.net.www.http.HttpClient.New(HttpClient.java:290)
>     at sun.net.www.http.HttpClient.New(HttpClient.java:306)
>     at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:995)
>     at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:931)
>     at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:849)
>     at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1299)
>     at org.apache.hadoop.hdfs.tools.DFSck.run(DFSck.java:123)
>     at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
>     at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
>     at org.apache.hadoop.hdfs.tools.DFSck.main(DFSck.java:159)
>
>
>
>
> Please suggest why this happens; it should show the health status.
>
>

Harsh J 2012-09-11, 16:46