Hi Jitendra,

I realized that some days back my cluster went down due to a power failure, after which the nn/current directory contains both an edits and an edits.new file, and now the SNN is failing to roll these edits due to an HTTP error.
Also, my NN and SNN are currently running on the same machine.
dfsadmin report output:

Configured Capacity: 659494076416 (614.2 GB)
Present Capacity: 535599210496 (498.82 GB)
DFS Remaining: 497454006272 (463.29 GB)
DFS Used: 38145204224 (35.53 GB)
DFS Used%: 7.12%
Under replicated blocks: 283
Blocks with corrupt replicas: 3
Missing blocks: 3

-------------------------------------------------
Datanodes available: 8 (8 total, 0 dead)

Name: 10.139.9.238:50010
Decommission Status : Normal
Configured Capacity: 82436759552 (76.78 GB)
DFS Used: 4302274560 (4.01 GB)
Non DFS Used: 8391843840 (7.82 GB)
DFS Remaining: 69742641152(64.95 GB)
DFS Used%: 5.22%
DFS Remaining%: 84.6%
Last contact: Fri Jan 31 18:55:18 IST 2014

Name: 10.139.9.233:50010
Decommission Status : Normal
Configured Capacity: 82436759552 (76.78 GB)
DFS Used: 5774745600 (5.38 GB)
Non DFS Used: 13409488896 (12.49 GB)
DFS Remaining: 63252525056(58.91 GB)
DFS Used%: 7.01%
DFS Remaining%: 76.73%
Last contact: Fri Jan 31 18:55:19 IST 2014

Name: 10.139.9.232:50010
Decommission Status : Normal
Configured Capacity: 82436759552 (76.78 GB)
DFS Used: 8524451840 (7.94 GB)
Non DFS Used: 24847884288 (23.14 GB)
DFS Remaining: 49064423424(45.69 GB)
DFS Used%: 10.34%
DFS Remaining%: 59.52%
Last contact: Fri Jan 31 18:55:21 IST 2014

Name: 10.139.9.236:50010
Decommission Status : Normal
Configured Capacity: 82436759552 (76.78 GB)
DFS Used: 4543819776 (4.23 GB)
Non DFS Used: 8669548544 (8.07 GB)
DFS Remaining: 69223391232(64.47 GB)
DFS Used%: 5.51%
DFS Remaining%: 83.97%
Last contact: Fri Jan 31 18:55:19 IST 2014

Name: 10.139.9.235:50010
Decommission Status : Normal
Configured Capacity: 82436759552 (76.78 GB)
DFS Used: 5092986880 (4.74 GB)
Non DFS Used: 8669454336 (8.07 GB)
DFS Remaining: 68674318336(63.96 GB)
DFS Used%: 6.18%
DFS Remaining%: 83.31%
Last contact: Fri Jan 31 18:55:19 IST 2014

Name: 10.139.9.237:50010
Decommission Status : Normal
Configured Capacity: 82436759552 (76.78 GB)
DFS Used: 4604301312 (4.29 GB)
Non DFS Used: 11005788160 (10.25 GB)
DFS Remaining: 66826670080(62.24 GB)
DFS Used%: 5.59%
DFS Remaining%: 81.06%
Last contact: Fri Jan 31 18:55:18 IST 2014

Name: 10.139.9.234:50010
Decommission Status : Normal
Configured Capacity: 82436759552 (76.78 GB)
DFS Used: 4277760000 (3.98 GB)
Non DFS Used: 12124221440 (11.29 GB)
DFS Remaining: 66034778112(61.5 GB)
DFS Used%: 5.19%
DFS Remaining%: 80.1%
Last contact: Fri Jan 31 18:55:18 IST 2014

Name: 10.139.9.231:50010
Decommission Status : Normal
Configured Capacity: 82436759552 (76.78 GB)
DFS Used: 1024864256 (977.39 MB)
Non DFS Used: 36776636416 (34.25 GB)
DFS Remaining: 44635258880(41.57 GB)
DFS Used%: 1.24%
DFS Remaining%: 54.14%
Last contact: Fri Jan 31 18:55:20 IST 2014
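For quick triage of a report like the one above, a short awk pass over the output can flag datanodes whose non-DFS usage exceeds their DFS usage (10.139.9.231 above, for instance, holds 34 GB of non-DFS data against under 1 GB of DFS data). This is just a sketch; the field positions assume the report format shown above, and the embedded sample lines are copied from that report.

```shell
# Flag datanodes where non-DFS usage exceeds DFS usage.
# Sample lines are copied from the report above; in practice, pipe
# `hadoop dfsadmin -report` into the same awk program instead.
report='Name: 10.139.9.231:50010
DFS Used: 1024864256 (977.39 MB)
Non DFS Used: 36776636416 (34.25 GB)'

printf '%s\n' "$report" | awk '
  /^Name:/         { name = $2 }   # remember the current datanode
  /^DFS Used:/     { dfs = $3 }    # raw byte count ("DFS Used%:" does not match)
  /^Non DFS Used:/ { if ($4 + 0 > dfs + 0) print name, "non-DFS usage exceeds DFS usage" }
'
# -> 10.139.9.231:50010 non-DFS usage exceeds DFS usage
```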

From: Jitendra Yadav <[EMAIL PROTECTED]>
Sent: Friday, January 31, 2014 6:58 PM
To: user
Subject: Re: java.io.FileNotFoundException: http://HOSTNAME:50070/getimage?getimage=1

Hi,

Please post the output of the dfsadmin report command; it will help us understand the cluster's health.

# hadoop dfsadmin -report

Thanks
Jitendra

On Fri, Jan 31, 2014 at 6:44 PM, Stuti Awasthi <[EMAIL PROTECTED]> wrote:
Hi All,

I have suddenly started facing an issue on my Hadoop cluster. It seems that HTTP requests to port 50070 on the NameNode are not working properly.
The cluster has been operating fine for several days. Recently we are also unable to see the dfshealth.jsp page from the web console.

Problems :
1. http://<Hostname>:50070/dfshealth.jsp shows the following error:

HTTP ERROR: 404
Problem accessing /. Reason:
NOT_FOUND

2. The SNN is not able to roll edits.
Error in the SecondaryNameNode log:
java.io.FileNotFoundException: http://HOSTNAME:50070/getimage?getimage=1
       at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1401)
       at org.apache.hadoop.hdfs.server.namenode.TransferFsImage.getFileClient(TransferFsImage.java:160)
       at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$3.run(SecondaryNameNode.java:347)
       at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$3.run(SecondaryNameNode.java:336)
       at java.security.AccessController.doPrivileged(Native Method)
       at javax.security.auth.Subject.doAs(Subject.java:416)
       at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
       at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.downloadCheckpointFiles(SecondaryNameNode.java:336)
       at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:411)
       at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doWork(SecondaryNameNode.java:312)
       at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.run(SecondaryNameNode.java:275)

Error in the NameNode log:
2014-01-31 18:15:12,046 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Roll Edit Log from 10.139.9.231
2014-01-31 18:15:12,046 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Cannot roll edit log, edits.new files already exists in all healthy directories:
  /usr/lib/hadoop/storage/dfs/nn/current/edits.new
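Before attempting any recovery of a name directory in this state (for example, removing or replaying edits.new), it is worth snapshotting the whole directory first. A minimal sketch, using the path from the log message above and /tmp as a hypothetical backup destination:

```shell
# Back up the NameNode metadata directory before touching anything.
# The path comes from the log message above; /tmp is just an example target.
ts=$(date +%Y%m%d-%H%M%S)
tar czf "/tmp/nn-backup-$ts.tar.gz" -C /usr/lib/hadoop/storage/dfs nn

# Sanity-check what is actually in current/: expect fsimage, edits, edits.new
ls -l /usr/lib/hadoop/storage/dfs/nn/current
```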

NameNode log lines suggesting that the web server started successfully on port 50070:
2014-01-31 14:42:35,208 INFO org.apache.hadoop.http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 50070
2014-01-31 14:42:35,209 INFO org.apache.hadoop.http.HttpServer: listener.getLocalPort() returned 50070 webServer.getConnectors()[0].getLocalPort() returned 50070
2014-01-31 14:42:35,209 INFO org.apache.hadoop.http.HttpServer: Jetty bound to port 50070
2014-01-31 14:42:35,378 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Web-server up at: HOSTNAME:50070
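Since the log says Jetty bound to 50070 yet the browser gets a 404, it can help to probe the HTTP endpoints directly and compare status codes. A sketch, with HOSTNAME standing in for the actual NameNode host:

```shell
# Check what the NameNode web server actually returns on each path.
# HOSTNAME is a placeholder for the real NameNode hostname.
for path in '/' '/dfshealth.jsp' '/getimage?getimage=1'; do
  code=$(curl -s -o /dev/null -w '%{http_code}' "http://HOSTNAME:50070$path")
  echo "$path -> HTTP $code"
done
```

If every path returns 404 while the port itself is open, that usually points at the web application rather than the network, e.g. a missing or damaged webapps directory in the Hadoop installation on the NN host.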
Hdfs-site