MapReduce >> mail # user >> MapReduce output could not be written

Sudharsan Sampath 2011-07-05, 12:43
real great.. 2011-07-05, 12:45
Sudharsan Sampath 2011-07-05, 14:33
RE: MapReduce output could not be written
Check the datanode logs to see whether the datanode has registered with the
namenode, and whether any problem occurred while it was initializing. If it
registered successfully, the datanode will appear in the live nodes list of
the namenode UI.
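One way to run those checks from the command line (a sketch only: the log path is an assumption that depends on your installation, and `hadoop dfsadmin -report` and the port-50070 web UI are from the Hadoop 0.20/1.x line this thread is about):

```shell
# List the datanodes the namenode currently knows about,
# including live/dead counts and per-node capacity.
hadoop dfsadmin -report

# Scan the datanode log for registration messages or startup errors.
# The log location below is an assumption; adjust it to your setup.
grep -iE "registered|ERROR|Exception" "$HADOOP_HOME"/logs/hadoop-*-datanode-*.log

# The live-nodes list is also visible in the namenode web UI,
# typically at http://<namenode-host>:50070/
```

If `-report` shows 0 live datanodes even though the datanode process is running, the datanode never registered, which matches the "replicated to 0 nodes" failure below.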

Devaraj K

-------------------------------------------------------------------------------
This e-mail and its attachments contain confidential information from HUAWEI,
which is intended only for the person or entity whose address is listed above.
Any use of the information contained herein in any way (including, but not
limited to, total or partial disclosure, reproduction, or dissemination) by
persons other than the intended recipient(s) is prohibited. If you receive
this e-mail in error, please notify the sender by phone or email immediately
and delete it!

 

  _____  

From: Sudharsan Sampath [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, July 05, 2011 6:13 PM
To: [EMAIL PROTECTED]
Subject: MapReduce output could not be written

 

Hi,

In one of my jobs I am getting the following error.

java.io.IOException: File X could only be replicated to 0 nodes, instead of 1
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1282)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:469)
        at sun.reflect.GeneratedMethodAccessor7.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:512)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:968)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:964)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:962)

and the job fails. I am running a single server that hosts all the Hadoop
daemons, so there is only one datanode in my scenario.

The datanode was up all the time.
There is enough space on the disk.
Even at debug level, I do not see any of the following log messages:

    Node X is not chosen because the node is (being) decommissioned
    Node X is not chosen because the node does not have enough space
    Node X is not chosen because the node is too busy
    Node X is not chosen because the rack has too many chosen nodes

Does anyone know of any other scenario in which this can occur?

Thanks
Sudharsan S

Mostafa Gaber 2011-07-05, 19:58