Re: Error putting files in the HDFS
As per your dfs report, the available DataNode count is ZERO in your cluster.

Please check your data node logs.
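A quick way to do that is to tail the DataNode log on each slave. This is a minimal sketch, assuming the default 0.20.x layout where logs land under the install directory (the path below is taken from the shell prompt in the dfsadmin report later in this message):

  cd ~/hadoop-gpu-master/hadoop-gpu-0.20.1
  # log file names follow the pattern hadoop-<user>-datanode-<hostname>.log
  tail -n 100 logs/hadoop-*-datanode-*.log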

Regards
Jitendra

On 10/8/13, Basu,Indrashish <[EMAIL PROTECTED]> wrote:
>
> Hello,
>
> My name is Indrashish Basu and I am a Master's student in the
> Department of Electrical and Computer Engineering.
>
> Currently I am doing my research project on a Hadoop implementation
> for the ARM processor, and I am facing an issue while trying to run
> sample Hadoop code on it. Every time I try to put files into HDFS, I
> get the error below.
>
>
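(The exact command isn't quoted in the thread; a representative invocation that would produce this error, assuming the local file is also named cpu-kmeans2D, is:

  bin/hadoop fs -put cpu-kmeans2D /user/root/bin/cpu-kmeans2D
)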
> 13/10/07 11:31:29 WARN hdfs.DFSClient: DataStreamer Exception:
> org.apache.hadoop.ipc.RemoteException: java.io.IOException: File
> /user/root/bin/cpu-kmeans2D could only be replicated to 0 nodes, instead of 1
>   at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1267)
>   at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
>
>   at org.apache.hadoop.ipc.Client.call(Client.java:739)
>   at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
>   at com.sun.proxy.$Proxy0.addBlock(Unknown Source)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>   at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>   at com.sun.proxy.$Proxy0.addBlock(Unknown Source)
>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:2904)
>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2786)
>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2076)
>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2262)
>
> 13/10/07 11:31:29 WARN hdfs.DFSClient: Error Recovery for block null
> bad datanode[0] nodes == null
> 13/10/07 11:31:29 WARN hdfs.DFSClient: Could not get block locations.
> Source file "/user/root/bin/cpu-kmeans2D" - Aborting...
> put: java.io.IOException: File /user/root/bin/cpu-kmeans2D could only
> be replicated to 0 nodes, instead of 1
>
>
> I tried re-initializing the namenode and datanode by deleting all the
> old logs on the master and the slave nodes as well as the folders
> under /app/hadoop/, after which I formatted the namenode and started
> the process again (bin/start-all.sh), but still no luck.
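(For reference, a minimal sketch of that reset sequence on a 0.20.x cluster, assuming dfs.name.dir and dfs.data.dir both live under /app/hadoop/tmp — the exact subdirectory is an assumption, and this wipes all HDFS data:

  bin/stop-all.sh               # stop all daemons on master and slaves first
  rm -rf /app/hadoop/tmp/*      # assumed dfs.name.dir / dfs.data.dir location
  bin/hadoop namenode -format   # re-create an empty namespace
  bin/start-all.sh              # restart, then re-check dfsadmin -report
)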
>
> I tried generating the admin report (pasted below) after the restart;
> it seems the data node is not getting started.
>
> root@tegra-ubuntu:~/hadoop-gpu-master/hadoop-gpu-0.20.1# bin/hadoop
> dfsadmin -report
> Configured Capacity: 0 (0 KB)
> Present Capacity: 0 (0 KB)
> DFS Remaining: 0 (0 KB)
> DFS Used: 0 (0 KB)
> DFS Used%: �%
> Under replicated blocks: 0
> Blocks with corrupt replicas: 0
> Missing blocks: 0
>
> -------------------------------------------------
> Datanodes available: 0 (0 total, 0 dead)
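(One more quick check, an editorial suggestion rather than something from the original thread: jps, which ships with the JDK, lists the running Hadoop daemon JVMs, so you can see directly whether the DataNode process is up before digging through logs:

  jps    # a healthy datanode host lists a DataNode process alongside the others
)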
>
>
> I have tried the following methods to debug the process: