

Re: Could not obtain block
Maybe a datanode in the cluster is down... check the datanode logs on the nodes in the cluster.
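For example, something like the following (just a sketch, assuming a Hadoop 0.20.x install with the usual bin/ and logs/ layout; adjust paths to your setup) should show whether all datanodes are live and whether that block is actually reachable:

   # overall HDFS health: live/dead datanodes, capacity per node
   bin/hadoop dfsadmin -report

   # check the input file the job failed on: block locations, missing/corrupt replicas
   bin/hadoop fsck /user/root/point/start-all.sh -files -blocks -locations

   # then, on each datanode host, look for errors in the DataNode log
   less logs/hadoop-*-datanode-*.log

If fsck reports the block as missing or under-replicated, the datanode(s) that hold it are most likely down or not registered with the namenode.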

On Thu, Jan 20, 2011 at 3:43 PM, Cavus,M.,Fa. Post Direkt <
[EMAIL PROTECTED]> wrote:

> Hi,
> I ran the wordcount example on my Hadoop cluster and got a "Could not
> obtain block" exception. Does anyone know what the problem is? If I run
> the program locally, it works fine.
>
> I do this:
>
> [root@master bin]# ./hadoop jar ../hadoop-0.20.2-examples.jar wordcount point/start-all.sh s/start-all.sh
> 11/01/20 11:57:56 INFO input.FileInputFormat: Total input paths to process : 1
> 11/01/20 11:57:57 INFO mapred.JobClient: Running job: job_201101201036_0002
> 11/01/20 11:57:58 INFO mapred.JobClient:  map 0% reduce 0%
> 11/01/20 11:58:16 INFO mapred.JobClient: Task Id : attempt_201101201036_0002_m_000000_0, Status : FAILED
> java.io.IOException: Could not obtain block: blk_7716960257524845873_1708 file=/user/root/point/start-all.sh
>        at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:1812)
>        at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.blockSeekTo(DFSClient.java:1638)
>        at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1767)
>        at java.io.DataInputStream.read(DataInputStream.java:83)
>        at org.apache.hadoop.util.LineReader.readLine(LineReader.java:134)
>        at org.apache.hadoop.mapreduce.lib.input.LineRecordReader.nextKeyValue(LineRecordReader.java:97)
>        at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:423)
>        at org.apache.hadoop.mapreduce.MapContext.nextKeyValue(MapContext.java:67)
>        at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:143)
>        at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:621)
>        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:305)
>        at org.apache.hadoop.mapred.Child.main(Child.java:170)
>
> 11/01/20 11:58:33 INFO mapred.JobClient: Task Id : attempt_201101201036_0002_m_000000_1, Status : FAILED
> java.io.IOException: Could not obtain block: blk_7716960257524845873_1708 file=/user/root/point/start-all.sh
>        at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:1812)
>        at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.blockSeekTo(DFSClient.java:1638)
>        at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1767)
>        at java.io.DataInputStream.read(DataInputStream.java:83)
>        at org.apache.hadoop.util.LineReader.readLine(LineReader.java:134)
>        at org.apache.hadoop.mapreduce.lib.input.LineRecordReader.nextKeyValue(LineRecordReader.java:97)
>        at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:423)
>        at org.apache.hadoop.mapreduce.MapContext.nextKeyValue(MapContext.java:67)
>        at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:143)
>        at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:621)
>        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:305)
>        at org.apache.hadoop.mapred.Child.main(Child.java:170)
>
> 11/01/20 11:58:48 INFO mapred.JobClient: Task Id : attempt_201101201036_0002_m_000000_2, Status : FAILED
> java.io.IOException: Could not obtain block: blk_7716960257524845873_1708 file=/user/root/point/start-all.sh
>        at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:1812)
>        at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.blockSeekTo(DFSClient.java:1638)
>        at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1767)
>        at java.io.DataInputStream.read(DataInputStream.java:83)
>        at org.apache.hadoop.util.LineReader.readLine(LineReader.java:134)
>        at org.apache.hadoop.mapreduce.lib.input.LineRecordReader.nextKeyValue(LineRecordReader.java:97)
>        at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:423)