MapReduce >> mail # user >> Could not obtain block?


Could not obtain block?
Hi,

I have a situation here and I'm wondering where it's coming from. I know
that I'm using a pretty old version... 1.0.3

When I fsck a file, I can see that there is one block, but when I try to
get the file, I'm not able to retrieve this block. I thought it was because
the file was opened, so I killed all the processes related to this file,
but I still can not access it (logs below). I tried to stop and restart
Hadoop, but it's still the same issue. Worst case I can "simply" delete
this file, but my goal is more to understand the situation.
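In case it helps, here is how I confirm that HDFS still considers the file open for write. This is just the standard fsck invocation with extra reporting flags (these flags exist in 1.0.3); it needs to run against the live cluster:

```shell
# Report open-for-write status, plus per-file block IDs and replica locations.
# -openforwrite includes files with an unclosed lease in the report;
# -files -blocks -locations print each block and the datanodes holding it.
bin/hadoop fsck \
  /hbase/.logs/node5,60020,1383862856731-splitting/node5%2C60020%2C1383862856731.1383874115597 \
  -openforwrite -files -blocks -locations
```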

Thanks,

JM

hadoop@node3:~/hadoop-1.0.3$ bin/hadoop fsck
/hbase/.logs/node5,60020,1383862856731-splitting/node5%2C60020%2C1383862856731.1383874115597
FSCK started by hadoop from /192.168.23.7 for path
/hbase/.logs/node5,60020,1383862856731-splitting/node5%2C60020%2C1383862856731.1383874115597
at Fri Nov 08 12:03:49 EST 2013
Status: HEALTHY
 Total size:    0 B (Total open files size: 1140681 B)
 Total dirs:    0
 Total files:    0 (Files currently being written: 1)
 Total blocks (validated):    0 (Total open file blocks (not validated): 1)
 Minimally replicated blocks:    0
 Over-replicated blocks:    0
 Under-replicated blocks:    0
 Mis-replicated blocks:        0
 Default replication factor:    3
 Average block replication:    0.0
 Corrupt blocks:        0
 Missing replicas:        0
 Number of data-nodes:        8
 Number of racks:        1
FSCK ended at Fri Nov 08 12:03:49 EST 2013 in 0 milliseconds
The filesystem under path
'/hbase/.logs/node5,60020,1383862856731-splitting/node5%2C60020%2C1383862856731.1383874115597'
is HEALTHY
hadoop@node3:~/hadoop-1.0.3$ bin/hadoop fs -get
/hbase/.logs/node5,60020,1383862856731-splitting/node5%2C60020%2C1383862856731.1383874115597
.
13/11/08 12:03:54 INFO hdfs.DFSClient: No node available for block:
blk_7436507983567155151_3155853
file=/hbase/.logs/node5,60020,1383862856731-splitting/node5%2C60020%2C1383862856731.1383874115597
13/11/08 12:03:54 INFO hdfs.DFSClient: Could not obtain block
blk_7436507983567155151_3155853 from any node: java.io.IOException: No live
nodes contain current block. Will get new block locations from namenode and
retry...
13/11/08 12:03:57 INFO hdfs.DFSClient: No node available for block:
blk_7436507983567155151_3155853
file=/hbase/.logs/node5,60020,1383862856731-splitting/node5%2C60020%2C1383862856731.1383874115597
13/11/08 12:03:57 INFO hdfs.DFSClient: Could not obtain block
blk_7436507983567155151_3155853 from any node: java.io.IOException: No live
nodes contain current block. Will get new block locations from namenode and
retry...
13/11/08 12:04:00 INFO hdfs.DFSClient: No node available for block:
blk_7436507983567155151_3155853
file=/hbase/.logs/node5,60020,1383862856731-splitting/node5%2C60020%2C1383862856731.1383874115597
13/11/08 12:04:00 INFO hdfs.DFSClient: Could not obtain block
blk_7436507983567155151_3155853 from any node: java.io.IOException: No live
nodes contain current block. Will get new block locations from namenode and
retry...
13/11/08 12:04:03 WARN hdfs.DFSClient: DFS Read: java.io.IOException: Could
not obtain block: blk_7436507983567155151_3155853
file=/hbase/.logs/node5,60020,1383862856731-splitting/node5%2C60020%2C1383862856731.1383874115597
    at
org.apache.hadoop.hdfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:2269)
    at
org.apache.hadoop.hdfs.DFSClient$DFSInputStream.blockSeekTo(DFSClient.java:2063)
    at
org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2224)
    at java.io.DataInputStream.read(DataInputStream.java:100)
    at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:68)
    at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:47)
    at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:87)
    at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:341)
    at org.apache.hadoop.fs.FsShell.copyToLocal(FsShell.java:248)
    at org.apache.hadoop.fs.FsShell.copyToLocal(FsShell.java:199)
    at org.apache.hadoop.fs.FsShell.run(FsShell.java:1769)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
    at org.apache.hadoop.fs.FsShell.main(FsShell.java:1895)

get: Could not obtain block: blk_7436507983567155151_3155853
file=/hbase/.logs/node5,60020,1383862856731-splitting/node5%2C60020%2C1383862856731.1383874115597
hadoop@node3:~/hadoop-1.0.3$
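To see whether any datanode actually holds a replica on disk, something like the following can be run on each of the 8 datanodes. The data directory below is only an assumption for illustration; substitute whatever dfs.data.dir is set to in hdfs-site.xml on your nodes:

```shell
# Search the datanode's local block storage for the missing block's files
# (the block data file and its .meta checksum file share the blk_ prefix).
# /var/hadoop/dfs/data is a placeholder -- use your actual dfs.data.dir value.
find /var/hadoop/dfs/data -name 'blk_7436507983567155151*' 2>/dev/null
```

If nothing turns up on any datanode, the last (unclosed) block was never finalized anywhere the namenode knows about, which would be consistent with the "No live nodes contain current block" retries above.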