HBase >> mail # user >> Can not access HBase Shell.


Re: Can not access HBase Shell.
I've done several reinstallations and Hadoop seems to be fine. However, I
still get a similar error when I try to access the HBase shell.

$ jps
274 NameNode
514 JobTracker
1532 HMaster
1588 Jps
604 TaskTracker
450 SecondaryNameNode
362 DataNode

$ ./bin/hbase shell
Trace/BPT trap: 5

I looked at the log file and found errors in the HMaster node logs:

2012-09-17 17:06:54,384 INFO org.apache.hadoop.hbase.regionserver.Store: Flushed , sequenceid=2, memsize=360.0, into tmp file hdfs://localhost:54310/hbase/-ROOT-/70236052/.tmp/0212db15465842b38cc63eb9ef8b73d2
2012-09-17 17:06:54,389 WARN org.apache.hadoop.hdfs.DFSClient: Exception while reading from blk_-8714444718437861427_1016 of /hbase/-ROOT-/70236052/.tmp/0212db15465842b38cc63eb9ef8b73d2 from 127.0.0.1:50010: java.io.IOException: BlockReader: error in packet header(chunkOffset : 512, dataLen : 0, seqno : 0 (last: 0))
        at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1577)
        at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
        at org.apache.hadoop.fs.FSInputChecker.fill(FSInputChecker.java:176)
        at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:193)
        at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
        at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1457)
        at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.readBuffer(DFSClient.java:2172)
        at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2224)
        at java.io.DataInputStream.read(DataInputStream.java:149)
        at org.apache.hadoop.hbase.io.hfile.HFileBlock.readWithExtra(HFileBlock.java:582)
        at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1364)
        at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockDataInternal(HFileBlock.java:1869)
        at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1637)
        at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader$1.nextBlock(HFileBlock.java:1286)
        at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader$1.nextBlockWithBlockType(HFileBlock.java:1294)
        at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.<init>(HFileReaderV2.java:137)
        at org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:533)
        at org.apache.hadoop.hbase.io.hfile.HFile.createReaderWithEncoding(HFile.java:563)
        at org.apache.hadoop.hbase.regionserver.StoreFile$Reader.<init>(StoreFile.java:1252)
        at org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:516)
        at org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:606)
        at org.apache.hadoop.hbase.regionserver.Store.validateStoreFile(Store.java:1590)
        at org.apache.hadoop.hbase.regionserver.Store.commitFile(Store.java:769)
        at org.apache.hadoop.hbase.regionserver.Store.access$500(Store.java:108)
        at org.apache.hadoop.hbase.regionserver.Store$StoreFlusherImpl.commit(Store.java:2204)
        at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1429)
        at org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEditsIfAny(HRegion.java:2685)
        at org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:535)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:3682)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:3630)
        at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:332)
        at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:108)
        at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:169)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
        at java.lang.Thread.run(Thread.java:636)

2012-09-17 17:06:54,389 INFO org.apache.hadoop.hdfs.DFSClient: Could not obtain block blk_-8714444718437861427_1016 from any node: java.io.IOException: No live nodes contain current block. Will get new block locations from namenode and retry...

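The WARN line names the specific block that could not be read. As a rough follow-up sketch (the log line below is copied from the HMaster log above; the fsck path and options would need a running cluster, so that part is only shown as a comment), the block id can be pulled out of such a log line and then looked up with fsck:

```shell
# Sample DFSClient WARN text copied from the HMaster log above.
warn='Exception while reading from blk_-8714444718437861427_1016 of /hbase/-ROOT-/70236052/.tmp/0212db15465842b38cc63eb9ef8b73d2'

# Extract the block id (blk_<id>_<generation stamp>).
block_id=$(printf '%s\n' "$warn" | grep -o 'blk_[0-9-]*_[0-9]*')
echo "$block_id"

# Against a live cluster (not runnable here), block placement for the
# affected region directory could then be inspected with:
#   ./bin/hadoop fsck /hbase/-ROOT- -files -blocks -locations
```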
I checked the file system using fsck and it seems to be healthy:

$ ./bin/hadoop fsck / -files
Warning: $HADOOP_HOME is deprecated.

FSCK started by jasonhuang from /192.168.1.124 for path / at Mon Sep 17 17:24:46 EDT 2012
/ <dir>
/hbase <dir>
/hbase/-ROOT- <dir>
/hbase/-ROOT-/.tableinfo.0000000001 727 bytes, 1 block(s):  OK
/hbase/-ROOT-/.tmp <dir>
/hbase/-ROOT-/70236052 <dir>
/hbase/-ROOT-/70236052/.logs <dir>
/hbase/-ROOT-/70236052/.logs/hlog.1347915355095 309 bytes, 1 block(s):  OK
/hbase/-ROOT-/70236052/.oldlogs <dir>
/hbase/-ROOT-/70236052/.regioninfo 109 bytes, 1 block(s):  OK
/hbase/-ROOT-/70236052/.tmp <dir>
/hbase/-ROOT-/70236052/.tmp/2f094a87dd314072b1eb464761639c0c 859 bytes, 1 block(s):  OK
/hbase/-ROOT-/70236052/info <dir>
/hbase/-ROOT-/70236052/recovered.edits <dir>
/hbase/-ROOT-/70236052/recovered.edits/0000000000000000002 310 bytes, 1 block(s):  OK
/hbase/.META. <dir>
/hbase/.META./1028785192 <dir>
/hbase/.META./1028785192/.logs <dir>
/hbase/.META./1028785192/.logs/hlog.1347915355190 134 bytes, 1 block(s):  OK
/hbase/.META./1028785192/.oldlogs <dir>
/hbase/.META./1028785192/.regioninfo 111 bytes, 1 block(s):  OK
/hbase/.META./1028785192/info <dir>
/hbase/.corrupt <dir>
/hbase/.logs <dir>
/hbase/.oldlogs <dir>
/hbase/.oldlogs/192.168.1.124%2C50887%2C1347915939955.1347915972194 134 bytes, 1 block(s):  OK
/hbase/.oldlogs/192.168.1.124%2C51177%2C1347916254506.1347916283458 134 bytes, 1 block(s):  OK
/hbase/hbase.id 38 bytes, 1 block(s):  OK
/hbase/hbase.version 3 bytes, 1 block(s):  OK
/hbase/splitlog <dir>
/test <dir>
/tmp <dir>
/tmp/hadoop-jasonhuang <dir>
/tmp/hadoop-jasonhuang/mapred <dir>
/tmp/
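Since fsck reports every block as OK while the DFSClient still fails to read one, it can be useful to tally the report itself rather than eyeball it. A minimal sketch, assuming the fsck output was saved to a file (the path is hypothetical and the two sample lines are copied from the listing above):

```shell
# Write a tiny sample of the fsck report to a hypothetical file.
cat > /tmp/fsck-report.txt <<'EOF'
/hbase/hbase.id 38 bytes, 1 block(s):  OK
/hbase/hbase.version 3 bytes, 1 block(s):  OK
EOF

# Count how many files fsck reported as healthy.
ok_files=$(grep -c 'block(s):  OK' /tmp/fsck-report.txt)
echo "files reported OK: $ok_files"
```

Note that fsck only validates HDFS-level block metadata; HBase ships its own consistency checker, `./bin/hbase hbck`, which checks table and region state and may be worth running against a cluster in this situation as well.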