Re: SplitLogManager issue
It seems I also hit a similar issue, complaining about this:

org.apache.hadoop.hbase.client.NoServerForRegionException: Unable to find region for *t_system_rec,,99999999999999* after 10 tries.
     at org.apache.hadoop.hbase.client.HTableFactory.createHTableInterface(HTableFactory.java:38)
     at org.apache.hadoop.hbase.client.HTablePool.createHTable(HTablePool.java:265)
     at org.apache.hadoop.hbase.client.HTablePool.findOrCreateTable(HTablePool.java:195)
     at org.apache.hadoop.hbase.client.HTablePool.getTable(HTablePool.java:174)

But after restarting and rerunning the application, it disappeared.
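
For context, the HTablePool.getTable() call in the frames above is where the client does its region lookup; a minimal sketch of that call path, assuming the 0.94-style HTablePool API shown in the trace (the pool size and the explicit retry setting are illustrative, not taken from the thread):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTableInterface;
import org.apache.hadoop.hbase.client.HTablePool;

public class TablePoolSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // "after 10 tries" in the error corresponds to hbase.client.retries.number;
        // raising it only hides the problem if the region really is unassigned.
        conf.setInt("hbase.client.retries.number", 10);
        HTablePool pool = new HTablePool(conf, 10); // pool size is arbitrary here
        // getTable() triggers the region location lookup; NoServerForRegionException
        // is thrown here when no server hosting the region can be found.
        HTableInterface table = pool.getTable("t_system_rec");
        try {
            // ... reads/writes against t_system_rec ...
        } finally {
            table.close(); // returns the table to the pool
            pool.close();
        }
    }
}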

On 9/26/2013 11:03 AM, Ted Yu wrote:
> Can you check NameNode log ?
>
> What Hadoop / HBase releases are you using ?
>
> Thanks
>
> On Sep 25, 2013, at 7:52 PM, kun yan <[EMAIL PROTECTED]> wrote:
>
>> I checked the region server logs.
>> What should I do? I only know a little bit about HLog.
>>
>> 2013-09-26 10:37:13,478 WARN org.apache.hadoop.hbase.util.FSHDFSUtils:
>> Cannot recoverLease after trying for 900000ms
>> (hbase.lease.recovery.timeout); continuing, but may be DATALOSS!!!;
>> attempt=16 on
>> file=hdfs://hydra0001:8020/hbase/.logs/hydra0006,60020,1379926437471-splitting/hydra0006%2C60020%2C1379926437471.1380157500804
>> after 921109ms
>> 2013-09-26 10:37:13,519 WARN org.apache.hadoop.hbase.regionserver.wal.HLog:
>> Lease should have recovered. This is not expected. Will retry
>> java.io.IOException: Cannot obtain block length for
>> LocatedBlock{BP-1087715125-192.5.1.50-1378889582109:blk_-8658284328699269340_21570;
>> getBlockSize()=0; corrupt=false; offset=0; locs=[192.5.1.56:50010,
>> 192.5.1.52:50010, 192.5.1.55:50010]}
>>         at org.apache.hadoop.hdfs.DFSInputStream.readBlockLength(DFSInputStream.java:319)
>>         at org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:263)
>>         at org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:205)
>>         at org.apache.hadoop.hdfs.DFSInputStream.<init>(DFSInputStream.java:198)
>>         at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1117)
>>         at org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:249)
>>         at org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:82)
>>         at org.apache.hadoop.io.SequenceFile$Reader.openFile(SequenceFile.java:1787)
>>         at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader$WALReader.openFile(SequenceFileLogReader.java:62)
>>         at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1707)
>>         at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1728)
>>         at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader$WALReader.<init>(SequenceFileLogReader.java:55)
>>         at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader.init(SequenceFileLogReader.java:177)
>>         at org.apache.hadoop.hbase.regionserver.wal.HLog.getReader(HLog.java:713)
>>         at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getReader(HLogSplitter.java:846)
>>         at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getReader(HLogSplitter.java:759)
>>         at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:403)
>>         at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:371)
>>         at org.apache.hadoop.hbase.regionserver.SplitLogWorker$1.exec(SplitLogWorker.java:115)
>>         at org.apache.hadoop.hbase.regionserver.SplitLogWorker.grabTask(SplitLogWorker.java:283)
>>         at org.apache.hadoop.hbase.regionserver.SplitLogWorker.taskLoop(SplitLogWorker.java:214)
>>         at org.apache.hadoop.hbase.regionserver.SplitLogWorker.run(SplitLogWorker.java:182)
>>         at java.lang.Thread.run(Thread.java:722)
>> 2013-09-26 10:40:05,900 DEBUG
>> org.apache.hadoop.hbase.io.hfile.LruBlockCache: Stats: total=2.02 MB,
Best Regards, Julian
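
For reference on the "Cannot recoverLease" warning quoted above: the 900000ms there is hbase.lease.recovery.timeout, the time HBase waits for the NameNode to close the WAL file that the dead region server was still writing; until that happens, the reader hits the "Cannot obtain block length" IOException in the trace. A minimal sketch of triggering that lease recovery by hand from a client, assuming an HDFS build that exposes DistributedFileSystem.recoverLease() (the file path is passed as an argument, e.g. the -splitting WAL named in the warning); this is illustrative, not a recommendation from the thread:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class RecoverLeaseSketch {
    public static void main(String[] args) throws Exception {
        // args[0]: the stuck WAL, e.g. the .../-splitting/... file from the warning
        Path wal = new Path(args[0]);
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(wal.toUri(), conf);
        if (fs instanceof DistributedFileSystem) {
            // recoverLease() asks the NameNode to close the file on behalf of the
            // dead writer; it returns true once the file is closed and its final
            // block length is known.
            boolean closed = ((DistributedFileSystem) fs).recoverLease(wal);
            System.out.println("lease recovered: " + closed);
        }
    }
}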