HBase, mail # user - Error about rs block seek


Bing Jiang 2013-05-13, 01:25
Ted Yu 2013-05-13, 01:30
Bing Jiang 2013-05-13, 01:38
Ted Yu 2013-05-13, 01:46
Jean-Marc Spaggiari 2013-05-13, 01:44
Bing Jiang 2013-05-13, 01:47
Bing Jiang 2013-05-13, 01:50
ramkrishna vasudevan 2013-05-13, 03:04
Bing Jiang 2013-05-13, 06:30
Re: Error about rs block seek
Anoop John 2013-05-13, 07:47
> Current pos = 32651; currKeyLen = 45; currValLen = 80; block limit = 32775

This means that after the current position we need at least 45 + 80 + 4 (key
length stored as 4 bytes) + 4 (value length stored as 4 bytes) more bytes,
so the limit should have been at least 32784. If a memstoreTS is also
written with this KV, a few more bytes are needed.

Do you use HBase-handled checksums?

-Anoop-

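For reference, a minimal standalone sketch of the arithmetic above (plain Java, not the actual HFileReaderV2 code; the class and method names are made up for illustration), using the numbers from the error log:

// Sketch of the bounds check described above, for one KeyValue in a block buffer.
public class BlockSeekArithmetic {

    // Minimum buffer limit needed to read the next KV starting at currentPos:
    // 4 bytes key length + 4 bytes value length + key bytes + value bytes.
    // A memstoreTS written with the KV would add a few more (vint) bytes.
    static int requiredLimit(int currentPos, int keyLen, int valLen) {
        return currentPos + 4 + 4 + keyLen + valLen;
    }

    public static void main(String[] args) {
        int pos = 32651, keyLen = 45, valLen = 80, blockLimit = 32775; // values from the log
        int required = requiredLimit(pos, keyLen, valLen);             // 32784
        System.out.println("required limit = " + required
                + ", actual block limit = " + blockLimit
                + (required > blockLimit ? " -> block looks truncated or corrupt" : " -> ok"));
    }
}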
On Mon, May 13, 2013 at 12:00 PM, Bing Jiang <[EMAIL PROTECTED]> wrote:

> Hi, all.
> Before the exception stack, there is an error log:
> 2013-05-13 00:00:14,491 ERROR
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2: Current pos = 32651;
> currKeyLen = 45; currValLen = 80; block limit = 32775; HFile name =
> 1f96183d55144c058fa2a05fe5c0b814; currBlock currBlockOffset = 33550830
>
> The operation is the scanner's next().
> Current pos + currKeyLen + currValLen > block limit:
> 32651 + 45 + 80 = 32776 > 32775. In my table config the blocksize is set to
> 32768; since I changed the blocksize from 64k (the default value) to 32k,
> many of these error logs have appeared.
>
> I use 0.94.3. Can someone tell me the influence of the blocksize setting?
>
> Thanks.
>
>
>
>
> 2013/5/13 ramkrishna vasudevan <[EMAIL PROTECTED]>
>
> > Your TTL is negative here: 'TTL => '-1''.
> >
> > Is there any reason for it to be negative? This could be a possible
> > cause, though I am not sure.
> >
> > Regards
> > Ram
> >
> >
> > On Mon, May 13, 2013 at 7:20 AM, Bing Jiang <[EMAIL PROTECTED]> wrote:
> >
> > > Hi, Ted.
> > >
> > > No data block encoding; our table config is below:
> > >
> > > User Table Description
> > > CrawlInfo <http://10.100.12.33:8003/table.jsp?name=CrawlInfo>
> > > {NAME => 'CrawlInfo', DEFERRED_LOG_FLUSH => 'true',
> > >  MAX_FILESIZE => '34359738368',
> > >  FAMILIES => [{NAME => 'CrawlStats', BLOOMFILTER => 'ROWCOL',
> > >    CACHE_INDEX_ON_WRITE => 'true', TTL => '-1',
> > >    CACHE_DATA_ON_WRITE => 'true', CACHE_BLOOMS_ON_WRITE => 'true',
> > >    VERSIONS => '1', BLOCKSIZE => '32768'}]}
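As an aside, a minimal sketch of how the TTL and block size in this family could be set back to safer values through the 0.94-era Java client API (the table and family names are taken from the description above; note that modifyColumn replaces the whole family descriptor, so the bloom filter and cache-on-write flags would need to be re-applied as well):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.client.HBaseAdmin;

// Sketch: rebuild the 'CrawlStats' family descriptor with an explicit TTL
// (HConstants.FOREVER instead of -1) and the default 64k block size.
public class AlterCrawlInfo {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HBaseAdmin admin = new HBaseAdmin(conf);

        HColumnDescriptor family = new HColumnDescriptor("CrawlStats");
        family.setMaxVersions(1);
        family.setTimeToLive(HConstants.FOREVER); // "never expire" instead of -1
        family.setBlocksize(64 * 1024);           // back to the default block size

        admin.disableTable("CrawlInfo");          // 0.94 alters generally need the table offline
        admin.modifyColumn("CrawlInfo", family);
        admin.enableTable("CrawlInfo");
        admin.close();
    }
}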
> > >
> > >
> > >
> > > 2013/5/13 Bing Jiang <[EMAIL PROTECTED]>
> > >
> > > > Hi, JM.
> > > > Our JDK version is 1.6.0_38.
> > > >
> > > >
> > > > 2013/5/13 Jean-Marc Spaggiari <[EMAIL PROTECTED]>
> > > >
> > > >> Hi Bing,
> > > >>
> > > >> Which JDK are you using?
> > > >>
> > > >> Thanks,
> > > >>
> > > >> JM
> > > >>
> > > >> 2013/5/12 Bing Jiang <[EMAIL PROTECTED]>
> > > >>
> > > >> > Yes, we use hbase-0.94.3, and we changed the block size from 64k
> > > >> > to 32k.
> > > >> >
> > > >> >
> > > >> > 2013/5/13 Ted Yu <[EMAIL PROTECTED]>
> > > >> >
> > > >> > > Can you tell us the version of HBase you are using?
> > > >> > > Did this problem happen recently?
> > > >> > >
> > > >> > > Thanks
> > > >> > >
> > > >> > > On May 12, 2013, at 6:25 PM, Bing Jiang <[EMAIL PROTECTED]> wrote:
> > > >> > >
> > > >> > > > Hi, all.
> > > >> > > > In our HBase cluster there are many logs like the one below:
> > > >> > > >
> > > >> > > > 2013-05-13 00:00:04,161 ERROR
> > > >> > > > org.apache.hadoop.hbase.regionserver.HRegionServer:
> > > >> > > > java.lang.IllegalArgumentException
> > > >> > > >         at java.nio.Buffer.position(Buffer.java:216)
> > > >> > > >         at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.blockSeek(HFileReaderV2.java:882)
> > > >> > > >         at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.loadBlockAndSeekToKey(HFileReaderV2.java:753)
> > > >> > > >         at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:487)
> > > >> > > >         at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:501)
> > > >> > > >         at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:226)
> > > >> > > >         at
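For context, java.nio.Buffer.position(int) throws IllegalArgumentException whenever the requested position exceeds the buffer's limit, which is what blockSeek hits when the computed end of the next key/value lands past the block limit. A standalone illustration (not HBase code), reusing the limit from the error log:

import java.nio.ByteBuffer;

// Standalone illustration: Buffer.position(int) rejects any position beyond
// the current limit with IllegalArgumentException, the same exception seen
// in the blockSeek stack trace above.
public class PositionBeyondLimit {
    public static void main(String[] args) {
        ByteBuffer block = ByteBuffer.allocate(64 * 1024); // backing buffer
        block.limit(32775);                                // block limit from the error log
        try {
            block.position(32776);                         // pos + key + value would end past the limit
        } catch (IllegalArgumentException e) {
            System.out.println("position beyond limit rejected: " + e);
        }
    }
}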
Bing Jiang 2013-05-13, 08:06
Anoop John 2013-05-13, 08:27
ramkrishna vasudevan 2013-05-13, 08:34