HBase user mailing list - Error about rs block seek


Re: Error about rs block seek
Hi, Anoop.
I do not handle or change the HBase checksum setting.

So what I want to know is: if I set the block size when creating the table,
can that cause problems?
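
For context, a minimal sketch (assuming the 0.94-era Java client API; the table
and family names are the ones from the descriptor quoted later in this thread)
of how a per-family block size is usually set when the table is created:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;

public class CreateTableWithBlocksize {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HBaseAdmin admin = new HBaseAdmin(conf);

        // Column family with an explicit 32k block size (the default is 64k).
        HColumnDescriptor family = new HColumnDescriptor("CrawlStats");
        family.setBlocksize(32 * 1024);
        // Use FOREVER rather than a negative value such as -1 for the TTL.
        family.setTimeToLive(HConstants.FOREVER);

        HTableDescriptor table = new HTableDescriptor("CrawlInfo");
        table.addFamily(family);
        admin.createTable(table);
        admin.close();
    }
}

The BLOCKSIZE => '32768' entry in the table descriptor quoted later corresponds
to the setBlocksize(32 * 1024) call above.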
2013/5/13 Anoop John <[EMAIL PROTECTED]>

> > Current pos = 32651;
> > currKeyLen = 45; currValLen = 80; block limit = 32775
>
> This means that after the current position we need at least 45 + 80 + 4 (key
> length stored as 4 bytes) + 4 (value length stored as 4 bytes) more bytes,
> so the limit should have been at least 32784. If a memstoreTS is also
> written with this KV, a few more bytes are needed.
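
Spelling that check out with the numbers from the log line (an illustration
only, not the actual HFileReaderV2 code):

public class BlockSeekArithmetic {
    public static void main(String[] args) {
        int currentPos = 32651;   // Current pos
        int currKeyLen = 45;      // currKeyLen
        int currValLen = 80;      // currValLen
        int lengthFields = 4 + 4; // key length and value length, 4 bytes each

        int requiredLimit = currentPos + currKeyLen + currValLen + lengthFields;
        int reportedLimit = 32775; // block limit from the log

        // Prints "required=32784 limit=32775": the buffer is at least 9 bytes
        // short (more if a memstoreTS is serialized with the KeyValue), which
        // is why java.nio.Buffer.position() throws IllegalArgumentException.
        System.out.println("required=" + requiredLimit + " limit=" + reportedLimit);
    }
}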
>
> Do you use the HBase-handled checksum feature?
>
> -Anoop-
>
> On Mon, May 13, 2013 at 12:00 PM, Bing Jiang <[EMAIL PROTECTED]
> >wrote:
>
> > Hi, all.
> > Before the exception stack, there is an ERROR log:
> > 2013-05-13 00:00:14,491 ERROR
> > org.apache.hadoop.hbase.io.hfile.HFileReaderV2: Current pos = 32651;
> > currKeyLen = 45; currValLen = 80; block limit = 32775; HFile name =
> > 1f96183d55144c058fa2a05fe5c0b814; currBlock currBlockOffset = 33550830
> >
> > And the operation is the scanner's next().
> > Current pos + currKeyLen + currValLen > block limit:
> > 32651 + 45 + 80 = 32776 > 32775. In my table configs the blocksize is set
> > to 32768; after I changed the blocksize from 64k (the default value) to
> > 32k, many of these error logs appeared.
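
One way to move the family back to the 64k default on the existing table is a
schema modification; a rough sketch with the 0.94 client follows (illustrative
only, not the reporter's actual procedure; disable/enable is shown because
online schema change is off by default in 0.94):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.util.Bytes;

public class RestoreDefaultBlocksize {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HBaseAdmin admin = new HBaseAdmin(conf);

        // Fetch the existing family descriptor and only change the block size.
        HColumnDescriptor family = admin
            .getTableDescriptor(Bytes.toBytes("CrawlInfo"))
            .getFamily(Bytes.toBytes("CrawlStats"));
        family.setBlocksize(64 * 1024); // back to the 64k default

        admin.disableTable("CrawlInfo");
        admin.modifyColumn(Bytes.toBytes("CrawlInfo"), family);
        admin.enableTable("CrawlInfo");
        admin.close();
    }
}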
> >
> > I use 0.94.3. Can someone tell me the influence of the blocksize setting?
> >
> > Tks.
> >
> >
> >
> >
> > 2013/5/13 ramkrishna vasudevan <[EMAIL PROTECTED]>
> >
> > > Your TTL is negative here: 'TTL => '-1','.
> > >
> > > Any reason for it to be negative? That could be a possible cause. Not
> > > sure..
> > >
> > > Regards
> > > Ram
> > >
> > >
> > > On Mon, May 13, 2013 at 7:20 AM, Bing Jiang <[EMAIL PROTECTED]
> > > >wrote:
> > >
> > > > Hi, Ted.
> > > >
> > > > No data block encoding; our table config is below:
> > > >
> > > > User Table Description
> > > > CrawlInfo <http://10.100.12.33:8003/table.jsp?name=CrawlInfo>
> > > > {NAME => 'CrawlInfo', DEFERRED_LOG_FLUSH => 'true',
> > > >  MAX_FILESIZE => '34359738368',
> > > >  FAMILIES => [{NAME => 'CrawlStats', BLOOMFILTER => 'ROWCOL',
> > > >   CACHE_INDEX_ON_WRITE => 'true', TTL => '-1',
> > > >   CACHE_DATA_ON_WRITE => 'true', CACHE_BLOOMS_ON_WRITE => 'true',
> > > >   VERSIONS => '1', BLOCKSIZE => '32768'}]}
> > > >
> > > >
> > > >
> > > > 2013/5/13 Bing Jiang <[EMAIL PROTECTED]>
> > > >
> > > > > Hi, JM.
> > > > > Our JDK version is 1.6.0_38.
> > > > >
> > > > >
> > > > > 2013/5/13 Jean-Marc Spaggiari <[EMAIL PROTECTED]>
> > > > >
> > > > >> Hi Bing,
> > > > >>
> > > > >> Which JDK are you using?
> > > > >>
> > > > >> Thanks,
> > > > >>
> > > > >> JM
> > > > >>
> > > > >> 2013/5/12 Bing Jiang <[EMAIL PROTECTED]>
> > > > >>
> > > > >> > Yes, we use hbase-0.94.3, and we changed the block size from 64k
> > > > >> > to 32k.
> > > > >> >
> > > > >> >
> > > > >> > 2013/5/13 Ted Yu <[EMAIL PROTECTED]>
> > > > >> >
> > > > >> > > Can you tell us the version of HBase you are using?
> > > > >> > > Did this problem happen recently?
> > > > >> > >
> > > > >> > > Thanks
> > > > >> > >
> > > > >> > > On May 12, 2013, at 6:25 PM, Bing Jiang <
> > [EMAIL PROTECTED]
> > > >
> > > > >> > wrote:
> > > > >> > >
> > > > >> > > > Hi, all.
> > > > >> > > > In our HBase cluster, there are many logs like the one below:
> > > > >> > > >
> > > > >> > > > 2013-05-13 00:00:04,161 ERROR
> > > > >> > > > org.apache.hadoop.hbase.regionserver.HRegionServer:
> > > > >> > > > java.lang.IllegalArgumentException
> > > > >> > > >         at java.nio.Buffer.position(Buffer.java:216)
> > > > >> > > >         at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.blockSeek(HFileReaderV2.java:882)
> > > > >> > > >         at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.loadBlockAndSeekToKey(HFileReaderV2.java:753)
> > > > >> > > >         at

Bing Jiang
Tel:(86)134-2619-1361
weibo: http://weibo.com/jiangbinglover
BLOG: http://blog.sina.com.cn/jiangbinglover
National Research Center for Intelligent Computing Systems
Institute of Computing Technology
Graduate University of Chinese Academy of Sciences