Re: hbase-0.89/trunk: org.apache.hadoop.fs.ChecksumException: Checksum error
But yesterday HBase was 0.20.6, and the exceptions were different.

From my previous email:
I need to do a massive data rewrite in one column family on a standalone
server. I get org.apache.hadoop.hbase.NotServingRegionException
or java.io.IOException: Region xxx closed if I write and read at the same time.
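As an aside, the retry approach mentioned later in this thread (the RetryResultIterator) can be sketched generically. This is a minimal illustration under assumed names, not the HBase API: the scanner is modeled as a factory that can reopen an iterator from the last successfully returned element (a real HBase version would reopen a ResultScanner with Scan.setStartRow just past the last row key).

```java
import java.util.Iterator;
import java.util.NoSuchElementException;
import java.util.function.Function;

/**
 * Generic sketch of a retrying iterator (hypothetical names, not the HBase API).
 * The reopen function receives the last element successfully returned
 * (or null on first open) and produces a fresh iterator resuming after it.
 */
public class RetryingIterator<T> implements Iterator<T> {
    private final Function<T, Iterator<T>> reopen;
    private final int maxRetries;
    private Iterator<T> delegate;
    private T last;       // last element successfully returned to the caller
    private int retries;

    public RetryingIterator(Function<T, Iterator<T>> reopen, int maxRetries) {
        this.reopen = reopen;
        this.maxRetries = maxRetries;
        this.delegate = reopen.apply(null);   // initial open, from the start
    }

    @Override
    public boolean hasNext() {
        while (true) {
            try {
                return delegate.hasNext();
            } catch (RuntimeException e) {
                recover(e);
            }
        }
    }

    @Override
    public T next() {
        while (true) {
            try {
                last = delegate.next();
                return last;
            } catch (NoSuchElementException end) {
                throw end;                     // normal end of iteration, do not retry
            } catch (RuntimeException e) {
                recover(e);
            }
        }
    }

    private void recover(RuntimeException e) {
        if (++retries > maxRetries) throw e;   // give up after maxRetries attempts
        delegate = reopen.apply(last);         // reopen, resuming past the last good element
    }
}
```

The key design point is that the wrapper remembers the last element it handed out, so a reopened scan does not re-deliver rows the caller already consumed.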

2010/9/22 Andrey Stepachev <[EMAIL PROTECTED]>:
> HP ProLiant, RAID 10 with 4 SAS 15k disks, SmartArray 6i controller, 2 CPUs / 4 cores.
> 2010/9/22 Ryan Rawson <[EMAIL PROTECTED]>:
>> Generally checksum errors are due to hardware faults of one kind or another.
>> What is your hardware like?
>> On Wed, Sep 22, 2010 at 2:08 AM, Andrey Stepachev <[EMAIL PROTECTED]> wrote:
>>> But why is it bad? Split/compaction? I made my own RetryResultIterator,
>>> which reopens the scanner on timeout. But what is the best way to reopen a scanner?
>>> Can you point me to where I can find all these exceptions? Or is there
>>> already some sort of recoverable iterator?
>>> 2010/9/22 Ryan Rawson <[EMAIL PROTECTED]>:
>>>> Ah ok, I think I get it... basically at this point your scanner is bad,
>>>> and iterating on it again won't work. The scanner should probably
>>>> close itself so you don't keep getting tons of additional exceptions,
>>>> but currently we don't do that.
>>>> There is probably a better fix for this; I'll ponder it.
>>>> On Wed, Sep 22, 2010 at 1:57 AM, Ryan Rawson <[EMAIL PROTECTED]> wrote:
>>>>> Very strange... it looks like a bad block ended up in your scanner, and
>>>>> subsequent next() calls were failing due to that short read.
>>>>> Did you have to kill the regionserver, or did things recover and
>>>>> continue normally?
>>>>> -ryan
>>>>> On Wed, Sep 22, 2010 at 1:37 AM, Andrey Stepachev <[EMAIL PROTECTED]> wrote:
>>>>>> Hi All.
>>>>>> I get org.apache.hadoop.fs.ChecksumException for a table under heavy
>>>>>> write in standalone mode.
>>>>>> The table tmp.bsn.main was created at 2010-09-22 10:42:28,860, and then
>>>>>> 5 threads write data to it.
>>>>>> At some moment the exception is thrown.
>>>>>> Andrey.
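For context on the ChecksumException itself: HDFS stores a CRC-32 checksum for every chunk of data (512 bytes by default) and raises the exception when the checksum recomputed on read no longer matches the stored one. That is why a single flipped bit, typically from a faulty disk, controller, or memory, surfaces this way. A minimal illustration of the mechanism with java.util.zip.CRC32 (the demo data is made up; this is not Hadoop's actual read path):

```java
import java.util.zip.CRC32;

public class ChecksumDemo {
    // Compute the CRC-32 of a byte chunk, analogous to HDFS's per-chunk checksum.
    static long crc(byte[] chunk) {
        CRC32 crc = new CRC32();
        crc.update(chunk, 0, chunk.length);
        return crc.getValue();
    }

    public static void main(String[] args) {
        byte[] chunk = "some hfile block bytes".getBytes();
        long stored = crc(chunk);          // checksum written alongside the data

        chunk[3] ^= 0x01;                  // simulate a single flipped bit on disk
        long recomputed = crc(chunk);

        // On a real read, stored != recomputed is what triggers ChecksumException.
        System.out.println(stored != recomputed);  // prints "true"
    }
}
```

CRC-32 detects every single-bit error, so any one flipped bit is guaranteed to produce a mismatch; that makes it a reliable tripwire for the kind of hardware corruption discussed above.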