Re: java.io.IOException key k1 followed by a smaller key k2
Hi Lars,

That's exactly what I will be working on. I will update you as soon as I
have that code. I'm travelling today, so it might be towards the end of the
week.

Thanks for the help,
Mohamed

On Mon, Sep 17, 2012 at 12:06 AM, lars hofhansl <[EMAIL PROTECTED]> wrote:

> It would be good to track this down. Is there any way you can share the tools
> you use to load the data?
> Is it easy for you to reproduce this problem?
>
> It's possible (but not likely) that there is a bug in Hadoop 1.0. You
> should use 1.0.3.
>
> -- Lars
>
>
>
> ________________________________
>  From: Mohamed Ibrahim <[EMAIL PROTECTED]>
> To: [EMAIL PROTECTED]; lars hofhansl <[EMAIL PROTECTED]>; Stack <
> [EMAIL PROTECTED]>
> Sent: Sunday, September 16, 2012 7:13 PM
> Subject: Re: java.io.IOException key k1 followed by a smaller key k2
>
>
> Hello Lars / Stack,
>
> Thank you for responding.
>
> The date on the files is March 9th, 2012. The cluster has been up since then;
> I restarted HBase and Hadoop once. I only have a single node that I'm running
> my tests on. I'm currently running 0.92.1 on Hadoop 1.0. I hope I'm using the
> correct mix.
>
> I'm not using any external tools other than the HBase Java API, and I inspect
> the data using the shell. I ran my program, and the following day I found the
> stack dump on the console. I checked by scanning the table that had the
> exception, starting at the smaller key k1 with a limit of 2 from the shell,
> and no exceptions were thrown. I can also see that the following key is larger
> than the first one, so nothing looks wrong there.
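>
> To be concrete, the scan I ran from the shell looked roughly like this (the
> table name and key here are placeholders, not my real ones):
>
>     hbase> scan 'mytable', {STARTROW => 'k1', LIMIT => 2}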
>
> I faced the same exception before, and it happened more frequently when I did
> a lot of Puts and Deletes using HTable.batch. Essentially, I was updating
> several inverted indexes, each in its own table, from the data of rows as they
> were inserted, and instead of doing single Deletes and Puts on the indexes I
> used batch. batch improved the performance but threw this exception more
> often. I stopped using batch and am now doing single Puts and Deletes; the
> exception is thrown only rarely now, but it still shows up sporadically.
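>
> For reference, a rough sketch of the kind of batched index update I mean (the
> table, family, qualifier, and key names are placeholders, not my real schema;
> it uses org.apache.hadoop.hbase.client.{HTable, Put, Delete, Row} and
> org.apache.hadoop.hbase.util.Bytes):
>
>     HTable indexTable = new HTable(conf, "name_index");
>     List<Row> ops = new ArrayList<Row>();
>     // drop the stale index entry and add the new one in a single round trip
>     ops.add(new Delete(Bytes.toBytes("old-index-key")));
>     Put put = new Put(Bytes.toBytes("new-index-key"));
>     put.add(Bytes.toBytes("d"), Bytes.toBytes("row"), Bytes.toBytes("data-row-key"));
>     ops.add(put);
>     indexTable.batch(ops);
>     // the single-op version calls indexTable.delete(...) and indexTable.put(...)
>     // separately instead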
>
> I will read about the hfile tool; thanks for the pointers. I will also try to
> figure out a set of steps that reproduces the exception so it is more helpful.
> I will also try 0.94.1 with batch and see if it happens again; I will let you
> know and will file a bug if I can reproduce it consistently.
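>
> If I understand the pointers correctly, checking one of the store files would
> look something like this (the path is a placeholder, and I believe -k verifies
> the key ordering while -p prints the key/values; I have not run it yet):
>
>     hbase org.apache.hadoop.hbase.io.hfile.HFile -f /hbase/mytable/<region>/<cf>/<hfile> -k -p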
>
> Thank you,
> Mohamed Ibrahim
>
>
> On Sun, Sep 16, 2012 at 7:33 PM, lars hofhansl <[EMAIL PROTECTED]>
> wrote:
>
> >Hmm... HBASE-6579 gets rid of that check, because we thought it was no longer
> >necessary.
> >Do you remember what you did leading up to this?
> >Did you write these HFiles with some other tool? Did you do some bulk import,
> >etc.?
> >
> >
> >-- Lars
> >
> >
> >
> >________________________________
> > From: Mohamed Ibrahim <[EMAIL PROTECTED]>
> >To: [EMAIL PROTECTED]
> >Sent: Sunday, September 16, 2012 5:59 AM
> >Subject: java.io.IOException key k1 followed by a smaller key k2
> >
> >
> >Hello All,
> >
> >I am using HBase 0.92.1 on Hadoop 1.0. I am getting these exceptions, and it
> >seems to me that this means the HFile is not sorted in order, so when the
> >scanner goes through it, it finds a smaller key after its current one.
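> >
> >To make "followed by a smaller key" concrete, here is a minimal illustration
> >with made-up row keys (using org.apache.hadoop.hbase.util.Bytes):
> >
> >    byte[] k1 = Bytes.toBytes("row-0010");
> >    byte[] k2 = Bytes.toBytes("row-0002");
> >    // Bytes.compareTo(k1, k2) > 0, i.e. k2 sorts before k1, so a scanner
> >    // that reads k2 after k1 sees the sort order violated and throws.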
> >
> >Is that related to https://issues.apache.org/jira/browse/HBASE-6579 ??
> >
> >It looks like upgrading to 0.94.1 (current stable) won't fix the issue. Any
> >recommendations?
> >
> >Here is the stack dump:
> >        at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:266)
> >        at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
> >        at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
> >        at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
> >        at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
> >        at org.apache.hadoop.hbase.regionserver.HRegion.getLastIncrement(HRegion.java:3660)