HBase, mail # user - Never ending major compaction?


Re: Never ending major compaction?
Ted Yu 2013-01-02, 06:07
bq. It took about 6h to complete.

If the above behavior is reproducible, we should investigate more deeply.

Thanks for sharing.

On Tue, Jan 1, 2013 at 6:14 PM, Jean-Marc Spaggiari <[EMAIL PROTECTED]> wrote:

> Yes, I'm running on 0.94.3. The last major compaction ran yesterday.
> It's almost daily. That's why I was surprised it took so long. I mean,
> I'm only compacting regions that moved, so it should be pretty quick.
> But that was not the case. It took about 6h to complete. Strange. Maybe
> something went wrong when I stopped/started HBase.
>
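
A minimal sketch of issuing such per-region major compactions from the 0.94 Java client API. The class name and command-line handling here are illustrative only; selecting which regions moved is assumed to happen elsewhere, and each argument is expected to be a region name:

    import java.io.IOException;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HBaseAdmin;

    // Hypothetical helper: queue a major compaction for each region name
    // passed on the command line (e.g. regions that were recently moved).
    public class CompactMovedRegions {
        public static void main(String[] args) throws IOException, InterruptedException {
            Configuration conf = HBaseConfiguration.create();
            HBaseAdmin admin = new HBaseAdmin(conf);
            try {
                for (String regionName : args) {
                    // The request is asynchronous: it is queued on the region
                    // server hosting the region and returns immediately.
                    admin.majorCompact(regionName);
                }
            } finally {
                admin.close();
            }
        }
    }

Since majorCompact() only queues the request, the 6h observed here would be the time the region servers take to work through their compaction queues rather than time spent blocked in the client.
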
> Also, there was almost no activity on the network or on the CPUs. I
> will have to add disk monitoring in Ganglia to see if I was limited
> by I/O...
>
> I looked at the region server logs and everything was fine. They were
> showing some compaction information every few seconds.
>
> JM
>
> 2013/1/1, Ted <[EMAIL PROTECTED]>:
> > You're on HBase 0.94.3, right?
> >
> > When was the last time a major compaction ran?
> >
> > Compaction is a region server activity, so you should be able to find
> > some clue in the region server logs.
> >
> > Cheers
> >
> > On Jan 1, 2013, at 11:42 AM, Jean-Marc Spaggiari <[EMAIL PROTECTED]> wrote:
> >
> >> Hi,
> >>
> >> I have a table with about 200,000 rows; the key is a string[64] (ish)
> >> and the value is a string[512].
> >>
> >> It's split over 16 regions located on 7 region servers.
> >>
> >> So it's not a big table, and there is a lot of horsepower behind it.
> >>
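
As a rough back-of-envelope check of "it's not a big table" (raw key/value bytes only; HBase KeyValue and HFile overhead are deliberately ignored here):

    // Back-of-envelope estimate for the table described above.
    public class TableSizeEstimate {
        public static void main(String[] args) {
            long rows = 200000L;
            long bytesPerRow = 64 + 512;                // ~key + ~value
            double totalMb = rows * bytesPerRow / 1e6;  // ~115 MB in total
            System.out.printf("total ~%.0f MB, ~%.1f MB per region%n",
                    totalMb, totalMb / 16);             // 16 regions -> ~7 MB each
        }
    }

Roughly 115 MB in total, or about 7 MB per region, which is indeed tiny for 7 region servers.
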
> >> I triggered a major compaction a few hours ago. Let's say about 5 hours
> >> ago, and it's still compacting! But all server activity seems to be
> >> nil. CPU usage is almost 0.
> >>
> >> There is nothing in the master logs.
> >>
> >> How can I see what's going on? Is there a way to see the compaction
> >> queue?
> >>
> >> JM
> >
>
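
On the compaction queue question above: region servers export a compactionQueueSize metric, which should show up in the region server web UI metrics and, per server, in the shell via status 'detailed'. A minimal sketch of reading it over JMX, assuming JMX is enabled on the region server and assuming the 0.94-era MBean name hadoop:service=RegionServer,name=RegionServerStatistics (worth verifying with jconsole on the actual cluster):

    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;

    // Hypothetical probe: print the compaction queue size of one region server.
    // args[0] = region server host, args[1] = its JMX port (an assumption:
    // whatever port hbase-env.sh exposes for the region server process).
    public class CompactionQueueProbe {
        public static void main(String[] args) throws Exception {
            JMXServiceURL url = new JMXServiceURL(
                    "service:jmx:rmi:///jndi/rmi://" + args[0] + ":" + args[1] + "/jmxrmi");
            JMXConnector connector = JMXConnectorFactory.connect(url);
            try {
                MBeanServerConnection mbsc = connector.getMBeanServerConnection();
                // MBean and attribute names below are 0.94-era assumptions.
                ObjectName rs = new ObjectName(
                        "hadoop:service=RegionServer,name=RegionServerStatistics");
                System.out.println("compactionQueueSize = "
                        + mbsc.getAttribute(rs, "compactionQueueSize"));
            } finally {
                connector.close();
            }
        }
    }

A queue size that stays above zero for hours would point at compactions being queued faster than they complete (or at a stuck compaction thread), whereas an empty queue while the request still appears unfinished would suggest the work was already done and only the observation was misleading.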