JD, it's a big problem. The region server holding .META. has 2X the network
traffic and 2X the cpu load; I can easily spot the region server holding
.META. just by looking at the ganglia graphs of the region servers side by
side - I don't need to go to the master console. So we can't scale up the
cluster or add more load, since it's bottlenecked on this one region server.
Thanks Nicolas for the pointer, it seems quite probable that this is the
issue - it was fixed in 0.94.8, which we don't have. I will give it a shot.
On Mon, Jul 29, 2013 at 10:43 AM, Nicolas Liochon <[EMAIL PROTECTED]> wrote:
> It could be HBASE-6870?
> On Mon, Jul 29, 2013 at 7:37 PM, Jean-Daniel Cryans <[EMAIL PROTECTED]
> > Can you tell who's doing it? You could enable IPC debug for a few secs
> > to see who's coming in with scans.
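One way to turn that on briefly (a sketch; the logger name below is the 0.94-era RPC server class and may differ on other builds) is a log4j override on the region server, reverted after a few seconds of traffic:

```properties
# log4j.properties fragment (assumption: org.apache.hadoop.ipc.HBaseServer
# is the RPC server class on this 0.94.x build)
log4j.logger.org.apache.hadoop.ipc.HBaseServer=DEBUG
```

The same logger can usually be flipped at runtime from the region server's /logLevel servlet, so no restart is needed.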
> > You could also try disabling pre-fetching by setting
> > hbase.client.prefetch.limit to 0.
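For reference, that would be a client-side setting; a minimal sketch of the hbase-site.xml entry:

```xml
<!-- hbase-site.xml on the client: disable region location pre-fetching -->
<property>
  <name>hbase.client.prefetch.limit</name>
  <value>0</value>
</property>
```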
> > Also, is it even causing a problem, or are you just worried it might
> > since it doesn't look "normal"?
> > J-D
> > On Mon, Jul 29, 2013 at 10:32 AM, Varun Sharma <[EMAIL PROTECTED]>
> > wrote:
> > > Hi folks,
> > >
> > > We are seeing an issue with hbase 0.94.3 on CDH 4.2.0 with excessive
> > .META.
> > > reads...
> > >
> > > In the steady state where there are no client crashes and there are no
> > > region server crashes/region movement, the server holding .META. is
> > serving
> > > an incredibly large # of read requests on the .META. table.
> > >
> > > From my understanding, in the steady state, region locations should be
> > > cached indefinitely in the client. The client is running a workload of
> > > multiput(s), puts, gets and coprocessor calls.
> > >
> > > Thanks
> > > Varun