Re: .META. region server DDOSed by too many clients
ramkrishna vasudevan 2012-12-06, 10:59
Actually, in our case we observed that the block cache was OFF. If possible,
try applying that patch and see what happens. If you have more memory, also
try increasing the ratio allocated to the block cache.
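
Something like the sketch below is what I mean; the knob is the
hfile.block.cache.size property (normally set in hbase-site.xml and picked up
at region server start), and the 0.4 here is only an illustrative value, not a
recommendation:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class BlockCacheRatio {
    public static void main(String[] args) {
        // hfile.block.cache.size is the fraction of the region server heap
        // reserved for the block cache; it is normally set in hbase-site.xml.
        Configuration conf = HBaseConfiguration.create();
        conf.setFloat("hfile.block.cache.size", 0.4f);   // illustrative value only

        float ratio = conf.getFloat("hfile.block.cache.size", 0.25f);
        long maxHeapMb = Runtime.getRuntime().maxMemory() >> 20;
        System.out.println("Block cache would get roughly "
                + (long) (maxHeapMb * ratio) + " MB of a " + maxHeapMb + " MB heap");
    }
}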

Regards
Ram

On Thu, Dec 6, 2012 at 4:02 PM, Varun Sharma <[EMAIL PROTECTED]> wrote:

> Hi Ram,
>
> Yes, the block cache is on, but there is another in-memory column which might
> be preempting stuff from the block cache. So we might be hitting more disk
> seeks. I see that you have seen this trace before on HBASE-5898 - did that
> issue resolve things for you?
>
> Thanks
> Varun
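
For reference, an in-memory family is declared roughly like the sketch below
(the table and family names here are made up). IN_MEMORY only raises the
family's retention priority inside the one shared LruBlockCache; it is not a
separate cache, so a large, hot in-memory family still competes with .META.
and everything else for the same space:

import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;

public class InMemoryFamilySketch {
    public static void main(String[] args) {
        // Hypothetical table and family names, purely for illustration.
        HTableDescriptor table = new HTableDescriptor("example_table");
        HColumnDescriptor family = new HColumnDescriptor("d");
        // IN_MEMORY raises this family's retention priority inside the one
        // shared LruBlockCache; it is not a separate cache, so its blocks
        // still compete with every other table's blocks for that space.
        family.setInMemory(true);
        table.addFamily(family);
        System.out.println(table);
    }
}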
>
> On Wed, Dec 5, 2012 at 10:04 PM, ramkrishna vasudevan <[EMAIL PROTECTED]> wrote:
>
> > Is the block cache ON? Check out HBASE-5898.
> >
> > Regards
> > Ram
> >
> > > On Thu, Dec 6, 2012 at 9:55 AM, Anoop Sam John <[EMAIL PROTECTED]> wrote:
> >
> > >
> > > >is the META table cached just like other tables
> > > Yes Varun, I think so.
> > >
> > > -Anoop-
> > > ________________________________________
> > > From: Varun Sharma [[EMAIL PROTECTED]]
> > > Sent: Thursday, December 06, 2012 6:10 AM
> > > To: [EMAIL PROTECTED]; lars hofhansl
> > > Subject: Re: .META. region server DDOSed by too many clients
> > >
> > > We only see this on the .META. region, not otherwise...
> > >
> > > On Wed, Dec 5, 2012 at 4:37 PM, Varun Sharma <[EMAIL PROTECTED]> wrote:
> > >
> > > > I see, but is this pointing to the fact that we are heading to disk for
> > > > scanning .META.? If yes, that would be pretty bad, no? Currently I am
> > > > trying to see if the freeze coincides with the block cache being full (we
> > > > have an in-memory column). Is the .META. table cached just like other
> > > > tables?
> > > >
> > > > Varun
> > > >
> > > >
> > > > On Wed, Dec 5, 2012 at 4:20 PM, lars hofhansl <[EMAIL PROTECTED]> wrote:
> > > >
> > > >> Looks like you're running into HBASE-5898.
> > > >>
> > > >>
> > > >>
> > > >> ----- Original Message -----
> > > >> From: Varun Sharma <[EMAIL PROTECTED]>
> > > >> To: [EMAIL PROTECTED]
> > > >> Cc:
> > > >> Sent: Wednesday, December 5, 2012 3:51 PM
> > > >> Subject: .META. region server DDOSed by too many clients
> > > >>
> > > >> Hi,
> > > >>
> > > >> I am running HBase 0.94.0 and I have a significant write load being put
> > > >> on a table with 98 regions on a 15 node cluster; this write load comes
> > > >> from a very large number of clients (~1000). I am running with 10
> > > >> priority IPC handlers and 200 IPC handlers. It seems the region server
> > > >> holding .META. is DDOSed. All the 200 handlers are busy serving the
> > > >> .META. region and they are all locked onto one object. The jstack for
> > > >> the region server is here:
> > > >>
> > > >> "IPC Server handler 182 on 60020" daemon prio=10 tid=0x00007f329872c800 nid=0x4401 waiting on condition [0x00007f328807f000]
> > > >>    java.lang.Thread.State: WAITING (parking)
> > > >>         at sun.misc.Unsafe.park(Native Method)
> > > >>         - parking to wait for  <0x0000000542d72e30> (a java.util.concurrent.locks.ReentrantLock$NonfairSync)
> > > >>         at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
> > > >>         at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:838)
> > > >>         at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:871)
> > > >>         at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1201)
> > > >>         at java.util.concurrent.locks.ReentrantLock$NonfairSync.lock(ReentrantLock.java:214)
> > > >>         at java.util.concurrent.locks.ReentrantLock.lock(ReentrantLock.java:290)
> > > >>         at java.util.concurrent.ConcurrentHashMap$Segment.put(ConcurrentHashMap.java:445)
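
For what it is worth, the pattern in the stack above (every handler queuing
behind one lock while a block is fetched) is, as I understand it, what
HBASE-5898 addresses with double-checked locking. Below is a generic sketch of
that idea, not the actual HBase code: check the cache before taking the
per-key lock, and check it again under the lock so only one thread pays for
the disk read.

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Generic illustration (not HBase code) of double-checked locking around a
// cache: threads that hit the cache never touch the lock at all, and threads
// that miss re-check under the lock so only one of them loads from disk.
public class DoubleCheckedCache<K, V> {

    public interface Loader<K, V> {
        V load(K key);
    }

    private final ConcurrentMap<K, V> cache = new ConcurrentHashMap<K, V>();
    private final ConcurrentMap<K, Object> locks = new ConcurrentHashMap<K, Object>();

    public V get(K key, Loader<K, V> loader) {
        V value = cache.get(key);                 // first check: no lock taken
        if (value != null) {
            return value;
        }
        Object candidate = new Object();
        Object lock = locks.putIfAbsent(key, candidate);
        if (lock == null) {
            lock = candidate;                     // we installed the per-key lock
        }
        synchronized (lock) {                     // only cache misses queue here
            value = cache.get(key);               // second check, under the lock
            if (value == null) {
                value = loader.load(key);         // e.g. read the block from HDFS
                cache.put(key, value);
            }
        }
        return value;
    }
}

With that shape, handlers that find the block already cached never contend on
the lock, which is exactly what the 200 blocked handlers above are missing.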