HBase user mailing list: GC recommendations for large Region Server heaps


Re: GC recommendations for large Region Server heaps
Suraj Varma 2013-07-10, 00:05
Hi Azuryy:
Thanks so much for sharing. This gives me a good list of tuning options to
read up on while constructing our GC_OPTS.

Follow-up question: Was your cluster tuned to handle read-heavy loads, or
was it mixed read/write load? Just trying to understand what your
constraints were.
--Suraj
On Mon, Jul 8, 2013 at 10:52 PM, Azuryy Yu <[EMAIL PROTECTED]> wrote:

> These are my HBase GC options for CMS; they work well for us.
>
> -XX:+DisableExplicitGC -XX:+UseCompressedOops -XX:PermSize=160m
> -XX:MaxPermSize=160m -XX:GCTimeRatio=19 -XX:SoftRefLRUPolicyMSPerMB=0
> -XX:SurvivorRatio=2 -XX:MaxTenuringThreshold=1 -XX:+UseFastAccessorMethods
> -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:+CMSParallelRemarkEnabled
> -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSCompactAtFullCollection
> -XX:CMSFullGCsBeforeCompaction=0 -XX:+CMSClassUnloadingEnabled
> -XX:CMSMaxAbortablePrecleanTime=300 -XX:+CMSScavengeBeforeRemark
>
>
>
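For anyone else constructing GC_OPTS from this list: a minimal sketch of how
such flags can be wired into hbase-env.sh via HBASE_REGIONSERVER_OPTS (the
-Xms/-Xmx values below are illustrative assumptions, not settings from this
thread):

    # hbase-env.sh -- sketch only; the 16g heap is an assumed example
    export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS \
      -Xms16g -Xmx16g \
      -XX:+UseParNewGC -XX:+UseConcMarkSweepGC \
      -XX:CMSInitiatingOccupancyFraction=70 \
      -XX:+CMSParallelRemarkEnabled -XX:+CMSScavengeBeforeRemark \
      -XX:SurvivorRatio=2 -XX:MaxTenuringThreshold=1 \
      -XX:+DisableExplicitGC"
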
> On Tue, Jul 9, 2013 at 1:12 PM, Otis Gospodnetic <[EMAIL PROTECTED]> wrote:
>
> > Hi,
> >
> > Check http://blog.sematext.com/2013/06/24/g1-cms-java-garbage-collector/
> >
> > Those graphs show a RegionServer before and after the switch to G1.  The
> > dashboard screenshot further below shows CMS (top row) vs. G1 (bottom
> > row).  After those tests we ended up switching to G1 across the whole
> > cluster and haven't had issues or major pauses since... knock on
> > keyboard.
> >
> > Otis
> > --
> > Solr & ElasticSearch Support -- http://sematext.com/
> > Performance Monitoring -- http://sematext.com/spm
> >
> >
> >
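For reference, a sketch of what a G1 starting point can look like in
hbase-env.sh; -XX:+UseG1GC is the switch itself, while the pause target and
occupancy values here are illustrative assumptions, not settings confirmed in
the thread or the blog post:

    # hbase-env.sh -- illustrative G1 starting point; tune per workload
    export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS \
      -XX:+UseG1GC \
      -XX:MaxGCPauseMillis=100 \
      -XX:InitiatingHeapOccupancyPercent=65 \
      -XX:+ParallelRefProcEnabled"
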
> > On Mon, Jul 8, 2013 at 2:56 PM, Stack <[EMAIL PROTECTED]> wrote:
> > > On Mon, Jul 8, 2013 at 11:09 AM, Suraj Varma <[EMAIL PROTECTED]> wrote:
> > >
> > >> Hello:
> > >> We have an HBase cluster with region servers running on an 8GB heap
> > >> with a 0.6 block cache (it is a read-heavy cluster, with bursty write
> > >> traffic via MR jobs). (version: hbase-0.94.6.1)
> > >>
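(For context, the "0.6 block cache" above corresponds to this hbase-site.xml
property; the property name is standard HBase, the value is from our setup:)

    <!-- hbase-site.xml: fraction of the RS heap given to the block cache -->
    <property>
      <name>hfile.block.cache.size</name>
      <value>0.6</value>
    </property>
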
> > >> During HBaseCon, while speaking to a few attendees, I heard some
> > >> folks were running region servers with heaps as high as 24GB, and
> > >> others in the 16GB range.
> > >>
> > >> So - question: Are there any special GC recommendations (tuning
> > >> parameters, flags, etc.) that folks who run at these large heaps can
> > >> recommend while moving up from an 8GB heap? i.e., for 16GB and 24GB
> > >> RS heaps?
> > >>
> > >> I'm especially concerned about long pauses causing zk session
> > >> timeouts and consequent RS shutdowns. Our boxes do have a lot of RAM,
> > >> and we are exploring how we can use more of it for the cluster while
> > >> maintaining overall stability.
> > >>
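(The timeout in question is governed by zookeeper.session.timeout in
hbase-site.xml; a sketch -- the 90000 ms value is an illustrative example,
not a recommendation from this thread:)

    <!-- hbase-site.xml: ZK session timeout in ms; a GC pause longer than
         this can get the RS declared dead and shut down -->
    <property>
      <name>zookeeper.session.timeout</name>
      <value>90000</value>
    </property>
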
> > >> Also - if there are clusters running multiple region servers per
> > >> host, I'd be very interested to know what RS heap sizes those are
> > >> being run at ... and whether this was chosen as an alternative to
> > >> running a single RS with a large heap.
> > >>
> > >> (I know I'll have to test the GC stuff out on my cluster and for my
> > >> workloads anyway ... but I'm just trying to get a feel for what sort
> > >> of tuning options had to be used to have a stable HBase cluster with
> > >> 16 or 24GB RS heaps.)
> > >>
> > >
> > >
> > > You hit full GC in this 8G heap, Suraj?  Can you try running one
> > > server at 24G to see how it does (with GC logging enabled so you can
> > > watch it over time)?  On one hand, more heap may make it so you avoid
> > > full GCs -- if you are hitting them now at 8G -- because the
> > > application has more head room.  On the other hand, yes, if a full GC
> > > hits, the server will be gone proportionally longer than with your
> > > 8G heap.
> > >
> > > St.Ack
> >
>
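
Per Stack's suggestion above, a minimal sketch of GC logging flags for
watching a test server over time (JDK 6/7 syntax; the log path is an assumed
example):

    # hbase-env.sh -- GC logging so pauses can be correlated with load
    export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS \
      -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps \
      -Xloggc:/var/log/hbase/regionserver-gc.log"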