HBase, mail # user - mslab enabled jvm crash


Re: mslab enabled jvm crash
Jack Levin 2011-06-06, 23:19
We have two production clusters, and we don't do rolling restarts on either.
We also have days and days' worth of GC logs with no CMF (concurrent mode
failure) reported.
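If you want to check yours, grep the GC log for the failure string (this
assumes GC logging is pointed at the file below, as in the config that
follows):

  grep -c "concurrent mode failure" $HBASE_HOME/logs/gc-hbase.log

A count of zero means CMS never fell back to a stop-the-world full GC over
the period the log covers.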

Here is my config that works great for us:

export HBASE_OPTS="$HBASE_OPTS -XX:+UseConcMarkSweepGC \
  -XX:MaxDirectMemorySize=2G"

# Uncomment below to enable java garbage collection logging.
export HBASE_OPTS="$HBASE_OPTS -verbose:gc -Xms12000m \
  -XX:CMSInitiatingOccupancyFraction=70 -XX:+PrintGCDetails \
  -XX:+PrintGCDateStamps \
  -XX:+HeapDumpOnOutOfMemoryError -Xloggc:$HBASE_HOME/logs/gc-hbase.log \
  -XX:MaxTenuringThreshold=15 -XX:SurvivorRatio=8 \
  -XX:+UseParNewGC \
  -XX:NewSize=128m -XX:MaxNewSize=128m \
  -XX:+CMSParallelRemarkEnabled \
  -XX:-TraceClassUnloading"

Also, reduce the size of your memstore flushes to a minimum. We run with
0.19 and 0.20 for the lower and upper global memstore limits, so our
flushes are usually small enough not to cause major fragmentation issues.
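
For reference, by lower and upper limits I mean the global memstore
settings in hbase-site.xml, i.e. something like:

  hbase.regionserver.global.memstore.lowerLimit = 0.19
  hbase.regionserver.global.memstore.upperLimit = 0.20

(I believe the defaults are 0.35 and 0.4, so this keeps far less data
sitting in memstores before a flush.)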

-Jack

On Mon, Jun 6, 2011 at 10:24 AM, Stack <[EMAIL PROTECTED]> wrote:
> On Mon, Jun 6, 2011 at 10:06 AM, Wayne <[EMAIL PROTECTED]> wrote:
>> I had a 25 sec CMF this morning... looks like bulk inserts are required
>> along with possibly weekly/daily scheduled rolling restarts. Do most
>> production clusters run rolling restarts on a regular basis to give the JVM
>> a fresh start?
>>
>
> We don't do it (maybe we should!).   Here is our bit of doc. on the
> decommission script: http://hbase.apache.org/book/decommission.html
> It's been working well for us, i.e. config changes and upgrades while
> under load.
>
> St.Ack
>
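
For what it's worth, the rolling restart that page describes boils down to
something like this (a sketch; --restart brings each server back up and
--reload moves its regions back before the script moves on to the next
host):

  for rs in $(cat $HBASE_HOME/conf/regionservers); do
    $HBASE_HOME/bin/graceful_stop.sh --restart --reload $rs
  done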