Accumulo, mail # user - accumulo-env.sh and accumulo-site.xml recommendations for Tablet Servers
accumulo-env.sh and accumulo-site.xml recommendations for Tablet Servers
Jason Trost 2012-08-25, 20:54
I am running Accumulo on some boxes with 32GB RAM (each machine functions
as a DataNode, TaskTracker, and TabletServer).  I have them working and
configured based on this, but I was curious whether there are any rules of
thumb for configuring the memory-constrained settings for Accumulo
tablet servers.

If I wanted to devote 16GB to the various Accumulo services running on
these boxes, what are the recommended RAM-constrained settings for the
following configs?  (Please assume Accumulo is being used with keys
less than 100 bytes and values less than 700 bytes.)

From accumulo-env.sh (taken from the 3GB example):

test -z "$ACCUMULO_TSERVER_OPTS" && export ACCUMULO_TSERVER_OPTS="${POLICY} -Xmx1g -Xms1g -Xss160k"
test -z "$ACCUMULO_MASTER_OPTS"  && export ACCUMULO_MASTER_OPTS="${POLICY} -Xmx1g -Xms1g"
test -z "$ACCUMULO_MONITOR_OPTS" && export ACCUMULO_MONITOR_OPTS="${POLICY} -Xmx1g -Xms256m"
test -z "$ACCUMULO_GC_OPTS"      && export ACCUMULO_GC_OPTS="-Xmx256m -Xms256m"
test -z "$ACCUMULO_LOGGER_OPTS"  && export ACCUMULO_LOGGER_OPTS="-Xmx1g -Xms256m"
test -z "$ACCUMULO_GENERAL_OPTS" && export ACCUMULO_GENERAL_OPTS="-XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75"
test -z "$ACCUMULO_OTHER_OPTS"   && export ACCUMULO_OTHER_OPTS="-Xmx1g -Xms256m"
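As a strawman, here is a roughly linear scaling of the 3GB example toward a 16GB budget. These numbers are my own guesses, not recommendations, and I don't know how the tserver heap should trade off against the in-memory map:

```shell
# Strawman scaling of the 3GB example toward ~16GB total for Accumulo.
# All values below are guesses, not recommendations.
test -z "$ACCUMULO_TSERVER_OPTS" && export ACCUMULO_TSERVER_OPTS="${POLICY} -Xmx4g -Xms4g -Xss160k"
test -z "$ACCUMULO_MASTER_OPTS"  && export ACCUMULO_MASTER_OPTS="${POLICY} -Xmx2g -Xms2g"
test -z "$ACCUMULO_MONITOR_OPTS" && export ACCUMULO_MONITOR_OPTS="${POLICY} -Xmx1g -Xms256m"
test -z "$ACCUMULO_GC_OPTS"      && export ACCUMULO_GC_OPTS="-Xmx1g -Xms1g"
test -z "$ACCUMULO_LOGGER_OPTS"  && export ACCUMULO_LOGGER_OPTS="-Xmx2g -Xms512m"
```

Is scaling everything linearly even the right instinct, or should most of the extra memory go to the tablet server?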

And from accumulo-site.xml (taken from the 3GB example):

    <property>
      <name>tserver.memory.maps.max</name>
      <value>1G</value>
    </property>

    <property>
      <name>tserver.cache.data.size</name>
      <value>50M</value>
    </property>

    <property>
      <name>tserver.cache.index.size</name>
      <value>100M</value>
    </property>
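Similarly, a strawman scaling of the same three accumulo-site.xml properties (again, values are my guesses, not recommendations):

```xml
    <!-- Strawman scaling toward a 16GB budget; values are guesses -->
    <property>
      <name>tserver.memory.maps.max</name>
      <value>6G</value>
    </property>

    <property>
      <name>tserver.cache.data.size</name>
      <value>512M</value>
    </property>

    <property>
      <name>tserver.cache.index.size</name>
      <value>512M</value>
    </property>
```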

Are there any other accumulo-site.xml settings that should be changed?

Thanks,

--Jason