First, you want the RegionServer to use all of the available memory for caching and memstores; every byte of unused RAM is wasted.

I would make the heap slightly smaller than 32GB, so that the JVM can still use compressed OOPs. So I'd set it to 31GB.
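As a concrete sketch (the exact values are assumptions for illustration, not settings confirmed in this thread), that heap size would go in hbase-env.sh:

```shell
# hbase-env.sh: keep the heap just under 32GB so compressed OOPs stay enabled
export HBASE_HEAPSIZE=31000   # value is in MB on CDH4-era HBase; adjust per your distro

# To verify the JVM actually enables compressed OOPs at this size:
#   java -Xmx31g -XX:+PrintFlagsFinal -version | grep UseCompressedOops
```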
Lastly, 800 writes/s is still a bit low. How does the CPU usage look across the RegionServers?
If CPU is high, you might want to make the memstores *smaller* (reading and writing a SkipList is expensive).
If you see bad IO and many store files (as might be the case, per the discussion below), you may want to make the memstores larger.
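For reference, the memstore knobs being discussed live in hbase-site.xml; a sketch with illustrative values (these numbers are assumptions, not recommendations from this thread):

```xml
<!-- hbase-site.xml (illustrative values only) -->
<property>
  <name>hbase.hregion.memstore.flush.size</name>
  <!-- smaller per-region memstores if CPU-bound, larger if flush/IO-bound -->
  <value>268435456</value>
</property>
<property>
  <!-- CDH4-era name for the fraction of heap all memstores together may use -->
  <name>hbase.regionserver.global.memstore.upperLimit</name>
  <value>0.4</value>
</property>
```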
From: Rohit Dev <[EMAIL PROTECTED]>
Sent: Sunday, January 26, 2014 3:35 AM
Subject: Re: Hbase tuning for heavy write cluster

Hi Vladimir,

Here is my cluster status:

Cluster Size: 26
Server memory: 128GB
Total Writes per sec (data): 450 Mbps
Writes per sec (count) per server: avg ~800 writes/sec (some spikes
up to 3000 writes/sec)
Max Region Size: 16GB
Regions per server: ~140 (not sure if I would be able to merge some
empty regions while the table is online)
We are running CDH 4.3

Recently I changed settings to:
Java heap size for Region Server: 32GB
hbase.hregion.memstore.flush.size: 536870912
hbase.hstore.blockingStoreFiles: 30
hbase.hstore.compaction.max: 15
hbase.hregion.memstore.block.multiplier: 3
hbase.regionserver.maxlogs: 90 (is this too high for a 512MB memstore flush size?)
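On the maxlogs question: a common rule of thumb is that maxlogs times the WAL file size should roughly cover the global memstore budget, so that flushes are driven by memstore pressure rather than by log rolling. A back-of-envelope check (the 0.4 global memstore fraction and 128MB HDFS block size below are assumptions, not values stated in this thread):

```python
# Back-of-envelope sizing for hbase.regionserver.maxlogs.
# Assumed, not confirmed in this thread: 0.4 global memstore fraction,
# 128MB HDFS block size, 0.95 log-roll multiplier.
heap_gb = 32
global_memstore_fraction = 0.4          # hbase.regionserver.global.memstore.upperLimit
wal_size_mb = 128 * 0.95                # HDFS block size * logroll multiplier

memstore_budget_mb = heap_gb * 1024 * global_memstore_fraction
suggested_maxlogs = int(memstore_budget_mb / wal_size_mb)
print(suggested_maxlogs)
```

By that estimate the budget works out to roughly 100+ logs for a 32GB heap, so under these assumptions 90 is not obviously too high.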

I'm seeing weird stuff, like one region that has grown up to 34GB and has
21 store files, while MAX_FILESIZE for this table is only 16GB.
Could this be a problem?

On Sat, Jan 25, 2014 at 9:49 PM, Vladimir Rodionov