Accumulo >> mail # user >> Tuning & Compactions

Tuning & Compactions

I am trialling Accumulo on a small (tiny) cluster and wondering how best to
tune it. I have 1 master + 2 tservers. The master has 8 GB of RAM and each
tserver has 16 GB.

I have set the walog size to 2 GB, with an external (native) in-memory map
of 9 GB. The compaction ratio is still the default of 3. I've also
increased each tserver's heap to 2 GB.
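For reference, here is a sketch of what that configuration would look like in accumulo-site.xml, using the 1.4-era property names (the values are just the ones described above; check the defaults shipped with your version):

```xml
<!-- accumulo-site.xml: sketch of the tuning described above -->
<property>
  <name>tserver.walog.max.size</name>
  <value>2G</value> <!-- 2 GB write-ahead logs -->
</property>
<property>
  <name>tserver.memory.maps.max</name>
  <value>9G</value> <!-- external (native) in-memory map -->
</property>
<property>
  <name>table.compaction.major.ratio</name>
  <value>3</value> <!-- left at the default -->
</property>
```

The 2 GB tserver heap is set separately, via -Xmx2g in ACCUMULO_TSERVER_OPTS in accumulo-env.sh.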

I'm trying to achieve high-speed ingest via batch writers held on several
other servers. I'm loading two separate tables.
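For context, the ingest path on each loading server is roughly the following sketch. The instance name, ZooKeeper quorum, credentials, and table name are placeholders, and it assumes the 1.4-era API, where the BatchWriter's buffer size, max latency, and thread count are passed directly to createBatchWriter:

```java
import org.apache.accumulo.core.client.BatchWriter;
import org.apache.accumulo.core.client.Connector;
import org.apache.accumulo.core.client.ZooKeeperInstance;
import org.apache.accumulo.core.data.Mutation;
import org.apache.accumulo.core.data.Value;
import org.apache.hadoop.io.Text;

public class IngestSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical instance name and ZooKeeper quorum
        ZooKeeperInstance inst = new ZooKeeperInstance("trial", "zk1:2181");
        Connector conn = inst.getConnector("root", "secret".getBytes());

        // 50 MB client-side buffer, 60 s max latency, 4 send threads
        BatchWriter bw = conn.createBatchWriter("table1", 50000000L, 60000L, 4);

        Mutation m = new Mutation(new Text("row_0001"));
        m.put(new Text("cf"), new Text("cq"), new Value("payload".getBytes()));
        bw.addMutation(m);

        bw.close(); // flushes any remaining buffered mutations
    }
}
```

Requires a running Accumulo instance, so it is a sketch of the call pattern rather than something runnable standalone.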

Here are some questions I have:
- Does the config above sound sensible, or is it overkill?
- Is it preferable to have more servers with lower specs?
- Is this the best way to maximise use of the memory?
- Given that I have 3 × 2 GB walogs, does that mean the remaining 3 GB of
the external memory map can be used while compactions occur?
- When a minor compaction occurs, does it halt ingest on that particular
tablet, or on the whole tablet server?
- I have pre-split the tables six ways, but I'm not entirely sure that's
preferable with only 2 servers while trying things out. Would a two-way
split be better?
- Does batch upload through the shell client give significantly better
performance?
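On the pre-splitting point, splits can be added (and later inspected) from the Accumulo shell; a sketch with a hypothetical table name and arbitrary six-way split points:

```
root@trial> createtable table1
root@trial table1> addsplits -t table1 c f i l o r
root@trial table1> getsplits -t table1
```

If six ways turns out to be too many for two servers, adjacent tablets can be collapsed again with the shell's merge command.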

I realise some of these questions may be hard to quantify, but any
guidance or help in understanding how to tune Accumulo better would be
greatly appreciated!

Eric Newton 2012-11-28, 20:31
Chris Burrell 2012-12-04, 18:06