I recently set up Accumulo 1.4.2 on a rack of boxes, each with 24 processors and 43 GB of RAM.  I started from the 3GB example configuration templates but then increased the maximum heap size of the tablet server (tserver) and a few other components to 5 GB.
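For what it's worth, the heap change was along these lines in conf/accumulo-env.sh (paraphrased from memory rather than copied from my actual file; the template wraps these in test -z guards and adds GC flags that I've left out here):

export ACCUMULO_TSERVER_OPTS="-Xmx5g -Xms5g"
# a few of the other *_OPTS entries were bumped in the same way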

I am doing some initial tests, importing roughly 7,000 records; each record has approximately 7 small fields and 1 large field holding between 200 KB and 1 MB of data.  While ingesting, after about 2,000-3,000 records I see the tablet server hold commits and start a minor compaction that takes quite a while, and the compactions then recur fairly frequently.
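In case the write pattern matters, the ingest code is roughly the following shape (a simplified sketch against the 1.4 client API; the instance, table, and field names are placeholders, the payload size is fixed for illustration, and error handling is omitted):

import org.apache.accumulo.core.client.BatchWriter;
import org.apache.accumulo.core.client.Connector;
import org.apache.accumulo.core.client.ZooKeeperInstance;
import org.apache.accumulo.core.data.Mutation;
import org.apache.accumulo.core.data.Value;
import org.apache.hadoop.io.Text;

public class IngestSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder instance/credentials, not my real ones.
        Connector conn = new ZooKeeperInstance("instance", "zkhost:2181")
                .getConnector("user", "secret".getBytes());

        // 1.4 signature: createBatchWriter(table, maxMemoryBytes, maxLatencyMs, writeThreads)
        BatchWriter bw = conn.createBatchWriter("records", 50L * 1024 * 1024, 60L * 1000, 4);

        byte[] largeField = new byte[500 * 1024];   // stand-in for the 200 KB - 1 MB field
        for (int i = 0; i < 7000; i++) {
            Mutation m = new Mutation(new Text(String.format("row%07d", i)));
            for (int f = 0; f < 7; f++) {           // ~7 small fields per record
                m.put(new Text("meta"), new Text("field" + f), new Value(("v" + f).getBytes()));
            }
            m.put(new Text("data"), new Text("payload"), new Value(largeField));
            bw.addMutation(m);
        }
        bw.close();
    }
}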

I am wondering what options I have to minimize the frequency of minor compactions during ingest.  Which components' memory sizes and which configuration properties would help me avoid this problem?  If anyone has other ideas for me to try, please let me know.
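For context, these are the knobs I have been looking at in accumulo-site.xml (the values shown are only guesses I am considering, not something I have tested; please correct me if these are the wrong properties to focus on):

  <property>
    <name>tserver.memory.maps.max</name>
    <value>2G</value>   <!-- in-memory map size; a flush (minor compaction) happens when this fills -->
  </property>
  <property>
    <name>tserver.walog.max.size</name>
    <value>2G</value>   <!-- write-ahead log size; rollover can also force flushes -->
  </property>
  <property>
    <name>table.compaction.minor.logs.threshold</name>
    <value>6</value>    <!-- number of WALs a tablet may reference before a minor compaction is forced -->
  </property>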

Thanks in advance,

Sandy