Accumulo user mailing list: Sizing walog area

Re: Sizing walog area
The data in the write-ahead logs is needed until the tserver flushes
the in memory maps to disk. Assuming you have a logger running on
every tserver, and tservers write to at least two loggers, you should
ensure that the size of the disk area is *at least* two times as big
as your in-memory map size per tserver. I'd say 5x-10x the in-memory
map size is probably safe. So, if your tservers are running with 2GB
of memory, then a 10-20GB area is probably more than sufficient.
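That sizing rule can be sketched as a quick back-of-the-envelope calculation. The helper below is purely illustrative (the function name and numbers are assumptions, not anything from Accumulo itself):

```python
def walog_disk_estimate(in_memory_map_gb, safety_factor=10):
    """Rough walog disk sizing per tserver, following the rule of thumb
    above: at least 2x the in-memory map size, with 5x-10x being safe."""
    minimum = 2 * in_memory_map_gb
    recommended = safety_factor * in_memory_map_gb
    return minimum, recommended

# A tserver with a 2 GB in-memory map:
minimum, recommended = walog_disk_estimate(2)
print(minimum, recommended)  # 4 20
```

So a 2 GB in-memory map gives a 4 GB floor and roughly 20 GB as a comfortable ceiling, matching the 10-20 GB figure above.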

--
Christopher L Tubbs II
http://gravatar.com/ctubbsii
On Wed, Oct 23, 2013 at 1:02 PM, Terry P. <[EMAIL PROTECTED]> wrote:
> Greetings all,
> For Accumulo 1.4, where write-ahead logs are not yet stored in HDFS, does
> anyone have guidance with respect to sizing the walog area?  What exactly
> triggers the removal of write-ahead logs?  What might cause them to hang
> around for an extended period of time (as in under abnormal circumstances)?
>
> The system this applies to will see an ingest rate of approximately 2000
> docs per second averaging 1-2K each (broken out into 12 columns each, so
> 24,000 entries per second) across 6 tabletserver nodes.
>
> Thanks in advance,
> Terry
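
The ingest numbers in the question translate to a rough per-tserver write rate, which is useful context for the sizing advice above. A quick sketch, assuming the midpoint 1.5 KB average document size (illustrative arithmetic only):

```python
# Back-of-the-envelope walog write rate from the stated ingest figures:
# 2000 docs/sec at 1-2 KB each, spread across 6 tserver nodes.
docs_per_sec = 2000
avg_doc_kb = 1.5   # assumed midpoint of the 1-2 KB range
tservers = 6

aggregate_kb_per_sec = docs_per_sec * avg_doc_kb            # 3000 KB/s total
per_tserver_mb_per_sec = aggregate_kb_per_sec / tservers / 1024.0
print(round(per_tserver_mb_per_sec, 2))  # ~0.49 MB/s per tserver
```

At roughly half a megabyte per second per tserver, the walog area fills slowly relative to a 10-20 GB allocation, so the flush cycle, not raw ingest rate, is what dominates walog disk usage here.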