Re: Sizing walog area
Yes, tserver.memory.maps.max was what I was thinking, and the number
of loggers is controlled by tserver.logger.count.

I believe logger.recovery.file.replication is used in 1.5 and later, for
replication in HDFS, not for the local file system usage in 1.4 that your
original inquiry was about.
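
For reference, a minimal accumulo-site.xml sketch of the two 1.4 properties
discussed here (a sketch, not a recommendation: 1G matches the map size you
mention below, and the logger count of 2 mirrors the at-least-two-loggers
assumption in this thread):

  <property>
    <name>tserver.memory.maps.max</name>
    <value>1G</value>
  </property>
  <property>
    <name>tserver.logger.count</name>
    <value>2</value>
  </property>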

--
Christopher L Tubbs II
http://gravatar.com/ctubbsii
On Thu, Oct 24, 2013 at 12:58 PM, Terry P. <[EMAIL PROTECTED]> wrote:
> Hi Christopher,
> Just to ensure I'm looking at the correct property, by "in-memory map size
> per tserver" are you referring to the property tserver.memory.maps.max in
> accumulo-site.xml?  If that's the case, I'm using 1GB for that property.
>
> I am running loggers on each tserver. Is the default that Accumulo writes to
> at least two loggers?  I see config item logger.recovery.file.replication is
> set to 2 (by default); is that what controls this?
>
> Digging around, I also see that logger.archive.replication is set to 2, and
> logger.archive is false.  What do logger.archive and
> logger.archive.replication do?  I find no mention of "archive" in the User
> Manual.
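>
> For reference, here's a sketch of how those entries look to me in
> accumulo-site.xml (defaults, as far as I can tell):
>
>   <property>
>     <name>logger.recovery.file.replication</name>
>     <value>2</value>
>   </property>
>   <property>
>     <name>logger.archive</name>
>     <value>false</value>
>   </property>
>   <property>
>     <name>logger.archive.replication</name>
>     <value>2</value>
>   </property>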
>
> Many thanks Christopher, your help is always appreciated.
>
>
> On Wed, Oct 23, 2013 at 2:10 PM, Christopher <[EMAIL PROTECTED]> wrote:
>>
>> The data in the write-ahead logs is needed until the tserver flushes
>> the in-memory maps to disk. Assuming you have a logger running on
>> every tserver, and tservers write to at least two loggers, you should
>> ensure that the disk area is *at least* twice as big as your in-memory
>> map size per tserver. I'd say 5x-10x the in-memory map size is
>> probably safe. So, if your tservers are running with 2GB of memory,
>> then a 10-20GB area is probably more than sufficient.
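>>
>> (Restating that as a quick rule of thumb: per-tserver walog area should
>> be at least 2 x the in-memory map size, with 5x-10x as the comfortable
>> target; hence the 2GB example above works out to 10-20GB.)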
>>
>> --
>> Christopher L Tubbs II
>> http://gravatar.com/ctubbsii
>>
>>
>> On Wed, Oct 23, 2013 at 1:02 PM, Terry P. <[EMAIL PROTECTED]> wrote:
>> > Greetings all,
>> > For Accumulo 1.4 where write ahead logs are not yet stored in HDFS, does
>> > anyone have guidancewith respect to sizing the walog area?  What exactly
>> > triggers when write ahead logs get removed?  What might cause them to
>> > hang
>> > around for an extended period of time (as in under abnormal
>> > circumstances)?
>> >
>> > The system this applies to will see an ingest rate of approximately 2000
>> > docs per second averaging 1-2K each (broken out into 12 columns each, so
>> > 24,000 entries per second) across 6 tabletserver nodes.
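>> >
>> > (A quick back-of-the-envelope from those numbers: 2000 docs/sec x ~1-2K
>> > each is roughly 2-4 MB/sec cluster-wide, or about 0.3-0.7 MB/sec per
>> > node across the 6 tabletservers.)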
>> >
>> > Thanks in advance,
>> > Terry
>
>