Accumulo >> mail # user >> Tabletserver message "Running low on memory"

Re: Tabletserver message "Running low on memory"
I have GC logging enabled and garbage collection times are very, very low,
so I think I'm in good shape to increase the JVM heap.  Great to know about
those 'gc ParNew' entries in the debug log though, as they pretty much make
my gc logs redundant (but I'll keep them there for now anyways, just in
case).

Many thanks again.
On Tue, Nov 12, 2013 at 3:35 PM, Josh Elser <[EMAIL PROTECTED]> wrote:

> On 11/12/13, 1:25 PM, Terry P. wrote:
>> Hi Josh,
>> Thanks for your exhaustive reply. I am using Native maps, and it's set
>> to 1G in my accumulo-site.xml.  The data and index cache settings there
>> are still at their default values as well (50M and 512M).  I
>> definitely didn't realize that and will increase their size given I have
>> plenty of memory sitting around idle (it was intended to be used for
>> caching too!).
>> Will increasing the tserver.memory.maps.max in accumulo-site.xml perhaps
>> help reduce these warning messages?  My only concern is that an operator
>> may be monitoring the Accumulo Monitor GUI and see the memory warnings
>> and think "Oh no, we're almost out of memory, I should page someone!"
> Hahaha, yeah, I know what you mean. You can tell them that it's just a
> "warning" and not an "error" :P. Increasing the size of the memory maps
> won't make the error go away. I believe that warning is purely over JVM
> heap. I don't believe there's any code (outside of the flush-policy for
> when to close a native map and start a new one to make sure the
> tserver.memory.maps.max is observed) to monitor the size of the native maps.
> You would want to increase JVM heap size to keep that error from happening
> or reduce the amounts of heap you give to the index block or data block
> cache.
>> Based on what you've seen, is the warning innocuous and can just be
>> ignored?
> IMO, yes. With the strong recommendation that you first verify you're not
> spending any significant time in garbage collection: run `fgrep 'gc ParNew'`
> on your tserver.debug.log and check that you don't see "spiky" gc cycles.
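If you want to try that check yourself, here's a minimal, self-contained sketch. The log lines below are fabricated so the example runs anywhere (the format is illustrative, not Accumulo's exact log layout); in practice you'd point the grep at your real tserver.debug.log:

```shell
# Fabricate a couple of debug-log lines so the example is self-contained;
# normally you would grep your actual tserver.debug.log instead.
cat > /tmp/tserver.debug.log.sample <<'EOF'
12 14:01:02,123 [tabletserver.TabletServer] DEBUG: gc ParNew=0.02 secs
12 14:01:32,456 [tabletserver.TabletServer] DEBUG: gc ParNew=0.03 secs
EOF

# Pull out the ParNew entries; long or steadily growing collection times
# are the "spiky" behavior to watch for.
fgrep 'gc ParNew' /tmp/tserver.debug.log.sample
```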
>> On Tue, Nov 12, 2013 at 3:03 PM, Josh Elser <[EMAIL PROTECTED]
>> <mailto:[EMAIL PROTECTED]>> wrote:
>>     IMO, I see this at home on my computer no matter what memory
>>     settings I use. I've become pretty accustomed to flat out ignoring
>> it...
>>     As for heap management, there are two big paths here: with "native
>>     maps" and without. When you write data to Accumulo, it goes to two
>>     places: 1) Write-ahead log and 2) Memory maps. The WAL ensures that
>>     if you have writes in memory on a server that dies, that you don't
>>     lose data. The memory maps give you much faster ingest over trying
>>     to write into a sorted file.
>>     1) Native maps (aka c++ code over JNI)
>>     This memory allocation, controlled by tserver.memory.maps.max in
>>     accumulo-site.xml, is "off heap" memory. It is not limited by the
>>     JVM heap limits you specify in ACCUMULO_TSERVER_OPTS in
>>     accumulo-env.sh. As such, you need to make sure that you don't
>>     over-allocate memory usage on your node (tserver.memory.maps.max +
>>     JVM Xmx + fudge-factor < total available memory).
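That budgeting rule can be written down as a quick shell sanity check. All figures below are made up for illustration; substitute your real tserver.memory.maps.max, -Xmx, and machine RAM:

```shell
# Hypothetical numbers -- replace with your actual settings.
MAPS_MAX_MB=1024    # tserver.memory.maps.max = 1G (off-heap, native maps)
JVM_XMX_MB=4096     # -Xmx from ACCUMULO_TSERVER_OPTS in accumulo-env.sh
FUDGE_MB=512        # headroom for JVM/OS overhead
TOTAL_RAM_MB=8192   # physical memory available on the node

# The node is safe only if off-heap maps + heap + fudge fit in RAM.
if [ $((MAPS_MAX_MB + JVM_XMX_MB + FUDGE_MB)) -lt "$TOTAL_RAM_MB" ]; then
    echo "OK: memory budget fits"
else
    echo "WARNING: over-allocated for this node"
fi
```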
>>     2) Non-native (in JVM)
>>     This serves the same purpose as #1 but is in JVM heap as opposed to
>>     off heap. Ingest will be slower and JVM gc will likely be a bigger
>>     issue than using the native maps. This does make the JVM sizing a
>>     little more straightforward: JVM Xmx + fudge-factor < total
>>     available memory (the math is pretty easy).
>>     Assuming you use the native maps, let's break down what you see in
>>     JVM heap.
>>     1) Index block cache
>>     Each RFile (backing file for tablets in Accumulo) has a
>>     multi-level index structure which lets you efficiently find the data
>>     in that file. Accumulo provides the ability to cache this index
>>     information instead of reading and deserializing from disk every
>>     time. Controlled by tserver.cache.index.size.
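As a sketch of what raising those cache sizes might look like in accumulo-site.xml (the values below are illustrative, not recommendations -- size them against the memory you actually have free after the native maps and JVM heap):

```xml
<!-- accumulo-site.xml: illustrative values only -->
<property>
  <name>tserver.cache.index.size</name>
  <value>256M</value>
</property>
<property>
  <name>tserver.cache.data.size</name>
  <value>1G</value>
</property>
```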