Flume >> mail # user >> GC overhead limit exceeded


Re: GC overhead limit exceeded
A simple method is this: use a special DirectEvent that stores the event body
from the Source in a direct ByteBuffer with a fixed body size. Then, once the
event has been consumed by the Sink, the off-heap block it references has a
chance to be reclaimed (only a chance).

This method may be limited by the maximum direct memory, which by default
equals the max heap size in HotSpot. Another concern is when the direct
ByteBuffer storing the event body can actually be reclaimed: while a full GC
is running, the JVM checks direct memory usage and clears unreferenced direct blocks.

We can also tune direct memory allocation with pre-allocated direct memory
blocks, though this is more complex than the method mentioned above.
Did I explain my thought clearly?
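For illustration, the DirectEvent idea might be sketched roughly like this. This is a minimal standalone sketch, not the actual Flume Event API; the class and method names are hypothetical:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

// Hypothetical sketch: the event body lives in a direct (off-heap)
// ByteBuffer instead of a heap byte[], so queued events add no GC pressure.
public class DirectEvent {
    private final ByteBuffer body; // allocated outside the Java heap

    public DirectEvent(byte[] payload) {
        // allocateDirect places the bytes outside the heap; the block is
        // freed only after the buffer becomes unreachable and a GC notices.
        this.body = ByteBuffer.allocateDirect(payload.length);
        this.body.put(payload);
        this.body.flip(); // prepare for reading
    }

    public byte[] getBody() {
        // Copy back onto the heap only when the Sink actually consumes it;
        // duplicate() keeps this event's read position untouched.
        byte[] copy = new byte[body.remaining()];
        body.duplicate().get(copy);
        return copy;
    }

    public static void main(String[] args) {
        DirectEvent e = new DirectEvent("hello flume".getBytes(StandardCharsets.UTF_8));
        System.out.println(new String(e.getBody(), StandardCharsets.UTF_8));
    }
}
```

Note the trade-off mentioned above: the block is only reclaimed when a full GC runs and finds the buffer unreferenced, so direct memory can still fill up if events are produced faster than they are consumed.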

-Regards
Denny Ye

2012/10/11 Senthilvel Rangaswamy <[EMAIL PROTECTED]>

> Denny,
>
> How to do use direct memory ?
>
> Thanks,
> Senthil
>
>
> On Wed, Oct 10, 2012 at 7:25 PM, Denny Ye <[EMAIL PROTECTED]> wrote:
>
>> It might be caused by:
>> 1. Too little heap memory: increase the '-Xms' and '-Xmx' options.
>> 2. Disabling that check (not recommended): use the
>> '-XX:-UseGCOverheadLimit' option.
>> 3. The memory channel always runs into GC pressure like this. I have
>> seen cases like yours before. In the past I tried using direct memory to
>> store the event body off-heap; that was an effective method and no GC
>> occurred at all. I think you can test it yourself.
>>
>> -Regards
>> Denny Ye
>>
>> 2012/10/11 Brock Noland <[EMAIL PROTECTED]>
>>
>>> Yep, sounds like: agent heap size < (capacity * avg event size)
>>>
>>> Brock
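Brock's rule of thumb is easy to sanity-check with arithmetic. Using the 1,000,000-event channel capacity from the original post and an assumed 500-byte average event size (the 500 bytes is purely illustrative, not a measured figure):

```java
public class HeapSizing {
    public static void main(String[] args) {
        long capacity = 1_000_000L; // memory channel capacity from the post below
        long avgEventBytes = 500L;  // assumed average event size, for illustration
        long neededMb = (capacity * avgEventBytes) / (1024 * 1024);
        System.out.println("Approx heap needed for a full channel: " + neededMb + " MB");
    }
}
```

A channel that fills up would then hold roughly 476 MB of event bodies alone, well above a 256 MB -Xmx, which is consistent with the OutOfMemoryError below.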
>>>
>>> On Wed, Oct 10, 2012 at 8:15 PM, Harish Mandala <[EMAIL PROTECTED]>
>>> wrote:
>>> > Hi,
>>> >
>>> > In flume-env.sh, please add
>>> >
>>> > JAVA_OPTS="-Xms128m -Xmx256m"
>>> >
>>> > (Or whatever amount of memory works for you. I have Xmx set to 4g)
>>> >
>>> > Regards,
>>> > Harish
>>> >
>>> >
>>> > On Wed, Oct 10, 2012 at 9:08 PM, Camp, Roy <[EMAIL PROTECTED]> wrote:
>>> >>
>>> >> I ran into the following error – I had to restart flume to recover.
>>> >> Do I just need to adjust a GC setting of some sort or is there a
>>> >> larger issue here?
>>> >>
>>> >>
>>> >>
>>> >> Source: thriftLegacy
>>> >>
>>> >> Channel: memory (capacity: 1,000,000)
>>> >>
>>> >> Sink: avro
>>> >>
>>> >>
>>> >>
>>> >>
>>> >>
>>> >> Exception in thread "pool-4-thread-2" java.lang.OutOfMemoryError: GC overhead limit exceeded
>>> >>         at java.util.concurrent.LinkedBlockingDeque.<init>(LinkedBlockingDeque.java:108)
>>> >>         at org.apache.flume.channel.MemoryChannel$MemoryTransaction.<init>(MemoryChannel.java:49)
>>> >>         at org.apache.flume.channel.MemoryChannel.createTransaction(MemoryChannel.java:264)
>>> >>         at org.apache.flume.channel.BasicChannelSemantics.getTransaction(BasicChannelSemantics.java:118)
>>> >>         at org.apache.flume.channel.ChannelProcessor.processEvent(ChannelProcessor.java:260)
>>> >>         at org.apache.flume.source.thriftLegacy.ThriftLegacySource$ThriftFlumeEventServerImpl.append(ThriftLegacySource.java:96)
>>> >>         at com.cloudera.flume.handlers.thrift.ThriftFlumeEventServer$Processor$append.process(ThriftFlumeEventServer.java:276)
>>> >>         at com.cloudera.flume.handlers.thrift.ThriftFlumeEventServer$Processor.process(ThriftFlumeEventServer.java:256)
>>> >>         at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:176)
>>> >>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
>>> >>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
>>> >>         at java.lang.Thread.run(Thread.java:679)
>>> >>
>>> >> Exception in thread "SinkRunner-PollingRunner-LoadBalancingSinkProcessor"
>>> >