Flume >> mail # user >> GC overhead limit exceeded


Camp, Roy 2012-10-11, 01:08
Harish Mandala 2012-10-11, 01:15
Brock Noland 2012-10-11, 01:17
Denny Ye 2012-10-11, 02:25
Senthilvel Rangaswamy 2012-10-11, 02:27
Denny Ye 2012-10-11, 02:40
Brock Noland 2012-10-11, 11:38
Re: GC overhead limit exceeded
It's always usable when we have a controlled 'close' method. Off-heap memory
is becoming more and more important in the big-data era, so Apache
DirectMemory looks like a near-perfect solution for managing direct memory in
user-defined ways.

I will test both the performance and the stress of allocating and reclaiming
direct memory. Thanks, Brock, for your tips.

-Denny Ye

2012/10/11 Brock Noland <[EMAIL PROTECTED]>

> The only issue with using direct memory is that it is not reclaimed
> until the owning object is garbage collected. So it's possible to run out
> of direct memory without a garbage collection ever happening. Direct
> buffers have an undocumented cleaner object which can be called when
> the buffer is no longer used, to avoid running out of space.
>
> Example:
> https://github.com/apache/flume/blob/trunk/flume-ng-core/src/main/java/org/apache/flume/tools/DirectMemoryUtils.java#L62
>
> Brock
>
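A minimal sketch of the reflection-based cleaner call Brock describes,
assuming a HotSpot JDK of that era (6/7/8) where direct ByteBuffers expose an
undocumented cleaner() method; the linked DirectMemoryUtils is the
authoritative version, and the class name here is only illustrative:

import java.lang.reflect.Method;
import java.nio.ByteBuffer;

public final class DirectBufferCleaner {
    // Best-effort release of a direct ByteBuffer's native memory without
    // waiting for a GC. Relies on the undocumented cleaner, so it may do
    // nothing on other JVMs; the buffer must not be used afterwards.
    public static void clean(ByteBuffer buffer) {
        if (buffer == null || !buffer.isDirect()) {
            return;
        }
        try {
            Method cleanerMethod = buffer.getClass().getMethod("cleaner");
            cleanerMethod.setAccessible(true);
            Object cleaner = cleanerMethod.invoke(buffer);
            Method cleanMethod = cleaner.getClass().getMethod("clean");
            cleanMethod.invoke(cleaner);
        } catch (Exception e) {
            // Cleaner not available here; memory is freed on GC instead.
        }
    }
}

Calling clean(buffer) as soon as an event body has been consumed frees the
native block immediately instead of waiting for the buffer object to be
collected.
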
> On Wed, Oct 10, 2012 at 9:40 PM, Denny Ye <[EMAIL PROTECTED]> wrote:
> > A simple method is something like this: using a special DirectEvent, the
> > event body from the Source is stored in a direct ByteBuffer with a fixed
> > body size. Then, when the event has been consumed by the Sink, the
> > reference to that off-heap direct block may have a chance to be reclaimed
> > (only a chance).
> >
> > This method is limited by the maximum direct memory, which by default
> > equals the max heap size in HotSpot. Another concern is when the direct
> > ByteBuffer storing the event body can actually be reclaimed: while a full
> > GC is running, the JVM checks direct memory usage and clears
> > un-referenced direct blocks.
> >
> > We can also tune direct memory allocation with pre-allocated direct
> > memory blocks; that is more complex than the method mentioned above. A
> > rough sketch of the simpler DirectEvent idea appears below this message.
> > Did I explain my thought clearly?
> >
> > -Regards
> > Denny Ye
> >
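A minimal sketch of the DirectEvent idea described above, assuming
org.apache.flume.event.SimpleEvent from Flume NG is available; the class
name, the fixed body size, and the copy-on-read behavior are illustrative,
not part of Flume:

import java.nio.ByteBuffer;
import org.apache.flume.event.SimpleEvent;

// Hypothetical event type that keeps the body in a fixed-size direct
// ByteBuffer instead of a heap byte[]. Heap usage per event stays small and
// the native block is released once the buffer becomes unreachable (or is
// cleaned explicitly, as in the utility sketched earlier).
public class DirectEvent extends SimpleEvent {
    private final ByteBuffer directBody;

    public DirectEvent(byte[] body, int fixedBodySize) {
        directBody = ByteBuffer.allocateDirect(fixedBodySize);
        directBody.put(body, 0, Math.min(body.length, fixedBodySize));
        directBody.flip();
    }

    @Override
    public byte[] getBody() {
        // Copy back onto the heap only when the Sink actually consumes it.
        byte[] copy = new byte[directBody.remaining()];
        directBody.duplicate().get(copy);
        return copy;
    }
}

Total direct allocation is still capped by -XX:MaxDirectMemorySize, which in
HotSpot defaults to roughly the maximum heap size, matching the limit
mentioned above.
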
> > 2012/10/11 Senthilvel Rangaswamy <[EMAIL PROTECTED]>
> >>
> >> Denny,
> >>
> >> How do you use direct memory?
> >>
> >> Thanks,
> >> Senthil
> >>
> >>
> >> On Wed, Oct 10, 2012 at 7:25 PM, Denny Ye <[EMAIL PROTECTED]> wrote:
> >>>
> >>> It might be caused by:
> >>> 1. Too little heap memory. Increase the '-Xms -Xmx' options.
> >>> 2. Disabling that check with the '-XX:-UseGCOverheadLimit' option,
> >>> though that is not recommended. (A combined example of options 1 and 2
> >>> follows after this message.)
> >>> 3. There is always heavy GC pressure with the memory channel. I have
> >>> seen cases like yours before. In the past I tried using direct memory
> >>> to store the event body off-heap; that was an effective method and no
> >>> GC happened at all. I think you can test it yourself.
> >>>
> >>> -Regards
> >>> Denny Ye
> >>>
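For reference, options 1 and 2 above would be combined in flume-env.sh
roughly like this (the heap sizes are placeholders, not recommendations):

JAVA_OPTS="-Xms1g -Xmx2g -XX:-UseGCOverheadLimit"
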
> >>> 2012/10/11 Brock Noland <[EMAIL PROTECTED]>
> >>>>
> >>>> Yep, sounds like: agent heap size < (capacity * avg event size)
> >>>>
> >>>> Brock
> >>>>
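To put rough numbers on that inequality: with the memory channel capacity of
1,000,000 reported below and an assumed average event size of 500 bytes, the
channel alone can hold about 1,000,000 * 500 bytes, roughly 500 MB, which is
far above a 256 MB -Xmx. The event size here is an assumption; substitute
your own measurements.
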
> >>>> On Wed, Oct 10, 2012 at 8:15 PM, Harish Mandala
> >>>> <[EMAIL PROTECTED]> wrote:
> >>>> > Hi,
> >>>> >
> >>>> > In flume-env.sh, please add
> >>>> >
> >>>> > JAVA_OPTS="-Xms128m -Xmx256m"
> >>>> >
> >>>> > (Or whatever amount of memory works for you. I have Xmx set to 4g)
> >>>> >
> >>>> > Regards,
> >>>> > Harish
> >>>> >
> >>>> >
> >>>> > On Wed, Oct 10, 2012 at 9:08 PM, Camp, Roy <[EMAIL PROTECTED]> wrote:
> >>>> >>
> >>>> >> I ran into the following error; I had to restart Flume to recover.
> >>>> >> Do I just need to adjust a GC setting of some sort or is there a
> >>>> >> larger issue here?
> >>>> >>
> >>>> >>
> >>>> >>
> >>>> >> Source: thriftLegacy
> >>>> >>
> >>>> >> Channel: memory (capacity: 1,000,000)
> >>>> >>
> >>>> >> Sink: avro
> >>>> >>
> >>>> >>
> >>>> >>
> >>>> >>
> >>>> >>
> >>>> >> Exception in thread "pool-4-thread-2" java.lang.OutOfMemoryError: GC overhead limit exceeded
> >>>> >>         at java.util.concurrent.LinkedBlockingDeque.<init>(LinkedBlockingDeque.java:108)
> >>>> >>         at org.apache.flume.channel.MemoryChannel$MemoryTransaction.<init>(MemoryChannel.java:49)
> >>>> >>         at org.apache.flume.channel.MemoryChannel.createTransaction(MemoryChannel.java:264)