Hadoop user mailing list - java.lang.OutOfMemoryError: GC overhead limit exceeded


Re: java.lang.OutOfMemoryError: GC overhead limit exceeded
Ted Yu 2010-09-26, 21:35
A value of -1 means there is no limit on JVM reuse. At the same time, you can
generate a heap dump from the OOME and analyze it with YourKit, etc.
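
For example, a minimal sketch of passing dump-on-OOME flags to the task JVMs
through mapred.child.java.opts (the flags are standard HotSpot options; the
dump path and driver class are placeholders):

    import org.apache.hadoop.mapred.JobConf;

    public class HeapDumpExample {            // hypothetical driver class
        public static void main(String[] args) {
            JobConf conf = new JobConf(HeapDumpExample.class);
            // Ask each task JVM to write a heap dump when an OOME is thrown,
            // so the dump can be opened in YourKit, jhat, etc.
            conf.set("mapred.child.java.opts",
                "-Xmx400m -XX:+HeapDumpOnOutOfMemoryError"
                + " -XX:HeapDumpPath=/mnt/dumps");   // placeholder path
            // ... configure and submit the job as usual ...
        }
    }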

Cheers

On Sun, Sep 26, 2010 at 1:19 PM, Bradford Stephens <[EMAIL PROTECTED]> wrote:

> Hrm.... no. I've lowered it to -1, but I can try 1 again.
>
> On Sun, Sep 26, 2010 at 6:47 AM, Ted Yu <[EMAIL PROTECTED]> wrote:
> > Have you tried lowering mapred.job.reuse.jvm.num.tasks?
> >
> > On Sun, Sep 26, 2010 at 3:30 AM, Bradford Stephens <[EMAIL PROTECTED]> wrote:
> >
> >> Nope, that didn't seem to help.
> >>
> >> On Sun, Sep 26, 2010 at 1:00 AM, Bradford Stephens <[EMAIL PROTECTED]> wrote:
> >> > I'm going to try running it on high-RAM boxes with -Xmx4096m or so,
> >> > see if that helps.
> >> >
> >> > On Sun, Sep 26, 2010 at 12:55 AM, Bradford Stephens <[EMAIL PROTECTED]> wrote:
> >> >> Greetings,
> >> >>
> >> >> I'm running into a brain-numbing problem on Elastic MapReduce. I'm
> >> >> running a decent-size task (22,000 mappers, a ton of GZipped input
> >> >> blocks, ~1 TB of data) on 40 c1.xlarge nodes (7 GB RAM, ~8 "cores").
> >> >>
> >> >> I get failures randomly: sometimes at the end of my 6-step process,
> >> >> sometimes at the first reducer phase, sometimes in the mapper. It
> >> >> seems to fail in multiple areas, mostly in the reducers. Any ideas?
> >> >>
> >> >> Here are the settings I've changed (see the sketch after the thread):
> >> >> -Xmx400m
> >> >> 6 max mappers
> >> >> 1 max reducer
> >> >> 1GB swap partition
> >> >> mapred.job.reuse.jvm.num.tasks=50
> >> >> mapred.reduce.parallel.copies=3
> >> >>
> >> >>
> >> >> java.lang.OutOfMemoryError: GC overhead limit exceeded
> >> >>        at java.nio.CharBuffer.wrap(CharBuffer.java:350)
> >> >>        at java.nio.CharBuffer.wrap(CharBuffer.java:373)
> >> >>        at java.lang.StringCoding$StringDecoder.decode(StringCoding.java:138)
> >> >>        at java.lang.StringCoding.decode(StringCoding.java:173)
> >> >>        at java.lang.String.<init>(String.java:443)
> >> >>        at java.lang.String.<init>(String.java:515)
> >> >>        at org.apache.hadoop.io.WritableUtils.readString(WritableUtils.java:116)
> >> >>        at cascading.tuple.TupleInputStream.readString(TupleInputStream.java:144)
> >> >>        at cascading.tuple.TupleInputStream.readType(TupleInputStream.java:154)
> >> >>        at cascading.tuple.TupleInputStream.getNextElement(TupleInputStream.java:101)
> >> >>        at cascading.tuple.hadoop.TupleElementComparator.compare(TupleElementComparator.java:75)
> >> >>        at cascading.tuple.hadoop.TupleElementComparator.compare(TupleElementComparator.java:33)
> >> >>        at cascading.tuple.hadoop.DelegatingTupleElementComparator.compare(DelegatingTupleElementComparator.java:74)
> >> >>        at cascading.tuple.hadoop.DelegatingTupleElementComparator.compare(DelegatingTupleElementComparator.java:34)
> >> >>        at cascading.tuple.hadoop.DeserializerComparator.compareTuples(DeserializerComparator.java:142)
> >> >>        at cascading.tuple.hadoop.GroupingSortingComparator.compare(GroupingSortingComparator.java:55)
> >> >>        at org.apache.hadoop.mapred.Merger$MergeQueue.lessThan(Merger.java:373)
> >> >>        at org.apache.hadoop.util.PriorityQueue.downHeap(PriorityQueue.java:136)
> >> >>        at org.apache.hadoop.util.PriorityQueue.adjustTop(PriorityQueue.java:103)
> >> >>        at org.apache.hadoop.mapred.Merger$MergeQueue.adjustPriorityQueue(Merger.java:335)
> >> >>        at org.apache.hadoop.mapred.Merger$MergeQueue.next(Merger.java:350)
> >> >>        at org.apache.hadoop.mapred.Merger.writeFile(Merger.java:156)
> >> >>        at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$InMemFSMergeThread.doInMemMerge(ReduceTask.java:2645)
> >> >>        at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$InMemFSMergeThread.run(ReduceTask.java:2586)
> >> >>
> >> >> --
> >> >> Bradford Stephens,
> >> >> Founder, Drawn to Scale
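
For reference, a minimal sketch pulling the thread's settings together on a
Hadoop 0.20-era JobConf, with JVM reuse set to 1 (a fresh JVM per task) as
suggested above; MyJob and the job wiring are placeholders:

    import org.apache.hadoop.mapred.JobClient;
    import org.apache.hadoop.mapred.JobConf;

    public class MyJob {                      // hypothetical driver class
        public static void main(String[] args) throws Exception {
            JobConf conf = new JobConf(MyJob.class);
            conf.set("mapred.child.java.opts", "-Xmx400m");
            // 1 = no JVM reuse (fresh JVM per task); -1 = reuse without limit.
            conf.setInt("mapred.job.reuse.jvm.num.tasks", 1);
            // Fewer parallel shuffle copies lowers reducer memory pressure
            // at some cost in copy throughput.
            conf.setInt("mapred.reduce.parallel.copies", 3);
            // Note: the per-node mapper/reducer slot maximums
            // (mapred.tasktracker.{map,reduce}.tasks.maximum) are
            // TaskTracker settings, configured cluster-side rather than here.
            // ... set input/output paths and mapper/reducer classes, then:
            JobClient.runJob(conf);
        }
    }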