Hadoop >> mail # user >> GC overhead limit exceeded while running Terrier on Hadoop

Re: GC overhead limit exceeded while running Terrier on Hadoop

On Tue, Oct 26, 2010 at 8:14 PM, siddharth raghuvanshi
> Hi,
> While running Terrier on Hadoop, I keep getting the following error.
> Can someone please point out where the problem is?
> attempt_201010252225_0001_m_000009_2: WARN - Error running child
> attempt_201010252225_0001_m_000009_2: java.lang.OutOfMemoryError: GC
> overhead limit exceeded

This error generally means that your MapReduce tasks need more JVM heap
space than the default configuration provides. "GC overhead limit
exceeded" is thrown when the JVM spends almost all of its time in
garbage collection while reclaiming very little memory, which usually
indicates the heap is too small for the workload. The map/reduce
documentation at http://bit.ly/9VAHCT covers the relevant settings. In
short, you need to configure your map/reduce tasks to run with a larger
heap than the default. The exact property names vary a little between
Hadoop versions, but they are listed in the documentation for your
release.
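For illustration, here is a minimal sketch of raising the child-JVM heap, assuming a Hadoop 0.20.x-era setup (which matches the TaskTracker$Child stack trace above), where the property is mapred.child.java.opts and the default is -Xmx200m; the 1024m value is just an example, not a recommendation for your cluster:

```xml
<!-- mapred-site.xml: raise the heap for all map/reduce child JVMs.
     Assumes Hadoop 0.20.x; the shipped default is -Xmx200m. -->
<property>
  <name>mapred.child.java.opts</name>
  <value>-Xmx1024m</value>
</property>
```

If your job implements Tool, you can also pass it per job instead of cluster-wide, e.g. `hadoop jar yourjob.jar YourJob -Dmapred.child.java.opts=-Xmx1024m ...` (the jar and class names here are placeholders).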


> attempt_201010252225_0001_m_000009_2: at
> org.terrier.structures.indexing.singlepass.hadoop.HadoopRunWriter.writeTerm(HadoopRunWriter.java:78)
> attempt_201010252225_0001_m_000009_2: at
> org.terrier.structures.indexing.singlepass.MemoryPostings.writeToWriter(MemoryPostings.java:151)
> attempt_201010252225_0001_m_000009_2: at
> org.terrier.structures.indexing.singlepass.MemoryPostings.finish(MemoryPostings.java:112)
> attempt_201010252225_0001_m_000009_2: at
> org.terrier.indexing.hadoop.Hadoop_BasicSinglePassIndexer.forceFlush(Hadoop_BasicSinglePassIndexer.java:308)
> attempt_201010252225_0001_m_000009_2: at
> org.terrier.indexing.hadoop.Hadoop_BasicSinglePassIndexer.closeMap(Hadoop_BasicSinglePassIndexer.java:419)
> attempt_201010252225_0001_m_000009_2: at
> org.terrier.indexing.hadoop.Hadoop_BasicSinglePassIndexer.close(Hadoop_BasicSinglePassIndexer.java:236)
> attempt_201010252225_0001_m_000009_2: at
> org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:50)
> attempt_201010252225_0001_m_000009_2: at
> org.apache.hadoop.mapred.MapTask.run(MapTask.java:227)
> attempt_201010252225_0001_m_000009_2: at
> org.apache.hadoop.mapred.TaskTracker$Child.main(TaskTracker.java:2198)
> Thanks
> Regards
> Siddharth