Hadoop, mail # user - Java Heap space error


Re: Java Heap space error
Mohit Anchlia 2012-03-06, 18:10
I am still trying to see how to narrow this down. Is it possible to set the
-XX:+HeapDumpOnOutOfMemoryError option on these individual tasks?
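[Editor's note: the flag in question is -XX:+HeapDumpOnOutOfMemoryError, and in classic MapReduce the usual way to reach every task JVM is mapred.child.java.opts, set either in mapred-site.xml or per job with -D. A sketch, where the dump path is an assumed example and must exist and be writable on every node:]

```xml
<!-- mapred-site.xml (or per-job via -Dmapred.child.java.opts=...):
     flags added here are passed to each child task JVM.
     /tmp/dumps is a hypothetical path, not from the thread. -->
<property>
  <name>mapred.child.java.opts</name>
  <value>-Xmx1024m -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/dumps</value>
</property>
```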

On Mon, Mar 5, 2012 at 5:49 PM, Mohit Anchlia <[EMAIL PROTECTED]>wrote:

> Sorry for multiple emails. I did find:
>
>
> 2012-03-05 17:26:35,636 INFO
> org.apache.pig.impl.util.SpillableMemoryManager: first memory handler call-
> Usage threshold init = 715849728(699072K) used = 575921696(562423K)
> committed = 715849728(699072K) max = 715849728(699072K)
>
> 2012-03-05 17:26:35,719 INFO
> org.apache.pig.impl.util.SpillableMemoryManager: Spilled an estimate of
> 7816154 bytes from 1 objects. init = 715849728(699072K) used =
> 575921696(562423K) committed = 715849728(699072K) max = 715849728(699072K)
>
> 2012-03-05 17:26:36,881 INFO
> org.apache.pig.impl.util.SpillableMemoryManager: first memory handler call
> - Collection threshold init = 715849728(699072K) used = 358720384(350312K)
> committed = 715849728(699072K) max = 715849728(699072K)
>
> 2012-03-05 17:26:36,885 INFO org.apache.hadoop.mapred.TaskLogsTruncater:
> Initializing logs' truncater with mapRetainSize=-1 and reduceRetainSize=-1
>
> 2012-03-05 17:26:36,888 FATAL org.apache.hadoop.mapred.Child: Error
> running child : java.lang.OutOfMemoryError: Java heap space
>
> at java.nio.HeapCharBuffer.<init>(HeapCharBuffer.java:39)
>
> at java.nio.CharBuffer.allocate(CharBuffer.java:312)
>
> at java.nio.charset.CharsetDecoder.decode(CharsetDecoder.java:760)
>
> at org.apache.hadoop.io.Text.decode(Text.java:350)
>
> at org.apache.hadoop.io.Text.decode(Text.java:327)
>
> at org.apache.hadoop.io.Text.toString(Text.java:254)
>
> at
> org.apache.pig.piggybank.storage.SequenceFileLoader.translateWritableToPigDataType(SequenceFileLoader.java:105)
>
> at
> org.apache.pig.piggybank.storage.SequenceFileLoader.getNext(SequenceFileLoader.java:139)
>
> at
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigRecordReader.nextKeyValue(PigRecordReader.java:187)
>
> at
> org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:456)
>
> at org.apache.hadoop.mapreduce.MapContext.nextKeyValue(MapContext.java:67)
>
> at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:143)
>
> at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:647)
>
> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:323)
>
> at org.apache.hadoop.mapred.Child$4.run(Child.java:270)
>
> at java.security.AccessController.doPrivileged(Native Method)
>
> at javax.security.auth.Subject.doAs(Subject.java:396)
>
> at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1157)
>
> at org.apache.hadoop.mapred.Child.main(Child.java:264)
>
>
>   On Mon, Mar 5, 2012 at 5:46 PM, Mohit Anchlia <[EMAIL PROTECTED]>wrote:
>
>> All I see in the logs is:
>>
>>
>> 2012-03-05 17:26:36,889 FATAL org.apache.hadoop.mapred.TaskTracker: Task:
>> attempt_201203051722_0001_m_000030_1 - Killed : Java heap space
>>
>> It looks like the TaskTracker is killing the tasks, though it is not clear
>> why. I increased the heap from 512 MB to 1 GB and it still fails.
>>
>>
>> On Mon, Mar 5, 2012 at 5:03 PM, Mohit Anchlia <[EMAIL PROTECTED]>wrote:
>>
>>> I currently have mapred.child.java.opts set to 512 MB and I am getting
>>> heap space errors. How should I go about debugging heap space issues?
>>>
>>
>>
>
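
[Editor's note: the fatal trace above dies in CharBuffer.allocate inside Text.toString(), which suggests a single oversized record rather than a gradual leak: CharsetDecoder.decode() materializes the whole value as a char buffer in one allocation, at roughly one char (2 bytes) per UTF-8 input byte, on top of the byte[] already in memory. A back-of-the-envelope sketch of that cost; the 300 MB record size is a made-up illustration:]

```java
// Rough cost model for Text.toString() on a large record: the UTF-8
// decoder's CharBuffer holds up to one char per input byte, and each
// char is 2 bytes, so the transient buffer is ~2x the record size.
public class DecodeCost {
    // Approximate size in bytes of the transient char buffer allocated
    // while decoding a UTF-8 record of recordBytes bytes.
    static long decodeCharBufferBytes(long recordBytes) {
        return recordBytes * 2;
    }

    public static void main(String[] args) {
        long record = 300L << 20; // hypothetical 300 MB record
        System.out.println("transient decode buffer ~"
                + (decodeCharBufferBytes(record) >> 20) + " MB");
        // prints "transient decode buffer ~600 MB"
    }
}
```

[With the task heap capped at 715849728 bytes (the 699072K max in the SpillableMemoryManager lines above, about 683 MB), one record in that size range can kill the task even after raising -Xmx, which would fit the symptom that going from 512 MB to 1 GB did not help much.]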