Pig >> mail # user >> Pig out of memory error


Re: Pig out of memory error
export HADOOP_HEAPSIZE=<something more than what it is right now>
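
For context: HADOOP_HEAPSIZE (interpreted as megabytes) sets the heap for JVMs launched by the hadoop/pig wrapper scripts, which includes the Pig client process where the stack trace below shows the OOM occurring (the "IPC Client" and "Low Memory Detector" threads run in the client JVM, not in the tasks). A minimal sketch, assuming a bash shell; the value and script name are illustrative, not taken from the thread:

```shell
# Raise the heap for JVMs launched by the hadoop/pig wrapper scripts.
# HADOOP_HEAPSIZE is in MB; 4096 is an illustrative value -- pick something
# larger than your current setting (the stock hadoop-env.sh ships with 1000).
export HADOOP_HEAPSIZE=4096

# Some Pig installs also honor PIG_HEAPSIZE for the Pig client JVM
# specifically; setting both is harmless.
export PIG_HEAPSIZE=4096

# Then re-run the job (script name is hypothetical):
# pig myscript.pig
echo "HADOOP_HEAPSIZE=$HADOOP_HEAPSIZE"
```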

Thanks,
Aniket

On Sun, Jun 17, 2012 at 11:16 PM, Pankaj Gupta <[EMAIL PROTECTED]> wrote:

> Hi,
>
> I am getting an out of memory error while running Pig. I am running a
> pretty big job with one master node and over 100 worker nodes. Pig divides
> the execution in two map-reduce jobs. Both the jobs succeed with a small
> data set. With a large data set I get an out of memory error at the end of
> the first job. This happens right after all the mappers and reducers of
> the first job are done and the second job hasn't started. Here is the error:
>
> 2012-06-18 03:15:29,565 [Low Memory Detector] INFO
>  org.apache.pig.impl.util.SpillableMemoryManager - first memory handler
> call - Collection threshold init = 187039744(182656K) used = 390873656(381712K) committed = 613744640(599360K) max = 699072512(682688K)
> 2012-06-18 03:15:31,137 [Low Memory Detector] INFO
>  org.apache.pig.impl.util.SpillableMemoryManager - first memory handler
> call- Usage threshold init = 187039744(182656K) used = 510001720(498048K)
> committed = 613744640(599360K) max = 699072512(682688K)
> Exception in thread "IPC Client (47) connection to /10.217.23.253:9001 from hadoop" java.lang.RuntimeException:
> java.lang.reflect.InvocationTargetException
> Caused by: java.lang.reflect.InvocationTargetException
> Caused by: java.lang.OutOfMemoryError: Java heap space
>        at org.apache.hadoop.mapred.TaskReport.<init>(TaskReport.java:46)
>        at sun.reflect.GeneratedConstructorAccessor31.newInstance(Unknown
> Source)
>        at
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
>        at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
>        at
> org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:113)
>        at
> org.apache.hadoop.io.WritableFactories.newInstance(WritableFactories.java:53)
>        at
> org.apache.hadoop.io.ObjectWritable.readObject(ObjectWritable.java:236)
>        at
> org.apache.hadoop.io.ObjectWritable.readObject(ObjectWritable.java:171)
>        at
> org.apache.hadoop.io.ObjectWritable.readObject(ObjectWritable.java:219)
>        at
> org.apache.hadoop.io.ObjectWritable.readFields(ObjectWritable.java:66)
>        at
> org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:807)
>        at org.apache.hadoop.ipc.Client$Connection.run(Client.java:742)
> Exception in thread "Low Memory Detector" java.lang.OutOfMemoryError: Java
> heap space
>        at
> sun.management.MemoryUsageCompositeData.getCompositeData(MemoryUsageCompositeData.java:40)
>        at
> sun.management.MemoryUsageCompositeData.toCompositeData(MemoryUsageCompositeData.java:34)
>        at
> sun.management.MemoryNotifInfoCompositeData.getCompositeData(MemoryNotifInfoCompositeData.java:42)
>        at
> sun.management.MemoryNotifInfoCompositeData.toCompositeData(MemoryNotifInfoCompositeData.java:36)
>        at sun.management.MemoryImpl.createNotification(MemoryImpl.java:168)
>        at
> sun.management.MemoryPoolImpl$CollectionSensor.triggerAction(MemoryPoolImpl.java:300)
>        at sun.management.Sensor.trigger(Sensor.java:120)
>
> I would really appreciate any suggestions on how to go about debugging and
> rectifying this issue.
>
> Thanks,
> Pankaj
--
"...:::Aniket:::... Quetzalco@tl"