

Re: Pig 0.11.1 OutOfMemory error
I think we should fix it in pig if it is a regression from pig 0.10.

Shubham,
   If the script works fine for you in pig 0.10, can you open a jira for
the issue with 0.11?

Regards,
Rohini
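
As Bill explains further down the thread, getSignature prints the entire
logical plan into one string and then hashes it, so its memory footprint grows
with the size of the printed plan. Below is a minimal, self-contained Java
sketch of that pattern; it is not Pig's actual code, and the class and operator
names are made up for illustration.

    import java.util.ArrayList;
    import java.util.List;

    // Sketch (NOT Pig's implementation) of the pattern behind
    // LogicalPlan.getSignature(): print the whole plan into one StringBuilder,
    // then hash the resulting string. The buffer grows with the printed plan,
    // so a very large plan can exhaust the heap inside the Arrays.copyOf that
    // expands the builder, which matches the top frames of the trace below.
    public class SignatureSketch {

        // Hypothetical stand-in for a logical-plan operator.
        static class Op {
            final String name;
            final List<Op> successors = new ArrayList<Op>();
            Op(String name) { this.name = name; }
        }

        // Depth-first print of the plan into a shared buffer, analogous to
        // LogicalPlanPrinter.depthFirstLP in the stack trace.
        static void depthFirst(Op op, int depth, StringBuilder sb) {
            for (int i = 0; i < depth; i++) sb.append("  ");
            sb.append(op.name).append('\n');        // every operator is appended
            for (Op next : op.successors) {
                depthFirst(next, depth + 1, sb);
            }
        }

        // "Signature" = hash of the full textual form of the plan.
        static int signature(Op root) {
            StringBuilder sb = new StringBuilder(); // grows with the plan text
            depthFirst(root, 0, sb);
            return sb.toString().hashCode();        // second full-size copy here
        }

        public static void main(String[] args) {
            Op load = new Op("LOLoad");
            Op filter = new Op("LOFilter");
            Op store = new Op("LOStore");
            load.successors.add(filter);
            filter.successors.add(store);
            System.out.println("signature = " + signature(load));
        }
    }

Presumably, with multi-query optimization on the whole script is compiled and
signed as one plan, while with it off each STORE is handled with a smaller
plan; that would be consistent with Shubham's observation below, though the
thread does not confirm it.
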
On Fri, Sep 6, 2013 at 1:51 PM, Bill Graham <[EMAIL PROTECTED]> wrote:

> The getSignature method basically generates a string representation of the
> logical plan and then computes its hash. In your case it seems the logical
> plan is too large for the amount of memory you have. Try increasing the
> heap even more.
>
>
> On Fri, Sep 6, 2013 at 1:10 PM, Koji Noguchi <[EMAIL PROTECTED]>
> wrote:
>
> > Seems to be happening inside the method introduced in 0.11
> > "org.apache.pig.newplan.logical.relational.LogicalPlan.getSignature"
> >
> > https://issues.apache.org/jira/browse/PIG-2587
> >
> > Maybe a coincidence, but can we ask Bill to help us?
> >
> > Shubham, can you try your query on pig 0.10.* and see if you don't hit the
> > OOM?
> >
> > Koji
> >
> >
> > On Sep 4, 2013, at 1:27 PM, Shubham Chopra wrote:
> >
> > > Hi,
> > >
> > > I have a relatively large pig script (around 1.5k lines, 85 assignments).
> > > Around 150 columns are getting projected, joined, grouped and aggregated,
> > > ending in multiple stores.
> > >
> > > Pig 0.11.1 fails with the following error even before any jobs are fired:
> > > Pig Stack Trace
> > > ---------------
> > > ERROR 2998: Unhandled internal error. Java heap space
> > >
> > > java.lang.OutOfMemoryError: Java heap space
> > >        at java.util.Arrays.copyOf(Arrays.java:2882)
> > >        at java.lang.AbstractStringBuilder.expandCapacity(AbstractStringBuilder.java:100)
> > >        at java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:390)
> > >        at java.lang.StringBuilder.append(StringBuilder.java:119)
> > >        at org.apache.pig.newplan.logical.optimizer.LogicalPlanPrinter.depthFirstLP(LogicalPlanPrinter.java:83)
> > >        at org.apache.pig.newplan.logical.optimizer.LogicalPlanPrinter.visit(LogicalPlanPrinter.java:69)
> > >        at org.apache.pig.newplan.logical.relational.LogicalPlan.getSignature(LogicalPlan.java:122)
> > >        at org.apache.pig.PigServer.execute(PigServer.java:1237)
> > >        at org.apache.pig.PigServer.executeBatch(PigServer.java:333)
> > >        at org.apache.pig.tools.grunt.GruntParser.executeBatch(GruntParser.java:137)
> > >        at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:198)
> > >        at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:170)
> > >        at org.apache.pig.tools.grunt.Grunt.exec(Grunt.java:84)
> > >        at org.apache.pig.Main.run(Main.java:604)
> > >        at org.apache.pig.Main.main(Main.java:157)
> > >        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> > >        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> > >        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> > >        at java.lang.reflect.Method.invoke(Method.java:597)
> > >        at org.apache.hadoop.util.RunJar.main(RunJar.java:160)
> > >
> > > Increasing heap size to 2Gb doesn't help either. The only thing that
> > > appears to get the script working is to disable multi-query optimization.
> > > Has anyone else faced a similar problem with Pig running out of memory
> > > while compiling the script? Any other way to get it to work besides
> > > disabling multi-query optimization?
> > >
> > > Thanks,
> > > Shubham.
> >
> >
>
>
> --
> *Note that I'm no longer using my Yahoo! email address. Please email me at
> [EMAIL PROTECTED] going forward.*
>
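
For anyone hitting the same wall as Shubham: multi-query optimization can be
turned off from the command line with "pig -no_multiquery script.pig" (or the
short form -M), and the client-side heap can be raised by exporting
PIG_HEAPSIZE (in MB) before launching bin/pig. The Java sketch below shows the
same workaround through the embedded PigServer API; treat the opt.multiquery
property name (which, as far as I recall, is what -no_multiquery toggles) and
the script file name as illustrative assumptions rather than something
confirmed in this thread.

    import java.util.Properties;

    import org.apache.pig.ExecType;
    import org.apache.pig.PigServer;

    // Sketch: run a large script with multi-query optimization disabled,
    // so each STORE is compiled and executed against a smaller plan.
    public class NoMultiQueryRun {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            // Assumed to be the property behind -no_multiquery / -M.
            props.setProperty("opt.multiquery", "false");

            PigServer pig = new PigServer(ExecType.MAPREDUCE, props);
            pig.setBatchOn();                        // batch mode, as GruntParser uses for script files
            pig.registerScript("large_script.pig");  // hypothetical script name
            pig.executeBatch();                      // compile and launch the jobs
        }
    }

This trades away the job merging that multi-query normally provides, so expect
more MapReduce jobs; it is a workaround for the compile-time OOM, not a fix.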