If you are using the latest source from trunk, then the log file will
contain the exception stack trace. Hopefully, that will help you debug
the root cause. You don't have to instrument the Pig sources to retrieve
the stack trace.
From: Mridul Muralidharan [mailto:[EMAIL PROTECTED]]
Sent: Friday, March 13, 2009 12:30 PM
To: [EMAIL PROTECTED]
Subject: Re: hadoop-0.19.1
Unless Pig is using a deprecated API, moving from 0.19 to 0.19.1 should
ideally not cause any problems, IMO.
Though the message below looks cryptic at first glance, the actual
reason is probably the following, from the exception tree:
"Cannot create exception from empty string.
org.apache.pig.impl.logicalLayer.FrontendException: ERROR 1002: Unable
to store alias 154"
For me, these messages do little to help debug the issue.
So, what I usually do is:
a) Check out from svn the version used to build pig.jar.
b) Grep for this string in the Pig source code and print the underlying
exception stack trace that Pig is suppressing - e.g. introduce an
exception.printStackTrace() where the error message is generated.
c) Run "ant clean; ant" to regenerate pig.jar, and use it to run the job.
d) In addition to the above, you might also want to pass "-d DEBUG" on
the command line to generate the substituted file.
All line/column numbers for errors etc. reported by Pig are based on this
generated file, NOT on your input Pig script.
Given this, there is usually enough info to debug.
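The steps above can be sketched roughly as follows. The svn URL, script
name, and paths are illustrative only - point the checkout at whatever
branch or tag actually matches the pig.jar you are running:

```shell
# a) check out the Pig source matching your pig.jar (URL/branch is a guess)
svn checkout http://svn.apache.org/repos/asf/hadoop/pig/trunk pig-src
cd pig-src

# b) find where the suppressed message is produced, then hand-edit that
#    spot to add an exception.printStackTrace()
grep -rn "Unable to store alias" src/

# c) rebuild pig.jar with the extra logging and rerun the job with it
ant clean && ant

# d) run with "-d DEBUG" so the substituted file is generated; error
#    line numbers refer to that file, not your original script
bin/pig -d DEBUG myscript.pig
```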
The current error messages can be a bit unintuitive to debug at times -
alias numbers (as opposed to the alias name), line numbers relative to the
substituted file, cryptic schema errors, etc. When this happens, the
procedure above is the way to go for me :-)
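Step b) above boils down to this pattern. The class and messages below are
purely illustrative (not Pig's actual code): a wrapper exception hides the
root cause, and a temporary printStackTrace() on the cause reveals it:

```java
public class SuppressedCauseDemo {
    // Simulates a Pig-style wrapper exception: the user only sees the
    // wrapper's message unless the underlying cause is printed.
    static Exception wrapCause() {
        Exception cause = new IllegalStateException("real root cause");
        // The temporary line you would add in the Pig source at the spot
        // where the "Unable to store alias" message is constructed:
        cause.printStackTrace();
        return new Exception("ERROR 1002: Unable to store alias", cause);
    }

    public static void main(String[] args) {
        Exception wrapped = wrapCause();
        System.out.println(wrapped.getMessage());            // what the user sees
        System.out.println(wrapped.getCause().getMessage()); // what you actually need
    }
}
```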
Vadim Zaliva wrote:
> I used this patch indeed, and I have been working with it on 0.19.0.
> After upgrading to 0.19.1, things broke.
> On Fri, Mar 13, 2009 at 08:56, Santhosh Srinivasan <[EMAIL PROTECTED]> wrote:
>> Did you use the patch submitted as part of PIG-573?
>> -----Original Message-----
>> From: Vadim Zaliva [mailto:[EMAIL PROTECTED]]
>> Sent: Thursday, March 12, 2009 7:30 PM
>> To: [EMAIL PROTECTED]
>> Subject: hadoop-0.19.1
>> I have recently upgraded to Hadoop 0.19.1 (from 0.19.0). The Pig task
>> which used to work on it no longer works, giving me a cryptic error:
>> ERROR 2998: Unhandled internal error. depending job 0 with jobID
>> execute8 failed. depending job 0 with jobID execute7 failed. depending
>> job 2 with jobID execute6 failed. depending job 0 with jobID execute5
>> failed. Job failed!
>> java.lang.Exception: depending job 0 with jobID execute8 failed.
>> depending job 0 with jobID execute7 failed. depending job 2 with jobID
>> execute6 failed. depending job 0 with jobID execute5 failed. Job failed!
>> at org.apache.pig.PigServer.execute(PigServer.java:682)
>> at org.apache.pig.PigServer.registerQuery(PigServer.java:291)
>> at org.apache.pig.tools.grunt.Grunt.exec(Grunt.java:82)
>> at org.apache.pig.Main.main(Main.java:354)