Subject: Pig setup


Sorry, answered my own question by looking at the trace :)

What operation exactly are you trying to do? More info would help get to the
bottom of this.

Make sure that your pig.properties points correctly at your jobtracker and
namenode, i.e. you need the fs.default.name and mapred.job.tracker properties
to point to the namenode and the jobtracker.
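
For example, something like this in conf/pig.properties (host names and ports
here are placeholders; they should match what your cluster's core-site.xml
and mapred-site.xml use):

# HDFS namenode, as an hdfs:// URI (placeholder host/port)
fs.default.name=hdfs://namenode-host:54310
# jobtracker host:port (placeholder host/port)
mapred.job.tracker=jobtracker-host:54311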

Cheers,
 Gerrit
On Thu, May 26, 2011 at 3:29 PM, Gerrit Jansen van Vuuren <
[EMAIL PROTECTED]> wrote:

> Mm... you shouldn't need to register any extra jars just for basic pig.
>
> Does this error come after your query starts running on Hadoop, or before
> (i.e. on the front end)?
>
> Cheers,
>  Gerrit
>
> On Thu, May 26, 2011 at 3:08 PM, Jonathan Coveney <[EMAIL PROTECTED]> wrote:
>
>> I am pasting from a response to another user... I think it may apply.
>>
>> "Here is what I had to do to get pig running with a different version of
>> Hadoop (in my case, the cloudera build but I'd try this as well):
>>
>> Build pig-withouthadoop.jar by running "ant jar-withouthadoop". Then, when
>> you run pig, put the pig-withouthadoop.jar on your classpath as well as your
>> hadoop jar. In my case, I found that scripts only worked if I additionally
>> registered the antlr jar manually:
>>
>> register /path/to/pig/build/ivy/lib/Pig/antlr-runtime-3.2.jar;"
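>>
>> A rough sketch of that invocation (the paths and the exact hadoop jar name
>> are placeholders for your install, and this assumes your bin/pig script
>> picks up PIG_CLASSPATH):
>>
>> # from the pig source tree: build the hadoop-free pig jar
>> ant jar-withouthadoop
>> # put pig-withouthadoop.jar and your cluster's hadoop jar on the classpath
>> export PIG_CLASSPATH=/path/to/pig/pig-withouthadoop.jar:/path/to/hadoop/hadoop-core-0.20.203.0.jar
>> pig -x mapreduce yourscript.pig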
>>
>>
>>
>> 2011/5/25 Mohit Anchlia <[EMAIL PROTECTED]>
>>
>> > On Wed, May 25, 2011 at 5:02 PM, Mohit Anchlia <[EMAIL PROTECTED]>
>> > wrote:
>> > > I am in the process of installing and learning Pig. I have a Hadoop
>> > > cluster, and when I try to run Pig in mapreduce mode it errors out:
>> >
>> > Hadoop version is hadoop-0.20.203.0 and pig version is pig-0.8.1
>> >
>> > >
>> > > Error before Pig is launched
>> > > ----------------------------
>> > > ERROR 2999: Unexpected internal error. Failed to create DataStorage
>> > >
>> > > java.lang.RuntimeException: Failed to create DataStorage
>> > >        at org.apache.pig.backend.hadoop.datastorage.HDataStorage.init(HDataStorage.java:75)
>> > >        at org.apache.pig.backend.hadoop.datastorage.HDataStorage.<init>(HDataStorage.java:58)
>> > >        at org.apache.pig.backend.hadoop.executionengine.HExecutionEngine.init(HExecutionEngine.java:214)
>> > >        at org.apache.pig.backend.hadoop.executionengine.HExecutionEngine.init(HExecutionEngine.java:134)
>> > >        at org.apache.pig.impl.PigContext.connect(PigContext.java:183)
>> > >        at org.apache.pig.PigServer.<init>(PigServer.java:226)
>> > >        at org.apache.pig.PigServer.<init>(PigServer.java:215)
>> > >        at org.apache.pig.tools.grunt.Grunt.<init>(Grunt.java:55)
>> > >        at org.apache.pig.Main.run(Main.java:452)
>> > >        at org.apache.pig.Main.main(Main.java:107)
>> > > Caused by: java.io.IOException: Call to dsdb1/172.18.60.96:54310 failed on local exception: java.io.EOFException
>> > >        at org.apache.hadoop.ipc.Client.wrapException(Client.java:775)
>> > >        at org.apache.hadoop.ipc.Client.call(Client.java:743)
>> > >        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
>> > >        at $Proxy0.getProtocolVersion(Unknown Source)
>> > >        at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:359)
>> > >        at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:106)
>> > >        at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:207)
>> > >        at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:170)
>> > >        at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:82)
>> > >        at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1378)
>> > >        at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
>> > >        at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1390)
>> > >        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:196)
>> > >        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:95)