Re: Support for Hadoop 2.2
Hi Claudio,
it's hard to guess from the limited information. I would suggest taking a look at the logs to see what is happening.
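
For example, once the application has finished or been killed (and
assuming log aggregation is enabled), you can pull the container logs
with something like:

    yarn logs -applicationId application_1382631533263_0012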

One guess though - you've mentioned that the task was "running" for 30 minutes, but it still seems to be in the SCHEDULED state - are your node managers running correctly?
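
For example, on the ResourceManager host something like:

    yarn node -list

should show the slave's NodeManager as a registered, RUNNING node; if
it does not appear there, the map task has nowhere to get a container
and will stay in SCHEDULED.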

Jarcec

On Fri, Oct 25, 2013 at 04:10:12PM -0300, Claudio Romo Otto wrote:
> You got it!
>
> The solution was to compile with the -Dhadoopversion=23 option.
> After your message I tried another test removing Cassandra from the
> chain, and Pig successfully sent the job to Hadoop.
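>
> For reference, a Pig build against Hadoop 2 looks roughly like this
> (exact ant targets may vary between releases):
>
>     ant clean jar -Dhadoopversion=23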
>
> BUT! The problem changed: now the map task remains stuck forever on
> Hadoop (30 minutes waiting, no other jobs running):
>
> Task:         task_1382631533263_0012_m_000000
>               <http://topgps-test-3.dnsalias.com:8088/proxy/application_1382631533263_0012/mapreduce/task/task_1382631533263_0012_m_000000>
> State:        SCHEDULED
> Start Time:   Fri, 25 Oct 2013 18:18:32 GMT
> Finish Time:  N/A
> Elapsed Time: 0sec
>
> Attempt:  attempt_1382631533263_0012_m_000000_0
> Progress: 0,00
> State:    STARTING
> Node:     N/A
> Logs:     N/A
> Started:  N/A
> Finished: N/A
> Elapsed:  0sec
>
>
> I don't know if this is a Hadoop problem or a Pig problem - what do you think?
>
>
> On 25/10/13 13:11, Jarek Jarcec Cecho wrote:
> >It seems that Pig was correctly compiled against Hadoop 23, but the Cassandra piece was not. Check where the exception is coming from:
> >
> >>Caused by: java.lang.IncompatibleClassChangeError: Found interface
> >>org.apache.hadoop.mapreduce.JobContext, but class was expected
> >>     at org.apache.cassandra.hadoop.AbstractColumnFamilyInputFormat.getSplits(AbstractColumnFamilyInputFormat.java:113)
> >So, I would say that you also need to get a Hadoop 2-compatible Cassandra connector first.
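> >
> >(You can see the underlying API change with javap - against a Hadoop 1
> >jar, org.apache.hadoop.mapreduce.JobContext is reported as a class,
> >while against a Hadoop 2 jar it is an interface; the jar names below
> >are just examples:
> >
> >    javap -classpath hadoop-core-1.2.1.jar org.apache.hadoop.mapreduce.JobContext
> >    javap -classpath hadoop-mapreduce-client-core-2.2.0.jar org.apache.hadoop.mapreduce.JobContext
> >
> >Code compiled against the former fails with IncompatibleClassChangeError
> >when run against the latter.)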
> >
> >Jarcec
> >
> >On Thu, Oct 24, 2013 at 10:34:49PM -0300, Claudio Romo Otto wrote:
> >>After changing from hadoop20 to hadoop23 the warning disappeared,
> >>but I got the same exception (Found interface
> >>org.apache.hadoop.mapreduce.JobContext, but class was expected).
> >>
> >>I have tried this on a fresh install: Hadoop 2.2.0 and Pig 0.12.1
> >>compiled by me, no other products or configuration, just two
> >>servers - one master with the ResourceManager and NameNode, and one
> >>slave with the DataNode and NodeManager.
> >>
> >>I can't understand why Pig 0.12 fails on this fresh cluster. Here
> >>is the new trace:
> >>
> >>2013-10-24 16:10:52,351 [JobControl] ERROR
> >>org.apache.pig.backend.hadoop23.PigJobControl - Error while trying
> >>to run jobs.
> >>java.lang.RuntimeException: java.lang.reflect.InvocationTargetException
> >>     at org.apache.pig.backend.hadoop23.PigJobControl.submit(PigJobControl.java:130)
> >>     at org.apache.pig.backend.hadoop23.PigJobControl.run(PigJobControl.java:191)
> >>     at java.lang.Thread.run(Thread.java:724)
> >>     at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher$1.run(MapReduceLauncher.java:257)
> >>Caused by: java.lang.reflect.InvocationTargetException
> >>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> >>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> >>     at java.lang.reflect.Method.invoke(Method.java:606)
> >>     at org.apache.pig.backend.hadoop23.PigJobControl.submit(PigJobControl.java:128)
> >>     ... 3 more
> >>Caused by: java.lang.IncompatibleClassChangeError: Found interface
> >>org.apache.hadoop.mapreduce.JobContext, but class was expected
> >>     at org.apache.cassandra.hadoop.AbstractColumnFamilyInputFormat.getSplits(AbstractColumnFamilyInputFormat.java:113)
> >>     at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigInputFormat.getSplits(PigInputFormat.java:274)
> >>     at org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:491)
> >