It looks like if I call that function on my job configuration, it sets the
variable "mapreduce.task.classpath.user.precedence" to true, which causes the
jars I supply to appear first on the classpath.
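In case it helps, here's roughly how the flag can be set in the driver (a
sketch only; the property key is the one named above, but whether it is
honored varies across Hadoop/CDH versions, so treat the key as an assumption):

```java
import org.apache.hadoop.mapred.JobConf;

public class PrecedenceSketch {
    public static void main(String[] args) {
        // Ask the framework to put user-supplied jars ahead of the bundled
        // ones on the task classpath. Property key taken from this thread;
        // check your Hadoop version's docs for the exact name.
        JobConf conf = new JobConf();
        conf.setBoolean("mapreduce.task.classpath.user.precedence", true);
        // ... configure and submit the job as usual ...
    }
}
```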
Now I'm getting a different exception:
12/07/16 15:17:41 INFO mapred.JobClient: Task Id :
attempt_201207160219_0016_r_000000_1, Status : FAILED
java.io.IOException: The temporary job-output directory
but I think this is an unrelated issue, and I'm hoping it could be because I'm
out of disk space.
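To check the out-of-space theory, something like the following should show how
much room is left (standard commands for this Hadoop era):

```shell
# Summarize DFS capacity, usage, and remaining space per datanode
hadoop dfsadmin -report

# Local disks matter too, since task temporary output is spilled locally
df -h
```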
On Mon, Jul 16, 2012 at 12:17 AM, Jeremy Lewi <[EMAIL PROTECTED]> wrote:
> I printed out my classpath in the configure function of the mapper and
> reducer, and it looks like the jars in /usr/lib/hadoop/lib are still
> appearing first. So I must not be correctly setting the option that makes my
> classpath take precedence. Any ideas what I might be doing wrong?
> On Sun, Jul 15, 2012 at 11:34 PM, Jeremy Lewi <[EMAIL PROTECTED]> wrote:
>> Thanks Alan.
>> I'm still getting the same error as before. Here's how I'm running the job:
>>
>>     hadoop jar ./target/contrail-1.0-SNAPSHOT-job.jar \
>>         contrail.avro.QuickMergeAvro -D mapreduce.task.classpath.first=true \
>>         --outputpath=/users/jlewi/staph/assembly/QuickMerge --K=45
>> I verified via the job tracker that the property
>> "mapreduce.task.classpath.first" is getting picked up.
>> It looks like the problem I'm dealing with is related to
>> Any ideas?
>> On Sun, Jul 15, 2012 at 2:00 AM, Alan Miller <[EMAIL PROTECTED]> wrote:
>>> Hi, just a quick idea.
>>> Also check ALL the directories returned by
>>>
>>>     hadoop classpath
>>>
>>> for any Avro-related classes.
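For example, a quick way to scan those directories for stray Avro jars (a
sketch; assumes a POSIX shell):

```shell
# Print each classpath entry on its own line, then filter for Avro jars
hadoop classpath | tr ':' '\n' | grep -i avro
```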
>>> I was struggling trying to use avro-1.7.0 with CDH4, but made it work by
>>> using the -libjars option and making sure my classes are used BEFORE the
>>> standard classes. There's a config property (I don't remember the name) to
>>> set for that. Note that setting is for the task's classpath; to control
>>> the classpath of your driver class, set HADOOP_CLASSPATH=...
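In other words, something along these lines (the jar path, job jar, and
driver class here are hypothetical placeholders, and the precedence key is
the one discussed in this thread):

```shell
# Driver-side classpath: jars the submitting JVM needs
export HADOOP_CLASSPATH=/path/to/avro-1.7.0.jar

# Task-side classpath: ship the same jar to the cluster with -libjars,
# and ask for user jars to take precedence over the bundled ones
hadoop jar myjob.jar my.pkg.MyDriver \
  -libjars /path/to/avro-1.7.0.jar \
  -D mapreduce.task.classpath.user.precedence=true \
  <job-specific args>
```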
>>> Sent from my iPhone
>>> On Jul 15, 2012, at 3:59, "Jeremy Lewi" <[EMAIL PROTECTED]> wrote:
>>> > Hi avro-users,
>>> > I'm getting the following exception when using avro 1.6.1 with CDH4.
>>> > java.lang.NoSuchMethodError:
>>> > The offending code is
>>> >
>>> >     GraphNodeData copy = (GraphNodeData)
>>> >         SpecificData.get().deepCopy(data.getSchema(), data);
>>> > where GraphNodeData is a class generated from my AVRO record.
>>> > The code runs just fine on CDH3. I tried rebuilding Avro from source and
>>> > installing it in my local repo because of a previous post that said the
>>> > Avro 1.6.1 in Maven had been built against CDH3. I also deleted all the
>>> > avro jar files I found in
>>> > /usr/lib/hadoop
>>> > Any ideas? Thanks!
>>> > Jeremy
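One debugging trick that can help with this kind of NoSuchMethodError is
printing where a class was actually loaded from. A pure-JDK sketch (the
class name below is just an example; substitute the Avro class in question,
and call it from the mapper's configure method to see the task-side answer):

```java
import java.security.CodeSource;

public class WhichJar {
    public static void main(String[] args) throws Exception {
        // Replace with e.g. "org.apache.avro.specific.SpecificData" to see
        // which jar on the classpath is actually supplying the class.
        Class<?> c = Class.forName("java.lang.String");
        CodeSource src = c.getProtectionDomain().getCodeSource();
        // Classes from the bootstrap loader report a null code source;
        // classes from a classpath jar report that jar's location.
        System.out.println(src == null
                ? "bootstrap classloader"
                : src.getLocation().toString());
    }
}
```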