Accumulo user mailing list: Jobs failing with ClassNotFoundException


Thread:
- Chris Sigman 2013-02-14, 18:41
- Billie Rinaldi 2013-02-14, 18:51
- Chris Sigman 2013-02-14, 18:53
- Keith Turner 2013-02-14, 19:17
- Chris Sigman 2013-02-14, 19:20
- William Slacum 2013-02-14, 19:34
- Chris Sigman 2013-02-14, 20:17
- Chris Sigman 2013-02-14, 21:07
- Keith Turner 2013-02-14, 21:15
Re: Jobs failing with ClassNotFoundException
Hi everyone,

I've figured out what's going on.  I'm not quite sure why, but specifying
the class name for the job was messing up the options parsing, causing it
to never process any of the actual arguments.  I still can't run through
tool.sh since it expects the first argument to be the class name for the
job, but that's rather inconsequential.

Thanks everyone for the help,
--
Chris
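The failure mode described above can be sketched with a toy parser (plain Java, not the real Hadoop GenericOptionsParser): options are only consumed from the front of the argument list, so a stray leading class-name token stops option parsing immediately, -libjars is never processed, and every positional argument is shifted by one.

```java
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class ArgShiftDemo {
    // Toy model: consume "-opt value" pairs from the front of the argument
    // list, then treat the remainder as positional arguments (roughly how
    // generic options are split from program arguments).
    public static Map.Entry<Map<String, String>, List<String>> parse(String[] argv) {
        Map<String, String> opts = new LinkedHashMap<>();
        int i = 0;
        while (i + 1 < argv.length && argv[i].startsWith("-")) {
            opts.put(argv[i], argv[i + 1]);
            i += 2;
        }
        return Map.entry(opts, Arrays.asList(argv).subList(i, argv.length));
    }

    public static void main(String[] args) {
        // Options first: -libjars is consumed, positionals arrive intact.
        System.out.println(parse(new String[] {
            "-libjars", "a.jar,b.jar", "inst", "namenode", "root", "pass"}));
        // Class name prepended: the first token is not an option, so option
        // parsing stops at once and -libjars is left among the positionals.
        System.out.println(parse(new String[] {
            "movingaverage.MAJob",
            "-libjars", "a.jar,b.jar", "inst", "namenode", "root", "pass"}));
    }
}
```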
On Thu, Feb 14, 2013 at 4:15 PM, Keith Turner <[EMAIL PROTECTED]> wrote:

> On Thu, Feb 14, 2013 at 4:07 PM, Chris Sigman <[EMAIL PROTECTED]> wrote:
> > Is it possible that ToolRunner.run isn't working right? How might I
> > determine that it's putting the libs into the distributed cache?
>
> If you look at the resulting config that is generated for the map reduce
> job, you may see something of use there.
>
> >
> >
> > --
> > Chris
> >
> >
> > On Thu, Feb 14, 2013 at 3:17 PM, Chris Sigman <[EMAIL PROTECTED]> wrote:
> >>
> >> All of those jars exist, and there aren't any differences in those from
> >> when I run one of the example jobs.  I'm also using ToolRunner.run.
> >>
> >>
> >> --
> >> Chris
> >>
> >>
> >> On Thu, Feb 14, 2013 at 2:34 PM, William Slacum
> >> <[EMAIL PROTECTED]> wrote:
> >>>
> >>> Make sure that all of the jars you pass to libjars exist and you're
> >>> using ToolRunner.run, which will parse out those options.
> >>>
> >>>
> >>> On Thu, Feb 14, 2013 at 2:20 PM, Chris Sigman <[EMAIL PROTECTED]> wrote:
> >>>>
> >>>> Yes, everything's readable by everyone.  As I said before, the odd
> >>>> thing is that running one of the example jobs like Wordcount works
> >>>> just fine.
> >>>>
> >>>>
> >>>> --
> >>>> Chris
> >>>>
> >>>>
> >>>> On Thu, Feb 14, 2013 at 2:17 PM, Keith Turner <[EMAIL PROTECTED]> wrote:
> >>>>>
> >>>>> On Thu, Feb 14, 2013 at 1:53 PM, Chris Sigman <[EMAIL PROTECTED]>
> >>>>> wrote:
> >>>>> > Yep, all of the jars are also available on the datanodes
> >>>>>
> >>>>> Also are the jars readable by the user running the M/R job?
> >>>>>
> >>>>> >
> >>>>> >
> >>>>> > --
> >>>>> > Chris
> >>>>> >
> >>>>> >
> >>>>> > On Thu, Feb 14, 2013 at 1:51 PM, Billie Rinaldi <[EMAIL PROTECTED]> wrote:
> >>>>> >>
> >>>>> >> On Thu, Feb 14, 2013 at 10:41 AM, Chris Sigman <[EMAIL PROTECTED]> wrote:
> >>>>> >>>
> >>>>> >>> Hi everyone,
> >>>>> >>>
> >>>>> >>> I've got a job I'm running that I can't figure out why it's
> >>>>> >>> failing.  I've tried running jobs from the examples, and they
> >>>>> >>> work just fine.  I'm running the job via
> >>>>> >>>
> >>>>> >>> > ./bin/tool.sh ~/MovingAverage.jar movingaverage.MAJob inst namenode root pass stockdata movingaverage
> >>>>> >>>
> >>>>> >>> which I see is running the following exec call that seems
> >>>>> >>> perfect to me:
> >>>>> >>>
> >>>>> >>> exec /usr/lib/hadoop/bin/hadoop jar /MovingAverage.jar
> >>>>> >>> movingaverage.MAJob -libjars
> >>>>> >>> "/opt/accumulo/lib/libthrift-0.6.1.jar,/opt/accumulo/lib/accumulo-core-1.4.2.jar,/usr/lib/zookeeper//zookeeper-3.3.5-cdh3u5.jar,/opt/accumulo/lib/cloudtrace-1.4.2.jar,/opt/accumulo/lib/commons-collections-3.2.jar,/opt/accumulo/lib/commons-configuration-1.5.jar,/opt/accumulo/lib/commons-io-1.4.jar,/opt/accumulo/lib/commons-jci-core-1.0.jar,/opt/accumulo/lib/commons-jci-fam-1.0.jar,/opt/accumulo/lib/commons-lang-2.4.jar,/opt/accumulo/lib/commons-logging-1.0.4.jar,/opt/accumulo/lib/commons-logging-api-1.0.4.jar"
> >>>>> >>> inst namenode root pass tmpdatatable movingaverage
> >>>>> >>
> >>>>> >>
> >>>>> >> Does /opt/accumulo/lib/accumulo-core-1.4.2.jar exist on your
> >>>>> >> hadoop nodes, specifically the one that's running the map?
> >>>>> >>
> >>>>> >> Billie
> >>>>> >>
> >>>>> >>
> >>>>> >>>
> >>>>> >>>
> >>>>> >>> but when the job runs, it gets to the map phase and fails:
> >>>>> >>>
> >>>>> >>> 13/02/14 13:25:26 INFO mapred.JobClient: Task Id :
> >>>>> >>> attempt_201301171408_0293_m_000000_0, Status : FAILED
> >>>>
Later replies in this thread:
- John Vines 2013-02-14, 19:11
- Chris Sigman 2013-02-14, 19:14