Re: Issue with -libjars option in cluster in Hadoop 1.0
Are you following the guidelines as mentioned here:
http://grepalex.com/2013/02/25/hadoop-libjars/
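
In particular, -libjars only takes effect if the driver passes its
arguments through GenericOptionsParser, which is what you get for free by
implementing Tool and launching via ToolRunner. A rough sketch of such a
driver (class and job names are placeholders and the mapper/reducer setup
is omitted, so adjust to your actual code):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class MRTestJob extends Configured implements Tool {

  @Override
  public int run(String[] args) throws Exception {
    // getConf() already reflects whatever -libjars/-D options
    // GenericOptionsParser stripped off the command line.
    Job job = new Job(getConf(), "mrtest");
    job.setJarByClass(MRTestJob.class);
    // set mapper/reducer/output classes here as usual
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    return job.waitForCompletion(true) ? 0 : 1;
  }

  public static void main(String[] args) throws Exception {
    // ToolRunner runs GenericOptionsParser before calling run(), so the
    // jars passed with -libjars end up in the distributed cache.
    System.exit(ToolRunner.run(new Configuration(), new MRTestJob(), args));
  }
}

With a driver like that, the generic options (-libjars, -D, ...) have to come
before your own arguments, as in the command you posted, and the listed jars
should then be shipped to the task trackers via the distributed cache. If the
extra classes are also needed in the driver itself, the jar additionally has
to be on HADOOP_CLASSPATH on the machine where you submit the job.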

Regards,
Shahab
On Thu, Jun 6, 2013 at 12:51 PM, Thilo Goetz <[EMAIL PROTECTED]> wrote:

> Hi all,
>
> I'm using hadoop 1.0 (yes it's old, but there is nothing I can do
> about that).  I have some M/R programs that work perfectly on a
> single node setup.  However, they consistently fail in the cluster
> I have available.  I have tracked this down to the fact that extra
> jars I include on the command line with -libjars are not available
> on the slaves.  I get FileNotFoundExceptions for those jars.
>
> For example, I run this:
>
> hadoop jar mrtest.jar my.MRTestJob -libjars JSON4J.jar in out
>
> Then I get (on the slave):
>
> java.io.FileNotFoundException: File /local/home/hadoop/JSON4J.jar does not exist.
>         at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:397)
>         at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:251)
>         at org.apache.hadoop.filecache.TaskDistributedCacheManager.setupCache(TaskDistributedCacheManager.java:179)
>         at org.apache.hadoop.mapred.TaskTracker$4.run(TaskTracker.java:1193)
>         at java.security.AccessController.doPrivileged(AccessController.java:284)
>         at javax.security.auth.Subject.doAs(Subject.java:573)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1128)
>         at org.apache.hadoop.mapred.TaskTracker.initializeJob(TaskTracker.java:1184)
>         at org.apache.hadoop.mapred.TaskTracker.localizeJob(TaskTracker.java:1099)
>         at org.apache.hadoop.mapred.TaskTracker$5.run(TaskTracker.java:2382)
>         at java.lang.Thread.run(Thread.java:736)
>
> Where /local/home/hadoop is where I ran the code on the master.
>
> As far as I can tell from my internet research, this is supposed to
> work in hadoop 1.0, correct?  It may well be that the cluster is
> somehow misconfigured (didn't set it up myself), so I would appreciate
> any hints as to what I should be looking at in terms of configuration.
>
> Oh and btw, the fat jar approach where I put all classes required by
> the M/R code in the main jar works perfectly.  However, I would like
> to avoid that if I possibly can.
>
> Any help appreciated!
>
> --Thilo
>
>