Are you following the guidelines as mentioned here:
On Thu, Jun 6, 2013 at 12:51 PM, Thilo Goetz <[EMAIL PROTECTED]> wrote:
> Hi all,
> I'm using hadoop 1.0 (yes it's old, but there is nothing I can do
> about that). I have some M/R programs that work perfectly on a
> single node setup. However, they consistently fail in the cluster
> I have available. I have tracked this down to the fact that extra
> jars I include on the command line with -libjars are not available
> on the slaves. I get FileNotFoundExceptions for those jars.
> For example, I run this:
> hadoop jar mrtest.jar my.MRTestJob -libjars JSON4J.jar in out
> Then I get (on the slave):
> java.io.FileNotFoundException: File /local/home/hadoop/JSON4J.jar does not exist
> at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(
> at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(
> at org.apache.hadoop.filecache.TaskDistributedCacheManager.
> at org.apache.hadoop.mapred.TaskTracker$4.run(TaskTracker.
> at java.security.AccessController.doPrivileged(
> at javax.security.auth.Subject.doAs(Subject.java:573)
> at org.apache.hadoop.security.UserGroupInformation.doAs(
> at org.apache.hadoop.mapred.TaskTracker.initializeJob(
> at org.apache.hadoop.mapred.TaskTracker.localizeJob(
> at org.apache.hadoop.mapred.TaskTracker$5.run(TaskTracker.
> at java.lang.Thread.run(Thread.java:736)
> Where /local/home/hadoop is where I ran the code on the master.
> As far as I can tell from my internet research, this is supposed to
> work in hadoop 1.0, correct? It may well be that the cluster is
> somehow misconfigured (didn't set it up myself), so I would appreciate
> any hints as to what I should be looking at in terms of configuration.
> Oh and btw, the fat jar approach where I put all classes required by
> the M/R code in the main jar works perfectly. However, I would like
> to avoid that if I possibly can.
> Any help appreciated!
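For what it's worth, a frequent cause of this symptom is that -libjars is only
interpreted by GenericOptionsParser, which runs only when the driver class
implements Tool and is launched through ToolRunner; otherwise the option is
passed through untouched and the jar is never shipped to the slaves. Assuming
that is the setup here, the invocation itself would look like this (the jar
names other than JSON4J.jar are placeholders for illustration):

```shell
# -libjars must come after the main class and before the job's own arguments,
# and takes a comma-separated list of jars. For it to take effect, my.MRTestJob
# must implement org.apache.hadoop.util.Tool and call ToolRunner.run() in main();
# otherwise the option is silently ignored by the driver.
hadoop jar mrtest.jar my.MRTestJob -libjars JSON4J.jar,extra-lib.jar in out
```

If the driver already goes through ToolRunner, the next thing to check is
whether the TaskTrackers can stage the jar at all, i.e. that
mapred.local.dir and the distributed-cache directories on the slaves are
writable by the user running the job.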