Re: Issue with -libjars option in cluster in Hadoop 1.0
It is trying to read JSON4J.jar from /local/home/hadoop. Does that jar
exist at this path on the client from which you are invoking the job? Is this
jar in the current dir from which you are kicking off the job?
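One quick sanity check (just a sketch, assuming the jar really does live in
/local/home/hadoop on the client) is to confirm the file is there and pass it
with an absolute path so the working directory doesn't matter:

    ls -l /local/home/hadoop/JSON4J.jar
    hadoop jar mrtest.jar my.MRTestJob -libjars /local/home/hadoop/JSON4J.jar in out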
On Thu, Jun 6, 2013 at 1:33 PM, Thilo Goetz <[EMAIL PROTECTED]> wrote:

> On 06/06/2013 06:58 PM, Shahab Yunus wrote:
>
>> Are you following the guidelines as mentioned here:
>> http://grepalex.com/2013/02/25/hadoop-libjars/
>>
>
> Now I am, so thanks for that :-)
>
> Still doesn't work though.  Following the hint in that
> post I looked at the job config, which has this:
> tmpjars  file:/local/home/hadoop/JSON4J.jar
>
> I assume that's the correct value.  Any other ideas?
>
> --Thilo
>
>
>> Regards,
>> Shahab
>>
>>
>> On Thu, Jun 6, 2013 at 12:51 PM, Thilo Goetz <[EMAIL PROTECTED]> wrote:
>>
>>     Hi all,
>>
>>     I'm using hadoop 1.0 (yes it's old, but there is nothing I can do
>>     about that).  I have some M/R programs that work perfectly on a
>>     single node setup.  However, they consistently fail in the cluster
>>     I have available.  I have tracked this down to the fact that extra
>>     jars I include on the command line with -libjars are not available
>>     on the slaves.  I get FileNotFoundExceptions for those jars.
>>
>>     For example, I run this:
>>
>>     hadoop jar mrtest.jar my.MRTestJob -libjars JSON4J.jar in out
>>
>>     Then I get (on the slave):
>>
>>     java.io.FileNotFoundException: File /local/home/hadoop/JSON4J.jar
>>     does not exist.
>>         at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:397)
>>         at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:251)
>>         at org.apache.hadoop.filecache.TaskDistributedCacheManager.setupCache(TaskDistributedCacheManager.java:179)
>>         at org.apache.hadoop.mapred.TaskTracker$4.run(TaskTracker.java:1193)
>>         at java.security.AccessController.doPrivileged(AccessController.java:284)
>>         at javax.security.auth.Subject.doAs(Subject.java:573)
>>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1128)
>>         at org.apache.hadoop.mapred.TaskTracker.initializeJob(TaskTracker.java:1184)
>>         at org.apache.hadoop.mapred.TaskTracker.localizeJob(TaskTracker.java:1099)
>>         at org.apache.hadoop.mapred.TaskTracker$5.run(TaskTracker.java:2382)
>>         at java.lang.Thread.run(Thread.java:736)
>>
>>     Where /local/home/hadoop is where I ran the code on the master.
>>
>>     As far as I can tell from my internet research, this is supposed to
>>     work in hadoop 1.0, correct?  It may well be that the cluster is
>>     somehow misconfigured (didn't set it up myself), so I would appreciate
>>     any hints as to what I should be looking at in terms of configuration.
>>
>>     Oh and btw, the fat jar approach where I put all classes required by
>>     the M/R code in the main jar works perfectly.  However, I would like
>>     to avoid that if I possibly can.
>>
>>     Any help appreciated!
>>
>>     --Thilo
>>
>>
>>
>