Re: java.io.FileNotFoundException: File does not exist: Error while running Decision Tree Hadoop MapReduce
res = ToolRunner.run(new Configuration(), new C45(), args);

Could this be the reason? In Eclipse I am getting the expected output, but a
wrong result in the cluster.
I read that there are limitations in LocalJobRunner.
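For comparison, here is a minimal driver sketch of the usual Tool/ToolRunner idiom; only the C45 class name and the ToolRunner.run(...) call come from this thread, and the job-setup details are assumptions. The point of the pattern is that run() builds the Job from getConf(), so the configuration ToolRunner is given (LocalJobRunner defaults in Eclipse, the cluster settings when submitted with hadoop jar) is the configuration the job is actually submitted with:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class C45 extends Configured implements Tool {

    @Override
    public int run(String[] args) throws Exception {
        // Build the job from getConf(), not from a fresh Configuration,
        // so whatever settings ToolRunner was started with are kept.
        Job job = Job.getInstance(getConf(), "C45 decision tree");
        job.setJarByClass(C45.class);
        // Mapper, Reducer and key/value classes would be set here.
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        return job.waitForCompletion(true) ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        int res = ToolRunner.run(new Configuration(), new C45(), args);
        System.exit(res);
    }
}
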
On Thu, Dec 12, 2013 at 4:21 PM, unmesha sreeveni <[EMAIL PROTECTED]> wrote:

> Yes, I copied my input file with -copyFromLocal.
>
> But the java.io.FileNotFoundException: File does not exist:
> /user/sree/C45/intermediate1.txt is happening as part of my Reduce
> function:
>
> C45 id = new C45();
> System.out.println("count " + cnt);
>
> Path input = new Path("C45/intermediate" + id.current_index + ".txt");
> //Path chck = new Path("C45/cnt_" + id.current_index + ".txt");
> Configuration conf = new Configuration();
> conf.set("fs.defaultFS", "hdfs://my remote-ip/");
> conf.set("hadoop.job.ugi", "hdfs");
> FileSystem fs = FileSystem.get(conf);
>
> BufferedWriter bw = new BufferedWriter(
>         new OutputStreamWriter(fs.create(input, true)));
> System.out.println("Text from Reducer: "
>         + "C45/intermediate" + id.current_index + ".txt" + text);
> System.out.println("file exists:/user/dataadmin/C45/intermediate"
>         + id.current_index + ".txt" + fs.exists(input));
>
> for (String str : text) {
>     bw.write(str);
> }
>
> bw.newLine();
> bw.close();
> As part of this code, several intermediate files are created:
> C45/intermediate0.txt
> C45/intermediate1.txt
> C45/intermediate2.txt
> C45/intermediate3.txt
> C45/intermediate4.txt
> C45/intermediate5.txt
> C45/intermediate6.txt
> C45/intermediate7.txt
> C45/rule.txt
> I tried it from Eclipse, giving:
> Configuration conf = new Configuration();
>  conf.set("fs.defaultFS", "hdfs://my remote-ip/");
>  conf.set("hadoop.job.ugi", "hdfs");
> FileSystem fs = FileSystem.get(conf);
> and my decision tree runs successfully.
>
> But when I export my program as a jar file and run it in the cluster, the
> above error happens. Only 2 files are created:
> C45/intermediate0.txt and C45/rule.txt
> and this results in a wrong output.
>
>
> On Thu, Dec 12, 2013 at 3:37 PM, John Hancock <[EMAIL PROTECTED]> wrote:
>
>> Is the file actually in hdfs?  Did you run "hadoop dfs -copyFromLocal
>> <file-name> <hdfs-destination>"
>>
>>
>> On Wed, Dec 11, 2013 at 5:53 AM, unmesha sreeveni <[EMAIL PROTECTED]> wrote:
>>
>>> I am trying to run Decision Tree in Hadoop MapReduce.
>>>
>>> But it is showing java.io.FileNotFoundException: File does not exist: in the
>>> cluster. When I tried it from Eclipse, it showed the correct result
>>> with the settings below:
>>>
>>> Configuration conf = new Configuration();
>>> conf.set("fs.defaultFS", "remotesystem-ip/");
>>> conf.set("hadoop.job.ugi", "hdfs");
>>> And it created all the intermediate files under /user/sree/C45
>>>
>>> hadoop fs -ls /user/sree/C45
>>> Found 9 items
>>> -rw-r--r--   3 sree supergroup        263 2013-12-11 15:56
>>> /user/sree/C45/intermediate0.txt
>>> -rw-r--r--   3 sree supergroup        106 2013-12-11 15:56
>>> /user/sree/C45/intermediate1.txt
>>> -rw-r--r--   3 sree supergroup        130 2013-12-11 15:56
>>> /user/sree/C45/intermediate2.txt
>>> -rw-r--r--   3 sree supergroup        128 2013-12-11 15:56
>>> /user/sree/C45/intermediate3.txt
>>> -rw-r--r--   3 sree supergroup         54 2013-12-11 15:56
>>> /user/sree/C45/intermediate4.txt
>>> -rw-r--r--   3 sree supergroup         50 2013-12-11 15:56
>>> /user/sree/C45/intermediate5.txt
>>> -rw-r--r--   3 sree supergroup         48 2013-12-11 15:56
>>> /user/sree/C45/intermediate6.txt
>>> -rw-r--r--   3 sree supergroup         53 2013-12-11 15:56
>>> /user/sree/C45/intermediate7.txt
>>> -rw-r--r--   3 sree supergroup         97 2013-12-11 15:56
>>> /user/sree/C45/rule.txt
>>> When I exported my jar to the remote cluster and ran my job, it showed:
>>>
>>> In gainratio --- getcount
>>> file exists:C45/intermediate1.txtfalse
>>>
>>> java.io.FileNotFoundException: File does not exist:
>>> /user/sree/C45/intermediate1.txt
>>>     at
>>> org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:39)
*Thanks & Regards*

Unmesha Sreeveni U.B

*Junior Developer*
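For anyone hitting the same difference between Eclipse (LocalJobRunner) and the cluster, here is a minimal sketch of a commonly suggested alternative for the reduce-side writes: take the FileSystem from context.getConfiguration() instead of building a new Configuration with a hard-coded fs.defaultFS, and use an absolute HDFS path. The class name, key/value types and file-naming scheme below are illustrative assumptions, not the code from this thread:

import java.io.BufferedWriter;
import java.io.IOException;
import java.io.OutputStreamWriter;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class IntermediateWritingReducer
        extends Reducer<Text, Text, Text, NullWritable> {

    @Override
    protected void reduce(Text key, Iterable<Text> values, Context context)
            throws IOException, InterruptedException {
        // Use the configuration the job was actually launched with, so the
        // same code talks to LocalJobRunner's file system in Eclipse and to
        // HDFS on the cluster, with no hard-coded fs.defaultFS.
        FileSystem fs = FileSystem.get(context.getConfiguration());

        // Absolute path: a relative path like "C45/..." resolves against the
        // home directory of whichever user the task runs as, which can differ
        // between the local run and the cluster.
        Path intermediate =
                new Path("/user/sree/C45/intermediate-" + key.toString() + ".txt");

        BufferedWriter bw = new BufferedWriter(
                new OutputStreamWriter(fs.create(intermediate, true)));
        try {
            for (Text value : values) {
                bw.write(value.toString());
                bw.newLine();
            }
        } finally {
            bw.close();
        }

        context.write(key, NullWritable.get());
    }
}

A relative path such as "C45/intermediate1.txt" resolves against the home directory of the user the task runs as, so a file written from a task may land somewhere other than /user/sree/C45 and then not be found by the next step; that is one common reason for this kind of mismatch between a local run and the cluster.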