Hive user mailing list: java.io.FileNotFoundException (File does not exist) when running a hive query


Bhaskar, Snehalata 2013-03-03, 17:52
RE: java.io.FileNotFoundException(File does not exist) when running a hive query
Does anyone know how to solve this issue?

Thanks and regards,
Snehalata Deorukhkar
Nortel No : 0229 -5814

From: Bhaskar, Snehalata [mailto:[EMAIL PROTECTED]]
Sent: Sunday, March 03, 2013 11:23 PM
To: [EMAIL PROTECTED]
Subject: java.io.FileNotFoundException(File does not exist) when running a hive query

Hi,

I am getting a "java.io.FileNotFoundException (File does not exist: /tmp/sb25634/hive_2013-03-01_23-21-43_428_5325193042224363842/-mr-10000/1/emptyFile)" exception when running any join query:

Following are the query I am using and the exception thrown:
hive> select * from retail_1 l join retail_2 t on l.product_name=t.product_name;

Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapred.reduce.tasks=<number>
WARNING: org.apache.hadoop.metrics.jvm.EventCounter is deprecated. Please use org.apache.hadoop.log.metrics.EventCounter in all the log4j.properties files.
Execution log at: /tmp/sb25634/sb25634_20130301232121_0c9f19d1-7846-4f4e-9469-401641fdd137.log
java.io.FileNotFoundException: File does not exist: /tmp/sb25634/hive_2013-03-01_23-21-43_428_5325193042224363842/-mr-10000/1/emptyFile
        at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:787)
        at org.apache.hadoop.mapred.lib.CombineFileInputFormat$OneFileInfo.<init>(CombineFileInputFormat.java:462)
        at org.apache.hadoop.mapred.lib.CombineFileInputFormat.getMoreSplits(CombineFileInputFormat.java:256)
        at org.apache.hadoop.mapred.lib.CombineFileInputFormat.getSplits(CombineFileInputFormat.java:212)
        at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileInputFormatShim.getSplits(HadoopShimsSecure.java:392)
        at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileInputFormatShim.getSplits(HadoopShimsSecure.java:358)
        at org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getSplits(CombineHiveInputFormat.java:387)
        at org.apache.hadoop.mapred.JobClient.writeOldSplits(JobClient.java:1041)
        at org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:1033)
        at org.apache.hadoop.mapred.JobClient.access$600(JobClient.java:172)
        at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:943)
        at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:896)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1332)
        at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:896)
        at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:870)
        at org.apache.hadoop.hive.ql.exec.ExecDriver.execute(ExecDriver.java:435)
        at org.apache.hadoop.hive.ql.exec.ExecDriver.main(ExecDriver.java:677)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:208)
Job Submission failed with exception 'java.io.FileNotFoundException(File does not exist: /tmp/sb25634/hive_2013-03-01_23-21-43_428_5325193042224363842/-mr-10000/1/emptyFile)'
Execution failed with exit status: 1
Obtaining error information
Task failed!
Task ID:
  Stage-1
Logs:
/tmp/sb25634/hive.log

FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.MapRedTask
What may be the cause of this error?

Please help me to resolve this issue. Thanks in advance.

Regards,
Snehalata Deorukhkar.
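
The replies in this archive are collapsed below, so the resolution is not visible here. As a rough diagnostic sketch only (an assumption-laden starting point, not an answer taken from this thread): the missing emptyFile sits under Hive's scratch directory, and the stack trace shows CombineHiveInputFormat asking HDFS (DistributedFileSystem.getFileStatus) for it, so it is worth checking whether that scratch path resolves to the local filesystem or to HDFS. The fallback to plain HiveInputFormat at the end is a hypothetical workaround, not a confirmed fix.

-- Rough diagnostic sketch; the checks and the workaround are assumptions, not taken from this thread
-- Where does Hive put its scratch files, and what is the default filesystem?
set hive.exec.scratchdir;
set fs.default.name;
-- Does the scratch path from the stack trace exist on HDFS? On the local disk?
dfs -ls /tmp/sb25634;
!ls /tmp/sb25634;
-- Hypothetical workaround: bypass CombineHiveInputFormat and retry the join
set hive.input.format=org.apache.hadoop.hive.ql.io.HiveInputFormat;
select * from retail_1 l join retail_2 t on l.product_name=t.product_name;

If the scratch directory and the default filesystem point at different places (one local, one HDFS), the emptyFile may be written to one filesystem and then looked up on the other, which would match the DistributedFileSystem.getFileStatus failure above.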
Replies:
kulkarni.swarnim@... 2013-03-04, 14:31
Arthur.hk.chan@... 2014-12-17, 09:24
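
For reference, the reducer hints that Hive printed in the job output above (hive.exec.reducers.bytes.per.reducer, hive.exec.reducers.max, mapred.reduce.tasks) are ordinary session settings; the sketch below shows how they would be applied before rerunning the join, with purely illustrative values that are not taken from this thread.

-- Illustrative values only; tune for the actual cluster and data size
set hive.exec.reducers.bytes.per.reducer=256000000;
set hive.exec.reducers.max=20;
set mapred.reduce.tasks=4;
select * from retail_1 l join retail_2 t on l.product_name=t.product_name;

Setting mapred.reduce.tasks to a positive number forces that many reducers; leaving it at its default of -1 lets Hive derive the count from the bytes-per-reducer and max-reducers settings.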