Re: No such file or directory error on simple query
Hi Stephen,

Please run 'desc extended' to see where the table's directory is on HDFS. Here is an example.

hive -e "desc extended hcatsmokeid0b0abc02_date252113 ;"
WARNING: org.apache.hadoop.metrics.jvm.EventCounter is deprecated. Please use org.apache.hadoop.log.metrics.EventCounter in all the log4j.properties files.
Logging initialized using configuration in jar:file:/usr/lib/hive/lib/hive-common-0.10.0.21.jar!/hive-log4j.properties
Hive history file=/tmp/root/hive_job_log_root_201301211932_612003297.txt
OK
id int
name string

Detailed Table Information Table(tableName:hcatsmokeid0b0abc02_date252113, dbName:default, owner:ambari_qa, createTime:1358814367, lastAccessTime:0, retention:0, sd:StorageDescriptor(cols:[FieldSchema(name:id, type:int, comment:null), FieldSchema(name:name, type:string, comment:null)], location:hdfs://ambari1:8020/apps/hive/warehouse/hcatsmokeid0b0abc02_date252113, inputFormat:org.apache.hadoop.hive.ql.io.RCFileInputFormat, outputFormat:org.apache.hadoop.hive.ql.io.RCFileOutputFormat, compressed:false, numBuckets:-1, serdeInfo:SerDeInfo(name:null, serializationLib:org.apache.hadoop.hive.serde2.columnar.ColumnarSerDe, parameters:{serialization.format=1}), bucketCols:[], sortCols:[], parameters:{}, skewedInfo:SkewedInfo(skewedColNames:[], skewedColValues:[], skewedColValueLocationMaps:{}), storedAsSubDirectories:false), partitionKeys:[], parameters:{transient_lastDdlTime=1358814367}, viewOriginalText:null, viewExpandedText:null, tableType:MANAGED_TABLE)
Time taken: 2.965 seconds
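Once you have the location from the output above (here hdfs://ambari1:8020/apps/hive/warehouse/hcatsmokeid0b0abc02_date252113), you can check that the directory actually exists, for example with:

hadoop fs -ls hdfs://ambari1:8020/apps/hive/warehouse/hcatsmokeid0b0abc02_date252113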
 

Hope this helps.
Abdelrahman Shettia
Technical Support Engineer
Hortonworks, Inc.
[EMAIL PROTECTED]
Office phone: (708) 689-9609
How am I doing?   Please feel free to provide feedback to my manager Rick Morris at [EMAIL PROTECTED]
On Mar 2, 2013, at 1:59 AM, Stephen Boesch <[EMAIL PROTECTED]> wrote:

>
> I am struggling with a "no such file or directory" exception when running a simple query in Hive. It is unfortunate that the actual path was not included with the stack trace: the following is all that is provided.
>
> I have a query that fails with the following error when run as hive -e "select * from <table>;", but it works properly when run from within the hive shell. At the same time, running hive> select * from <table2>; fails with the same error message.
>
> I am also seeing this error both for HDFS files and for S3 files. Without any path information it is very difficult and time-consuming to track this down.
>
> Any pointers appreciated.
>
>
> Automatically selecting local only mode for query
> Total MapReduce jobs = 1
> Launching Job 1 out of 1
> Number of reduce tasks determined at compile time: 1
> In order to change the average load for a reducer (in bytes):
>   set hive.exec.reducers.bytes.per.reducer=<number>
> In order to limit the maximum number of reducers:
>   set hive.exec.reducers.max=<number>
> In order to set a constant number of reducers:
>   set mapred.reduce.tasks=<number>
> WARNING: org.apache.hadoop.metrics.jvm.EventCounter is deprecated. Please use org.apache.hadoop.log.metrics.EventCounter in all the log4j.properties files.
> Execution log at: /tmp/impala/impala_20130302095252_79ce9404-6af7-405b-8b06-849fe6c5328d.log
> ENOENT: No such file or directory
> at org.apache.hadoop.io.nativeio.NativeIO.chmod(Native Method)
> at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:568)
> at org.apache.hadoop.fs.FilterFileSystem.setPermission(FilterFileSystem.java:411)
> at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:501)
> at org.apache.hadoop.mapred.JobClient.copyAndConfigureFiles(JobClient.java:733)
> at org.apache.hadoop.mapred.JobClient.copyAndConfigureFiles(JobClient.java:692)
> at org.apache.hadoop.mapred.JobClient.access$400(JobClient.java:172)
> at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:910)
> at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:895)
> at java.security.AccessController.doPrivileged(Native Method)