Hive >> mail # user >> Exception while running simple hive query


Re: Exception while running simple hive query
Thanks Shashwat.

That did work. However, I find it very weird that it is able
to find all the other libs at their proper locations on the local filesystem but
searches for this particular one on HDFS. I'll try to dig deeper into the
code to see if I can find a cause for this.
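[A plausible cause, offered here as an assumption rather than a confirmed diagnosis: Hadoop resolves paths that carry no scheme (no file:// or hdfs:// prefix) against the default filesystem configured in fs.defaultFS, so a bare /Users/... jar path recorded in the job configuration ends up being looked for on HDFS. Plain java.net.URI resolution illustrates the same rule; the hdfs://localhost:9000 base below is a hypothetical fs.defaultFS value, not taken from the thread:]

```java
import java.net.URI;

public class SchemeResolution {
    public static void main(String[] args) {
        // Hypothetical default filesystem, as fs.defaultFS might be set in core-site.xml.
        URI defaultFs = URI.create("hdfs://localhost:9000/");

        // The jar path from the stack trace has no scheme, so resolving it
        // against the base URI inherits the hdfs scheme and authority.
        URI jar = defaultFs.resolve("/Users/testuser/hive-0.9.0/lib/hive-builtins-0.9.0.jar");
        System.out.println(jar);

        // An explicit file:// scheme keeps the path on the local filesystem.
        URI localJar = defaultFs.resolve("file:///Users/testuser/hive-0.9.0/lib/hive-builtins-0.9.0.jar");
        System.out.println(localJar);
    }
}
```

[If this is what is happening, either an explicit file:// scheme on the jar path or copying the jar to the same path on HDFS would sidestep the mismatch.]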

On Mon, May 7, 2012 at 2:12 PM, shashwat shriparv <[EMAIL PROTECTED]
> wrote:

> Do one thing: create the same structure /Users/testuser/hive-0.9.0/lib/hive-builtins-0.9.0.jar
> on the Hadoop file system and then try. It will
> work.
>
> Shashwat Shriparv
>
>
> On Mon, May 7, 2012 at 11:57 PM, [EMAIL PROTECTED] <
> [EMAIL PROTECTED]> wrote:
>
>> Thanks for the reply.
>>
>> Assuming that you mean the permissions within HIVE_HOME, they all
>> look ok to me. Is there anywhere else you want me to check?
>>
>>
>> On Mon, May 7, 2012 at 11:16 AM, hadoop hive <[EMAIL PROTECTED]>wrote:
>>
>>> Check the permissions.
>>>
>>>
>>> On Mon, May 7, 2012 at 7:30 PM, [EMAIL PROTECTED] <
>>> [EMAIL PROTECTED]> wrote:
>>>
>>>> I created a very simple Hive table and then ran the following query, which
>>>> should run an M/R job to return the results.
>>>>
>>>> hive> SELECT COUNT(*) FROM invites;
>>>>
>>>> But I am getting the following exception:
>>>>
>>>> java.io.FileNotFoundException: File does not exist: /Users/testuser/hive-0.9.0/lib/hive-builtins-0.9.0.jar
>>>>     at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:722)
>>>>     at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheManager.java:208)
>>>>     at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineTimestamps(ClientDistributedCacheManager.java:71)
>>>>     at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:246)
>>>>     at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:284)
>>>>     at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:355)
>>>>     at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1221)
>>>>     at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1218)
>>>>     at java.security.AccessController.doPrivileged(Native Method)
>>>>     .....
>>>>
>>>> When I go to the given location, the jar does exist. It seems
>>>> like somehow it is searching for the jar on HDFS instead of the local
>>>> file system. Any suggestions on what I could possibly be missing? My
>>>> Hadoop version is 0.23.
>>>>
>>>
>>>
>>
>>
>> --
>> Swarnim
>>
>
>
>
> --
>
>
> ∞
> Shashwat Shriparv
>
>
>
--
Swarnim