Failed to move to trash [Re: Drop table skiptrash?]
Periya.Data 2012-12-24, 15:56
Thanks to Ed and Nitin. Even after setting the trash interval to zero, I get
the following error message. I shall try the brute-force "dfs -rmr
-skipTrash .." now.
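
(Roughly, assuming the path reported in the error below is the table's HDFS
location, that brute-force removal would look like:

hadoop fs -rmr -skipTrash /hive/max_sum_tbl

i.e. the deprecated -rmr form with the -skipTrash flag, so the data bypasses
the .Trash directory entirely.)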

What does this message mean?:

FAILED: Error in metadata: MetaException(message:Got exception:
java.io.IOException Failed to move to trash: hdfs://xxxxxx.xxx.com/hive/max_sum_tbl)

FAILED: Execution Error, return code 1 from
org.apache.hadoop.hive.ql.exec.DDLTask

I checked the logs as well, and the same error message was there. I did not
see any additional information about why it "failed to move to trash". I am
assuming it occurs when the user directory doesn't have enough space
...please correct me.

Thanks,
PD.
On Mon, Dec 24, 2012 at 5:37 AM, Edward Capriolo <[EMAIL PROTECTED]> wrote:

> Well, in Hive you can set Hadoop variables, so you can use the set command
> to explicitly disable trash on the client:
>
> set x=y
>
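> (For example, assuming the relevant property is fs.trash.interval, the
> Hadoop setting that controls trash retention in minutes, disabling trash
> for the session would look roughly like:
>
> hive> set fs.trash.interval=0;
> )
>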
> If that does not work, it is possible to run dfs commands from Hive. Just
> run your normal hadoop dfs command without the leading hadoop:
>
> hive> dfs -rmr /user/hive/warehouse;
>
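> (And for this thread's case specifically, a hedged variant: if the fs shell
> on the cluster accepts -skipTrash together with -rmr, the table's directory
> can be removed without it landing in .Trash, e.g.
>
> hive> dfs -rmr -skipTrash /user/hive/warehouse/your_table;
>
> where /user/hive/warehouse/your_table is a placeholder for the actual
> table location.)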
>
> On Mon, Dec 24, 2012 at 2:46 AM, Periya.Data <[EMAIL PROTECTED]> wrote:
>
>> Hi,
>>    Is there a way to drop a table such that its contents do not go to the
>> .Trash dir? I have a limited disk quota in Hadoop and am running large
>> Hive jobs in sequence. I would like to drop tables as they become
>> unnecessary, and they must not end up in .Trash, as they occupy a lot of
>> space.
>>
>> Before I begin my Hive job, I do a clean-out (hadoop fs -rmr -skipTrash
>> ...). I would like to know if there is something like this to add to my
>> hql file.
>>
>> Thanks,
>> PD.
>>
>>
>>
>