Hive user mailing list: Failed to move to trash [Re: Drop table skiptrash?]


Re: Failed to move to trash [Re: Drop table skiptrash?]
Possibly because the parent directories of the trash folder do not exist, or the
permissions on /user/hive are wrong. The Hive install does not create this
directory for you.
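
A minimal way to check this, as a sketch only (the user name "hive" and the paths
below are assumptions for a typical setup; the trash for a drop normally goes under
the home directory of the user running it):

  # check whether the home directory for the user running the drop exists
  hadoop fs -ls /user

  # if it is missing or owned by the wrong user, create and fix it as the HDFS superuser
  hadoop fs -mkdir /user/hive
  hadoop fs -chown hive:hive /user/hive
  hadoop fs -chmod 755 /user/hive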

On Mon, Dec 24, 2012 at 10:56 AM, Periya.Data <[EMAIL PROTECTED]> wrote:

> Thanks to Ed and Nitin. Even after setting the trash interval to zero, I get
> the following error message. I shall try the brute-force "dfs -rmr
> -skipTrash .." now.
>
> What does this message mean?:
>
> FAILED: Error in metadata: MetaException(message:Got exception:
> java.io.IOException Failed to move to trash: hdfs://
> xxxxxx.xxx.com/hive/max_sum_tbl)
>
> FAILED: Execution Error, return code 1 from
> org.apache.hadoop.hive.ql.exec.DDLTask
>
> I checked the logs as well, and the same error message was there. I did not see
> any additional information as to why it "failed to move to trash". I am
> assuming that it occurs when the user directory does not have enough space
> ... please correct me.
>
> Thanks,
> PD.
>
>
> On Mon, Dec 24, 2012 at 5:37 AM, Edward Capriolo <[EMAIL PROTECTED]> wrote:
>
>> Well, in Hive you can set Hadoop variables, so you can use the set command
>> to explicitly disable trash on the client (see the combined sketch after the
>> quoted thread below):
>>
>> set x=y
>>
>> If that does not work, it is possible to run dfs commands from Hive: just
>> issue your normal hadoop dfs command without the leading "hadoop".
>>
>> hive> dfs -rmr /user/hive/warehouse;
>>
>>
>> On Mon, Dec 24, 2012 at 2:46 AM, Periya.Data <[EMAIL PROTECTED]> wrote:
>>
>>> Hi,
>>>    Is there a way to drop a table such that its contents do not go to
>>> the .Trash dir? I have a limited disk quota in Hadoop and am running large
>>> Hive jobs in sequence. I would like to drop tables as and when they become
>>> unnecessary, and their contents must not end up in .Trash, as they occupy a
>>> lot of space.
>>>
>>> Before I begin my Hive job, I clean out the old data (hadoop fs -rmr -skipTrash
>>> ...). I would like to know if there is something like this that I can add to my
>>> hql file.
>>>
>>> Thanks,
>>> PD.
>>>
>>>
>>>
>>
>
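
Pulling the quoted suggestions together, a rough sketch of what could go in the
.hql file might look like the following. It is only a sketch: fs.trash.interval is
the standard Hadoop trash setting (in minutes), whether setting it from the Hive
client is honored for DROP TABLE depends on the Hadoop and Hive versions in use,
and the table name and path are taken from the error message above as placeholders.

  -- try to disable the trash move for this client session (value is in minutes; 0 = off)
  set fs.trash.interval=0;

  -- remove the table's data directly, bypassing .Trash; "dfs" runs HDFS commands
  -- from inside Hive, and -rmr is the older spelling of -rm -r
  dfs -rmr -skipTrash /hive/max_sum_tbl;

  -- drop the table so the metastore entry is removed as well
  DROP TABLE IF EXISTS max_sum_tbl;

The idea is simply to sidestep the moveToTrash call that fails in the error above
by deleting the data with -skipTrash before the drop.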