HBase >> mail # user >> CleanerChore exception


Jean-Marc Spaggiari 2012-12-30, 17:08
Ted Yu 2012-12-30, 17:44
Jean-Marc Spaggiari 2012-12-30, 17:53
Ted Yu 2012-12-30, 18:23
Jean-Marc Spaggiari 2012-12-30, 18:37
Jean-Marc Spaggiari 2012-12-30, 18:59
Ted Yu 2012-12-30, 19:11
Jean-Marc Spaggiari 2012-12-30, 19:25
Jean-Marc Spaggiari 2012-12-30, 19:50
Jesse Yates 2012-12-31, 00:13
Ted 2012-12-31, 00:29
Re: CleanerChore exception
Looking at this line in checkAndDeleteDirectory():
    return canDeleteThis ? fs.delete(toCheck, false) : false;
If fs.delete() returns false, meaning the deletion was unsuccessful, the
parent directory tree wouldn't be deleted. I think this is inconsistent
with the javadoc for checkAndDeleteDirectory():
   * @throws IOException if there is an unexpected filesystem error

We should either throw an IOE in that case, or try deleting the sub-directory
by specifying true as the second argument to delete().

Cheers

On Sun, Dec 30, 2012 at 11:11 AM, Ted Yu <[EMAIL PROTECTED]> wrote:

> Thanks for the digging. This concurs with my initial suspicion.
>
> I am copying Jesse who wrote the code. He should have more insight on this.
>
> After his confirmation, you can log a JIRA.
>
> Cheers
>
>
> On Sun, Dec 30, 2012 at 10:59 AM, Jean-Marc Spaggiari <
> [EMAIL PROTECTED]> wrote:
>
>> So, looking deeper I found a few things.
>>
>> First, why is checkAndDeleteDirectory not "simply" calling
>> FSUtils.delete(fs, toCheck, true)? I guess it's doing the same thing?
>>
>> Also, FSUtils.listStatus(fs, toCheck, null); will return null if there
>> is no status, not just an empty array. And if it's returning null, we
>> will exit without calling the delete method.
>>
>> I tried to manually create a file in one of those directories. The
>> exception disappears for 300 seconds because of the TTL for the newly
>> created file. After 300 seconds, the file I pushed AND the directory
>> got removed. So the issue is really with empty directories.
>>
>> I will take a look at what is in the trunk and in 0.94.4 to see if
>> it's the same issue. But I think we can simply replace all this code
>> with a call to FSUtils.delete.
>>
>> I can open a JIRA and submit a patch for that. Just let me know.
>>
>> JM
>>
>> 2012/12/30, Jean-Marc Spaggiari <[EMAIL PROTECTED]>:
>> > Regarding the logcleaner settings, I have not changed anything. It's
>> > what came with the initial install. So I don't have anything set up for
>> > this plugin in my configuration files.
>> >
>> > For the files on the FS, here is what I have:
>> > hadoop@node3:~/hadoop-1.0.3$ bin/hadoop fs -ls
>> > /hbase/.archive/entry_duplicate
>> > Found 30 items
>> > drwxr-xr-x   - hbase supergroup          0 2012-12-10 14:39
>> > /hbase/.archive/entry_duplicate/00c185bc44b6dcf85a90b83bdda4ec2e
>> > drwxr-xr-x   - hbase supergroup          0 2012-12-10 14:39
>> > /hbase/.archive/entry_duplicate/0ddf0d1802c6afd97d032fd09ea9e37d
>> > drwxr-xr-x   - hbase supergroup          0 2012-12-10 14:39
>> > /hbase/.archive/entry_duplicate/18cf7c5c946ddf33e49b227feedfb688
>> > drwxr-xr-x   - hbase supergroup          0 2012-12-10 14:39
>> > /hbase/.archive/entry_duplicate/2353f10e79dacc5cf201be6a1eb63607
>> > drwxr-xr-x   - hbase supergroup          0 2012-12-10 14:38
>> > /hbase/.archive/entry_duplicate/243f4007cf05415062010a5650598bff
>> > drwxr-xr-x   - hbase supergroup          0 2012-12-10 14:38
>> > /hbase/.archive/entry_duplicate/287682333698e36cea1670f5479fbf18
>> > drwxr-xr-x   - hbase supergroup          0 2012-12-10 14:39
>> > /hbase/.archive/entry_duplicate/3742da9bd798342e638e1ce341f27537
>> > drwxr-xr-x   - hbase supergroup          0 2012-12-10 14:38
>> > /hbase/.archive/entry_duplicate/435c9c08bc08ed7248a013b6ffaa163b
>> > drwxr-xr-x   - hbase supergroup          0 2012-12-10 14:39
>> > /hbase/.archive/entry_duplicate/45346b4b4248d77d45e031ea71a1fb63
>> > drwxr-xr-x   - hbase supergroup          0 2012-12-10 14:39
>> > /hbase/.archive/entry_duplicate/4afe48fe6d8defe569f8632dd2514b07
>> > drwxr-xr-x   - hbase supergroup          0 2012-12-10 14:39
>> > /hbase/.archive/entry_duplicate/68a4e364fe791a0d1f47febbb41e8112
>> > drwxr-xr-x   - hbase supergroup          0 2012-12-10 14:39
>> > /hbase/.archive/entry_duplicate/7673d718962535c7b54cef51830f22a5
>> > drwxr-xr-x   - hbase supergroup          0 2012-12-10 14:38
>> > /hbase/.archive/entry_duplicate/7df6845ae9d052f4eae4a01e39313d61
>> > drwxr-xr-x   - hbase supergroup          0 2012-12-10 14:39
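JM's point above, that a listing call returning null (rather than an empty array) makes the cleaner exit before delete() is ever reached, can be sketched with plain java.io.File (names here are illustrative, not HBase's actual code; the listing is passed in to mimic FSUtils.listStatus returning null):

```java
import java.io.File;

// Sketch of the null-listing pitfall: if "no entries" comes back as null
// and the caller bails out on null, empty directories are never deleted
// and survive every cleaner pass.
public class EmptyDirSketch {

    // Buggy shape: exit early on a null listing before deleting.
    static boolean deleteIfEmptyBuggy(File dir, String[] listing) {
        if (listing == null) {
            return false; // empty dir is never deleted
        }
        return listing.length == 0 && dir.delete();
    }

    // Fixed shape: treat null exactly like "no entries" and attempt the
    // delete, as a single recursive FSUtils.delete-style call would.
    static boolean deleteIfEmptyFixed(File dir, String[] listing) {
        if (listing == null || listing.length == 0) {
            return dir.delete();
        }
        return false;
    }
}
```

This matches the observed symptom: adding a file makes the listing non-null, the cleaner runs to completion once the TTL expires, and both the file and the directory disappear.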
lars hofhansl 2012-12-30, 21:33
Jean-Marc Spaggiari 2012-12-30, 22:15
Ted 2012-12-30, 22:26
Jean-Marc Spaggiari 2012-12-30, 22:42
Ted 2012-12-31, 00:22