HBase >> mail # user >> CleanerChore exception


Jean-Marc Spaggiari 2012-12-30, 17:08
Ted Yu 2012-12-30, 17:44
Jean-Marc Spaggiari 2012-12-30, 17:53
Ted Yu 2012-12-30, 18:23
Jean-Marc Spaggiari 2012-12-30, 18:37
Jean-Marc Spaggiari 2012-12-30, 18:59
Ted Yu 2012-12-30, 19:11
Jean-Marc Spaggiari 2012-12-30, 19:25
Jean-Marc Spaggiari 2012-12-30, 19:50
Jesse Yates 2012-12-31, 00:13
Ted 2012-12-31, 00:29
Ted Yu 2012-12-30, 19:21
lars hofhansl 2012-12-30, 21:33
Jean-Marc Spaggiari 2012-12-30, 22:15
Ted 2012-12-30, 22:26
Re: CleanerChore exception
I'm not sure I'm getting that.

It's recursive. When you are on the parent directory, you don't know
yet whether the child directory is empty or not, so you can't call
delete() yet. If you call delete() with "true" for recursive, you
might delete some files that were just created, which we want to
avoid.

IMHO.
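To make that concrete, here is a minimal sketch of the check-and-delete
pattern being described (hypothetical class and method names, not the
actual CleanerChore code): each directory is deleted only non-recursively,
and only after its children have been checked, so a recursive delete can
never sweep up files that were just created.

import java.io.IOException;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ArchiveCleanerSketch {
  private final FileSystem fs;

  public ArchiveCleanerSketch(FileSystem fs) {
    this.fs = fs;
  }

  // Returns true if toCheck was empty (or already gone) and has been removed.
  boolean checkAndDelete(Path toCheck) throws IOException {
    // In 0.94-era Hadoop, listStatus() returns null for a missing path.
    FileStatus[] children = fs.listStatus(toCheck);
    if (children == null || children.length == 0) {
      // Empty or missing: safe to issue a non-recursive delete.
      return fs.delete(toCheck, false);
    }
    boolean allChildrenGone = true;
    for (FileStatus child : children) {
      if (child.isDir()) {
        allChildrenGone &= checkAndDelete(child.getPath());
      } else {
        allChildrenGone = false; // a file is present, so the parent is not empty
      }
    }
    // Only try to delete the parent once every child was removed, and never
    // recursively, so files created while we were walking the tree survive.
    return allChildrenGone && fs.delete(toCheck, false);
  }
}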

2012/12/30, Ted <[EMAIL PROTECTED]>:
> Thanks for your digging.
>
> A minor optimization would be to issue delete() on the parent directory so
> that there are fewer requests to the namenode.
>
> Cheers
>
> On Dec 30, 2012, at 2:15 PM, Jean-Marc Spaggiari <[EMAIL PROTECTED]>
> wrote:
>
>> I made the change, pushed it, and it cleaned my directories correctly.
>>
>> // if the directory doesn't exist or is empty, then we are done
>> if (children == null) return fs.delete(toCheck, false);
>>
>> The only thing is that I don't know what fs.delete() will return in
>> case the directory doesn't exist. But I think it's still correct to
>> return false if the directory doesn't exist, because we can't really
>> delete something which doesn't exist...
>>
>> My opinion.
>>
>> So the patch is ready, easy one ;) Just waiting for Jesse's feedback
>> just in case.
>>
>> JM
>>
>> 2012/12/30, lars hofhansl <[EMAIL PROTECTED]>:
>>> Nothing has changed around this in 0.94.4 as far as I know.
>>>
>>>
>>>
>>>
>>> ________________________________
>>> From: Jean-Marc Spaggiari <[EMAIL PROTECTED]>
>>> To: [EMAIL PROTECTED]
>>> Sent: Sunday, December 30, 2012 9:53 AM
>>> Subject: Re: CleanerChore exception
>>>
>>> I was going to move to 0.94.4 today ;) And yes I'm using 0.94.3. I
>>> might wait a bit in case some testing is required with my version.
>>>
>>> Is this what you are looking for? http://pastebin.com/N8Q0FMba
>>>
>>> I will keep the files for now since it seems it's not causing any
>>> major issue. That will allow some more testing if required.
>>>
>>> JM
>>>
>>>
>>> 2012/12/30, Ted Yu <[EMAIL PROTECTED]>:
>>>> Looks like you're using 0.94.3.
>>>>
>>>> The archiver is a backport of:
>>>> HBASE-5547, Don't delete HFiles in backup mode
>>>>
>>>> Can you provide more of the log where the IOE was reported, using
>>>> pastebin?
>>>>
>>>> Thanks
>>>>
>>>> On Sun, Dec 30, 2012 at 9:08 AM, Jean-Marc Spaggiari <
>>>> [EMAIL PROTECTED]> wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> I get an IOException "/hbase/.archive/table_name is non empty"
>>>>> every minute in my logs.
>>>>>
>>>>> There are 30 directories under this directory. The main directory
>>>>> is from yesterday, but all the subdirectories are from December
>>>>> 10th, all at the same time.
>>>>>
>>>>> What is this .archive directory used for, and what should I do?
>>>>>
>>>>> Thanks,
>>>>>
>>>>> JM
>>>>
>
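On the question quoted above about what fs.delete() returns when the
directory no longer exists: the FileSystem javadoc only promises true when
the delete succeeds, so a quick one-off check (a hedged sketch against the
local filesystem with a made-up path, not a guarantee about HDFS behavior)
could look like this:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class DeleteReturnCheck {
  public static void main(String[] args) throws Exception {
    // The local filesystem is enough to observe the return value; the path
    // is hypothetical and assumed not to exist.
    FileSystem fs = FileSystem.getLocal(new Configuration());
    Path missing = new Path("/tmp/does-not-exist-" + System.nanoTime());
    // Expectation (consistent with the reasoning above): false, since there
    // was nothing to delete. Some implementations may throw instead, so
    // treat this as a sanity check rather than a contract.
    System.out.println("delete() on a missing path: " + fs.delete(missing, false));
  }
}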
Ted 2012-12-31, 00:22