RE: Some suggestions for future features
Hi
> Are you guys thinking this would be a "helper" class that just iterates over rows and marks them as deleted? Or any attempt to make it a more fundamental atomic delete operation that can run for rows co-located on a single region server?

Not the first one in my mind; that would probably have a lot of time overhead. I am thinking about whether there can be a special kind of Delete marker itself. At one region level it would be 100% atomic. How best to handle the delete across regions needs more exploration. Delete and read consistency might not be that important in our case, but I still need to look into that area as well. I will try to do some experiments; then things will become clearer. :)
-Anoop-
________________________________________
From: Ian Varley [[EMAIL PROTECTED]]
Sent: Wednesday, June 06, 2012 1:04 AM
To: [EMAIL PROTECTED]
Subject: Re: Some suggestions for future features

Are you guys thinking this would be a "helper" class that just iterates over rows and marks them as deleted? Or any attempt to make it a more fundamental atomic delete operation that can run for rows co-located on a single region server? If the latter, my understanding of the prevailing opinion (of Todd, etc) is that we're wary of exposing the region concept explicitly in the API at all, because it's an implementation detail today.

Ian
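For illustration, the "helper" approach Ian describes above would amount to roughly the following with the standard HBase client API. This is only a sketch: the table name and prefix are made up, and the per-row Deletes are atomic only within each row, which is exactly the distinction being raised.

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.PrefixFilter;
import org.apache.hadoop.hbase.util.Bytes;

public class PrefixDeleteHelper {
  // Scan every row whose key starts with 'prefix' and issue a client-side
  // Delete for each one. Each Delete is atomic per row only.
  public static void deleteByPrefix(Configuration conf, String tableName,
      byte[] prefix) throws IOException {
    HTable table = new HTable(conf, tableName);
    try {
      Scan scan = new Scan(prefix);             // start scanning at the prefix
      scan.setFilter(new PrefixFilter(prefix)); // drop rows once past the prefix
      ResultScanner scanner = table.getScanner(scan);
      try {
        for (Result r : scanner) {
          table.delete(new Delete(r.getRow())); // one Delete per matching row
        }
      } finally {
        scanner.close();
      }
    } finally {
      table.close();
    }
  }

  public static void main(String[] args) throws IOException {
    // Hypothetical table and prefix, purely for illustration.
    deleteByPrefix(HBaseConfiguration.create(), "mytable", Bytes.toBytes("id1:"));
  }
}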

On Jun 5, 2012, at 2:15 PM, Anoop Sam John wrote:

> Hi,
>
>> 3. Row prefix delete operation - Delete all rows which start with a 'prefix'
>
> Some time back I was thinking about this. One use case came up for us (which was later given low priority). I will try to work on this soon.
>
> -Anoop-
>
> ________________________________________
> From: Vladimir Rodionov [[EMAIL PROTECTED]]
> Sent: Wednesday, June 06, 2012 12:30 AM
> To: [EMAIL PROTECTED]
> Subject: RE: Some suggestions for future features
>
> My bad, getting through the whole HBase API is not an easy task. I will look at Coprocessors more closely.
>
> A custom key comparator allows some useful tricks, such as:
>
> keeping rows in chronological order (temporal locality) while still allowing access to them by row id at the same time, e.g. with keys like:
>
> id1:day1
> id2:day1
> id3:day2
> id4:day2
> id5:day2
>
> etc
> This way we preserve temporal locality for rows while still being able to access rows by ID.
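As an illustration of the ordering Vladimir describes (and not a pluggable HBase API today), a plain Java comparator over keys of the made-up form id:day could look like the sketch below: rows group chronologically by day first, then by id within a day.

import java.util.Arrays;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

// Orders keys of the form "<id>:<dayN>" by day first, then by id.
public class DayThenIdOrder implements Comparator<String> {
  @Override
  public int compare(String a, String b) {
    String[] pa = a.split(":", 2);   // [id, day]
    String[] pb = b.split(":", 2);
    int byDay = pa[1].compareTo(pb[1]);                   // temporal locality first
    return byDay != 0 ? byDay : pa[0].compareTo(pb[0]);   // then by row id
  }

  public static void main(String[] args) {
    List<String> keys = Arrays.asList("id5:day2", "id1:day1", "id3:day2",
        "id2:day1", "id4:day2");
    Collections.sort(keys, new DayThenIdOrder());
    System.out.println(keys);  // [id1:day1, id2:day1, id3:day2, id4:day2, id5:day2]
  }
}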
>
> Although HBase does not support a Get-by-row-prefix API call, we can use a Scanner for that purpose.
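A minimal sketch of that workaround, assuming a standard HTable handle and using a Scan plus PrefixFilter to emulate a "Get by row prefix"; the helper class and method names are made up:

import java.io.IOException;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.PrefixFilter;

public class PrefixGet {
  // Returns the first row whose key starts with 'prefix', or null if none.
  public static Result getByPrefix(HTable table, byte[] prefix) throws IOException {
    Scan scan = new Scan(prefix);               // start at the prefix
    scan.setFilter(new PrefixFilter(prefix));   // stop matching once past it
    ResultScanner scanner = table.getScanner(scan);
    try {
      return scanner.next();                    // first matching row, if any
    } finally {
      scanner.close();
    }
  }
}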
>
>
> Best regards,
> Vladimir Rodionov
> Principal Platform Engineer
> Carrier IQ, www.carrieriq.com
> e-mail: [EMAIL PROTECTED]
>
> ________________________________________
> From: [EMAIL PROTECTED] [[EMAIL PROTECTED]] On Behalf Of Stack [[EMAIL PROTECTED]]
> Sent: Tuesday, June 05, 2012 11:43 AM
> To: [EMAIL PROTECTED]
> Subject: Re: Some suggestions for future features
>
> On Tue, Jun 5, 2012 at 11:19 AM, Vladimir Rodionov
> <[EMAIL PROTECTED]> wrote:
>>
>> 1. Custom Key, KeyValue, Row Comparators (per HTable). I went through the 0.92 API and it seems that adding this feature is going to be a hard task. There are a lot of places all over the HBase source code
>> where KeyValue static class members are accessed directly.
>
> Yes.  It's our most fundamental class.  It's tough to change since
> there are a bunch of perf fixes and a bunch of the upper tiers key
> off its format (and, as you note, make direct calls against this
> class).
>
> That said, there is pressure building for KV to become an interface
> only, because others want to mess w/ its implementation (improve our
> cacheability, compression).
>
> What would you like to do, Vladimir?  You want to change comparators?
>
>> 2. Compaction callback. Could be per HTable as well. Something like this:
>>
>> public interface CompactionCallback<KeyValue>
>> {
>>    public void preCompact(KeyValue kv, CompactionContext ctx);
>> }
>>
>
> Is this not in Coprocessors now?  See trunk.  See Compactor.java down
> around #136, where the scanner used for compacting is overridable.
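A rough sketch of the coprocessor route Stack points to: a RegionObserver can wrap or replace the scanner that feeds a compaction, which gives a per-KeyValue hook without a new CompactionCallback interface. The exact preCompact signature varies between HBase versions, so treat this as illustrative only.

import java.io.IOException;
import org.apache.hadoop.hbase.coprocessor.BaseRegionObserver;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
import org.apache.hadoop.hbase.regionserver.InternalScanner;
import org.apache.hadoop.hbase.regionserver.Store;

public class CompactionObserver extends BaseRegionObserver {
  // Called before a compaction runs; returning the scanner unchanged keeps
  // the default behaviour, while returning a wrapping InternalScanner would
  // let the observer inspect or drop KeyValues as the compaction reads them.
  @Override
  public InternalScanner preCompact(ObserverContext<RegionCoprocessorEnvironment> e,
      Store store, InternalScanner scanner) throws IOException {
    return scanner;
  }
}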
>
>
> St.Ack
>