Re: Issues with delete markers
For user scans, I feel we should be passing delete markers through as well.
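
If delete markers did reach filters (on a raw scan, or if user scans passed them through as suggested here), one way to cap the work server-side would be a custom filter that counts markers and bails out past a threshold. The sketch below is against the 0.94-era filter API; the class name and threshold are made up, and a real filter would also need serialization support and to be deployed on the region servers' classpath.

import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.filter.FilterBase;

// Hypothetical sketch, not an existing HBase filter: count delete markers and
// stop the scan once a budget is exceeded. Assumes delete markers are actually
// handed to the filter (e.g. on a raw scan), which is exactly the behavior
// being discussed in this thread.
public class DeleteMarkerLimitFilter extends FilterBase {

  private final int maxDeleteMarkers;   // e.g. 5000, as suggested below
  private int deleteMarkersSeen = 0;

  public DeleteMarkerLimitFilter(int maxDeleteMarkers) {
    this.maxDeleteMarkers = maxDeleteMarkers;
  }

  @Override
  public ReturnCode filterKeyValue(KeyValue kv) {
    if (kv.isDelete()) {
      deleteMarkersSeen++;
      return ReturnCode.SKIP;           // never return the marker itself
    }
    return ReturnCode.INCLUDE;          // pass live cells through
  }

  @Override
  public boolean filterAllRemaining() {
    // Once the delete-marker budget is spent, give up on the rest of the scan.
    return deleteMarkersSeen >= maxDeleteMarkers;
  }
}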
On Sun, Jun 30, 2013 at 12:35 PM, Varun Sharma <[EMAIL PROTECTED]> wrote:

> I tried this a little bit and it seems that filters are not called on
> delete markers. For raw scans returning delete markers, does it make sense
> to do that?
>
> Varun
>
>
> On Sun, Jun 30, 2013 at 12:03 PM, Varun Sharma <[EMAIL PROTECTED]> wrote:
>
>> Hi,
>>
>> We are having an issue with the way HBase handles deletes. We are looking
>> to retrieve 300 columns in a row, but the row has tens of thousands of
>> delete markers in it before we reach those 300 columns, something like
>> this:
>>
>>
>> row  DeleteCol1 Col1  DeleteCol2 Col2 ................... DeleteCol3 Col3
>>
>> And so on. The issue here is that to retrieve these 300 columns, we need
>> to go through tens of thousands of delete markers. Sometimes we get a burst
>> of these queries, and that effectively DDoSes a region server. We would be
>> okay with saying: only return the first 300 columns and stop once you
>> encounter, say, 5K column delete markers.
>>
>> I wonder whether such a construct is provided by HBase, or whether we need
>> to build something on top of a raw scan and handle the delete masking
>> ourselves.
>>
>> Thanks
>> Varun
>>
>>
>>
>
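
For the client-side route Varun asks about, a minimal sketch against the 0.94-era client API might look like the following. The table name, row key, and limits are illustrative, and the delete masking here is deliberately naive: a correct implementation would have to look at marker types and timestamps to decide which cells a marker actually covers.

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class RawScanWithDeleteBudget {

  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create();
    HTable table = new HTable(conf, "mytable");           // hypothetical table name
    final int maxDeleteMarkers = 5000;                    // stop after 5K markers
    final int maxColumns = 300;                           // only want 300 columns

    // Hypothetical row key; a real implementation would also set a stop row.
    Scan scan = new Scan(Bytes.toBytes("myrow"));
    scan.setRaw(true);          // raw scan: delete markers come back to the client
    scan.setMaxVersions();      // raw scans should also request all versions

    int deleteMarkersSeen = 0;
    List<KeyValue> columns = new ArrayList<KeyValue>();

    ResultScanner scanner = table.getScanner(scan);
    try {
      outer:
      for (Result result : scanner) {
        for (KeyValue kv : result.raw()) {
          if (kv.isDelete()) {
            deleteMarkersSeen++;          // count the marker, do not return it
          } else {
            columns.add(kv);              // naive: ignores whether a marker covers this cell
          }
          if (deleteMarkersSeen >= maxDeleteMarkers || columns.size() >= maxColumns) {
            break outer;                  // stop once either budget is hit
          }
        }
      }
    } finally {
      scanner.close();
      table.close();
    }

    System.out.println(columns.size() + " columns, "
        + deleteMarkersSeen + " delete markers seen");
  }
}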