HBase, mail # user - How to config hbase0.94.2 to retain deleted data


Re: How to config hbase0.94.2 to retain deleted data
Michael Segel 2012-10-23, 11:41
"Deleted cells are still subject to TTL and there will never be more than "maximum number of versions" deleted cells. A new "raw" scan options returns all deleted rows and the delete markers. "

This is different from the idea suggested by the OP. Here deleted cells still get deleted eventually; it's just that when a compaction comes along, it is told to ignore them.

So if I say a column can have 3 versions (cells), then each time I insert another value for that row:column key, I push the deleted cell down the stack. Do that enough times and it's gone.
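
For reference, this is roughly what such a raw scan looks like with the 0.94 Java client (a minimal sketch; the table name is borrowed from the OP's example below):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.ResultScanner;
    import org.apache.hadoop.hbase.client.Scan;

    public class RawScanExample {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "usertable");
        Scan scan = new Scan();
        scan.setRaw(true);      // also return delete markers and deleted cells not yet collected
        scan.setMaxVersions();  // all versions, not just the newest
        ResultScanner scanner = table.getScanner(scan);
        try {
          for (Result r : scanner) {
            System.out.println(r);  // each KeyValue carries its type (Put, Delete, DeleteColumn, ...)
          }
        } finally {
          scanner.close();
          table.close();
        }
      }
    }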

In theory, this feature would be useful if I wanted an OLTP implementation on top of HBase. It would allow a transaction to bridge a compaction cycle. However, that's pretty much it.

This feature doesn't translate well beyond this.

It also raises the following questions: how do I handle long (OLTP) transaction timeouts, and what about isolation levels?

If you look at this at the row level... definitely not a good idea. Think of fat clogging an artery.
  
On Oct 23, 2012, at 12:22 AM, lars hofhansl <[EMAIL PROTECTED]> wrote:

> http://hbase.apache.org/book/cf.keep.deleted.html
>
> Without it you cannot do correct as-of-time queries when it comes to deletes.
>
> -- Lars
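
A minimal sketch of such an as-of-time read, assuming the 0.94 Java client and the key and timestamps from the OP's shell session quoted below:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.util.Bytes;

    public class AsOfTimeGet {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "usertable");
        Get get = new Get(Bytes.toBytes("key1"));
        get.setTimeRange(0, 100);  // upper bound is exclusive: read the row as of just before ts=100
        Result r = table.get(get);
        System.out.println(r);  // with KEEP_DELETED_CELLS=true this should still show "v1"@99
        table.close();
      }
    }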
>
> From: Michael Segel <[EMAIL PROTECTED]>
> To: [EMAIL PROTECTED]; lars hofhansl <[EMAIL PROTECTED]>
> Sent: Monday, October 22, 2012 9:18 PM
> Subject: Re: How to config hbase0.94.2 to retain deleted data
>
> >
> > Curious, why do you think this is better than using the keep-deleted-cells feature?
> > (It might well be, just curious)
>
> Ok... so what exactly does this feature mean?
>
> Suppose I have 500 rows within a region. I set this feature to be true.
> I do a massive delete and there are only 50 rows left standing.
>
> So if I do a count of the number of rows in the region, I see only 50, yet even if I compact the table, it's still full.
>
> Granted I'm talking about rows and not cells, but the idea is the same. IMHO you're asking for more headaches than you solve.
>
> KISS would suggest that moving deleted data into a different table would yield better performance in the long run.
>
>
> On Oct 21, 2012, at 7:23 PM, lars hofhansl <[EMAIL PROTECTED]> wrote:
>
> > That'd work too. It requires the regionservers to make remote updates to other regionservers, though. And you have to trap each and every change (Put, Delete, Increment, Append, RowMutations, etc.).
> >
> >
> > Curious, why do you think this is better than using the keep-deleted-cells feature?
> > (It might well be, just curious)
> >
> >
> > -- Lars
> >
> >
> >
> > ----- Original Message -----
> > From: Michael Segel <[EMAIL PROTECTED]>
> > To: [EMAIL PROTECTED]
> > Cc:
> > Sent: Sunday, October 21, 2012 4:34 PM
> > Subject: Re: How to config hbase0.94.2 to retain deleted data
> >
> > I would suggest that you use your coprocessor to copy the data to a 'backup' table when you mark them for delete.
> > Then as major compaction hits, the rows are deleted from the main table, but still reside undeleted in your delete table.
> > Call it a history table.
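
A minimal sketch of what such an observer could look like against the 0.94 coprocessor API (the history-table name and the copy-the-whole-row simplification are assumptions, not part of the suggestion above):

    import java.io.IOException;

    import org.apache.hadoop.hbase.KeyValue;
    import org.apache.hadoop.hbase.client.Delete;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.HTableInterface;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.coprocessor.BaseRegionObserver;
    import org.apache.hadoop.hbase.coprocessor.ObserverContext;
    import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
    import org.apache.hadoop.hbase.regionserver.wal.WALEdit;
    import org.apache.hadoop.hbase.util.Bytes;

    public class HistoryTableObserver extends BaseRegionObserver {
      // Hypothetical history table; create it with the same column families as the source table.
      private static final byte[] HISTORY_TABLE = Bytes.toBytes("usertable_history");

      @Override
      public void preDelete(ObserverContext<RegionCoprocessorEnvironment> e,
          Delete delete, WALEdit edit, boolean writeToWAL) throws IOException {
        // Simplification: copy every current version of the whole row,
        // regardless of which columns the Delete actually targets.
        Get get = new Get(delete.getRow());
        get.setMaxVersions();
        Result current = e.getEnvironment().getRegion().get(get, null);
        if (current.isEmpty()) {
          return;  // nothing to preserve
        }
        Put copy = new Put(delete.getRow());
        for (KeyValue kv : current.raw()) {
          copy.add(kv.getFamily(), kv.getQualifier(), kv.getTimestamp(), kv.getValue());
        }
        HTableInterface history = e.getEnvironment().getTable(HISTORY_TABLE);
        try {
          history.put(copy);  // preserve the cells before the delete proceeds
        } finally {
          history.close();
        }
      }
    }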
> >
> >
> > On Oct 21, 2012, at 3:53 PM, yun peng <[EMAIL PROTECTED]> wrote:
> >
> >> Hi, All,
> >> I want to retain all deleted key-value pairs in HBase. I have tried to
> >> configure the HColumnDescriptor as follows to make it return deleted data.
> >>
> >>  public void postOpen(ObserverContext<RegionCoprocessorEnvironment> e) {
> >>    HTableDescriptor htd = e.getEnvironment().getRegion().getTableDesc();
> >>    HColumnDescriptor hcd = htd.getFamily(Bytes.toBytes("cf"));
> >>    hcd.setKeepDeletedCells(true);
> >>    hcd.setBlockCacheEnabled(false);
> >>  }
> >>
> >> However, it does not work for me: when I issue a delete and then query
> >> by an older timestamp, the old data does not show up.
> >>
> >> hbase(main):119:0> put 'usertable', "key1", 'cf:c1', "v1", 99
> >> hbase(main):120:0> put 'usertable', "key1", 'cf:c1', "v2", 101
> >> hbase(main):121:0> delete 'usertable', "key1", 'cf:c1', 100
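
The postOpen snippet above only changes the regionserver's in-memory copy of the table descriptor, so the KEEP_DELETED_CELLS setting is never actually persisted. A sketch of the same experiment done through the shell instead (the attribute is set via alter on a disabled table, and the old value is read back with a TIMERANGE that ends before the delete):

    hbase(main):001:0> disable 'usertable'
    hbase(main):002:0> alter 'usertable', {NAME => 'cf', KEEP_DELETED_CELLS => true}
    hbase(main):003:0> enable 'usertable'
    hbase(main):004:0> put 'usertable', "key1", 'cf:c1', "v1", 99
    hbase(main):005:0> put 'usertable', "key1", 'cf:c1', "v2", 101
    hbase(main):006:0> delete 'usertable', "key1", 'cf:c1', 100
    hbase(main):007:0> get 'usertable', "key1", {COLUMN => 'cf:c1', TIMERANGE => [0, 100], VERSIONS => 3}

With the attribute set, the final get should now return "v1" at timestamp 99 despite the delete.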