
HBase >> mail # user >> Data management strategy

Re: Data management strategy

Let's see if I understand what you want to do...

You have some data and you want to store it in some table A.
Some of the records/rows in this table have a limited lifespan of 3 days, others a limited lifespan of 3 months. Both kinds contain the same type of data, but some business logic determines which records get deleted (e.g. purge all records that haven't been accessed in the last 3 days).

If what I imagine is true, you can't use the standard TTL unless you know up front that every record will be deleted after a fixed N hours or days, e.g. all records self-destruct 30 days past creation.
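To make the limitation concrete, here is a minimal Java sketch of how a fixed TTL is declared against the 0.90-era client API. The table name "A" and family "d" are hypothetical; the point is that the TTL lives on the column family descriptor, so it applies uniformly to every cell in that family with no per-row override.

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;

public class CreateTableWithTtl {
    public static void main(String[] args) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        HBaseAdmin admin = new HBaseAdmin(conf);
        // TTL is a column-family-level setting: every cell stored under
        // this family expires the same way. There is no per-row variant.
        HColumnDescriptor cf = new HColumnDescriptor("d");
        cf.setTimeToLive(30 * 24 * 60 * 60); // 30 days, in seconds
        HTableDescriptor table = new HTableDescriptor("A");
        table.addFamily(cf);
        admin.createTable(table);
    }
}
```

This is schema setup against a live cluster, so treat it as a configuration sketch rather than something to run standalone.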

The simplest solution would be to have a column that contains a timestamp of the last access, and let your application control when this field gets updated. Then, using cron, launch a job that scans the table and removes the rows that meet your delete criteria.
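That sweep could look something like the following sketch (0.90-era client API). The table name "A", family "d", and qualifier "lastAccess" are all hypothetical; the assumption is that the application writes the last-access time as a long into that column on each read.

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class ExpirySweep {
    static final byte[] FAMILY = Bytes.toBytes("d");
    static final byte[] LAST_ACCESS = Bytes.toBytes("lastAccess");

    // Pure decision logic: has this row sat idle longer than allowed?
    static boolean expired(long lastAccess, long now, long maxIdleMillis) {
        return now - lastAccess > maxIdleMillis;
    }

    public static void main(String[] args) throws IOException {
        long maxIdle = 3L * 24 * 60 * 60 * 1000; // e.g. purge after 3 idle days
        long now = System.currentTimeMillis();
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "A");
        Scan scan = new Scan();
        scan.addColumn(FAMILY, LAST_ACCESS); // only fetch the column we need
        ResultScanner scanner = table.getScanner(scan);
        try {
            for (Result r : scanner) {
                long lastAccess = Bytes.toLong(r.getValue(FAMILY, LAST_ACCESS));
                if (expired(lastAccess, now, maxIdle)) {
                    table.delete(new Delete(r.getRow()));
                }
            }
        } finally {
            scanner.close();
            table.close();
        }
    }
}
```

A full-table scan like this is fine from cron at off-peak hours; for very large tables you'd batch the deletes rather than issue them one at a time.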

Since coprocessors are still new (not yet in any of the commercial releases), I would suggest keeping the logic simple. You can always refactor your code to use coprocessors once you've had time to play with them.

Even with coprocessors, because the data dies an arbitrary death, you will still have to purge the data yourself. Hence the cron job that marks the records for deletion, followed by a major compaction on the table to really remove the rows...
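The compaction step can be requested programmatically; here is a hedged sketch against the 0.90-era admin API, again assuming the hypothetical table name "A". A Delete only writes a tombstone marker, so the dead rows still occupy store files until a major compaction rewrites them.

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HBaseAdmin;

public class ForceMajorCompaction {
    public static void main(String[] args)
            throws IOException, InterruptedException {
        Configuration conf = HBaseConfiguration.create();
        HBaseAdmin admin = new HBaseAdmin(conf);
        // Asynchronous: this queues the request; the cluster compacts
        // in the background, physically dropping tombstoned rows.
        admin.majorCompact("A");
    }
}
```

This is a cluster operation, not a standalone program; you could equally trigger it from the hbase shell at the end of the cron job.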

Of course the standard caveats apply, assuming I really did understand what you wanted...

Oh and KISS is always the best practice... :-)

Sent from a remote device. Please excuse any typos...

Mike Segel

On Dec 21, 2011, at 12:03 PM, Richard Lawrence <[EMAIL PROTECTED]> wrote:

> Hi
> I was wondering if I could seek some advice about data management in HBase?  I plan to use HBase to store data that has a variable-length lifespan; the vast majority will be short, but occasionally the data lifetime will be significantly longer (3 days versus 3 months).  Once the lifespan is over I need the data to be deleted at some point in the near future (within a few days is fine).  I don’t think I can use the standard TTL for this because that’s fixed at the column family level.  Therefore, my plan was to run a script every few days that looks through external information for what needs to be kept and then updates HBase in some way that it can understand.  With the data in HBase I can then use the standard TTL mechanism to clean up.
> The two ways I can think of to let HBase know are:
> 1. Add a coprocessor that updates the timestamp on each read, and then have my process simply read the data.  I shied away from this because the documentation indicated the coprocessor can’t take row locks.  Does that imply that it shouldn’t modify the underlying data?  For my use case the timestamp doesn’t have to be perfect; the keys are created in such a way that the underlying data is fixed at creation time.
> 2. Add an extra column to each row that acts as a cache flag, and rewrite it at various intervals so that its timestamp updates and prevents the TTL from deleting the row.
> Are there other best practice alternatives?
> Thanks
> Richard