Re: delete operation with timestamp
Shrijeet Paliwal 2011-11-29, 01:49
Lars,
Thank you for writing. It does make sense.

>>So if you trigger a Put operation from the client and you change (say) 3
>>columns, the server will insert 3 KeyValues into the Memstore, all of
>>which carry the TS of the Put.
What if I construct the Put object by making three calls to 'add' with my
own timestamps:
http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Put.html#add(byte[], byte[], long, byte[])
In such a case the KeyValue list members will have a different TS than the
TS of the Put. What will be the meaning of the Put's TS on the server side
then?

>>Having the TS per cell (or KeyValue) is necessary to enforce ACID
>>guarantees, which state that what you retrieve with Get is a set of
>>KeyValues such that this combination of versions of KeyValues for this
>>row existed together at a point in time. (Need to remember here that
>>multiple Put operations could insert different columns for the same
>>rowKey.)
Yes, this totally makes sense. And my question is around exactly this:
what is the need to maintain a TS on the Put at all? Even if the client
does not want to specify a timestamp, the burden of filling in the latest
timestamp can be passed to the KeyValue object.

-Shrijeet

On Mon, Nov 28, 2011 at 5:33 PM, lars hofhansl <[EMAIL PROTECTED]> wrote:

> Hi Shrijeet,
>
> you have to distinguish between the storage format and the client-side
> objects. KeyValue is an outlier (of sorts), as it is used on both the
> server and the client.
> Timestamps are per cell (KeyValue).
>
>
> A Put object is something you create on the client to describe a put
> operation to be performed at the server.
> The server will take the information from the Put and write the necessary
> KeyValues into the Memstore (which will eventually be flushed to disk).
>
> So if you trigger a Put operation from the client and you change (say) 3
> columns, the server will insert 3 KeyValues into the Memstore, all of
> which carry the TS of the Put.
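>
> For example (a minimal sketch, assuming the 0.9x client API; the names
> are invented):
>
>     Put put = new Put(Bytes.toBytes("row1"), 5000L); // Put-level TS
>     put.add(Bytes.toBytes("cf"), Bytes.toBytes("c1"), Bytes.toBytes("v1"));
>     put.add(Bytes.toBytes("cf"), Bytes.toBytes("c2"), Bytes.toBytes("v2"));
>     put.add(Bytes.toBytes("cf"), Bytes.toBytes("c3"), Bytes.toBytes("v3"));
>     // All three KeyValues arrive at the server carrying TS 5000.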
>
> Having the TS per cell (or KeyValue) is necessary to enforce ACID
> guarantees, which state that what you retrieve with Get is a set of
> KeyValues such that this combination of versions of KeyValues for this
> row existed together at a point in time. (Need to remember here that
> multiple Put operations could insert different columns for the same
> rowKey.)
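>
> Per-cell timestamps are what let a Get assemble such a consistent view.
> A sketch (same assumed API; 'table' is an existing HTable):
>
>     Get get = new Get(Bytes.toBytes("row1"));
>     get.setMaxVersions(3);          // up to 3 versions per cell
>     Result result = table.get(get); // each returned KeyValue keeps its TS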
>
>
> Makes sense?
>
> -- Lars
>
>
> ----- Original Message -----
> From: Shrijeet Paliwal <[EMAIL PROTECTED]>
> To: [EMAIL PROTECTED]; lars hofhansl <[EMAIL PROTECTED]>
> Cc:
> Sent: Monday, November 28, 2011 4:31 PM
> Subject: Re: delete operation with timestamp
>
> Slightly off-topic, sorry.
>
> While we have attention on timestamps, may I ask why HBase maintains a
> timestamp at the row level (initialized with LATEST_TIMESTAMP)?
> In other words, a timestamp has meaning in the context of a cell, and
> HBase keeps it at that level, so why keep one TS at the row level? Going
> further, what is the meaning of a timestamp 'ts' associated with a Put
> object if all the associated KeyValue objects have timestamps different
> from 'ts'?
>
> Was the motivation behind this to allow the client to not specify a
> timestamp (and in turn assume they meant the latest TS)?
>
> I am looking at line 5 of this function http://pastebin.com/ik1Dxgqq
> which serializes the timestamp at the row level, and at lines 18-21,
> which serialize the timestamp at the cell level.
>
> Thanks.
>
>
> On Mon, Nov 28, 2011 at 3:56 PM, lars hofhansl <[EMAIL PROTECTED]>
> wrote:
> > Hi Yi,
> > the reason is that nothing is ever changed in place in HBase; only new
> > files are created (with the exception of the WAL, which is appended
> > to, and some special scenarios like atomic increments and atomic
> > appends, where older versions of the cells are removed from the
> > Memstore).
> >
> > That caters very well to the performance characteristics of the
> > underlying distributed file system (HDFS).
> >
> >
> > Consequently, deleted rows are not actually deleted right away; we
> > just record the fact that the rows should not be visible anymore and
> > can eventually be removed.
> > The actual removal happens during the next compaction, when new files
> > are created.
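> >
> > To tie this back to the subject line: a delete with an explicit
> > timestamp just writes a delete marker for that version (a minimal
> > sketch, assuming the 0.9x client API; 'table' is an existing HTable):
> >
> >     Delete d = new Delete(Bytes.toBytes("row1"));
> >     // mark only the cell version with TS exactly 2000 as deleted
> >     d.deleteColumn(Bytes.toBytes("cf"), Bytes.toBytes("c1"), 2000L);
> >     table.delete(d); // data is physically removed at the next major
> >                      // compaction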