HBase >> mail # user >> on the impact of incremental counters


Re: on the impact of incremental counters
Is there any reason why the increment has to actually happen on
insert? Couldn't an "increment record" be kept, and then the actual
increment operation be performed lazily, on reads and compactions?

-Joey
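
[The lazy-increment idea above can be sketched roughly as follows. This is an illustrative toy, not HBase's actual implementation: increments are appended as "increment records" with no read-modify-write, and the real value is resolved lazily by summing deltas on read, or by folding them together during compaction. The class and method names are hypothetical.]

```python
class LazyCounterStore:
    """Toy store where increments are deferred, LSM-style."""

    def __init__(self):
        # Append-only list of (key, delta) increment records.
        self.log = []

    def increment(self, key, delta):
        # O(1) append; the current value is never read on insert.
        self.log.append((key, delta))

    def read(self, key):
        # Lazy resolution: merge all outstanding deltas for the key.
        return sum(d for k, d in self.log if k == key)

    def compact(self):
        # Collapse the increment records into one record per key,
        # so later reads no longer pay the full merge cost.
        totals = {}
        for k, d in self.log:
            totals[k] = totals.get(k, 0) + d
        self.log = list(totals.items())
```

[Under this scheme writes are cheap even for cold keys, and the cost of the increment is shifted to reads and compactions, which is the trade-off Joey is asking about.]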

On Mon, Jun 20, 2011 at 11:14 AM, Andrew Purtell <[EMAIL PROTECTED]> wrote:
>> From: Claudio Martella <[EMAIL PROTECTED]>
>> So, basically it's expensive to increment old data.
>
> HBase employs a buffer hierarchy to make updating a working set that fits in RAM reasonably efficient. (But, like I said, there are still some things we can improve in terms of internal data-structure management.)
>
> If you are updating a working set that does not fit in RAM, or updating infrequently enough that the value is not kept in cache, then HBase has to go to disk and we move from the order of memory access to the order of disk access.
>
> It will obviously be more expensive to increment old data than newer, but I'm not sure I understand what you are getting at. Any data management system with a buffer hierarchy has this behavior.
>
> Compared to what?
>
>   - Andy
>
>

--
Joseph Echeverria
Cloudera, Inc.
443.305.9434