HBase user mailing list: on the impact of incremental counters


Claudio Martella 2011-06-18, 16:00
Andrew Purtell 2011-06-18, 19:24
Claudio Martella 2011-06-20, 12:58
Andrew Purtell 2011-06-20, 15:14
Re: on the impact of incremental counters
Is there any reason why the increment has to actually happen on
insert? Couldn't an "increment record" be kept, and then the actual
increment operation be performed lazily, on reads and compactions?
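
Roughly the idea, as a hypothetical sketch (not anything HBase does today; names are made up for illustration): increments append delta records, and the stored value is only materialized lazily, on read or at compaction.

// Hypothetical sketch of the "increment record" idea; not HBase's
// actual code path. Writes are append-only; the value is folded
// together lazily, on read or at compaction.
import java.util.ArrayList;
import java.util.List;

class LazyCounter {
  private long base = 0;  // value as of the last compaction
  private final List<Long> deltas = new ArrayList<Long>();  // pending increment records

  // Write path: O(1) append, no read-modify-write of the old value.
  void increment(long amount) {
    deltas.add(amount);
  }

  // Read path: fold the pending deltas into the base on demand.
  long read() {
    long value = base;
    for (long d : deltas) {
      value += d;
    }
    return value;
  }

  // Compaction: merge the deltas into the base and drop the records.
  void compact() {
    base = read();
    deltas.clear();
  }
}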

-Joey

On Mon, Jun 20, 2011 at 11:14 AM, Andrew Purtell <[EMAIL PROTECTED]> wrote:
>> From: Claudio Martella <[EMAIL PROTECTED]>
>> So, basically it's expensive to increment old data.
>
> HBase employs a buffer hierarchy to make updating a working set that can fit in RAM reasonably efficient. (But like I said there are some things remaining we can improve in terms of internal data structure management.)
>
> If you are updating a working set that does not fit in RAM, or updating values so infrequently that they are not maintained in cache, then HBase has to go to disk, and we move from the order of memory access to the order of disk access.
>
> It will obviously be more expensive to increment old data than newer data, but I'm not sure I understand what you are getting at. Any data management system with a buffer hierarchy has this behavior.
>
> Compared to what?
>
>   - Andy
>
>

--
Joseph Echeverria
Cloudera, Inc.
443.305.9434
Ted Yu 2011-06-20, 15:36
Ted Dunning 2011-06-20, 15:50
Joe Pallas 2011-06-20, 17:27
Joey Echeverria 2011-06-20, 18:03
Jeff Whiting 2011-06-20, 18:28