HBase, mail # dev - Simple statistics per region


lars hofhansl 2013-02-23, 06:40
Re: Simple statistics per region
Andrew Purtell 2013-02-23, 17:41
> Statistics would be kept per store (i.e. per region per column family)
> and stored into an HBase table (one row per store). Initially we could just
> support major compactions that atomically insert a new version of the
> statistics for the store.
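
As a rough illustration of the quoted design (one statistics row per store, rewritten atomically at major compaction time), here is a hypothetical sketch using the 0.94-era HBase client API. The stats column family, row-key format, and qualifier names are invented for the example and are not part of the proposal.

import java.io.IOException;

import org.apache.hadoop.hbase.client.HTableInterface;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

// Hypothetical writer for a per-store statistics row; illustrative only,
// not existing HBase code.
public class StoreStatsWriter {

  private static final byte[] STATS_CF = Bytes.toBytes("stats"); // assumed column family

  public void writeStats(HTableInterface statsTable, String table,
      String region, String family, long rowCount, long distinctValues)
      throws IOException {
    // One row per store: table/region/column family.
    byte[] rowKey = Bytes.toBytes(table + "/" + region + "/" + family);
    Put put = new Put(rowKey);
    put.add(STATS_CF, Bytes.toBytes("rowCount"), Bytes.toBytes(rowCount));
    put.add(STATS_CF, Bytes.toBytes("distinctValues"), Bytes.toBytes(distinctValues));
    // Writing all statistics columns in a single Put means the new version
    // of the row appears atomically once the major compaction finishes.
    statsTable.put(put);
  }
}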

Will we drop updates to the statistics table if regions of it are in
transition? (I think that would be ok.)

Should we have a lightweight RPC for server to server communication that
does not block or retry?

The above two considerations would avoid a repeat of the region historian
trouble... ancient history.

Can we expect, pretty quickly, a desire for more than just statistics on data
contributed after major compactions? That would be fine for characterizing the
data itself, but it doesn't provide any information about access patterns to
the data, as I mentioned in the other email.
On Fri, Feb 22, 2013 at 10:40 PM, lars hofhansl <[EMAIL PROTECTED]> wrote:

> This topic comes up now and then (see recent discussion about translating
> multi Gets into Scan+Filter).
>
> It's not that hard to keep statistics as part of compactions.
> I envision two knobs:
> 1. Max number of distinct values to track directly. If a column has fewer
> than this # of values, keep track of their occurrences explicitly.
> 2. Number of (equal width) histogram partitions to maintain.
>
> Statistics would be kept per store (i.e. per region per column family) and
> stored into an HBase table (one row per store). Initially we could just
> support major compactions that atomically insert a new version of the
> statistics for the store.
>
> A simple implementation (not knowing ahead of time how many values it
> will see during the compaction) could start by keeping track of individual
> values for columns. If it gets past the max # of distinct values to track,
> start with equal width histograms (using the distinct values picked up so
> far to estimate an initial partition width).
> If the number of partitions gets larger than what was configured it would
> increase the width and merge the previous counts into the new width (which
> means the new partition width must be a multiple of the previous size).
> There's probably a lot of other fanciness that could be used here (haven't
> spent a lot of time thinking about details).
>
>
> Is this something that should be in core HBase, or rather be implemented as
> a coprocessor?
>
>
> -- Lars
>

--
Best regards,

   - Andy

Problems worthy of attack prove their worth by hitting back. - Piet Hein
(via Tom White)
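
For illustration, a minimal Java sketch of the two-knob collector Lars describes in the quoted message: exact per-value counts up to a configured number of distinct values, then a fall-back to an equal-width histogram whose partition width only ever grows to a multiple of the previous width, so existing counts can be merged into the wider partitions. It assumes numeric column values for simplicity; the class and method names are invented for the example, and this is not actual HBase code.

import java.util.HashMap;
import java.util.Map;

// Illustrative only: exact counts until the distinct-value knob overflows,
// then an equal-width histogram that widens by doubling (so the new
// partition width is always a multiple of the previous one).
public class ColumnStatsSketch {

  private final int maxDistinctValues;  // knob 1
  private final int maxPartitions;      // knob 2

  private Map<Long, Long> exactCounts = new HashMap<Long, Long>();
  private long[] buckets;               // null while still in the exact phase
  private long origin;                  // value mapped to bucket 0
  private long width;                   // current partition width

  public ColumnStatsSketch(int maxDistinctValues, int maxPartitions) {
    this.maxDistinctValues = maxDistinctValues;
    this.maxPartitions = maxPartitions;
  }

  public void add(long value) {
    if (buckets == null) {
      Long old = exactCounts.get(value);
      exactCounts.put(value, old == null ? 1L : old + 1L);
      if (exactCounts.size() > maxDistinctValues) {
        switchToHistogram();
      }
    } else {
      addToHistogram(value, 1L);
    }
  }

  // Use the distinct values collected so far to estimate an initial width.
  private void switchToHistogram() {
    long min = Long.MAX_VALUE, max = Long.MIN_VALUE;
    for (long v : exactCounts.keySet()) {
      min = Math.min(min, v);
      max = Math.max(max, v);
    }
    origin = min;
    width = Math.max(1L, (max - min) / maxPartitions + 1);
    buckets = new long[maxPartitions];
    for (Map.Entry<Long, Long> e : exactCounts.entrySet()) {
      addToHistogram(e.getKey(), e.getValue());
    }
    exactCounts = null;
  }

  private void addToHistogram(long value, long count) {
    // Simplification: values below the origin are lumped into bucket 0;
    // a real implementation would also have to widen downward.
    long bucket = Math.max(0L, (value - origin) / width);
    while (bucket >= maxPartitions) {
      widen();
      bucket = (value - origin) / width;
    }
    buckets[(int) bucket] += count;
  }

  // Double the width and merge pairs of old buckets into the new, wider ones.
  private void widen() {
    long[] merged = new long[maxPartitions];
    for (int i = 0; i < maxPartitions; i++) {
      merged[i / 2] += buckets[i];
    }
    buckets = merged;
    width *= 2;
  }
}

Something like this would run inside the compaction and then emit the per-store row sketched earlier, which is where the core-HBase-versus-coprocessor question at the end of the message comes in.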
Other messages in this thread:

lars hofhansl 2013-02-23, 20:39
Andrew Purtell 2013-02-23, 17:18
Stack 2013-02-26, 22:08
Jesse Yates 2013-02-26, 22:31
Andrew Purtell 2013-02-26, 23:27
Enis Söztutar 2013-02-27, 00:15
lars hofhansl 2013-02-27, 00:27
Jesse Yates 2013-02-27, 00:31
Jesse Yates 2013-02-28, 01:52