The overall tradeoff with "table vs prefix" is that the former adds some (small) amount of cluster management overhead for each new table, whereas the latter adds runtime overhead (memory, CPU, disk, etc.) on every operation. In your case, since you're just talking about ~3 tables vs 1, my gut feeling is that the management overhead of the extra tables would be minuscule, and it's the better option. Per your description, there's never a case where you *want* a scan to return the different cube dimensions together (e.g. a scan that gets a1-b as well as a1-b1-c); that's the only thing I could see really forcing you to use one table instead of ~3.
On the other hand, if you're talking about an arbitrary number of tables (i.e. data would reside in different tables depending on the specific content of A, B and C), then that may be a different story. Every table has multiple regions, and each region has overhead like memstores, etc. So if you're talking about, say, thousands of tables, that could get to be very unwieldy. Generally, most people think of tables as a design-time thing (not that you can't create new tables at runtime, you can; but ops folks don't usually like it as much, because it removes a dimension of predictability).
The other alternative, which might be worth benchmarking, is not to do the higher-level rollups at all, and aggregate at runtime based on scans of all the data. Depending on how much data you're talking about, you might find that the runtime hit is small enough that you don't care, and you can save yourself the extra development. I'd say, try some experiments that way first, before you put a ton of engineering into the fancy solution. You can also use things like coprocessors for aggregation; I've seen that done with great success.
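A rough sketch of that scan-and-aggregate-at-runtime idea, with a plain Python dict standing in for the HBase table (the "a-b-c" row-key scheme follows the question below; the keys and values here are made up for illustration):

```python
# Sketch: store only the most specific cells and compute rollups at
# query time, instead of maintaining pre-aggregated rollup rows.
from collections import defaultdict

# Only the most specific ("a-b-c"-style) cells are stored.
table = {
    "a1-b1-c1": 5,
    "a1-b1-c2": 3,
    "a1-b2-c1": 2,
    "a2-b1-c1": 7,
}

def rollup(table, depth):
    """Aggregate base cells up to `depth` leading attributes."""
    totals = defaultdict(int)
    for key, value in table.items():
        prefix = "-".join(key.split("-")[:depth])
        totals[prefix] += value
    return dict(totals)

print(rollup(table, 2))  # {'a1-b1': 8, 'a1-b2': 2, 'a2-b1': 7}
print(rollup(table, 1))  # {'a1': 10, 'a2': 7}
```

In HBase terms the loop body would run over the results of a single scan (or inside a coprocessor, pushing the summing to the region servers); the point is that no rollup rows need to be written at ingest time.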
On Apr 18, 2012, at 8:01 PM, Tom Brown wrote:
I'm writing an OLAP cube database and I can implement the storage in
one of two schemas, and I don't know whether there are any performance
trade-offs I'm not aware of.
Each row represents a unique cell in the cube, with about 5 columns
for each row. The row key format is a set of attributes joined by a
delimiter. To represent a cube cell that contains rolled-up data, I
use fewer attributes in the row key.
For example, if a, b, and c are attributes, a single new piece of
information with all those attributes could affect rows "a-b-c",
"a-b", and "a".
When I perform a scan, I append "$" and "~" onto the desired start and
end keys (respectively) so I can be sure to get all the data in
between. When I want to get the most specific data, I can scan from
"a-b-c" to "a-b-c" and retrieve a single row. But a problem occurs
when rows with different types of attributes are interleaved in the
table's sorted key order.
For example, if "a1" and "a2" are possible values for attribute "a",
the following rows sort into a bad order for efficient scans:

  a1
  a1-b
  a1-b-c
  a2
  a2-b
  a2-b-c
When I want to retrieve data for all rows of "a-b-c", I have to scan
from "a1-b-c" until after "a2-b-c", and skip the irrelevant rows.
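The interleaving can be reproduced with a few lines of plain Python that model HBase's lexicographic row-key ordering (a sorted list standing in for the table; the keys match the example above):

```python
# Model the table as a lexicographically sorted list of row keys,
# which is how HBase orders rows.
import bisect

rows = sorted(["a1", "a1-b", "a1-b-c", "a2", "a2-b", "a2-b-c"])

def scan(rows, start, stop):
    """Return all rows in [start, stop], like an HBase scan."""
    lo = bisect.bisect_left(rows, start)
    hi = bisect.bisect_right(rows, stop)
    return rows[lo:hi]

# Fetching every "a-b-c"-style row drags in the rollup rows for "a2":
print(scan(rows, "a1-b-c", "a2-b-c"))
# ['a1-b-c', 'a2', 'a2-b', 'a2-b-c']  -- 'a2' and 'a2-b' must be skipped
```

The same model shows why "$" and "~" work as scan bounds: "$" (0x24) sorts before the "-" delimiter (0x2D), and "~" (0x7E) sorts after all of the alphanumerics.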
The two possible solutions are to either put each data style in its
own table (e.g. table "a-b-c" would contain "a1-b-c" and "a2-b-c",
table "a-b" would contain "a1-b" and "a2-b"), or to prefix each row
key with data that forces it into a useful order.
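One possible prefix (an illustration, not necessarily the exact scheme intended here) is the number of attributes in the key, which makes every style of row sort contiguously:

```python
# Prefix each row key with its attribute count so that all rows of the
# same "style" group together in sort order. (Illustrative scheme: a
# real design might use a fixed-width byte instead of an ASCII digit.)
def prefixed(key):
    depth = key.count("-") + 1
    return f"{depth}-{key}"

rows = ["a1", "a1-b", "a1-b-c", "a2", "a2-b", "a2-b-c"]
print(sorted(prefixed(k) for k in rows))
# ['1-a1', '1-a2', '2-a1-b', '2-a2-b', '3-a1-b-c', '3-a2-b-c']
```

A scan for all "a-b-c"-style rows can then start at "3-" and never touch an irrelevant row.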
Placing each style of row into its own table seems to be the ideal
solution, but I don't understand HBase's internal architecture well
enough to know for sure. Will the overhead of having extra tables (if
there is any) outweigh the overhead of adding the prefix to each row
key? Is there a better way to structure my solution?
Thanks in advance!