What's the expected size of your unique key set? Thousands? Millions?

You could probably use a similar table structure; just have it emit 1's instead of summing them.
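To make the "emit 1's and sum them" idea concrete, here's a rough plain-Java sketch of the combiner semantics (the names and data are mine, just standing in for what a summing combiner would do server-side):

```java
import java.util.HashMap;
import java.util.Map;

public class EmitOnes {
    public static void main(String[] args) {
        // Each occurrence of a key emits a 1; the combiner-style
        // merge sums the 1's into a running count per key.
        String[] occurrences = {"John", "Mary", "John", "John"};
        Map<String, Long> counts = new HashMap<>();
        for (String name : occurrences) {
            counts.merge(name, 1L, Long::sum); // emit 1, sum on collision
        }
        System.out.println(counts.get("John")); // 3
        System.out.println(counts.get("Mary")); // 1
    }
}
```

In Accumulo terms this is roughly what attaching a SummingCombiner to the table does for you at compaction/scan time, with each inserted mutation carrying a value of 1.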

I'm thinking maybe your mappings could be like this:
group=anything, type=NAME, name=John (etc.)

Perhaps a ColumnQualifierGrouping iterator could be applied at scan time to add up the cardinalities for the qualifiers over the given time range being scanned, with cardinalities across different time units aggregated client-side.
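The client-side half of that could look something like this rough sketch (plain Java; the "timeUnit|qualifier" key shape and the counts are hypothetical, standing in for the per-time-unit partials the scan would return):

```java
import java.util.HashMap;
import java.util.Map;

public class ClientSideRollup {
    public static void main(String[] args) {
        // Hypothetical per-time-unit partial counts returned by the
        // scan, keyed as "timeUnit|qualifier" -> count for that unit.
        Map<String, Long> scanned = new HashMap<>();
        scanned.put("2014-05-15|NAME:John", 4L);
        scanned.put("2014-05-16|NAME:John", 7L);
        scanned.put("2014-05-16|NAME:Mary", 2L);

        // Client-side aggregation: strip the time unit and sum the
        // counts for each qualifier across the scanned range.
        Map<String, Long> totals = new HashMap<>();
        for (Map.Entry<String, Long> e : scanned.entrySet()) {
            String qual = e.getKey().split("\\|", 2)[1];
            totals.merge(qual, e.getValue(), Long::sum);
        }
        System.out.println(totals.get("NAME:John")); // 11
        System.out.println(totals.get("NAME:Mary")); // 2
    }
}
```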
On Fri, May 16, 2014 at 5:19 PM, David Medinets <[EMAIL PROTECTED]> wrote: