Filtering/Collecting columns during Major Compaction
Hi,

My understanding of major compaction is that it merges all the store files
on disk into a single store file, cleaning out delete tombstones (along
with the puts they mask) and purging excess versions. We want to limit the
number of columns per row in HBase, and we want the limit applied in
lexicographically sorted order - that is, we keep only the, say, 100
smallest columns (in the lexicographic sense) and discard the rest.
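
To make the requirement concrete, here is a rough sketch of a client-side
pass that trims each row down to its 100 smallest columns (the table name
and the limit are hypothetical, and this is written against the 0.94-era
client API). Because columns within a row are stored and returned in
lexicographic order, ColumnPaginationFilter(limit, offset) with offset set
to the quota returns exactly the columns beyond it, which the pass then
deletes:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.ColumnPaginationFilter;

public class TrimExcessColumns {
  private static final int MAX_COLUMNS = 100;  // hypothetical per-row quota

  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HTable table = new HTable(conf, "mytable");  // hypothetical table name

    // Skip the first MAX_COLUMNS columns of every row and return the rest.
    // (Avoid Integer.MAX_VALUE as the limit: the filter adds offset + limit
    // internally, which can overflow.)
    Scan scan = new Scan();
    scan.setFilter(
        new ColumnPaginationFilter(Integer.MAX_VALUE - MAX_COLUMNS, MAX_COLUMNS));

    ResultScanner scanner = table.getScanner(scan);
    try {
      for (Result row : scanner) {
        // Everything in this Result is beyond the 100 smallest columns.
        Delete delete = new Delete(row.getRow());
        for (KeyValue kv : row.raw()) {
          delete.deleteColumns(kv.getFamily(), kv.getQualifier());
        }
        table.delete(delete);
      }
    } finally {
      scanner.close();
      table.close();
    }
  }
}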

One way to do this would be to clean out the excess columns in a daily
MapReduce job, along the lines of the sketch above. Another would be to
clean them out during major compaction, which can also be run daily. I see
from the code that a major compaction essentially runs a Scan over the
region - so if that Scan were invoked with an appropriate filter (say,
ColumnCountGetFilter), would that do the trick?
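
On the filter idea: ColumnCountGetFilter is probably not the right tool
even client-side, since it flips filterAllRemaining() as soon as its quota
is reached, which ends the entire scan rather than just the current row -
that is why it is really only suited to Gets. For the compaction route,
the commonly suggested hook is a RegionObserver coprocessor: preCompact()
lets you wrap the InternalScanner the compaction reads from and drop
KeyValues past the per-row quota as they stream by. A rough sketch,
assuming the 0.94-era coprocessor API (the exact preCompact and
InternalScanner signatures vary across HBase versions, and the class name
and limit are hypothetical):

import java.io.IOException;
import java.util.Iterator;
import java.util.List;

import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.coprocessor.BaseRegionObserver;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
import org.apache.hadoop.hbase.regionserver.InternalScanner;
import org.apache.hadoop.hbase.regionserver.Store;
import org.apache.hadoop.hbase.util.Bytes;

// Caps each row at MAX_COLUMNS during compaction by wrapping the
// compaction scanner. Depending on your HBase version, InternalScanner
// may declare additional next() overloads that also need delegating.
public class ColumnCappingObserver extends BaseRegionObserver {
  private static final int MAX_COLUMNS = 100;  // hypothetical quota

  @Override
  public InternalScanner preCompact(
      ObserverContext<RegionCoprocessorEnvironment> e,
      Store store, InternalScanner scanner) {
    return new ColumnCappingScanner(scanner, MAX_COLUMNS);
  }

  // Drops every KeyValue past the first `max` qualifiers of each row.
  private static class ColumnCappingScanner implements InternalScanner {
    private final InternalScanner delegate;
    private final int max;
    private byte[] currentRow = null;
    private byte[] lastQualifier = null;
    private int seen = 0;

    ColumnCappingScanner(InternalScanner delegate, int max) {
      this.delegate = delegate;
      this.max = max;
    }

    @Override
    public boolean next(List<KeyValue> results) throws IOException {
      boolean more = delegate.next(results);
      filter(results);
      return more;
    }

    @Override
    public boolean next(List<KeyValue> results, int limit) throws IOException {
      boolean more = delegate.next(results, limit);
      filter(results);
      return more;
    }

    // Track (row, qualifier) across calls: a single call may return a
    // partial row, and one qualifier may span several KeyValues (multiple
    // versions). Minor-compaction subtleties such as delete markers are
    // ignored for simplicity.
    private void filter(List<KeyValue> kvs) {
      Iterator<KeyValue> it = kvs.iterator();
      while (it.hasNext()) {
        KeyValue kv = it.next();
        if (currentRow == null || !Bytes.equals(currentRow, kv.getRow())) {
          currentRow = kv.getRow();   // new row: reset the column count
          lastQualifier = null;
          seen = 0;
        }
        if (lastQualifier == null
            || !Bytes.equals(lastQualifier, kv.getQualifier())) {
          lastQualifier = kv.getQualifier();
          seen++;
        }
        if (seen > max) {
          it.remove();                // past the per-row quota: drop it
        }
      }
    }

    @Override
    public void close() throws IOException {
      delegate.close();
    }
  }
}

Two caveats with this route: compactions run per store, so the cap applies
per column family rather than per row as a whole; and since it is only
enforced when a compaction rewrites the files, rows can temporarily exceed
the quota between compactions.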

Thanks
Varun