HBase >> mail # user >> [HBase 0.92.1] Too many stores files to compact, compaction moving slowly


Re: [HBase 0.92.1] Too many stores files to compact, compaction moving slowly
Stack,
Ahh, of course! Thank you. One question: what partition file do I give to
the total order partitioner?
I am trying to parse your last comment.
"You could figure how many you need by looking at the output of your MR job"

Chicken and egg? Or am I not following you correctly.

-Shrijeet
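
[The chicken-and-egg here is usually broken by sampling: in Hadoop, TotalOrderPartitioner reads its split points from a partition file that InputSampler.writePartitionFile produces *before* the job runs, by sampling the input rather than running the full job. The self-contained sketch below only illustrates the core lookup a total-order partitioner does once it has sorted split points; it is not the Hadoop class itself, and the keys and split points are made up.]

```java
// Illustrative sketch (not Hadoop's TotalOrderPartitioner): given sorted
// split points, route each key to a partition so that the concatenation of
// all partitions is globally sorted. N split points define N+1 partitions.
import java.util.Arrays;

public class TotalOrderSketch {
    static int partitionFor(String key, String[] splitPoints) {
        int idx = Arrays.binarySearch(splitPoints, key);
        // binarySearch returns -(insertionPoint) - 1 when the key is absent;
        // either way we recover the index of the partition the key falls in.
        return idx >= 0 ? idx + 1 : -(idx + 1);
    }

    public static void main(String[] args) {
        String[] splits = {"g", "p"};                  // 2 split points -> 3 partitions
        System.out.println(partitionFor("a", splits)); // -> 0
        System.out.println(partitionFor("h", splits)); // -> 1
        System.out.println(partitionFor("z", splits)); // -> 2
    }
}
```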

On Mon, May 14, 2012 at 12:29 PM, Stack <[EMAIL PROTECTED]> wrote:
>
> On Sun, May 13, 2012 at 4:12 PM, Shrijeet Paliwal
> <[EMAIL PROTECTED]> wrote:
> >
>
> Can you write an MR job that rewrites the data once, Shrijeet?  It would
> take hfiles for input and write out hfiles, only it'd write hfiles no
> bigger than a region max in size.  You'd use the bulk importer to import
> (you'd also use the total order partitioner so the output was totally
> sorted).  You'd pre-split the table into enough regions before running
> the bulk import (you could figure how many you need by looking at the
> output of your MR job).
>
> St.Ack
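
[The "figure how many you need by looking at the output of your MR job" step reduces to simple arithmetic: divide the total size of the job's HFile output by the region max size, rounding up, and pre-split the table into that many regions. A minimal sketch, with illustrative numbers; the sizes are not from this thread.]

```java
// Hypothetical sketch: estimate how many regions to pre-split before the
// bulk import, from the total bytes of HFiles the MR job wrote.
public class RegionCountEstimate {
    static long regionsNeeded(long totalOutputBytes, long maxRegionBytes) {
        // Ceiling division: enough regions that no region exceeds the max.
        return (totalOutputBytes + maxRegionBytes - 1) / maxRegionBytes;
    }

    public static void main(String[] args) {
        long totalOutput = 250L * 1024 * 1024 * 1024; // e.g. 250 GB of HFiles
        long maxRegion   = 10L * 1024 * 1024 * 1024;  // e.g. 10 GB region max
        System.out.println(regionsNeeded(totalOutput, maxRegion)); // -> 25
    }
}
```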