Dropping a very large table
Hello,

I have a very large HBase table running on 0.90, where large means more than 20K regions with a max region size of 1 GB. The table is legacy and can be dropped, but we aren't sure what impact disabling and dropping a table of that size will have on our cluster.

We are using dropAsync and polling HTable#isEnabled instead of the standard
shell disable command, to avoid a disable timeout like the one described in
https://issues.apache.org/jira/browse/HBASE-3432.
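
For reference, roughly what we are doing is the disable-then-poll-then-drop sequence below. This is only a minimal sketch: the table name is a placeholder, and the exact client calls are an assumption (HBaseAdmin#disableTableAsync may only be available in later 0.9x clients; the static HTable.isTableEnabled check is the old-style API).

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.client.HTable;

public class DropLegacyTable {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HBaseAdmin admin = new HBaseAdmin(conf);
    String table = "legacy_table";               // placeholder table name

    admin.disableTableAsync(table);              // kicks off the disable and returns immediately
    while (HTable.isTableEnabled(conf, table)) { // poll instead of blocking like the shell's disable
      Thread.sleep(5000);                        // re-check every 5 seconds
    }
    admin.deleteTable(table);                    // drop the table once every region is closed
  }
}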
Is there any risk of overwhelming ZooKeeper or the master with region-closed
events during the disable, or would it be comparable to what happens during a
cluster restart when the region servers close out their regions? Additionally,
are there any concerns with deleting that much data from HDFS at once during
the drop?

Thank you in advance,
Michael
--