HBase user mailing list: What is the best practice for when to delete the ZK connection?


Re: What is the best practice for when to delete the ZK connection?
How about using HTablePool - doesn't that work for you?
http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/HTablePool.html

--Suraj
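
For illustration, a minimal sketch of the HTablePool approach suggested above, written against the 0.90-era client API; the table name "my_table", the pool size of 10, and the class name are placeholder assumptions, not anything stated in the thread:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTableInterface;
import org.apache.hadoop.hbase.client.HTablePool;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;

public class PooledScanExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    // One pool for the whole application; it reuses HTable instances
    // (and hence the shared underlying connection) across scans.
    HTablePool pool = new HTablePool(conf, 10);   // 10 = assumed max pool size

    HTableInterface table = pool.getTable("my_table");   // placeholder table name
    try {
      Scan scan = new Scan();
      ResultScanner scanner = table.getScanner(scan);
      try {
        for (Result r : scanner) {
          // process each row here
        }
      } finally {
        scanner.close();
      }
    } finally {
      // Return the table to the pool instead of closing the connection.
      pool.putTable(table);
    }
  }
}

Because the pool hands tables back and forth over the same shared Configuration, individual scans neither create nor delete ZK connections.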

On Tue, Jun 7, 2011 at 2:23 AM, bijieshan <[EMAIL PROTECTED]> wrote:
> Hi,
>
> As we know, the ZK connection can be created as follows:
>
> Configuration newConfig = new Configuration(originalConf);
> HConnection connection = HConnectionManager.getConnection(newConfig);
>
> One HConnection instance corresponds to one Configuration instance. In some scenarios we can share a Configuration instance, and thereby share the HConnection instance as well.
>
> Consider the following scenario:
>
> While the program is running, there are many scan operations, each executed at a random time. There are two schemes:
> (Suppose the program shares one Configuration instance.)
>
> 1. Create an HTable for each scan; after the scan, close the HTable and delete the connection.
> 2. Create an HTable for each scan; after the scan, only close the HTable, so the ZK connection remains.
>
> (Maybe I could share the HTable instance, but that is not what I want to discuss here.)
>
> Which is the better way? Or, what is the recommended practice for deciding when to delete the ZK connection?
>
>
> Thanks!
>
> Jieshan Bean
>
>
>
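
For illustration, a minimal sketch of the two schemes described in the question, using a shared Configuration as the message assumes; the table name and class name are placeholders, and the commented-out deleteConnection call reflects the 0.90-era HConnectionManager API:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HConnectionManager;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;

public class ScanSchemes {

  // Shared Configuration: every HTable built from it reuses the same
  // cached HConnection (and hence the same ZooKeeper session).
  private static final Configuration CONF = HBaseConfiguration.create();

  private static void runScan() throws Exception {
    HTable table = new HTable(CONF, "my_table");   // placeholder table name
    try {
      ResultScanner scanner = table.getScanner(new Scan());
      try {
        for (Result r : scanner) {
          // process each row here
        }
      } finally {
        scanner.close();
      }
    } finally {
      table.close();   // scheme 2 stops here: the ZK connection stays cached
    }
  }

  public static void main(String[] args) throws Exception {
    runScan();
    // Scheme 1 would additionally drop the cached connection after each scan,
    // forcing a new ZK session for the next one:
    // HConnectionManager.deleteConnection(CONF, true);
  }
}

The trade-off in the question is exactly this: scheme 2 keeps one cached ZK session alive for the life of the process, while scheme 1 pays a ZK reconnect on every scan.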