HBase >> mail # user >> HTableFactory::releaseHTableInterface causes ClosedConnectionException?


HTableFactory::releaseHTableInterface causes ClosedConnectionException?
Good day, everyone,

   We're using a slightly modified HTablePool (based on v.0.90.4-cdh3u3,
https://gist.github.com/3834258). The rest of the client library is
v.0.90.6-cdh3u3.

   The mod (amongst other things) calculates the average load on every table
in the pool and removes connections when they're no longer needed:

    /**
     * Removes pooled HTable connections when table load goes down.
     */
    private void resizeTablePool() {
        log.info("[HTablePool] cleaning up unused connections...");
        for (String tableName : activeTableConnectionsCount.keySet()) {
            Integer tableMeanLoad = getTableMeanLoad(tableName);
            LinkedList<HTableInterface> queue = tables.get(tableName);
            synchronized (queue) {
                int tablePoolSize = queue.size();
                if (tablePoolSize > tableMeanLoad && tablePoolSize > coreTablePoolSize) {
                    log.info(String.format(
                            "[HTablePool] for [%s] is too big: resizing from %s to %s...",
                            tableName, tablePoolSize, tableMeanLoad));
                    int poolOversized = tablePoolSize - tableMeanLoad;
                    while (poolOversized > 0) {
                        HTableInterface table = queue.poll();
                        this.tableFactory.releaseHTableInterface(table);
                        poolOversized--;
                    }
                }
            }
        }
    }

 We're observing that, when resizeTablePool() is called, all the
connections (even those that were not in the pool at the moment of
cleanup) suddenly get closed:

2012-10-04 12:49:45,526 ERROR [2010769070] [3583971]
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@5a6597d1closed
org.apache.hadoop.hbase.client.ClosedConnectionException:
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@5a6597d1closed
        at
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getRegionServerWithRetries(HConnectionManager.java:1279)
        at
org.apache.hadoop.hbase.client.HTable.incrementColumnValue(HTable.java:756)

Does anyone have a clue? Could releaseHTableInterface() cause such behavior?
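Our working theory, modeled in plain Java below (no HBase classes; the names SharedConnection and PooledTable are made up for illustration, and the assumption that all pooled HTables share one underlying connection is ours, not verified against the 0.90.x source): if releasing a handle closes the connection that every other handle shares, then even handles that were checked out at cleanup time start failing.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Minimal model of the suspected failure mode: every pooled handle wraps
// the SAME shared connection, so "releasing" one handle by closing it
// tears the connection down for all the others.
class SharedConnection {
    private boolean closed = false;
    void close() { closed = true; }
    boolean isClosed() { return closed; }
}

class PooledTable {
    private final SharedConnection conn;
    PooledTable(SharedConnection conn) { this.conn = conn; }
    // Assumption: releaseHTableInterface() ends up closing the table,
    // which in turn closes the shared connection.
    void close() { conn.close(); }
    void increment() {
        if (conn.isClosed()) {
            // Stands in for ClosedConnectionException.
            throw new IllegalStateException("connection closed");
        }
    }
}

public class PoolSketch {
    public static void main(String[] args) {
        SharedConnection conn = new SharedConnection();
        Deque<PooledTable> pool = new ArrayDeque<PooledTable>();
        pool.add(new PooledTable(conn));
        PooledTable inUse = new PooledTable(conn); // checked out, not in the pool

        // resizeTablePool() shrinks the pool by closing an idle handle...
        pool.poll().close();

        // ...and the handle that was checked out now fails too.
        try {
            inUse.increment();
            System.out.println("increment ok");
        } catch (IllegalStateException e) {
            System.out.println("increment failed: " + e.getMessage());
        }
    }
}
```

If this is what's happening, the fix would presumably be to release handles without closing the shared connection, but we'd like confirmation before changing the factory.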

--
-----

Twitter: twitter.com/remeniuk
Blog: vasilrem.com
Github: github.com/remeniuk
Scala Enthusiasts Belarus: scala.by (twitter.com/scalaby)
StackOverflow: stackoverflow.com/users/354067/vasil-remeniuk