HBase >> mail # user >> HTablePool dead locks/freezing


Varun Sharma 2012-12-04, 04:04
Re: HTablePool dead locks/freezing
Okay - this was a contention issue -
https://issues.apache.org/jira/browse/HBASE-2939 solves the issue - upping
the IPC pool size. Thanks!
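For readers hitting the same wall: HBASE-2939 introduced client-side IPC connection pooling, configurable via `hbase.client.ipc.pool.type` and `hbase.client.ipc.pool.size`. A minimal hbase-site.xml sketch of the change being described (the size value 50 here is illustrative, not from the thread):

```xml
<!-- Client-side IPC connection pooling (added by HBASE-2939).
     Without this, the client multiplexes all traffic to a region
     server over a single connection, which is the monitor the
     jstack below shows threads blocking on. -->
<property>
  <name>hbase.client.ipc.pool.type</name>
  <value>RoundRobin</value>
</property>
<property>
  <name>hbase.client.ipc.pool.size</name>
  <value>50</value> <!-- illustrative; tune to your thread count -->
</property>
```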

On Mon, Dec 3, 2012 at 8:04 PM, Varun Sharma <[EMAIL PROTECTED]> wrote:

> Hi,
>
> I am using hbase 0.94.0 and am using the HTablePool - Reusable type with
> a pool size of 50. I have a lot of threads using the htable pool
> concurrently (~ 3500) - The client side timeout is 5 seconds and the
> threads start okay producing good QPS to the hbase cluster, finally the QPS
> drops close to 0 (I also see some timeouts, not too many though). A jstack
> on this client reveals the following:
>
> "hbase-table-pool2461-thread-1" daemon prio=10 tid=0x000000000408c000
> nid=0x497d waiting for monitor entry [0x00007ff7a2f9d000]
>    java.lang.Thread.State: BLOCKED (on object monitor)
>         at
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getHRegionConnection(HConnectionManager.java:1272)
>         - waiting to lock <0x0000000708ecd5a8> (a java.lang.String)
>         at
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getHRegionConnection(HConnectionManager.java:1240)
>         at
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getHRegionConnection(HConnectionManager.java:1227)
>         at
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$3$1.connect(HConnectionManager.java:1348)
>         at
> org.apache.hadoop.hbase.client.ServerCallable.withoutRetries(ServerCallable.java:209)
>         at
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$3.call(HConnectionManager.java:1351)
>         at
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$3.call(HConnectionManager.java:1339)
>         at
> java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:166)
>         at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
>         at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
>         at java.lang.Thread.run(Thread.java:679)
>
> A large number of hbase pool threads are locked here -
> getHRegionConnection - seems like there is only one of these per region
> server. When I have 3500 threads thrashing this, I start off with good QPS
> and then it keeps falling until it goes to zero with a tonne of these
> htable pool threads blocked on this getHRegionConnection monitor. It drops
> to a state where I think these threads are pretty much deadlocked. Is this a
> known issue - having just one HConnection sounds quite suboptimal - do we
> connect to multiple sockets under the hood?
>
> Thanks
> Varun
>
>
>
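The failure shape in the jstack above - many worker threads all BLOCKED on one object monitor - can be reproduced with plain Java. This is not HBase code, just a minimal sketch of why a single shared synchronized path (one connection per region server) serializes thousands of callers:

```java
// Illustration only (not HBase code): many threads funneling through one
// synchronized method serialize on a single monitor - the same shape as the
// jstack above, where every pool thread waits on the lone HConnection's
// getHRegionConnection lock.
public class MonitorContention {
    private static int calls = 0;

    // Single shared monitor, analogous to one connection per region server.
    private static synchronized void sharedConnection() {
        calls++;
    }

    // Spawns 'threads' workers that each hit the shared monitor
    // 'callsPerThread' times; returns the total completed calls.
    public static int runWorkers(int threads, int callsPerThread)
            throws InterruptedException {
        synchronized (MonitorContention.class) {
            calls = 0;
        }
        Thread[] workers = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            workers[i] = new Thread(() -> {
                for (int j = 0; j < callsPerThread; j++) {
                    sharedConnection();
                }
            });
            workers[i].start();
        }
        for (Thread t : workers) {
            t.join();
        }
        return calls;
    }

    public static void main(String[] args) throws InterruptedException {
        // All 50 threads take turns on one lock; with 3500 threads (as in
        // the report above) the queueing alone can drive QPS toward zero.
        System.out.println("completed calls: " + runWorkers(50, 1000));
    }
}
```

With a pool of connections (the HBASE-2939 fix), callers spread across several monitors instead of queueing on one.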