HBase user mailing list: How to manage retry failures in the HBase client


Re: How to manage retry failures in the HBase client
Have you looked at
http://hbase.apache.org/book.html#hbase_default_configurations, where
hbase.client.retries.number and hbase.client.pause are explained?
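For example, both properties can be set on the client-side Configuration
before the table handle is created. A minimal sketch, assuming the
0.94-era HTable client API and a placeholder table name "mytable":

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HTable;

    Configuration conf = HBaseConfiguration.create();
    // Maximum number of retries for retryable client operations.
    conf.setInt("hbase.client.retries.number", 3);
    // Base pause, in milliseconds, between retries.
    conf.setLong("hbase.client.pause", 1000);
    HTable table = new HTable(conf, "mytable"); // "mytable" is a placeholder

Lowering the retry count makes an overloaded call fail fast instead of
piling up retries against a struggling cluster.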

Cheers
On Tue, Sep 17, 2013 at 10:34 AM, Tom Brown <[EMAIL PROTECTED]> wrote:

> I have a region-server coprocessor that scans its portion of a table based
> on a request and summarizes the results (designed this way to reduce
> network data transfer).
>
> In certain circumstances, the HBase cluster gets a bit overloaded, and a
> query will take too long. In that instance, the HBase client will retry the
> query (up to N times). When this happens, any other running queries will
> often time out and generate retries as well. This results in the cluster
> becoming unresponsive, until I'm able to kill the clients that are retrying
> their requests.
>
> I have found the "hbase.client.retries.number" property, but it doesn't
> claim to set the number of retries; rather, it seems to set the amount of
> time between retries. Is there a different property I can use to set the
> maximum number of retries? Or is this property mis-documented?
>
> Thanks in advance!
>
> --Tom
>