Re: HBase client slows down
I am using HTableInterface from a pool, but I don't see any setAutoFlush
method on it. I am using the 0.92.1 jar.
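In 0.92, HTableInterface does not expose setAutoFlush; the method lives on the concrete HTable class, so the usual workaround is to cast. This is only a sketch: it assumes a running cluster, a table named "timeseries", and that the pooled handle really is an HTable underneath (true for HTablePool in 0.92); the buffer size is a hypothetical value.

```java
// Sketch only: needs a live HBase 0.92 cluster and an HTablePool instance.
HTableInterface t = pool.getTable("timeseries");
if (t instanceof HTable) {
    HTable ht = (HTable) t;
    ht.setAutoFlush(false);                  // buffer Puts client-side
    ht.setWriteBufferSize(2 * 1024 * 1024);  // hypothetical 2 MB buffer
}
// ... issue Puts ...
t.flushCommits();  // push any buffered writes
t.close();
```

With auto-flush off, Puts are batched into larger RPCs instead of one round trip per write, which usually cuts client-side latency substantially.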

Also, how can I tell if the RS is getting overloaded? I looked at the UI and I
don't see anything obvious:

requestsPerSecond=0, numberOfOnlineRegions=1, numberOfStores=1,
numberOfStorefiles=1, storefileIndexSizeMB=0, rootIndexSizeKB=1,
totalStaticIndexSizeKB=0, totalStaticBloomSizeKB=0, memstoreSizeMB=27,
readRequestsCount=126, writeRequestsCount=96157, compactionQueueSize=0,
flushQueueSize=0, usedHeapMB=44, maxHeapMB=3976, blockCacheSizeMB=8.79,
blockCacheFreeMB=985.34, blockCacheCount=11, blockCacheHitCount=23,
blockCacheMissCount=28, blockCacheEvictedCount=0, blockCacheHitRatio=45%,
blockCacheHitCachingRatio=67%, hdfsBlocksLocalityIndex=100

On Tue, Oct 9, 2012 at 10:32 AM, Doug Meil <[EMAIL PROTECTED]> wrote:

> It's one of those "it depends" answers.
> See this first…
> http://hbase.apache.org/book.html#perf.writing
> … Additionally, one thing to understand is where you are writing data.
> Either keep track of the requests per RS over the period (e.g., the web
> interface), or you can also track it on the client side with...
> http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/HTable.html#getRegionLocation%28byte[],%20boolean%29
> … to know if you are continually hitting the same RS or spreading the load.
> On 10/9/12 1:27 PM, "Mohit Anchlia" <[EMAIL PROTECTED]> wrote:
> >I just have 5 stress-client threads writing time-series data. What I see
> >is that after a few minutes the HBase client slows down and writes start
> >to take 4 seconds. Once I kill the client and restart it, it stays at a
> >sustainable rate for about 2 minutes and then slows down again. I am
> >wondering if there is something I should be doing on the HBase client
> >side? All the requests are similar in terms of data.
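The metrics quoted earlier show a single online region absorbing all ~96k writes, which with monotonically increasing time-series row keys typically means every Put hits the same region server. One common mitigation, sketched here as an assumption rather than anything prescribed in this thread (the bucket count, key format, and class name are all hypothetical), is to salt the row key so consecutive timestamps spread across several region ranges:

```java
import java.util.HashMap;
import java.util.Map;

public class SaltedKeys {
    // Hypothetical bucket count; tune to roughly the number of region servers.
    static final int BUCKETS = 8;

    // Prefix the key with a stable one-byte salt derived from its hash,
    // so consecutive time-series keys land in different key ranges.
    static String salt(String rowKey) {
        int bucket = Math.abs(rowKey.hashCode() % BUCKETS);
        return bucket + "-" + rowKey;
    }

    public static void main(String[] args) {
        // Count how many distinct buckets 1000 sequential keys fall into.
        Map<String, Integer> perBucket = new HashMap<>();
        for (long ts = 0; ts < 1000; ts++) {
            String salted = salt("metric-" + ts);
            String bucket = salted.substring(0, salted.indexOf('-'));
            perBucket.merge(bucket, 1, Integer::sum);
        }
        System.out.println("buckets used: " + perBucket.size());
    }
}
```

The trade-off: reads and scans must now fan out across all buckets, so this exchanges scan simplicity for write spread.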