HBase user mailing list: HBase Tuning


Ricardo Vilaça 2012-10-10, 12:51
Stack 2012-10-11, 03:24
Mohit Anchlia 2012-10-11, 04:25
Stack 2012-11-21, 05:39
Ricardo Vilaça 2012-10-12, 10:56
Vincent Barat 2012-11-20, 18:54
Re: HBase Tuning
Forget about this: it does not help

On 20/11/12 19:54, Vincent Barat wrote:
> Hi,
>
> It seems there is a potential contention point in the HBase client
> code (a useless synchronized method).
> You may try this patch:
> https://issues.apache.org/jira/browse/HBASE-7069
>
> I have been facing similar issues on my production cluster since I
> upgraded to HBase 0.92. I will test this patch tomorrow...
> More info to follow.
>
> Cheers
>
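A toy, self-contained sketch of the failure mode described above: a
synchronized method on one shared object serializes every client
thread. This is only an illustration of the pattern HBASE-7069
targets, not HBase code; the 1 ms "lookup" and the 400-thread count
are stand-ins for the scenario in this thread.

// Toy sketch (not HBase code): one shared lock serializes all threads.
public class ContentionDemo {
  private static final Object SHARED_LOCK = new Object();

  // Stand-in for a per-request metadata lookup that takes a global lock.
  static void locateRegion() {
    synchronized (SHARED_LOCK) {
      try {
        Thread.sleep(1); // pretend the lookup takes ~1 ms
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
      }
    }
  }

  public static void main(String[] args) throws InterruptedException {
    int threads = 400; // roughly the per-node client count in this thread
    Thread[] workers = new Thread[threads];
    long start = System.nanoTime();
    for (int i = 0; i < threads; i++) {
      workers[i] = new Thread(new Runnable() {
        public void run() {
          locateRegion();
        }
      });
      workers[i].start();
    }
    for (Thread t : workers) {
      t.join();
    }
    // With one shared lock, total time grows roughly linearly with the
    // thread count, even though each "lookup" takes only ~1 ms on its own.
    System.out.println(
        threads + " lookups took "
            + (System.nanoTime() - start) / 1000000 + " ms");
  }
}

With one shared lock the total time grows roughly linearly with the
thread count, which matches the symptom of latency rising while the
regionservers sit idle.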
> On 12/10/12 12:56, Ricardo Vilaça wrote:
>> Hi,
>>
>> On 11/10/12 04:24, Stack wrote:
>>> On Wed, Oct 10, 2012 at 5:51 AM, Ricardo Vilaça
>>> <[EMAIL PROTECTED]> wrote:
>>>> However, when adding an additional client node, also with 400
>>>> clients, the latency increases 3 times, but the RegionServers
>>>> remain idle more than 80% of the time. I had tried different
>>>> values for hbase.regionserver.handler.count and also for the
>>>> hbase.client.ipc.pool size and type, but without any improvement.
>>>>
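For reference, these are the two knobs being discussed. A minimal
client-side sketch, assuming the 0.92-era property names
(hbase.client.ipc.pool.type and hbase.client.ipc.pool.size come from
HBASE-2939; hbase.regionserver.handler.count is read by the servers
and belongs in the regionservers' hbase-site.xml):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class TuningConfig {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // Client side: pool connections per regionserver instead of funneling
    // every thread through a single socket. Valid pool types in 0.92 should
    // be "Reusable", "RoundRobin" and "ThreadLocal" (worth verifying
    // against your exact release).
    conf.set("hbase.client.ipc.pool.type", "RoundRobin");
    conf.setInt("hbase.client.ipc.pool.size", 10);
    // Server side: the RPC handler count discussed above is read by the
    // regionservers, so it goes in their hbase-site.xml, e.g.:
    //   <property>
    //     <name>hbase.regionserver.handler.count</name>
    //     <value>100</value>
    //   </property>
    System.out.println("pool type = "
        + conf.get("hbase.client.ipc.pool.type"));
  }
}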
>>> I was going to suggest that it sounded like all handlers are
>>> occupied... but it sounds like you tried upping them.
>> Yes, I had already tried increasing it to 200, but without any
>> improvement in application latency. However, the output of the
>> active IPC handlers in the web interface is strange. For the region
>> servers, at any given instant I can see at most 4 active IPC
>> handlers, yet the state of all the other IPC handlers shows them
>> waiting for 0 seconds. On the master the IPC handlers are also
>> almost all in the waiting state, but for a few seconds.
>>> Is this going from one client node (serving 400 clients) to two
>>> client nodes (serving 800 clients)?
>> Yes, the huge increase in latency happens when going from one client
>> node to two client nodes. Increasing the number of clients on a
>> single node also adds latency, but only a small amount.
>>> Where are you measuring from? The application side? Can you figure
>>> out whether we are binding up in HBase or in the client node?
>> These measurements are from the application side. As the huge
>> increase in latency happens when increasing the number of client
>> nodes, I suspect the binding up is in HBase, maybe due to some
>> incorrect configuration.
>>
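One way to separate client-node queueing from server-side time is to
time the same operation from inside the application threads and
compare it against the server-side RPC metrics. A rough sketch; the
table name, row keys, and counts are placeholders:

import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.util.Bytes;

public class LatencyProbe {
  public static void main(String[] args) throws Exception {
    final Configuration conf = HBaseConfiguration.create();
    final int threads = 400;       // per-node client count from this thread
    final int opsPerThread = 1000; // placeholder
    final AtomicLong totalMicros = new AtomicLong();
    final AtomicLong ops = new AtomicLong();
    ExecutorService pool = Executors.newFixedThreadPool(threads);
    for (int i = 0; i < threads; i++) {
      pool.submit(new Callable<Void>() {
        public Void call() throws Exception {
          // One HTable per thread: HTable itself is not thread-safe.
          HTable table = new HTable(conf, "mytable"); // placeholder table
          try {
            for (int j = 0; j < opsPerThread; j++) {
              long t0 = System.nanoTime();
              table.get(new Get(Bytes.toBytes("row-" + j))); // placeholders
              totalMicros.addAndGet((System.nanoTime() - t0) / 1000);
              ops.incrementAndGet();
            }
          } finally {
            table.close();
          }
          return null;
        }
      });
    }
    pool.shutdown();
    pool.awaitTermination(1, TimeUnit.HOURS);
    // If this number tracks the server-side RPC time, the client node is
    // fine; if it is much larger, the time is being lost before the wire.
    System.out.println("avg latency (us): " + totalMicros.get() / ops.get());
  }
}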
>>> What does a client node look like? Is it something hosting an
>>> HBase client? A webserver or something?
>> Yes, the client node is hosting an HBase client.
>>>> Is there any configuration parameter that can improve the latency
>>>> with several concurrent threads and more than one HBase client
>>>> node? And/or which JMX parameters should I monitor on the
>>>> RegionServers to check what may be causing this, and how could I
>>>> achieve better utilization of the CPU at the RegionServers?
>>>>
>>> It sounds like all your data is memory-resident, given its size and
>>> the lack of iowait. Is that so? Studying the regionserver metrics,
>>> are they fairly constant across the addition of the new client node?
>> Yes, all data is memory-resident. As far as I can see, the
>> regionserver metrics are fairly constant.
>>
>> Thanks,
>>
>
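On the earlier question of which JMX parameters to monitor on the
RegionServers: a minimal sketch that dumps every attribute of the
regionserver statistics bean. The bean name
(hadoop:service=RegionServer,name=RegionServerStatistics) and the JMX
port (10102) are assumptions for a 0.92-era deployment; the /jmx page
on the regionserver web UI, if your release has it, shows what is
actually exported.

import javax.management.MBeanAttributeInfo;
import javax.management.MBeanInfo;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class RsJmxDump {
  public static void main(String[] args) throws Exception {
    // Assumed host/port: the regionserver must be started with
    // com.sun.management.jmxremote enabled on this port.
    JMXServiceURL url = new JMXServiceURL(
        "service:jmx:rmi:///jndi/rmi://regionserver-host:10102/jmxrmi");
    JMXConnector jmxc = JMXConnectorFactory.connect(url, null);
    try {
      MBeanServerConnection mbsc = jmxc.getMBeanServerConnection();
      // Bean name as exposed by the 0.92-era hadoop metrics framework
      // (an assumption; confirm against what your regionserver exports).
      ObjectName rsStats = new ObjectName(
          "hadoop:service=RegionServer,name=RegionServerStatistics");
      MBeanInfo info = mbsc.getMBeanInfo(rsStats);
      for (MBeanAttributeInfo attr : info.getAttributes()) {
        try {
          System.out.println(attr.getName() + " = "
              + mbsc.getAttribute(rsStats, attr.getName()));
        } catch (Exception e) {
          // some attributes may not be readable; skip them
        }
      }
    } finally {
      jmxc.close();
    }
  }
}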
Stack 2012-11-21, 05:47