Ricardo Vilaça 2012-10-10, 12:51
Stack 2012-10-11, 03:24
Mohit Anchlia 2012-10-11, 04:25
Stack 2012-11-21, 05:39
Ricardo Vilaça 2012-10-12, 10:56
Vincent Barat 2012-11-20, 18:54
Vincent Barat 2012-11-21, 18:37
On Fri, Oct 12, 2012 at 3:56 AM, Ricardo Vilaça <[EMAIL PROTECTED]> wrote:
> Yes, I had already tried to increase it to 200 but saw no improvement
> in the application latency. However, the output of the active IPC
> handlers, in the Web interface,
> is strange. For region servers I can see at a given instant at most 4
> IPC handlers active, but if I
> look at the state of all the other IPC handlers, they have been waiting
> for 0 seconds.
Where are they waiting? Want to take a thread dump while you are seeing
the phenomenon, put it up on pastebin or something, and drop a link in
here? Are all handlers doing work?
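One way to capture the dumps Stack asks for (a sketch: it assumes `jps`/`jstack` from the JDK are on the PATH and that the RegionServer runs under the usual `HRegionServer` main class):

```shell
# Grab a few thread dumps from the RegionServer JVM while the stall is
# visible, so the waiting handlers can be inspected offline.
RS_PID=$(jps 2>/dev/null | awk '/HRegionServer/ {print $1}')
if [ -z "$RS_PID" ]; then
  echo "no HRegionServer found on this host"
else
  for i in 1 2 3; do
    # One dump every few seconds; diffing them shows which threads move.
    jstack "$RS_PID" > "rs-threads-$i.txt"
    sleep 5
  done
fi
```

Taking several dumps a few seconds apart makes it easy to tell a thread that is genuinely stuck from one that is merely idle between requests.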
> In the master the IPC handlers are also almost all in the waiting state
> but for a few seconds.
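For reference, the handler count being discussed is set via `hbase.regionserver.handler.count` in `hbase-site.xml` (a sketch; 200 is the value tried in this thread, and the default in that era was much lower):

```xml
<!-- hbase-site.xml: number of RPC handler threads per RegionServer.
     200 is the value tried in this thread. -->
<property>
  <name>hbase.regionserver.handler.count</name>
  <value>200</value>
</property>
```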
>> Is this going from one client node (serving 400 clients) to two client
>> nodes (serving 800 clients)?
> Yes, the huge increase in latency is when going from one client node to
> two client nodes. However, increasing the number of clients on a single
> node also adds latency, but only a small increase.
>> Where are you measuring from? Application side? Can you figure if we
>> are binding up in HBase or in the client node?
> These measurements are from the application side. As the huge increase
> in latency is happening when increasing the number of clients, I
> suspect that the binding up is in
> HBase, maybe due to some incorrect configuration.
What happens if you run 10 client instances, each of ten threads, doing your task list?
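The fan-out Stack suggests could be driven from a wrapper like the following sketch, where `sleep 1` is a hypothetical placeholder for launching one multi-threaded HBase client process:

```shell
# Launch N independent client instances in parallel; each instance
# would internally run its own worker threads against HBase.
INSTANCES=10
for i in $(seq 1 "$INSTANCES"); do
  # Replace "sleep 1" with the real client invocation.
  ( sleep 1; echo "instance $i finished" ) &
done
wait   # block until every instance has completed
```

Splitting the same total thread count across more processes helps tell whether the bottleneck is per-client (e.g. one `HConnection`'s socket) or server-side.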
>> What does a client node look like? It is something hosting an hbase
>> client? A webserver or something?
> Yes, the client node is hosting an HBase client.
>>> Is there any configuration parameter that can improve the latency with
>>> several concurrent threads and more than one HBase client node
>>> and/or which JMX parameters should I monitor on RegionServers to check
>>> what may be causing this and how could I achieve better utilization of CPU
>>> at RegionServers?
>> It sounds like all your data is memory resident given its size and the
>> lack of iowait. Is that so? Studying the regionserver metrics, are
>> they fairly constant across the addition of the new client node?
> Yes, all data is memory resident. As far as I can see, the regionserver
> metrics are
> fairly constant.
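Those regionserver metrics can also be spot-checked from a terminal rather than the web UI — a sketch, assuming the 0.94-era default info port 60030 and that the build exposes the Hadoop-style `/jmx` JSON servlet (adjust host and port for your cluster):

```shell
# Pull request-related metrics from the RegionServer's /jmx endpoint;
# fall back to a message if the server is not reachable from here.
RS_JMX="http://localhost:60030/jmx"
metrics=$(curl -s --max-time 2 "$RS_JMX" | grep -i request) \
  || metrics="regionserver not reachable at $RS_JMX"
echo "$metrics"
```

Sampling this before and after adding the second client node makes "fairly constant" easy to verify with numbers.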
You have cluster diagrams? Is the amount of traffic in and out of the
box constant when you up the number of client instances from 400 to
800? You are not doing something silly like being network bound?
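A rough way to rule out being network bound is to watch per-interface throughput on the client and regionserver boxes while the load runs — a Linux-only sketch reading `/proc/net/dev` directly:

```shell
# Sample receive/transmit byte counters twice, one second apart, and
# print the per-interface rate. rx bytes is the 1st counter after the
# interface name; tx bytes is the 9th.
snap() {
  awk -F: 'NR>2 {iface=$1; gsub(/ /,"",iface);
                 split($2,f," "); print iface, f[1], f[9]}' /proc/net/dev
}
snap > /tmp/net1; sleep 1; snap > /tmp/net2
paste /tmp/net1 /tmp/net2 | \
  awk '{printf "%-10s rx %d B/s  tx %d B/s\n", $1, $5-$2, $6-$3}'
```

If the rate sits near the NIC's line speed when the second client node joins, the latency increase is explained without looking any further at HBase.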