HBase dev mailing list: HBase read performance and HBase client


Vladimir Rodionov 2013-07-30, 18:23
Ted Yu 2013-07-30, 18:25
Stack 2013-07-30, 19:32
lars hofhansl 2013-07-30, 20:14
Jean-Daniel Cryans 2013-07-30, 19:06
Vladimir Rodionov 2013-07-30, 19:16
Jean-Daniel Cryans 2013-07-30, 19:31
Vladimir Rodionov 2013-07-30, 20:15
Jean-Daniel Cryans 2013-07-30, 20:35
Vladimir Rodionov 2013-07-30, 20:52
Vladimir Rodionov 2013-07-30, 20:58
Ted Yu 2013-07-30, 21:01
Vladimir Rodionov 2013-07-30, 20:17
Vladimir Rodionov 2013-07-30, 20:22
Vladimir Rodionov 2013-07-30, 20:30
Re: HBase read performance and HBase client
With Nagle's you'd see something around 40ms. You are not saying 0.8ms RTT is bad, right? Are you seeing ~40ms latencies?

This thread has gotten confusing.

I would try these:
* one Configuration for all tables. Or even use a single HConnection/Threadpool and use the HTable(byte[], HConnection, ExecutorService) constructor (see the sketch after this list)
* disable Nagle's: set both ipc.server.tcpnodelay and hbase.ipc.client.tcpnodelay to true in hbase-site.xml (both client *and* server)
* increase hbase.client.ipc.pool.size in client's hbase-site.xml
* enable short circuit reads (details depend on exact version of Hadoop). Google will help :)
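
Roughly, the first three items could be wired up like this; a minimal sketch assuming an HBase 0.94-era client API (HConnectionManager.createConnection and the HTable(byte[], HConnection, ExecutorService) constructor), with the table name, row key, and pool sizes made up:

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.HConnection;
    import org.apache.hadoop.hbase.client.HConnectionManager;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.util.Bytes;

    public class SharedConnectionExample {
      public static void main(String[] args) throws Exception {
        // One Configuration for the whole client process.
        Configuration conf = HBaseConfiguration.create();
        // Disable Nagle's on the client side; the server side needs
        // ipc.server.tcpnodelay=true in its own hbase-site.xml.
        conf.setBoolean("hbase.ipc.client.tcpnodelay", true);
        // Allow more than one socket per region server from this client.
        conf.setInt("hbase.client.ipc.pool.size", 10);

        // One HConnection and one thread pool shared by all HTable instances.
        HConnection connection = HConnectionManager.createConnection(conf);
        ExecutorService pool = Executors.newFixedThreadPool(60);
        try {
          // Each worker thread builds its own lightweight HTable on top of
          // the shared connection and pool instead of its own Configuration.
          HTable table = new HTable(Bytes.toBytes("test_table"), connection, pool);
          Result r = table.get(new Get(Bytes.toBytes("row-0")));  // assumes row-0 exists
          System.out.println("value: " + Bytes.toStringBinary(r.value()));
          table.close();
        } finally {
          pool.shutdown();
          connection.close();
        }
      }
    }

The two tcpnodelay settings can equally go into the client's hbase-site.xml; the server half (ipc.server.tcpnodelay) has to be set on the region servers either way.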

-- Lars
----- Original Message -----
From: Vladimir Rodionov <[EMAIL PROTECTED]>
To: [EMAIL PROTECTED]
Cc:
Sent: Tuesday, July 30, 2013 1:30 PM
Subject: Re: HBase read perfomnance and HBase client

Does this hbase.ipc.client.tcpnodelay (default: false) explain the poor
single-thread performance and high latency (0.8ms in a local network)?
On Tue, Jul 30, 2013 at 1:22 PM, Vladimir Rodionov
<[EMAIL PROTECTED]> wrote:

> One more observation: one Configuration instance per HTable gives a 50%
> boost compared to a single Configuration object for all HTables - from
> 20K to 30K
>
>
> On Tue, Jul 30, 2013 at 1:17 PM, Vladimir Rodionov <[EMAIL PROTECTED]
> > wrote:
>
>> This thread dump was taken while the client was sending 60 requests in
>> parallel (at least, in theory). There are 50 server handler threads.
>>
>>
>> On Tue, Jul 30, 2013 at 1:15 PM, Vladimir Rodionov <
>> [EMAIL PROTECTED]> wrote:
>>
>>> Sure, here it is:
>>>
>>> http://pastebin.com/8TjyrKRT
>>>
>>> Isn't epoll used not only to read/write HDFS but also to connect/listen
>>> to clients?
>>>
>>>
>>> On Tue, Jul 30, 2013 at 12:31 PM, Jean-Daniel Cryans <
>>> [EMAIL PROTECTED]> wrote:
>>>
>>>> Can you show us what the thread dump looks like when the threads are
>>>> BLOCKED? There aren't that many locks on the read path when reading
>>>> out of the block cache, and epoll would only happen if you need to hit
>>>> HDFS, which you're saying is not happening.
>>>>
>>>> J-D
>>>>
>>>> On Tue, Jul 30, 2013 at 12:16 PM, Vladimir Rodionov
>>>> <[EMAIL PROTECTED]> wrote:
>>>> > I am hitting data in the block cache, of course. The data set is small
>>>> > enough to fit comfortably into the block cache, and all requests are
>>>> > directed to the same Region to guarantee single-RS testing.
>>>> >
>>>> > To Ted:
>>>> >
>>>> > Yes, it's CDH 4.3. What's the difference between 94.10 and 94.6 with
>>>> > respect to read performance?
>>>> >
>>>> >
>>>> > On Tue, Jul 30, 2013 at 12:06 PM, Jean-Daniel Cryans <
>>>> [EMAIL PROTECTED]> wrote:
>>>> >
>>>> >> That's a tough one.
>>>> >>
>>>> >> One thing that comes to mind is socket reuse. It used to come up
>>>> >> more often, but this is an issue that people hit when doing loads of
>>>> >> random reads. Try enabling tcp_tw_recycle but I'm not guaranteeing
>>>> >> anything :)
>>>> >>
>>>> >> Also if you _just_ want to saturate something, be it CPU or network,
>>>> >> wouldn't it be better to hit data only in the block cache? This way it
>>>> >> has the lowest overhead?
>>>> >>
>>>> >> Last thing I wanted to mention is that yes, the client doesn't scale
>>>> >> very well. I would suggest you give the asynchbase client a run.
>>>> >>
>>>> >> J-D
>>>> >>
>>>> >> On Tue, Jul 30, 2013 at 11:23 AM, Vladimir Rodionov
>>>> >> <[EMAIL PROTECTED]> wrote:
>>>> >> > I have been doing quite extensive testing of different read
>>>> >> > scenarios:
>>>> >> >
>>>> >> > 1. blockcache disabled/enabled
>>>> >> > 2. data is local/remote (no good hdfs locality)
>>>> >> >
>>>> >> > and it turned out that I cannot saturate 1 RS using one client host
>>>> >> > (comparable in CPU power and RAM):
>>>> >> >
>>>> >> > I am running a client app with 60 read threads active (with
>>>> >> > multi-get) that is going to one particular RS, and this RS's load is
>>>> >> > 100-150% (out of 3200% available) - it means that
>
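
As a reference point for J-D's suggestion above to give the asynchbase client a run, a single blocking get with it might look roughly like this; a minimal sketch, with the ZooKeeper quorum, table name, and row key made up:

    import java.util.ArrayList;

    import org.hbase.async.GetRequest;
    import org.hbase.async.HBaseClient;
    import org.hbase.async.KeyValue;

    public class AsyncHBaseExample {
      public static void main(String[] args) throws Exception {
        // A single HBaseClient is shared by the whole application.
        final HBaseClient client = new HBaseClient("zkhost:2181");
        try {
          GetRequest get = new GetRequest("test_table", "row-0");
          // get() returns a Deferred; join() blocks here only to keep the
          // sketch short.
          ArrayList<KeyValue> row = client.get(get).join();
          for (KeyValue kv : row) {
            System.out.println(new String(kv.qualifier()) + " = "
                + new String(kv.value()));
          }
        } finally {
          client.shutdown().join();  // flush pending RPCs and release sockets
        }
      }
    }

In a real load test you would keep the one HBaseClient and rely on Deferred callbacks instead of join(), since the point of asynchbase is to avoid tying up a client thread per outstanding request.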
Vladimir Rodionov 2013-07-30, 21:01
Vladimir Rodionov 2013-08-01, 02:27
lars hofhansl 2013-08-01, 04:33
Vladimir Rodionov 2013-08-01, 04:57
lars hofhansl 2013-08-01, 06:15
Varun Sharma 2013-08-01, 06:37
Vladimir Rodionov 2013-08-01, 16:24
Ted Yu 2013-08-01, 16:27
Vladimir Rodionov 2013-08-01, 17:11
Michael Segel 2013-08-01, 17:27
Vladimir Rodionov 2013-08-01, 18:10
Michael Segel 2013-08-01, 19:10
Vladimir Rodionov 2013-08-01, 20:25