search-hadoop.com
HBase >> mail # user >> Speeding up HBase read response


Thread:
ijanitran (2012-04-06, 15:17)
Michael Segel (2012-04-06, 16:25)
Jack Levin (2012-04-06, 17:14)
ijanitran (2012-04-09, 09:29)
Re: Speeding up HBase read response
Yes, from %util you can see that your disks are running at pretty much
100%, which means you can't push them any faster. So the solution is to
add more disks, add faster disks, or add more nodes with disks. This
type of overload is not related to HBase, but rather to your hardware
setup.

-Jack
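For reference, a quick way to spot this kind of saturation in a capture like the one below is to count sampling intervals where %util stays high. A minimal sketch, not from the original thread; the 90% threshold and the `xvd` device-name pattern are arbitrary assumptions:

```shell
# Sample extended device stats once per second for a minute and count the
# intervals where %util (the last iostat column) exceeds 90 -- a sustained
# high count suggests the disk, not HBase, is the bottleneck.
iostat -xdm 1 60 | awk '/^xvd/ { total++; if ($NF + 0 > 90) busy++ }
    END { printf "saturated intervals: %d/%d\n", busy, total }'
```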

On Mon, Apr 9, 2012 at 2:29 AM, ijanitran <[EMAIL PROTECTED]> wrote:
>
> Hi, results of iostat are pretty much very similar on all nodes:
>
> Device:         rrqm/s   wrqm/s     r/s     w/s    rMB/s    wMB/s avgrq-sz avgqu-sz   await  svctm  %util
> xvdap1            0.00     0.00  294.00    0.00     9.27     0.00    64.54    21.97   75.44   3.40 100.10
> xvdap1            0.00     4.00  286.00    8.00     9.11     0.27    65.33     7.16   25.32   2.88  84.70
> xvdap1            0.00     0.00  283.00    0.00     8.29     0.00    59.99    10.31   35.43   2.97  84.10
> xvdap1            0.00     0.00  320.00    0.00     9.12     0.00    58.38    12.32   39.56   2.79  89.40
> xvdap1            0.00     0.00  336.63    0.00     9.18     0.00    55.84    10.67   31.42   2.78  93.47
> xvdap1            0.00     0.00  312.00    0.00    10.00     0.00    65.62    11.07   35.49   2.91  90.70
> xvdap1            0.00     0.00  356.00    0.00    10.72     0.00    61.66     9.38   26.63   2.57  91.40
> xvdap1            0.00     0.00  258.00    0.00     8.20     0.00    65.05    13.37   51.24   3.64  93.90
> xvdap1            0.00     0.00  246.00    0.00     7.31     0.00    60.88     5.87   24.53   3.14  77.30
> xvdap1            0.00     2.00  297.00    3.00     9.11     0.02    62.29    13.02   42.40   3.12  93.60
> xvdap1            0.00     0.00  292.00    0.00     9.60     0.00    67.32    11.30   39.51   3.36  98.00
> xvdap1            0.00     4.00  261.00    8.00     7.84     0.27    61.74    16.07   55.72   3.39  91.30
>
>
> Jack Levin wrote:
>>
>> Please email iostat -xdm 1, run for one minute during load on each node
>> --
>> Sent from my Android phone with K-9 Mail. Please excuse my brevity.
>>
>> ijanitran <[EMAIL PROTECTED]> wrote:
>>
>>
>> I have a 4-node HBase v0.90.4-cdh3u3 cluster deployed on Amazon XLarge
>> instances (16GB RAM, 4 CPU cores) with an 8GB -Xmx heap allocated for the
>> HRegionServers and 2GB for the datanodes. HMaster/ZK/Namenode run on a
>> separate XLarge instance. The target dataset is 100 million records (each
>> record is 10 fields of 100 bytes). Benchmarking is performed concurrently
>> from 100 parallel threads.
>>
>> I'm confused by the read latency I got, compared to what the YCSB team
>> achieved and showed in their YCSB paper. They achieved a throughput of up
>> to 7000 ops/sec with a latency of 15 ms (page 10, read latency chart). I
>> can't get throughput higher than 2000 ops/sec on a 90% reads / 10% writes
>> workload. Writes are really fast with auto commit disabled (response
>> within a few ms),
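As a sanity check on the numbers quoted above (not part of the original thread): in a closed-loop benchmark, Little's law gives throughput ≈ threads / per-op latency, so 2000 ops/sec from 100 threads implies roughly 50 ms per read, which lines up with the await times in the iostat capture rather than with YCSB's 15 ms:

```python
# Little's law sketch: in a closed-loop benchmark each thread has at most one
# request in flight, so throughput ~= threads / per-op latency.
threads = 100
observed_ops_per_sec = 2000.0           # figure reported in the thread
implied_latency_s = threads / observed_ops_per_sec
print(implied_latency_s * 1000)         # 50.0 ms per read

ycsb_latency_s = 0.015                  # 15 ms from the YCSB paper
print(threads / ycsb_latency_s)         # ~6667 ops/sec ceiling at that latency
```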
Jeff Whiting (2012-04-10, 18:03)
Andrew Purtell (2012-04-10, 18:14)
Otis Gospodnetic (2012-04-11, 21:31)
Andrew Purtell (2012-04-12, 05:40)
Michael Segel (2012-04-12, 06:04)
Andrew Purtell (2012-04-12, 06:14)
Michael Segel (2012-04-12, 06:21)