Re: Speeding up HBase read response
Hi Andy,

This email must have caught the attention of a number of people...
You mention "Linux AMI (2012.03.1)", but which AMI is that?  Is this some specific AMI prepared by Amazon?  Or some AMI that somebody like Cloudera prepared?  Or are you saying it's just "some Linux" AMI that somebody built on 2012-03-01 and that you found in AWS?

Could you please share the outputs of:

$ cat /etc/*release
$ uname -a

$ df -T

Also, could it be that your old EC2 instance was unlucky and had a very noisy neighbour, while the new EC2 instance does not?  Not sure how one could run tests to get around this - perhaps by terminating the instance and restarting it a few times in order to get it hosted on different physical hosts?
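One way to test the noisy-neighbour theory (assuming an EBS-backed instance; the instance ID below is just a placeholder) would be to stop and then start the instance so it comes back up on different hardware, and rerun the same workload, e.g. with the AWS command-line tools:

$ aws ec2 stop-instances --instance-ids i-1234abcd    # stopping releases the underlying physical host
$ aws ec2 start-instances --instance-ids i-1234abcd   # starting typically places the instance on a different host

(A plain reboot keeps the instance on the same hardware, so it wouldn't rule this out.)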

Thanks,
Otis 
----
Performance Monitoring SaaS for HBase - http://sematext.com/spm/hbase-performance-monitoring/index.html

>________________________________
> From: Andrew Purtell <[EMAIL PROTECTED]>
>To: "[EMAIL PROTECTED]" <[EMAIL PROTECTED]>
>Cc: Jack Levin <[EMAIL PROTECTED]>; "[EMAIL PROTECTED]" <[EMAIL PROTECTED]>
>Sent: Tuesday, April 10, 2012 2:14 PM
>Subject: Re: Speeding up HBase read response
>
>What AMI are you using as your base?
>
>I recently started using the new Linux AMI (2012.03.1) and noticed what looks like significant improvement over what I had been using before (2011.02 IIRC). I ran four simple tests repeated three times with FIO: a read bandwidth test, a write bandwidth test, a read IOPS test, and a write IOPS test. The write IOPS test was inconclusive but for the others there was a consistent difference: reduced disk op latency (shorter tail) and increased device bandwidth. I don't run anything in production in EC2 so this was the extent of my curiosity.
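>A rough sketch of that kind of FIO run (the file path, sizes and queue depths here are illustrative assumptions, not the exact job files used):
>
>    # sequential read bandwidth test
>    $ fio --name=read-bw --rw=read --bs=1m --size=1g --direct=1 --ioengine=libaio --iodepth=16 --runtime=60 --time_based --filename=/mnt/fio-test
>    # random read IOPS test
>    $ fio --name=read-iops --rw=randread --bs=4k --size=1g --direct=1 --ioengine=libaio --iodepth=32 --runtime=60 --time_based --filename=/mnt/fio-test
>
>The write variants are the same commands with --rw=write and --rw=randwrite.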
>
>
>Best regards,
>
>    - Andy
>
>Problems worthy of attack prove their worth by hitting back. - Piet Hein (via Tom White)
>
>
>
>----- Original Message -----
>> From: Jeff Whiting <[EMAIL PROTECTED]>
>> To: [EMAIL PROTECTED]
>> Cc: Jack Levin <[EMAIL PROTECTED]>; [EMAIL PROTECTED]
>> Sent: Tuesday, April 10, 2012 11:03 AM
>> Subject: Re: Speeding up HBase read response
>>
>> Do you have bloom filters enabled?  And compression?  Both of those can
>> help reduce disk I/O load, which seems to be the main issue you are
>> having on the EC2 cluster.
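>>
>> As a hedged illustration (the table and column-family names here are made
>> up, not from this thread), both can be set per column family from the
>> HBase shell, for example:
>>
>>   hbase> disable 'mytable'
>>   hbase> alter 'mytable', {NAME => 'cf', BLOOMFILTER => 'ROW', COMPRESSION => 'SNAPPY'}
>>   hbase> enable 'mytable'
>>   hbase> major_compact 'mytable'
>>
>> Existing HFiles only pick up the new settings after a compaction, and
>> Snappy has to be available on the region servers.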
>>
>> ~Jeff
>>
>> On 4/9/2012 8:28 AM, Jack Levin wrote:
>>>  Yes, from %util you can see that your disks are working at pretty much
>>>  100%, which means you can't push them to go any faster.  So the
>>>  solution is to add more disks, add faster disks, or add more nodes and
>>>  disks.  This type of overload should not be related to HBase itself,
>>>  but rather to your hardware setup.
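>>>
>>>  (A sketch of how to collect figures like the ones quoted below, assuming
>>>  the sysstat package is installed and a 5-second interval:
>>>
>>>  $ iostat -dxm 5
>>>
>>>  -d restricts output to devices, -x adds the await/svctm/%util columns,
>>>  and -m reports throughput in MB/s.)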
>>>
>>>  -Jack
>>>
>>>  On Mon, Apr 9, 2012 at 2:29 AM, ijanitran<[EMAIL PROTECTED]>  wrote:
>>>>  Hi, the iostat results are very similar on all nodes:
>>>>
>>>>  Device:         rrqm/s   wrqm/s     r/s     w/s    rMB/s    wMB/s avgrq-sz avgqu-sz   await  svctm  %util
>>>>  xvdap1            0.00     0.00  294.00    0.00     9.27     0.00    64.54    21.97   75.44   3.40 100.10
>>>>
>>>>  Device:         rrqm/s   wrqm/s     r/s     w/s    rMB/s    wMB/s avgrq-sz avgqu-sz   await  svctm  %util
>>>>  xvdap1            0.00     4.00  286.00    8.00     9.11     0.27    65.33     7.16   25.32   2.88  84.70
>>>>
>>>>  Device:         rrqm/s   wrqm/s     r/s     w/s    rMB/s    wMB/s avgrq-sz avgqu-sz   await  svctm  %util
>>>>  xvdap1            0.00     0.00  283.00    0.00     8.29     0.00    59.99    10.31   35.43   2.97  84.10
>>>>
>>>>  Device:         rrqm/s   wrqm/s     r/s     w/s    rMB/s    wMB/s avgrq-sz avgqu-sz   await  svctm  %util
>>>>  xvdap1            0.00     0.00  320.00    0.00     9.12     0.00    58.38    12.32   39.56   2.79  89.40
>>>>
>>>>  Device:         rrqm/s   wrqm/s     r/s     w/s    rMB/s    wMB/s avgrq-sz avgqu-sz   await  svctm  %util
>>>>  xvdap1            0.00     0.00  336.63    0.00     9.18     0.00