Re: HBase read/write throughput measurement
Hello Dalia,

          I think the easiest way to measure read/write throughput is
to use the "PerformanceEvaluation" tool that comes with the HBase
distribution. It spawns a MapReduce job to do the reads/writes in
parallel. Apart from this, there are several other ways to benchmark your
HBase cluster, such as YCSB <https://github.com/brianfrankcooper/YCSB>. You
might also find this
link <http://wiki.apache.org/hadoop/Hbase/PerformanceEvaluation> useful;
it discusses HBase performance.
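
If you just want a rough client-side number without spinning up the MapReduce
job, a small timing loop is often enough. Below is a minimal sketch (my own,
not part of the PerformanceEvaluation tool) against the 0.94-era Java client
API; the table name "test_tbl", column family "cf" and row count are
placeholders you would adapt to your schema:

// Minimal single-threaded write-throughput sketch (0.94-era HBase client API).
// Assumes the table already exists, e.g. created in the shell with:
//   create 'test_tbl', 'cf'
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class WriteThroughput {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();  // reads hbase-site.xml from the classpath
    HTable table = new HTable(conf, "test_tbl");       // placeholder table name
    table.setAutoFlush(false);                         // buffer puts client-side for batching

    int rows = 100000;
    long start = System.currentTimeMillis();
    for (int i = 0; i < rows; i++) {
      Put put = new Put(Bytes.toBytes(String.format("row%08d", i)));
      put.add(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("value-" + i));
      table.put(put);
    }
    table.flushCommits();                              // push any puts still in the write buffer
    long elapsedMs = System.currentTimeMillis() - start;

    System.out.printf("%d puts in %d ms -> %.1f writes/sec%n",
        rows, elapsedMs, rows * 1000.0 / elapsedMs);
    table.close();
  }
}

The same pattern with Get instead of Put gives a rough read figure. For the
bundled tool, the entry point is org.apache.hadoop.hbase.PerformanceEvaluation;
if I remember correctly, running it with no arguments prints the supported
commands (sequentialWrite, randomRead, scan, ...) and their options.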

HTH

Warm Regards,
Tariq
https://mtariq.jux.com/
cloudfront.blogspot.com
On Fri, Feb 1, 2013 at 5:41 PM, Dalia Sobhy <[EMAIL PROTECTED]> wrote:

>
> Dear all,
>
> I want to measure the read/write throughput for my code on a cluster of 10
> nodes. Is there any code or tool to measure it?
>
> I have seen in a Cloudera-based presentation that HBase read/write
> throughput can reach millions of queries per second.
>
> Any help, please?
>
> Thanks
>
>
> Best Regards,
> Dalia Sobhy
>