Re: Replication not suited for intensive write applications?
Thanks for the answer!
My responses are inline.

On Thu, Jun 20, 2013 at 11:02 PM, lars hofhansl <[EMAIL PROTECTED]> wrote:

> First off, this is a pretty constructed case leading to a specious general
> conclusion.
>
> If you only have three RSs/DNs and the default replication factor of 3,
> each machine will get every single write.
> That is the first issue. Using HBase makes little sense with such a small
> cluster.
>
You are correct; nonetheless, the network, as I measured it, was far from its
capacity and thus probably not the bottleneck.

>
> Secondly, as you say yourself, there are only three regionservers writing
> to the replicated cluster using a single thread each in order to preserve
> ordering.
> With more region servers your scale will tip the other way. Again more
> regionservers will make this better.
>
I presume that in production I will add more region servers to accommodate the
growing write demand on my cluster. Hence, my clients will write with more
threads, so proportionally I will always have far more client threads than
region servers (each of which has one replication thread). That is why I don't
see how adding more region servers will tip the scale the other way.
The only way to avoid this is to size the cluster so that the number of region
servers matches the number of client threads needed to handle the incoming
events and write them to HBase. If I get a spike it will even out eventually,
but that leaves my cluster hardware under-utilized, no?
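
To put rough numbers on it (taking the figures from my benchmark below purely
as an illustration): 10 client threads writing 17 MB/sec is about 1.7 MB/sec
per client thread, while the same volume has to be shipped to the slave by
only 3 replication threads (one per region server), i.e. roughly 17 / 3 = 5.7
MB/sec each just to keep up; that ratio only gets worse whenever I add client
threads faster than I add region servers.
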
> As for your other question, more threads can lead to better interleaving
> of CPU and IO, thus leading to better throughput (this relationship is not
> linear, though).
>
>

>
> -- Lars
>
>
>
> ----- Original Message -----
> From: Asaf Mesika <[EMAIL PROTECTED]>
> To: "[EMAIL PROTECTED]" <[EMAIL PROTECTED]>
> Cc:
> Sent: Thursday, June 20, 2013 3:46 AM
> Subject: Replication not suited for intensive write applications?
>
> Hi,
>
> I've been conducting lots of benchmarks to test the maximum throughput of
> replication in HBase.
>
> I've come to the conclusion that HBase replication is not suited for
> write-intensive applications. I hope that people here can show me where
> I'm wrong.
>
> *My setup*
> *Cluster* (Master and slave are alike)
> 1 Master, NameNode
> 3 RS, DataNode
>
> All computers are the same: 8 cores x 3.4 GHz, 8 GB RAM, 1 Gigabit Ethernet
> card
>
> I insert data into HBase from a java process (client) reading files from
> disk, running on the machine running the HBase Master in the master
> cluster.
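>
> For reference, this is roughly what the load client does. It is a minimal
> sketch, not the real code: the table name, column family and 1 KB value are
> made up, it generates dummy data instead of reading my files, and it assumes
> the plain HTable client API of the 0.94 line:
>
> import java.util.concurrent.ExecutorService;
> import java.util.concurrent.Executors;
> import java.util.concurrent.TimeUnit;
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.hbase.HBaseConfiguration;
> import org.apache.hadoop.hbase.client.HTable;
> import org.apache.hadoop.hbase.client.Put;
> import org.apache.hadoop.hbase.util.Bytes;
>
> public class LoadClient {
>   public static void main(String[] args) throws Exception {
>     final Configuration conf = HBaseConfiguration.create();
>     ExecutorService pool = Executors.newFixedThreadPool(10); // 10 writer threads
>     for (int t = 0; t < 10; t++) {
>       final int id = t;
>       pool.submit(new Runnable() {
>         public void run() {
>           try {
>             // one HTable per thread, with client-side write buffering on
>             HTable table = new HTable(conf, "events");
>             table.setAutoFlush(false);
>             for (long i = 0; i < 1000000; i++) {
>               Put p = new Put(Bytes.toBytes(id + "-" + i));
>               p.add(Bytes.toBytes("d"), Bytes.toBytes("v"), new byte[1024]);
>               table.put(p);
>             }
>             table.close(); // flushes the remaining buffered puts
>           } catch (Exception e) {
>             e.printStackTrace();
>           }
>         }
>       });
>     }
>     pool.shutdown();
>     pool.awaitTermination(1, TimeUnit.HOURS);
>   }
> }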
>
> *Benchmark Results*
> When the client writes with 10 threads, the master cluster writes at
> 17 MB/sec, while the replicated cluster writes at 12 MB/sec. The data size
> I wrote is 15 GB, all Puts, to two different tables.
> Both clusters, when tested independently without replication, achieved a
> write throughput of 17-19 MB/sec, so evidently the replication process is
> the bottleneck.
>
> I also tested connectivity between the two clusters using "netcat" and
> achieved 111 MB/sec.
> I've checked the usage of the network cards on the client, the master
> cluster region servers and the slave region servers. No computer went over
> 30 MB/sec in Receive or Transmit.
> The way I checked was rather crude but works: I ran "netstat -ie" before
> HBase in the master cluster started writing and after it finished. The same
> was done on the replicated cluster (when the replication started and
> finished). I can tell the number of bytes received and transmitted, and I
> know how long each cluster worked, so I can calculate the throughput.
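>
> To show the arithmetic, something like this is all the calculation amounts
> to (the counters and duration below are illustrative numbers only, not my
> measured values):
>
> public class Throughput {
>   // delta of the RX or TX byte counter from "netstat -ie", divided by the
>   // run duration in seconds, gives MB/sec
>   static double mbPerSec(long bytesBefore, long bytesAfter, long seconds) {
>     return (bytesAfter - bytesBefore) / (1024.0 * 1024.0) / seconds;
>   }
>
>   public static void main(String[] args) {
>     // e.g. 15 GB transferred over a 15 minute run comes out at ~17 MB/sec
>     System.out.println(mbPerSec(0L, 15L * 1024 * 1024 * 1024, 15 * 60));
>   }
> }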
>
> *The bottleneck in my opinion*
> Since we've excluded network capacity, and each cluster works at a faster
> rate independently, all that is left is the replication process.
> My client writes to the master cluster with 10 threads, and manages to
> write at 17-18 MB/sec.
> Each region server has only 1 thread responsible for transmitting the data
> written to the WAL to the slave cluster. Thus in my setup I effectively have
> only 3 threads shipping to the slave cluster the data that 10 client threads
> wrote to the master.