Performance measurement for ZooKeeper 3.5.0


Thawan Kooburat 2013-01-11, 00:30
Flavio Junqueira 2013-01-11, 09:06
Re: Performance measurement for ZooKeeper 3.5.0
In short, I believe it is because a write request blocks the entire
pipeline, so read requests cannot go through.
We are planning to work on ZOOKEEPER-1609 to fix this problem.
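
To make the blocking more concrete, here is a deliberately simplified
sketch (hypothetical class and method names, not the actual CommitProcessor
code) of a single FIFO pipeline in which one write awaiting its quorum
commit stalls every read queued behind it, even though those reads could be
answered from the local in-memory tree:

import java.util.ArrayDeque;
import java.util.Queue;

class SingleQueuePipeline {
    static final class Request {
        final boolean isWrite;
        Request(boolean isWrite) { this.isWrite = isWrite; }
    }

    private final Queue<Request> queued = new ArrayDeque<>();
    private Request pendingWrite;   // write waiting for its quorum commit

    // Request thread: every incoming read or write enters the same FIFO queue.
    synchronized void process(Request r) {
        queued.add(r);
        drain();
    }

    // Called when the commit for the outstanding write arrives.
    synchronized void commit() {
        pendingWrite = null;
        drain();
    }

    private void drain() {
        // Head-of-line blocking: while a write is waiting to be committed,
        // nothing behind it is served -- not even reads that could be
        // answered immediately from the local in-memory data tree.
        while (pendingWrite == null && !queued.isEmpty()) {
            Request next = queued.poll();
            if (next.isWrite) {
                pendingWrite = next;     // stall the queue until commit()
            } else {
                serveReadLocally(next);
            }
        }
    }

    private void serveReadLocally(Request r) {
        // Read from the local in-memory state; no quorum round trip needed.
    }
}

In principle, only reads from the same session as the pending write need to
wait to preserve ZooKeeper's per-session ordering; the single queue above
makes everyone wait.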

--
Thawan Kooburat

On 1/11/13 1:06 AM, "Flavio Junqueira" <[EMAIL PROTECTED]> wrote:

>Thanks a lot for sharing the results, Thawan. The 100% read value seems
>pretty cool, but there is a really sharp drop with 1% writes, more than I
>expected. Is it because you're using a single disk? Any idea?
>
>-Flavio
>
>On Jan 11, 2013, at 12:30 AM, Thawan Kooburat <[EMAIL PROTECTED]> wrote:
>
>> Hi folks,
>>
>> As promised, below is the performance measurement of the 3.5.0 branch with
>>and without NIO (ZOOKEEPER-1504) and CommitProcessor (ZOOKEEPER-1505).
>>
>>
>>------------------------------------------------------------------------------------
>>
>> The experiment is similar to
>>https://cwiki.apache.org/confluence/display/ZOOKEEPER/Performance with
>>the following environment changes:
>>
>> Hardware:
>> CPU: Intel Xeon E5-2670 (16 cores)
>> RAM: 16 GB
>> Disk: Single SATA-300 7200 rpm drive
>> Network: 10GbE interface; all machines are within the same cluster
>>(ping < 0.2 ms)
>>
>> Server Configuration:
>> Participants:  5 machines
>> Zookeeper:   tickTime=10000 (the rest is default; the leader serves client
>>requests)
>> JVM params: -Xmx12g -Dzookeeper.globalOutstandingLimit=20000
>>-XX:+UseMembar -XX:+UseConcMarkSweepGC -Djute.maxbuffer=4194304
>>
>> Client Workload:
>> - 900 client sessions (on 30 physical machines)
>> - Perform synchronous reads or writes against a random znode with no delay
>>(each znode is 1K in size, out of 20K znodes total)
>>
>> Experiment Result:
>> The number reported is the combined requests per second made by all
>>clients.
>> The number is captured after the experiment has run for at least 1 minute.
>>The error is about 1-2%.
>> So the results show that ZK-1504 and ZK-1505 double the read throughput
>>with no performance impact on write throughput.
>>
>> Pre NIO, CommitProcessor (R1415847)
>> 100% read              438119 rps
>> 99% read, 1% write      47545 rps
>> 50% read, 50% write     23330 rps
>> 0% read, 100% write     17845 rps
>>
>> After NIO, CommitProcessor (R1423990)
>> 100% read              950196 rps
>> 99% read, 1% write      51529 rps
>> 50% read, 50% write     23662 rps
>> 0% read, 100% write     17539 rps
>>
>>
>> --
>> Thawan Kooburat
>
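
For readers who want to generate a similar load, below is a minimal sketch
of one client session in the style of the workload described in the quoted
message above (synchronous reads or writes against a random pre-created
znode, with no think time). The connect string, znode path layout, and
write fraction are assumptions for illustration, not taken from the actual
benchmark harness:

import java.util.Random;
import java.util.concurrent.CountDownLatch;
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.Stat;

public class RandomZnodeLoadClient {
    // Assumed values -- the real benchmark's connect string and znode
    // layout are not spelled out in the mail above.
    private static final String CONNECT = "zk-host:2181";
    private static final int NUM_ZNODES = 20000;        // 20K znodes total
    private static final double WRITE_FRACTION = 0.01;  // e.g. the 1% write mix

    public static void main(String[] args) throws Exception {
        // Crude connection gate: count down on the first watcher event,
        // which for the default watcher is the connection notification.
        CountDownLatch connected = new CountDownLatch(1);
        ZooKeeper zk = new ZooKeeper(CONNECT, 30000, event -> connected.countDown());
        connected.await();

        byte[] payload = new byte[1024];   // 1K payload, matching the znode size above
        Random rnd = new Random();
        Stat stat = new Stat();

        // Issue synchronous requests back-to-back with no delay between them.
        while (true) {
            String path = "/bench/node-" + rnd.nextInt(NUM_ZNODES);  // assumed layout
            if (rnd.nextDouble() < WRITE_FRACTION) {
                zk.setData(path, payload, -1);   // synchronous write, any version
            } else {
                zk.getData(path, false, stat);   // synchronous read
            }
        }
    }
}

Running 900 such sessions across 30 machines, with WRITE_FRACTION set to
0, 0.01, 0.5, or 1.0, would correspond to the four workload mixes in the
tables above.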
Flavio Junqueira 2013-01-12, 09:24
Thawan Kooburat 2013-01-13, 00:11