Zookeeper, mail # user - [announce] Accord: A high-performance coordination service for write-intensive workloads
Re: [announce] Accord: A high-performance coordination service for write-intensive workloads
Flavio Junqueira 2011-09-25, 21:49
On Sep 25, 2011, at 9:02 AM, OZAWA Tsuyoshi wrote:

> (2011/09/25 6:43), Flavio Junqueira wrote:
>> Thanks for sending this reference to the list, it sounds very
>> interesting. I have a few questions and comments, if you don't mind:
>>
>> 1- I was wondering if you can give more detail on the setup you used
>> to generate the numbers you show in the graphs on your Accord page.
>> The ZooKeeper values are way too low, and I suspect that you're using
>> a single hard drive. It could be because you expect to use a single
>> hard drive with an Accord server, and you wanted to make the
>> comparison fair. Is this correct?
>
> No, it isn't.
> Both ZooKeeper and Accord use a dedicated hard drive for logging.
> The settings file I used is here:
> https://gist.github.com/1240291
>
> Please tell me if I have made a mistake.
>

I gave it a cursory look, and I can't see any obvious problem. It is
intriguing that the numbers are so low. Have you tried with different
numbers of servers? I'm not sure if I just missed this information, but
what version of ZooKeeper are you looking at? Also, if it is not too
much trouble, could you please report on your read performance?

>> 2- The previous observation leads me to the next question: could you
>> say more about your use of disk with persistence on?
>
> ZooKeeper returns an ACK only after a majority of the machines have
> written to disk. Accord returns an ACK after the disk write on just
> one machine, the one that accepted the request. However, the ACK
> still guarantees that all servers receive the messages in the same
> order. This difference in semantics means that the measurement is not
> fair. I would like to measure under fair conditions, but have not done
> so yet. If users request it, I will implement and measure it. Note
> that the in-memory benchmark is fair.
>

I'm not sure I understand this part. You say that an operation is ACKed
after being written to one disk, but also that it is guaranteed to be
delivered in the same order on all servers. Does it mean that Accord
still replicates on other servers before ACKing, but the other servers
do not write to disk? Otherwise, the first server may crash and never
come back, and the message cannot possibly be delivered by other
servers.
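To make the two acknowledgement policies being compared concrete, here is a minimal Python sketch (my own illustration, not code from either project): in both policies every server receives operations in the same total order; they differ only in how many durable copies exist when the client is acknowledged.

```python
# Two ack policies for a replicated log. disk_writes_done counts how
# many servers have fsynced the operation to their transaction log.

def ack_after_majority(num_servers, disk_writes_done):
    """ZooKeeper-style: ack once a majority of servers have logged."""
    return disk_writes_done > num_servers // 2

def ack_after_one(disk_writes_done):
    """Accord-style (as described in this thread): ack once the
    accepting server has logged, before the others have."""
    return disk_writes_done >= 1

# With 5 servers and only the accepting server's disk write complete:
print(ack_after_one(1))            # True  -> client is already acked
print(ack_after_majority(5, 1))    # False -> ZooKeeper would still wait
print(ack_after_majority(5, 3))    # True  -> majority reached
```

The sketch shows why the thread calls the comparison unfair: the single-disk policy acknowledges with fewer durable copies, which is Flavio's crash-safety concern above.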

One question related to this point: with Accord, do you replicate the
original request message or the result of the operation? Do you
guarantee that each server executes a request or applies the result of
a request exactly once? If not, what kind of semantics does Accord
provide?
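For context on the exactly-once question: a common way to get exactly-once application on top of an at-least-once delivery channel (a sketch of the general technique, not necessarily what Accord does) is to tag each request with a monotonically increasing transaction id and have each replica skip ids it has already applied.

```python
# Sketch: idempotent apply loop. Each replica remembers the highest
# transaction id it has applied; redelivered or replayed requests
# with an id <= last_applied are ignored.

class Replica:
    def __init__(self):
        self.last_applied = 0
        self.state = {}

    def apply(self, txn_id, key, value):
        if txn_id <= self.last_applied:
            return False          # duplicate delivery: no-op
        self.state[key] = value
        self.last_applied = txn_id
        return True

r = Replica()
assert r.apply(1, "a", 1) is True
assert r.apply(1, "a", 1) is False   # replayed txn is skipped
assert r.apply(2, "a", 2) is True
print(r.state)  # {'a': 2}
```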

>> 3- The limitation on the message size in ZooKeeper is not a
>> fundamental limitation. We have chosen to limit it for the reasons we
>> explain in the wiki page that is linked from the Accord page. Do you
>> have any particular use case in mind for which you think it would be
>> useful to have very large messages?
>
> Some developers use ZooKeeper as storage. For example, the developers
> of Onix, a distributed control platform for OpenFlow networks, say
> that:
> "for most the object size limitations of
> Zookeeper and convenience of accessing the configuration
> state directly through the NIB are a reason to favor the
> transactional database."
> http://www.usenix.org/event/osdi10/tech/full_papers/Koponen.pdf
>

The comment in the paper is exactly right: we instruct our users to
store metadata in ZooKeeper and data elsewhere. There are systems
designed to store bulk data, and ZooKeeper shouldn't try to compete
with such storage systems; that is not our goal.
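The division of labour described here, small metadata in ZooKeeper and bulk data in a dedicated store, is often implemented by keeping only a pointer in the znode. A toy sketch of the pattern (plain dicts stand in for the ZooKeeper ensemble and the blob store; all names are illustrative):

```python
# Toy model of the "metadata in ZooKeeper, data elsewhere" pattern.
# zk and blob_store are plain dicts standing in for the real services.

zk = {}          # small znodes: path -> short byte string
blob_store = {}  # bulk storage: key -> arbitrarily large payload

def put_large(path, payload):
    key = "blob-%d" % len(blob_store)   # illustrative key scheme
    blob_store[key] = payload           # bulk data goes to the store
    zk[path] = key.encode()             # znode holds only the pointer

def get_large(path):
    return blob_store[zk[path].decode()]

put_large("/configs/switch-1", b"x" * 10_000_000)  # 10 MB payload
print(len(zk["/configs/switch-1"]))   # pointer stays tiny
```

This keeps every znode well under ZooKeeper's default message-size limit while the coordination service still names and orders the data.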

>> 4- If I understand the group communication substrate Accord uses, it
>> enables Accord to process client requests in any server. ZooKeeper
>> has a leader for a few reasons, one being the ability to manage
>> client sessions. Ephemeral nodes, for example, are bound to sessions.
>> Are there similar abstractions in Accord? If the answer is positive,
>> could you explain it a bit? If not, is it doable with the substrate
>> you're

Good to know that you also support ephemerals. Could you say a little
more about how you decide to eliminate an ephemeral node? I suppose
that an ephemeral is bound to the client that created it somehow, and
it is deleted if the client crashes or disconnects. What's the exact
mechanism?
Sounds good, thanks.
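For readers following the ephemeral question: in ZooKeeper each ephemeral znode records its owner session, and when that session closes or expires (missed heartbeats) the server deletes every znode the session owns. A compact sketch of that bookkeeping (my own illustration, not ZooKeeper source):

```python
# Sketch of session-bound ephemerals: the server indexes ephemeral
# znodes by owner session and deletes them all when the session ends.

from collections import defaultdict

class Server:
    def __init__(self):
        self.nodes = {}                       # path -> owner session id
        self.by_session = defaultdict(set)    # session id -> paths

    def create_ephemeral(self, session_id, path):
        self.nodes[path] = session_id
        self.by_session[session_id].add(path)

    def close_session(self, session_id):
        # Triggered by an explicit close or by a heartbeat timeout.
        for path in self.by_session.pop(session_id, set()):
            del self.nodes[path]

s = Server()
s.create_ephemeral(42, "/locks/l1")
s.create_ephemeral(42, "/locks/l2")
s.close_session(42)        # client crashed or disconnected
print(sorted(s.nodes))     # []
```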

-Flavio

flavio junqueira
research scientist
[EMAIL PROTECTED]
direct +34 93-183-8828
avinguda diagonal 177, 8th floor, barcelona, 08018, es
phone (408) 349 3300    fax (408) 349 3301