Zookeeper, mail # user - Re: need for more conditional write support


Dave Wright 2010-12-16, 16:16
Ted Dunning 2010-12-16, 18:01
Dave Wright 2010-12-16, 18:06
Jared Cantwell 2010-12-16, 18:39
Henry Robinson 2010-12-16, 19:04
Ted Dunning 2010-12-16, 19:21
Ted Dunning 2010-12-16, 19:22
Ted Dunning 2010-12-16, 19:23
Ted Dunning 2010-12-16, 19:25
Qian Ye 2010-12-21, 02:56
Ted Dunning 2010-12-21, 03:22
Benjamin Reed 2010-12-21, 05:24
Ted Dunning 2010-12-21, 06:08
Benjamin Reed 2010-12-21, 06:57
Re: need for more conditional write support
Jared Cantwell 2010-12-21, 21:16
Based on the discussion, it doesn't look like create/delete actions would be
considered under this model.  How difficult would it be to extend the API to
allow creation/deletion of nodes?  I think the hardest part would be
verifying the 'correctness' of the update.  Are there other complications?

~Jared
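[Editor's note: a minimal in-memory sketch of the extension Jared is asking about — a conditional batch that admits create and delete alongside version-checked writes, applied all-or-nothing. The `Znode`, `multi`, and op-tuple names here are invented for illustration, not ZooKeeper code; ZooKeeper itself later shipped a similar `multi` API in 3.4.0 with `Op.check`/`Op.create`/`Op.delete`/`Op.setData`.]

```python
import copy

class Znode:
    """Toy znode: data plus a version counter bumped on each write."""
    def __init__(self, data):
        self.data = data
        self.version = 0

class MultiError(Exception):
    pass

def multi(tree, ops):
    """Run ops against a scratch copy of the tree; commit the copy only if
    every op succeeds, so a mid-batch failure leaves the tree untouched.
    ops are tuples: ("check", path, ver), ("create", path, data),
    ("delete", path, ver), ("setData", path, ver, data); ver -1 = any."""
    scratch = copy.deepcopy(tree)
    for op in ops:
        kind, path = op[0], op[1]
        if kind == "check":
            node = scratch.get(path)
            if node is None or node.version != op[2]:
                raise MultiError("check failed: " + path)
        elif kind == "create":
            if path in scratch:
                raise MultiError("node exists: " + path)
            scratch[path] = Znode(op[2])
        elif kind == "delete":
            node = scratch.get(path)
            if node is None or (op[2] != -1 and node.version != op[2]):
                raise MultiError("cannot delete: " + path)
            del scratch[path]
        elif kind == "setData":
            version, data = op[2], op[3]
            node = scratch.get(path)
            if node is None or (version != -1 and node.version != version):
                raise MultiError("cannot set: " + path)
            node.data = data
            node.version += 1
    # Every op succeeded: commit the scratch copy atomically.
    tree.clear()
    tree.update(scratch)
```

Because ops run against the scratch copy in order, a batch can create a node and then write to it, and the 'correctness' question reduces to each op's own precondition (version match, existence) at its position in the batch.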

On Tue, Dec 21, 2010 at 1:57 AM, Benjamin Reed <[EMAIL PROTECTED]> wrote:

> keeping the aggregate size to the normal max i think helps things a lot. we
> don't have to worry about a big update slowing everything down.
>
> to implement this we probably need to add a new request and a new
> transaction. then you will get the atomic update property that you are
> looking for and you will not need to worry about special queue management.
>
> ben
>
>
> On 12/20/2010 10:08 PM, Ted Dunning wrote:
>
>> On Mon, Dec 20, 2010 at 9:24 PM, Benjamin Reed <[EMAIL PROTECTED]> wrote:
>>
>>> are you guys going to put a limit on the size of the updates? can someone
>>> do an update over 50 znodes where data value is 500K, for example?
>>
>> Yes.  My plan is to put a limit on the aggregate size of all of the updates
>> that is equal to the limit that gets put on a single update normally.
>>
>>
>>> if there is a failure during the update, is it okay for just a subset of
>>> the znodes to be updated?
>>
>> That would be an unpleasant alternative.
>>
>> My thought was to convert all of the updates to idempotent form and add
>> them
>> all to the queue or fail all the updates.
>>
>> My hope was that there would be some way to mark the batch in the queue so
>> that they stay together when commits are pushed out to the cluster.  It
>> might be necessary to flush the queue before inserting the batched
>> updates.
>>  Presumably something like this needs to be done now (if queue + current
>> transaction is too large, flush queue first).
>>
>> Are there failure modes that would leave part of the queue committed and
>> part not?
>>
>
>
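[Editor's note: a sketch of the two ideas in the quoted exchange — cap the batch's aggregate payload at the normal single-update limit (ZooKeeper's default `jute.maxbuffer` is 1 MB), and rewrite each update into idempotent form (absolute final data and version rather than a delta) so the whole batch is enqueued together or rejected together. The function and record names are assumptions for illustration, not ZooKeeper internals.]

```python
SINGLE_REQUEST_LIMIT = 1_048_576  # ~1 MB, ZooKeeper's default jute.maxbuffer

def to_idempotent(path, new_data, current_version):
    # Idempotent form: replaying this record always produces the same
    # final data and final version, however many times it is applied.
    return {"path": path, "data": new_data, "version": current_version + 1}

def enqueue_batch(queue, updates):
    """updates: list of (path, new_data, current_version) tuples.
    Appends all records to the queue, or none: the aggregate payload is
    checked against the same cap as a single normal update, and records
    are extended in one step so the batch stays adjacent when commits
    are pushed out."""
    total = sum(len(data) for _path, data, _ver in updates)
    if total > SINGLE_REQUEST_LIMIT:
        return False  # fail the whole batch up front
    records = [to_idempotent(p, d, v) for p, d, v in updates]
    queue.extend(records)
    return True
```

Keeping the records adjacent in the queue models Ted's hope that the batch "stays together" when pushed out; a real implementation would still need the flush-first step he mentions when queue plus batch exceeds the limit.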
Henry Robinson 2010-12-21, 21:30
Ted Dunning 2010-12-21, 21:48
Jared Cantwell 2010-12-22, 01:44
Ted Dunning 2010-12-22, 02:06
Henry Robinson 2010-12-22, 09:00
Ted Dunning 2010-12-22, 18:29