Zookeeper >> mail # user >> Re: need for more conditional write support
Re: need for more conditional write support
Based on the discussion, it doesn't look like create/delete actions would be
considered under this model.  How difficult would it be to extend the API to
allow creation/deletion of nodes?  I think the hardest part would be to
verify the 'correctness' of the update.  Are there other complications?
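For illustration, one way the extended model could work is an all-or-nothing batch in which create and delete ops carry the same version checks as conditional sets. The sketch below is a hypothetical in-memory simulation of those semantics, not ZooKeeper's actual API; the names `ZnodeStore` and `multi` are assumptions:

```python
import copy

class ZnodeStore:
    """Hypothetical in-memory znode store with an all-or-nothing multi().
    Illustrative only -- not ZooKeeper's real API."""

    def __init__(self):
        self.nodes = {}  # path -> (data, version)

    def multi(self, ops):
        """ops: list of (kind, path, data, expected_version) tuples.
        Applies every op or none: mutate a staged copy of the tree and
        commit it only if every check and every op succeeds."""
        staged = copy.deepcopy(self.nodes)
        for kind, path, data, version in ops:
            if kind == "create":
                if path in staged:
                    raise ValueError(f"create failed: {path} exists")
                staged[path] = (data, 0)
            elif kind in ("set", "delete"):
                if path not in staged:
                    raise ValueError(f"{kind} failed: {path} missing")
                if version != -1 and staged[path][1] != version:
                    raise ValueError(f"{kind} failed: stale version on {path}")
                if kind == "set":
                    staged[path] = (data, staged[path][1] + 1)
                else:
                    del staged[path]
            else:
                raise ValueError(f"unknown op kind: {kind}")
        self.nodes = staged  # commit: all ops validated and applied
```

Staging on a copy means a failed check partway through a batch leaves the published tree untouched, which is exactly the atomicity being asked about.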


On Tue, Dec 21, 2010 at 1:57 AM, Benjamin Reed <[EMAIL PROTECTED]> wrote:

> Keeping the aggregate size to the normal max I think helps things a lot. We
> don't have to worry about a big update slowing everything down.
> To implement this we probably need to add a new request and a new
> transaction. Then you will get the atomic update property that you are
> looking for, and you will not need to worry about special queue management.
> ben
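The aggregate-size cap Ben describes can be enforced with an up-front check before a batch is accepted. A minimal sketch, assuming a 1 MB limit (ZooKeeper's default request limit is governed by `jute.maxbuffer`, but the exact constant and the helper name here are illustrative assumptions):

```python
# Assumed limit for illustration; mirrors the idea that a batch must fit
# within the same budget as a single normal update.
MAX_REQUEST_BYTES = 1 * 1024 * 1024

def check_batch_size(ops):
    """ops: iterable of (path, data) updates; data is bytes or None.
    Returns the aggregate serialized payload size, or raises if the
    whole batch exceeds the single-update limit."""
    total = sum(len(path.encode()) + len(data or b"") for path, data in ops)
    if total > MAX_REQUEST_BYTES:
        raise ValueError(f"batch of {total} bytes exceeds {MAX_REQUEST_BYTES}")
    return total
```

Rejecting the batch at submission time is what keeps a 50-znode / 500K-per-value update from ever reaching the commit pipeline.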
> On 12/20/2010 10:08 PM, Ted Dunning wrote:
>> On Mon, Dec 20, 2010 at 9:24 PM, Benjamin Reed <[EMAIL PROTECTED]> wrote:
>>> are you guys going to put a limit on the size of the updates? can someone
>>> do an update over 50 znodes where data value is 500K, for example?
>> Yes.  My plan is to put a limit on the aggregate size of all of the
>> updates that is equal to the limit that gets put on a single update
>> normally.
>>> if there is a failure during the update, is it okay for just a subset of
>>> the znodes to be updated?
>> That would be an unpleasant alternative.  My thought was to convert all of
>> the updates to idempotent form and add them all to the queue or fail all
>> the updates.
>> My hope was that there would be some way to mark the batch in the queue so
>> that they stay together when commits are pushed out to the cluster.  It
>> might be necessary to flush the queue before inserting the batched
>> updates.  Presumably something like this needs to be done now (if queue +
>> current transaction is too large, flush queue first).
>> Are there failure modes that would leave part of the queue committed and
>> part not?
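The "idempotent form" Ted mentions is the step where a conditional update is checked once against current state and rewritten as a transaction recording its concrete outcome, so replaying the logged transaction any number of times yields the same result. A rough sketch of that conversion for a single conditional set (function and field names are illustrative, not ZooKeeper internals):

```python
def to_idempotent(current_version, expected_version, new_data):
    """Convert a conditional set (new_data, expected_version) into an
    idempotent txn.  The version check happens exactly once, here; the
    returned txn records the concrete resulting version, so applying it
    repeatedly always produces the same node state."""
    if expected_version != -1 and expected_version != current_version:
        raise ValueError("version check failed")
    return {"data": new_data, "version": current_version + 1}
```

Because the txn carries the resulting state rather than the condition, a batch of such txns can sit in the commit queue together; if the pre-commit checks fail for any member, the whole batch is rejected before anything is queued.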