Zookeeper >> mail # user >> Getting confused with the "recipe for lock"


+
Zhao Boran 2013-01-11, 13:46
+
Andrey Stepachev 2013-01-11, 14:48
+
Hulunbier 2013-01-11, 16:10
+
Jordan Zimmerman 2013-01-11, 20:20
+
Hulunbier 2013-01-12, 10:30
+
Ben Bangert 2013-01-12, 17:39
+
Jordan Zimmerman 2013-01-13, 01:31
+
Hulunbier 2013-01-13, 15:05
+
Vitalii Tymchyshyn 2013-01-14, 10:37
+
Hulunbier 2013-01-14, 15:06
+
Vitalii Tymchyshyn 2013-01-14, 15:38
+
Ted Dunning 2013-01-14, 16:05
Re: Getting confused with the "recipe for lock"
Thanks Ted,

> And in general, you can't have precise distributed lock control.  There
> will always be a bit of slop.

Yes, I agree with you.

> So decide which penalty is easier to pay.  Do you want "at-most-one" or
> "at-least-one" or something in between?  You can't have "exactly-one" and
> still deal with expected problems like partition or node failure.

Yes again, I feel the same way.

IMHO, a lock (a basic lock, not a R/W lock) should be exclusive by nature.

*If* there really is such a flaw in the recipe, then imho they should not
claim that "at any snapshot in time no two clients think they hold the same
lock", at least not without some caveats; it is misleading.
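Ted's point that one can have "at-most-one" or "at-least-one" but not "exactly-one" is why the protected resource is often given a last line of defense of its own: a fencing token. A minimal Python sketch of that idea (not from this thread; the class and names are made up for illustration): the lock service hands out a strictly increasing token with each grant, and the resource rejects writes carrying a token older than the newest one it has seen.

```python
class FencedResource:
    """A resource that rejects requests carrying a stale fencing token.

    The lock service issues a strictly increasing token with each lock
    grant. The resource remembers the highest token it has seen, so a
    client that lost the lock without noticing (GC pause, partition)
    cannot corrupt state with a late write.
    """

    def __init__(self):
        self.highest_token = -1
        self.value = None

    def write(self, token, value):
        if token < self.highest_token:
            raise PermissionError(
                "stale token %d < %d" % (token, self.highest_token))
        self.highest_token = token
        self.value = value


resource = FencedResource()
resource.write(token=1, value="from client A")   # A holds the lock
resource.write(token=2, value="from client B")   # A's session expired; B acquired
try:
    resource.write(token=1, value="late write")  # A wakes up, thinks it still holds the lock
except PermissionError:
    pass                                         # stale holder is rejected
```

This does not make the lock "exactly-one"; it makes the brief overlap harmless at the resource, which is usually what actually matters.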
On Tue, Jan 15, 2013 at 12:05 AM, Ted Dunning <[EMAIL PROTECTED]> wrote:
> Yes.
>
> And in general, you can't have precise distributed lock control.  There
> will always be a bit of slop.
>
> So decide which penalty is easier to pay.  Do you want "at-most-one" or
> "at-least-one" or something in between?  You can't have "exactly-one" and
> still deal with expected problems like partition or node failure.
>
>
> On Mon, Jan 14, 2013 at 7:38 AM, Vitalii Tymchyshyn <[EMAIL PROTECTED]>wrote:
>
>> There are two events: disconnected and session expired. The ephemeral nodes
>> are removed after the second one. The client receives both. So to
>> implement an "at most one lock holder" scheme, a client owning the lock
>> must assume it has lost lock ownership as soon as it receives the
>> disconnected event. So there is a period of time between disconnected and
>> session expired when no one should hold the lock. It's a "safety" window to
>> accommodate time shifts, network latencies, and the lock-ownership recheck
>> interval (in case the client can't stop using the resource immediately and
>> simply checks regularly whether it still holds the lock).
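Vitalii's scheme, roughly sketched in Python (illustrative only; the event names mirror ZooKeeper's connection states but this is not the real client API): the client stops using the resource at Disconnected, while the server only deletes the ephemeral lock node at session expiry, leaving a safety window in between during which nobody touches the resource.

```python
class LockClient:
    """Client-side rule: treat DISCONNECTED as 'assume the lock is lost',
    even though the server only deletes our ephemeral node later, at
    SESSION_EXPIRED. The gap between the two events is the safety window."""

    def __init__(self):
        self.may_use_resource = False

    def on_lock_acquired(self):
        self.may_use_resource = True

    def on_disconnected(self):
        # Safety window starts: we might still own the lock on the server,
        # but we must stop touching the resource right away.
        self.may_use_resource = False

    def on_session_expired(self):
        # The server has (or soon will have) deleted our ephemeral node;
        # only from this point on may another client acquire the lock.
        self.may_use_resource = False


client = LockClient()
client.on_lock_acquired()
assert client.may_use_resource
client.on_disconnected()      # window: neither we nor anyone else uses the resource
assert not client.may_use_resource
client.on_session_expired()   # now a new holder may legitimately appear
```

The scheme fails exactly when the client cannot react between the two events, which is the problem case Vitalii names below.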
>>
>>
>>
>> 2013/1/14 Hulunbier <[EMAIL PROTECTED]>
>>
>> > Hi Vitalii,
>> >
>> > > I don't see why clocks must be in sync.
>> >
>> > I don't see any reason to precisely sync the clocks either (but if we
>> > could, that would be wonderful).
>> >
>> > By *some constraints of clock drift*, I mean:
>> >
>> > "Every node has a clock, and all clocks increase at the same rate"
>> > or
>> > "the server’s clock advances no faster than a known constant factor
>> > faster than the client’s."
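Under the second, weaker constraint quoted above, a client can still compute a conservative local deadline: if the server's clock can run at most some factor faster than the client's, then a server-side session timeout of T seconds may already have elapsed after only T divided by that factor of client-measured time. A small sketch of that arithmetic (mine, not from the thread; the function name is made up):

```python
def local_expiry_deadline(session_timeout_s, max_drift_factor):
    """Conservative client-side deadline under bounded clock drift.

    Assumption (the quoted constraint): the server's clock advances at
    most `max_drift_factor` times faster than the client's. If the server
    expires the session after `session_timeout_s` of *server* time, the
    client must assume the session may already be expired once
    session_timeout_s / max_drift_factor seconds have passed *locally*
    without hearing from the server.
    """
    return session_timeout_s / max_drift_factor


# A 30 s session timeout with clocks drifting up to 1.5x means the client
# can only trust its lock for 20 s of local silence.
assert local_expiry_deadline(30.0, 1.5) == 20.0
```

With a drift factor of 1 (the first, stronger constraint) the local deadline equals the session timeout; the looser the drift bound, the earlier the client must give up the lock.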
>> >
>> >
>> > > Also note the difference between disconnected and session
>> > > expired events. This time difference is when the client knows "something's
>> > > wrong", but another client has not gotten a lock yet.
>> >
>> > Sorry, but I failed to fully understand your point; could you please
>> > explain it further?
>> >
>> >
>> > On Mon, Jan 14, 2013 at 6:37 PM, Vitalii Tymchyshyn <[EMAIL PROTECTED]>
>> > wrote:
>> > > I don't see why clocks must be in sync. They are counting time periods
>> > > (timeouts). Also note the difference between disconnected and session
>> > > expired events. This time difference is when the client knows "something's
>> > > wrong", but another client has not gotten a lock yet. You will have
>> > > problems if the client can't react (and release resources) between
>> > > these two events.
>> > >
>> > > Best regards, Vitalii Tymchyshyn
>> > >
>> > >
>> > > 2013/1/13 Hulunbier <[EMAIL PROTECTED]>
>> > >
>> > >> Thanks Jordan,
>> > >>
>> > >> > Assuming the clocks are in sync between all participants…
>> > >>
>> > >> imho, perfect clock synchronization in a distributed system is very
>> > >> hard (if it is possible at all).
>> > >>
>> > >> > Someone with a better understanding of ZK internals can correct me,
>> > >> > but this is my understanding.
>> > >>
>> > >> I think I might have missed some very important and subtle (or
>> > >> obvious?) points of the recipe / ZK protocol.
>> > >>
>> > >> I just cannot believe that such a flaw could exist in the
>> > >> lock recipe for so long without anybody pointing it out.
>> > >>
>> > >> On Sun, Jan 13, 2013 at 9:31 AM, Jordan Zimmerman
>> > >> <[EMAIL PROTECTED]> wrote:
>> > >> > On Jan 12, 2013, at 2:30 AM, Hulunbier <[EMAIL PROTECTED]> wrote:
Later replies (collapsed):
  Hulunbier (2013-01-15, 01:52)
  Jordan Zimmerman (2013-01-15, 02:23)
  Hulunbier (2013-01-15, 03:45)
  Benjamin Reed (2013-01-15, 05:27)
  Hulunbier (2013-01-15, 06:32)
  Ted Dunning (2013-01-17, 11:43)
  Hulunbier (2013-01-18, 08:26)
  Benjamin Reed (2013-01-17, 04:28)
  Hulunbier (2013-01-17, 09:05)
  Vitalii Tymchyshyn (2013-01-27, 19:29)
  Hulunbier (2013-01-13, 14:40)