ZooKeeper user mailing list: session watches


Re: session watches
From: Ted Dunning

Ryan,

Lots of assumptions here.

On Mon, Mar 5, 2012 at 5:02 PM, Shelley, Ryan <[EMAIL PROTECTED]> wrote:

> The reason I was thinking that it might be useful to know if there are any
> watches on a node is for the lack of ephemeral parent nodes. If a node
> doesn't have a watch on it, I can assume no one is watching to see if it
> should be purged when it has no children, so a new watch should be set on
> it.

Well and good, but this is better done using a simple leader election.

> I don't, however, want to have multiple watches set on the same node
> that all watch when a node has no children.

As you like.

> I've experienced some
> inconsistencies with that approach as two nodes will be notified that the
> children have changed, they both check existence to make sure it's still
> there and not already deleted by some other watcher, both get back
> successful responses, both delete, one fails. It's an edge case I can
> catch and replicate easily, but in reality, it's possible that with this
> approach I could have a large number of clients watching the same znode
> resulting in lots of overhead across the network when a znode's watch is
> fired.

Having both of these programs do a test and delete invites race conditions
and is a bad way to code this.  Better is to do the delete and check the
return status (no-node means we lost the race).  You could also use a multi
containing a check and a delete atomically, but you don't really get more
information from that.  If you use a leader election, then you can
guarantee only one process will try the delete, but I don't see a gain from
that.
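
A minimal sketch of that delete-and-check pattern with the standard Java
client (the connected `zk` handle, the helper name, and the path are
assumptions of the sketch, not anything from the thread):

    import org.apache.zookeeper.KeeperException;
    import org.apache.zookeeper.ZooKeeper;

    // Hypothetical helper: delete the znode if it still exists, treating
    // "already gone" as success. Version -1 matches any version.
    void deleteIfPresent(ZooKeeper zk, String path)
            throws KeeperException, InterruptedException {
        try {
            zk.delete(path, -1);
            // We won the race and removed the node.
        } catch (KeeperException.NoNodeException e) {
            // Another client deleted it first; losing the race is harmless.
        }
    }

The multi variant would bundle Op.check(path, version) and
Op.delete(path, version) into one atomic zk.multi(...) call, but as noted
above it buys no extra information here.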

> If I can check and see that there are X other watches on this node,
> I don't need to register another watch, one of the other instances should
> hopefully be able to handle the job even if two of the other clients fail.
>

Yeah... but this is error prone since you wouldn't get notified if the
number of watches drops to zero.  Use a leader election and you will get
notified.
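
For illustration, the standard ephemeral-sequential election recipe looks
roughly like this (the /election parent znode and the method name are made
up for the sketch):

    import java.util.Collections;
    import java.util.List;
    import org.apache.zookeeper.CreateMode;
    import org.apache.zookeeper.KeeperException;
    import org.apache.zookeeper.ZooDefs;
    import org.apache.zookeeper.ZooKeeper;

    // Each client creates an ephemeral sequential child under a
    // (hypothetical) persistent /election node; lowest sequence wins.
    boolean runForLeader(ZooKeeper zk)
            throws KeeperException, InterruptedException {
        String me = zk.create("/election/candidate-", new byte[0],
                ZooDefs.Ids.OPEN_ACL_UNSAFE,
                CreateMode.EPHEMERAL_SEQUENTIAL);
        List<String> candidates = zk.getChildren("/election", false);
        Collections.sort(candidates);
        // Non-leaders would watch the candidate just ahead of them and
        // re-run this check when it disappears.
        return me.endsWith(candidates.get(0));
    }

Because the candidate znodes are ephemeral, a client dying removes its
entry and the next in line gets notified, which is exactly the
drop-to-zero notification a watch count cannot give you.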

> I'm sure I can make this happen by setting a counter on the znode itself
> that I increment when I also watch the znode, or by having clients create
> ephemeral znodes that represent their associated watches. It just seems
> inefficient and error prone (in the former case, two clients could try to
> set this value at the same time, overwriting each other, since I don't
> have something like Mongo's atomic increment option - nor am I advocating
> it).
>

Atomic updates are very easy in ZK.  Simply pass, in the update, the
version number that you got from the read.  If the version number matches,
then the update succeeds; if you get a version mismatch, then you have an
update collision and should retry.
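
Concretely, the versioned-update loop looks something like this (the path
and the plain-text integer encoding are assumptions of the sketch):

    import java.nio.charset.StandardCharsets;
    import org.apache.zookeeper.KeeperException;
    import org.apache.zookeeper.ZooKeeper;
    import org.apache.zookeeper.data.Stat;

    // Optimistic increment of an integer stored in a znode.
    int atomicIncrement(ZooKeeper zk, String path)
            throws KeeperException, InterruptedException {
        while (true) {
            Stat stat = new Stat();
            byte[] raw = zk.getData(path, false, stat);
            int next = Integer.parseInt(
                    new String(raw, StandardCharsets.UTF_8)) + 1;
            try {
                // setData succeeds only if the version still matches
                // the one returned by the read above.
                zk.setData(path, Integer.toString(next)
                        .getBytes(StandardCharsets.UTF_8),
                        stat.getVersion());
                return next;
            } catch (KeeperException.BadVersionException e) {
                // A concurrent update won; retry against fresh data.
            }
        }
    }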

But again, a counter is a poor way to do this.

> I can definitely run another service whose sole responsibility is to
> clean up empty znodes (an explicit cleaner of persistent nodes acting as
> "ephemeral parents"), but in my use case, there could be thousands of
> these znodes. I was just concerned about a single point of failure with
> that approach. Of course, I can run a couple of those in parallel, all
> watching different sets of znodes; it's just added complexity. If I can't
> avoid it, I can't, but I'm just trying to exhaust options first.
>

There are some really pretty simple approaches available here.  Look at
some examples of ZK usage.