Re: Use cases for ZooKeeper
Ted Dunning 2012-01-13, 02:18
I think I have a bit of it written already.
It doesn't use Curator and I think you could simplify it substantially if
you were to use it. Would that help?
On Thu, Jan 12, 2012 at 12:52 PM, Jordan Zimmerman wrote:
> Ted - are you interested in writing this on top of Curator? If not, I'll
> give it a whack.
> On 1/5/12 12:50 AM, "Ted Dunning" <[EMAIL PROTECTED]> wrote:
> >Jordan, I don't think that leader election does what Josh wants.
> >I don't think that consistent hashing is particularly good for that either
> >because the loss of one node causes the sequential state for lots of
> >entities to move even among nodes that did not fail.
> >What I would recommend is a variant of micro-sharding. The key space is
> >divided into many micro-shards. Then nodes that are alive claim the
> >micro-shards using ephemerals and proceed as Josh described. On loss of a
> >node, the shards that node was handling should be claimed by the remaining
> >nodes. When a new node appears or new work appears, it is helpful to
> >direct nodes to effect a hand-off of traffic.
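For concreteness, a minimal sketch of the ephemeral-claim step against the raw
ZooKeeper API might look like the following; the /shards/<id>/owner layout and
the class name are invented for illustration:

    import org.apache.zookeeper.CreateMode;
    import org.apache.zookeeper.KeeperException;
    import org.apache.zookeeper.ZooDefs;
    import org.apache.zookeeper.ZooKeeper;

    // Each live node tries to claim unowned micro-shards by creating an
    // ephemeral znode per shard; the claim vanishes if the node dies,
    // freeing the shard for the surviving nodes to pick up.
    public class ShardClaimer {
        private final ZooKeeper zk;
        private final String nodeId;

        public ShardClaimer(ZooKeeper zk, String nodeId) {
            this.zk = zk;
            this.nodeId = nodeId;
        }

        // Returns true if this node now owns the given micro-shard.
        public boolean tryClaim(int shardId) throws Exception {
            String path = "/shards/" + shardId + "/owner"; // hypothetical layout
            try {
                zk.create(path, nodeId.getBytes(),
                          ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
                return true;
            } catch (KeeperException.NodeExistsException e) {
                return false; // another node already owns this shard
            }
        }
    }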
> >In my experience, the best way to implement shard balancing is with an
> >external master instance, much in the style of HBase or Katta. This
> >external master can be exceedingly simple and only needs to wake up on
> >various events like the loss of a node or a change in the set of live
> >shards. It can also wake up at intervals if desired, to backstop the normal
> >notifications or to allow small changes for certain kinds of balancing.
> >Typically, this only requires a few hundred lines of code.
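A rough skeleton of such a master, again with an invented /nodes path for the
live members and the rebalancing policy stubbed out:

    import java.util.List;
    import org.apache.zookeeper.WatchedEvent;
    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.ZooKeeper;

    // Skeleton of the simple external master: it re-reads the live node
    // list whenever membership changes and recomputes shard assignments.
    public class ShardMaster implements Watcher {
        private final ZooKeeper zk;

        public ShardMaster(ZooKeeper zk) {
            this.zk = zk;
        }

        public void start() throws Exception {
            // Passing 'this' as the watcher re-arms the watch each time.
            rebalance(zk.getChildren("/nodes", this));
        }

        @Override
        public void process(WatchedEvent event) {
            if (event.getType() == Event.EventType.NodeChildrenChanged) {
                try {
                    start(); // membership changed: wake up and rebalance
                } catch (Exception e) {
                    // real code would handle connection loss and retry
                }
            }
        }

        private void rebalance(List<String> liveNodes) {
            // Assign orphaned micro-shards to the least-loaded live nodes,
            // leaving assignments on surviving nodes untouched.
        }
    }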
> >This external master can, of course, be run on multiple nodes, and which
> >master is currently in control can be adjudicated with yet another leader
> >election. You can view this as a package of many leader elections, or as a
> >form of consistent hashing, but the distinctions are subtle and very
> >important. They include:
> >- there is a clean division of control between the master, which determines
> >who serves what, and the nodes that do the serving
> >- there is no herd effect because the master drives the assignments
> >- node loss causes the minimum amount of change of assignments since no
> >assignments to surviving nodes are disturbed. This is a major win.
> >- balancing is pretty good because there are many shards compared to the
> >number of nodes.
> >- the balancing strategy is highly pluggable.
> >This pattern would make a nice addition to Curator, actually. It comes up
> >repeatedly in different contexts.
> >On Thu, Jan 5, 2012 at 12:11 AM, Jordan Zimmerman
> ><[EMAIL PROTECTED]> wrote:
> >> OK - so these are two options for doing the same thing. You use a Leader
> >> Election algorithm to make sure that only one node in the cluster is
> >> operating on a work unit. Curator has an implementation (it's really
> >> a distributed lock with a slightly different API).
> >> -JZ
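For reference, a leader election with Curator's LeaderSelector might look
roughly like the sketch below. The connect string, the /leader/work-unit-1
path, and the class name are invented for illustration, and the packages are
the Netflix-era ones (the project later moved to Apache as org.apache.curator):

    import com.netflix.curator.framework.CuratorFramework;
    import com.netflix.curator.framework.CuratorFrameworkFactory;
    import com.netflix.curator.framework.recipes.leader.LeaderSelector;
    import com.netflix.curator.framework.recipes.leader.LeaderSelectorListener;
    import com.netflix.curator.framework.state.ConnectionState;
    import com.netflix.curator.retry.ExponentialBackoffRetry;

    // Only one node at a time is elected for the work unit; takeLeadership
    // blocks for as long as this node should keep operating on it.
    public class WorkUnitLeader implements LeaderSelectorListener {
        @Override
        public void takeLeadership(CuratorFramework client) throws Exception {
            // This node is now the only one operating on the work unit.
            // Do the work here; returning relinquishes leadership.
        }

        @Override
        public void stateChanged(CuratorFramework client, ConnectionState newState) {
            // On SUSPENDED or LOST, stop working: leadership is no longer
            // guaranteed until the connection is re-established.
        }

        public static void main(String[] args) throws Exception {
            CuratorFramework client = CuratorFrameworkFactory.newClient(
                    "localhost:2181", new ExponentialBackoffRetry(1000, 3));
            client.start();
            LeaderSelector selector = new LeaderSelector(
                    client, "/leader/work-unit-1", new WorkUnitLeader());
            selector.start();
        }
    }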
> >> On 1/5/12 12:04 AM, "Josh Stone" <[EMAIL PROTECTED]> wrote:
> >> >Thanks for the response. Comments below:
> >> >
> >> >On Wed, Jan 4, 2012 at 10:46 PM, Jordan Zimmerman
> >> ><[EMAIL PROTECTED]> wrote:
> >> >
> >> >> Hi Josh,
> >> >>
> >> >> >Second use case: Distributed locking
> >> >> This is one of the most common uses of ZooKeeper. There are many
> >> >> implementations - one included with the ZK distro. Also, there is
> >> >>Curator:
> >> >> https://github.com/Netflix/curator
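As a sketch of what that can look like with Curator's InterProcessMutex; the
per-entity lock path and the timeout are arbitrary choices here, and the
package is again the Netflix-era one:

    import java.util.concurrent.TimeUnit;
    import com.netflix.curator.framework.CuratorFramework;
    import com.netflix.curator.framework.recipes.locks.InterProcessMutex;

    // Lock on a per-entity path so that only one node at a time
    // mutates that entity's data.
    public class EntityLock {
        public static void withLock(CuratorFramework client, String entityId,
                                    Runnable work) throws Exception {
            InterProcessMutex lock =
                    new InterProcessMutex(client, "/locks/" + entityId);
            if (lock.acquire(10, TimeUnit.SECONDS)) {
                try {
                    work.run(); // no other node holds this entity's lock
                } finally {
                    lock.release();
                }
            }
        }
    }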
> >> >>
> >> >> >First use case: Distributing work to a cluster of nodes
> >> >> This sounds feasible. If you give more details, I and others on this
> >> >> list can help more.
> >> >>
> >> >
> >> >Sure. I basically want to handle race conditions where two commands
> >> >that operate on the same data are received by my cluster of nodes
> >> >concurrently. One approach is to lock on the data that is affected by
> >> >the command (distributed lock). Another approach is to make sure that all of