
陈迪豪 2014-01-17, 03:12
Flavio Junqueira 2014-01-17, 12:12
陈迪豪 2014-01-17, 13:02
FPJ 2014-01-17, 13:18
kishore g 2014-01-17, 16:05
FPJ 2014-01-17, 16:15
kishore g 2014-01-17, 17:13
Ted Yu 2014-01-17, 17:17
Flavio Junqueira 2014-01-17, 19:17
Ted Yu 2014-01-17, 19:25
Ted Yu 2014-01-17, 20:34
kishore g 2014-01-17, 22:10
Ted Dunning 2014-01-17, 22:14
Ted Dunning 2014-01-17, 22:41
陈迪豪 2014-01-18, 04:33
Ted Yu 2014-01-17, 22:54
Stack 2014-01-18, 16:58
Ted Dunning 2014-01-19, 00:05
Ted Yu 2014-01-19, 01:11
Thawan Kooburat 2014-01-25, 02:49
RE: Where are we in ZOOKEEPER-1416

Thanks for the reply.
Do you mean we can only read all the changes at the point when
they are merged into a snapshot? That may not be good enough for
HBase, because we would like to know all the assignment
states when the HMaster gets the notification from ZK.

We know the ZK pattern and the way to use it. But for now, ZK
is not suitable for "state-machine" applications, right?
From: Thawan Kooburat [[EMAIL PROTECTED]]
Sent: Saturday, January 25, 2014 10:49 AM
Subject: Re: Where are we in ZOOKEEPER-1416

Sorry for responding very late to this thread. It seems like it has already
been settled in some way, so feel free to ignore this. I just need to add some
more detail about this JIRA, given that I am its owner now.

The main motivation behind ZK-1416 is to reduce the memory footprint required
to maintain a large number of watches on both the client and the server side
for our internal service discovery. The secondary goal is to simplify the
use cases where clients have to subscribe to all the data under a given
subtree. We have a working implementation in our branch, but we haven't
used it in production yet. The memory issue was partially solved by
ZOOKEEPER-1177. As for the second part, the existing watch API can be used,
but it took us a while to get rid of all the corner cases.
Our service discovery system uses deltas to make it efficient to publish
information to clients. So we store each delta as a new znode and periodically
merge them into a snapshot. If a client falls too far behind, it can always
read the snapshot. If you are required to see all the deltas, you may
purge/merge them only after they are consumed.
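The delta/snapshot scheme described above can be sketched in plain Java. This is an illustrative, self-contained sketch (not code from the ZooKeeper project or the branch mentioned above, and the names `DeltaMerge`/`mergeDeltas` are made up): each delta is modeled as a map of key-to-value updates, with a null value acting as a deletion tombstone, and merging replays the deltas in publication order on top of the base snapshot, which is also how a lagging client would catch up after reading the snapshot.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class DeltaMerge {
    // Apply deltas, in publication order, on top of a base snapshot.
    // A null value in a delta is a tombstone: the key is removed.
    static Map<String, String> mergeDeltas(Map<String, String> snapshot,
                                           List<Map<String, String>> deltas) {
        Map<String, String> merged = new LinkedHashMap<>(snapshot);
        for (Map<String, String> delta : deltas) {
            for (Map.Entry<String, String> e : delta.entrySet()) {
                if (e.getValue() == null) {
                    merged.remove(e.getKey());      // tombstone: drop the entry
                } else {
                    merged.put(e.getKey(), e.getValue()); // add or overwrite
                }
            }
        }
        return merged;
    }

    public static void main(String[] args) {
        // Base snapshot: two hypothetical region assignments.
        Map<String, String> snapshot = new LinkedHashMap<>();
        snapshot.put("region-1", "server-A");
        snapshot.put("region-2", "server-B");

        // Two deltas published after the snapshot was taken.
        List<Map<String, String>> deltas = new ArrayList<>();
        Map<String, String> d1 = new LinkedHashMap<>();
        d1.put("region-2", "server-C");             // reassignment
        deltas.add(d1);
        Map<String, String> d2 = new LinkedHashMap<>();
        d2.put("region-1", null);                   // region removed
        d2.put("region-3", "server-A");             // new region
        deltas.add(d2);

        System.out.println(mergeDeltas(snapshot, deltas));
        // prints {region-2=server-C, region-3=server-A}
    }
}
```

In this scheme, a client that has consumed all the deltas never needs the snapshot; a client that has fallen behind reads the latest snapshot and only replays the deltas published after it.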