Zoo Keep widely distributed?

wayne.rasmuss 2013-08-26, 21:02
Martin Kou 2013-08-27, 06:19
RE: Zoo Keep widely distributed?
I assume that when you say across LANs you mean different colos, although I
suppose what I'm about to say holds even if that assumption is incorrect. The
overall performance depends on your read:write ratio. Since reads are served
locally by whichever server a client is connected to, they don't cross colo
boundaries. You could also consider using observers to avoid the penalty of
coordinating across colos on every write to the ZooKeeper state.
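
If you do go the observer route, it's just configuration. A minimal zoo.cfg
sketch, assuming a hypothetical layout with three voters in colo A and two
observers in colo B (the hostnames are made up, not from this thread):

    # server list shared by all nodes; the voters live in colo A
    server.1=zk1.colo-a.example.com:2888:3888
    server.2=zk2.colo-a.example.com:2888:3888
    server.3=zk3.colo-a.example.com:2888:3888
    # observers in colo B: they serve local reads and forward writes,
    # but never vote, so they don't hold up the write quorum
    server.4=zk1.colo-b.example.com:2888:3888:observer
    server.5=zk2.colo-b.example.com:2888:3888:observer

    # additionally, in zoo.cfg on each observer node only:
    peerType=observer

Clients in colo B point at the colo B observers, get local reads, and only pay
the WAN round trip when they actually write.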

Consider reading this blog post by Camille Fournier:

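To make the read/write asymmetry concrete, here is a minimal sketch against the
standard ZooKeeper Java client (the connect string and znode path are
hypothetical): the read is answered by the server the session is attached to,
while each write has to be committed by a majority of the voting servers, which
is where cross-colo latency bites.

    import org.apache.zookeeper.CreateMode;
    import org.apache.zookeeper.ZooDefs;
    import org.apache.zookeeper.ZooKeeper;
    import org.apache.zookeeper.data.Stat;

    public class ColoDemo {
        public static void main(String[] args) throws Exception {
            // Connect to a server in the local colo (hypothetical hostname).
            ZooKeeper zk = new ZooKeeper("zk1.colo-b.example.com:2181", 30000, event -> { });

            // A write (create) is forwarded to the leader and must be acknowledged
            // by a majority of the voters, so it pays the WAN round trip.
            zk.create("/demo", "v1".getBytes(), ZooDefs.Ids.OPEN_ACL_UNSAFE,
                    CreateMode.PERSISTENT);

            // A read is answered by the server this session is connected to,
            // so it stays inside the local colo.
            Stat stat = new Stat();
            byte[] data = zk.getData("/demo", false, stat);
            System.out.println(new String(data));

            // Another quorum write, conditioned on the version just read.
            zk.setData("/demo", "v2".getBytes(), stat.getVersion());

            zk.close();
        }
    }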
> -----Original Message-----
> From: Martin Kou [mailto:[EMAIL PROTECTED]]
> Sent: 27 August 2013 07:19
> Subject: Re: Zoo Keep widely distributed?
> The latency will make your writes really slow - since each write operation
> would need to be confirmed by more than half of the servers in the cluster
> to succeed. Writes to ZooKeeper are also serialized, so you can't
> parallelize the writes in a single cluster either - so your write
> throughput will also be low.
> You can still get high throughput despite the high latency if you can
> partition your writes into multiple clusters though.
> Best Regards,
> Martin Kou
> On Mon, Aug 26, 2013 at 2:02 PM, wayne.rasmuss <
> > I think ZooKeeper looks very handy, but I would like to have a pretty
> > good idea if it can work well/at all across different LANs. I would
> > expect to have to establish a way for the members of the ensemble to
> > talk to each other, but having to open a bunch of ports to multiple
> > hosts would be a show stopper. Also, I'm wondering how the latency
> > will affect overall performance.
> >
> > --
> > View this message in context:
> > http://zookeeper-user.578899.n2.nabble.com/Zoo-Keep-widely-distributed-tp7579027.html
> > Sent from the zookeeper-user mailing list archive at Nabble.com.