ZooKeeper user mailing list: High availability backend services via zookeeper or TCP load balancer

howard chen 2013-02-26, 17:36
Jordan Zimmerman 2013-02-26, 18:27

Re: High availability backend services via zookeeper or TCP load balancer
You can definitely use ZK for this, as Jordan said. However, I would really
question whether writing client-side code for this, rather than using
something that is actually designed for load balancing (like haproxy), is the
better approach. It doesn't sound like you are creating long-lived connections
between these clients and services; you just want to send a request to an IP
address that corresponds to the LB for that request. Your client-side code is
probably going to be buggier and the setup/maintenance more complex than if
you use a simple load balancer. If you're already using ZK for a lot of other
things and it is really baked into all your clients, maybe this is the easiest
thing to do, but I wouldn't use ZK just for this purpose.

C
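
To make the contrast concrete: with a dedicated TCP load balancer in front
(haproxy behind a keepalived VIP, for example), the client-side code shrinks
to a plain connection to one well-known address. A minimal sketch, where the
VIP 10.0.0.100:9000 and the "ping" payload are hypothetical placeholders, not
details from the thread:

import java.io.OutputStream;
import java.net.Socket;

// Illustration of the "simple load balancer" path: the client knows nothing
// about the backend list and just connects to the LB's address; the LB picks
// a healthy backend.
public class LbClient {
    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket("10.0.0.100", 9000)) {   // hypothetical VIP:port
            OutputStream out = socket.getOutputStream();
            out.write("ping\n".getBytes());   // whatever the custom TCP protocol expects
            out.flush();
        }
    }
}
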
On Tue, Feb 26, 2013 at 1:27 PM, Jordan Zimmerman <[EMAIL PROTECTED]> wrote:

> Service Discovery is a good use-case for ZooKeeper. FYI - Curator has an
> implementation of this already:
>
>         https://github.com/Netflix/curator/wiki/Service-Discovery
>
> -Jordan
>
> On Feb 26, 2013, at 9:36 AM, howard chen <[EMAIL PROTECTED]> wrote:
>
> > Hi, I am new to ZK, and please forgive me if my question below is stupid :)
> >
> > We have custom-written servers (not public facing, only called by our
> > internal systems) which are distributed (TCP based, share nothing) and
> > currently run in AWS; with the help of ELB's TCP load balancing they are
> > reasonably fault-tolerant and we are happy with that.
> >
> > Now we need to move off AWS to save cost as our traffic grows.
> >
> > The problem is that we now need to maintain our own load balancers and
> > make them fault-tolerant ourselves (unlike ELB, where this is built in);
> > the expected technologies would be haproxy and keepalived.
> >
> > While thinking about this setup, I wondered: why not use ZK instead? Why
> > not maintain the list of currently available servers in ZK? My initial
> > algorithm for the internal clients would be:
> >
> > 1. Get the latest server list from ZK
> > 2. Hash the server list and pick one of the backends (the load balancing part)
> > 3. Call it
> > 4. If it fails, update ZK and increment the error count
> > 5. If the error count reaches a threshold, remove the backend from the server list
> > 6. So the other clients will not see the failing backend
> > 7. Periodically flush the error count so the backend has a chance to become active again
> >
> > Is my algorithm above valid? Any caveats when using ZK for this?
> >
> > Looking for your comment, thanks.
>
>
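
As an editorial illustration of the algorithm quoted above (steps 1 through
6), here is a minimal client-side sketch using Curator. The znode layout
(/backends/<host:port>), the error-count-in-the-znode convention, and the
threshold value are assumptions made for the example, not anything specified
in the thread.

import java.util.List;
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.retry.ExponentialBackoffRetry;

public class ZkBackendPicker {
    private static final int ERROR_THRESHOLD = 3;   // assumed threshold (step 5)
    private final CuratorFramework zk;

    public ZkBackendPicker(String connectString) {
        zk = CuratorFrameworkFactory.newClient(connectString,
                new ExponentialBackoffRetry(1000, 3));
        zk.start();
    }

    // Steps 1-2: fetch the live server list and hash the request key onto one backend.
    public String pickBackend(String requestKey) throws Exception {
        List<String> servers = zk.getChildren().forPath("/backends");
        if (servers.isEmpty()) {
            throw new IllegalStateException("no backends registered");
        }
        return servers.get(Math.floorMod(requestKey.hashCode(), servers.size()));
    }

    // Steps 4-6: bump the error count stored in the backend's znode and drop the
    // backend once the count crosses the threshold, so other clients stop seeing it.
    public void reportFailure(String server) throws Exception {
        String path = "/backends/" + server;
        byte[] data = zk.getData().forPath(path);
        int errors = (data == null || data.length == 0)
                ? 1 : Integer.parseInt(new String(data)) + 1;
        if (errors >= ERROR_THRESHOLD) {
            zk.delete().forPath(path);
        } else {
            zk.setData().forPath(path, Integer.toString(errors).getBytes());
        }
    }
}

Note that the read-modify-write on the error counter is not atomic: two
clients reporting failures at the same time can lose an update (a versioned
setData, a counter recipe, or per-client child nodes would be needed), and
step 7 (flushing the counts) is omitted entirely. Edge cases like these are
part of why the haproxy route suggested above tends to be simpler in practice.
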
Other replies in this thread:
howard chen 2013-02-27, 09:43
kishore g 2013-02-27, 16:01
Ted Dunning 2013-02-26, 20:55
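
Jordan's pointer to Curator's Service Discovery recipe is the ready-made
version of this idea. A minimal registration-and-lookup sketch follows; it
assumes the later Apache Curator package names rather than the Netflix ones
used on the linked wiki, and the service name, address, port, and base path
are placeholder values.

import java.util.Collection;
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.retry.ExponentialBackoffRetry;
import org.apache.curator.x.discovery.ServiceDiscovery;
import org.apache.curator.x.discovery.ServiceDiscoveryBuilder;
import org.apache.curator.x.discovery.ServiceInstance;

public class DiscoveryExample {
    public static void main(String[] args) throws Exception {
        CuratorFramework client = CuratorFrameworkFactory.newClient(
                "localhost:2181", new ExponentialBackoffRetry(1000, 3));
        client.start();

        // A backend registers itself under a shared base path.
        ServiceInstance<Void> me = ServiceInstance.<Void>builder()
                .name("backend")
                .address("10.0.0.12")   // hypothetical address and port
                .port(9000)
                .build();

        ServiceDiscovery<Void> discovery = ServiceDiscoveryBuilder.builder(Void.class)
                .client(client)
                .basePath("/services")
                .thisInstance(me)
                .build();
        discovery.start();

        // A caller asks for the currently registered instances of "backend".
        Collection<ServiceInstance<Void>> instances = discovery.queryForInstances("backend");
        for (ServiceInstance<Void> instance : instances) {
            System.out.println(instance.getAddress() + ":" + instance.getPort());
        }

        discovery.close();
        client.close();
    }
}

Because instances of the default dynamic service type are registered as
ephemeral znodes, a backend that crashes or loses its ZooKeeper session
disappears from lookups automatically, which replaces the manual error-count
bookkeeping sketched in the original question.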