Right, we're considering using only a single server for Kafka in each remote DC.  We'd run a standalone ZooKeeper instance on the same node as Kafka.  As long as Kafka, ZooKeeper, and the server itself stay up, Kafka should keep working.
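For reference, the co-located single-node setup could look roughly like this (just a sketch, assuming 0.8-era config names; paths and ports are illustrative, not from our actual deployment):

```properties
# zookeeper.properties -- standalone ZooKeeper on the same node as the broker
dataDir=/var/lib/zookeeper
clientPort=2181

# server.properties -- single Kafka broker pointing at the local ZooKeeper
broker.id=1
port=9092
log.dirs=/var/lib/kafka-logs
zookeeper.connect=localhost:2181
```

With a single broker there's of course no replication, so losing that node means losing the whole remote pipeline until it comes back.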

Yes, we'd normally use MirrorMaker to consume from the remote DCs into the main Kafka cluster.  We'd only fail over to direct cross-DC production of messages if the standalone remote Kafka broker were to die.
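For what it's worth, that MirrorMaker flow would just be the stock tool pointed at the two sides (a rough sketch, assuming the 0.8-era invocation; hostnames and the whitelist are made up):

```shell
# consumer.config would point at the remote DC's standalone broker/ZK, e.g.:
#   zookeeper.connect=remote-dc-host:2181
#   group.id=mirrormaker
# producer.config would point at the main cluster, e.g.:
#   metadata.broker.list=main-broker1:9092,main-broker2:9092

bin/kafka-run-class.sh kafka.tools.MirrorMaker \
  --consumer.config consumer.config \
  --producer.config producer.config \
  --whitelist='.*'
```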

Ok good to know!  I suppose there could be weirdness with un-acked requests, but this is a pretty weird setup anyway, and we'd just have to deal with duplicates.

Thanks for your response!  We're not sure if we want to go down this route yet, but it is good to know that it is an option.

-Andrew
On Nov 27, 2013, at 3:30 PM, Joel Koshy <[EMAIL PROTECTED]> wrote:

 