Hi Andrew,
This should be fine - if your remote DC Kafka broker goes down, the
producer should re-issue metadata requests through the load balancer,
which (based on my understanding of your topology) should then resolve
to the main DC's Kafka cluster.  The producer will then establish
connections to the main DC's brokers for subsequent sends. (I recall
from earlier in the list that you are using librdkafka - it should
behave similarly.)
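
For the Java producer (0.8) the relevant knobs would look roughly like
this - property names are from the 0.8 producer config, but the
load-balancer hostname is hypothetical:

```
# 0.8 producer config sketch -- hostname/port are hypothetical
metadata.broker.list=kafka-lb.remote-dc:9092

# back off briefly before retrying (and re-fetching metadata) after a failed send
retry.backoff.ms=100

# periodic metadata refresh; a negative value means refresh only on failure
topic.metadata.refresh.interval.ms=600000
```

librdkafka exposes equivalent settings, so the failover behavior should
be comparable on that side.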

I'm a bit unclear on your setup - by non-HA broker, do you mean non-HA
by virtue of it being a single broker with no replication?  You would
still need to register it with a ZooKeeper cluster, right?  Also,
where will the events ultimately be consumed? I'm assuming in the
main DC - in which case you would need to ship your Kafka logs from
the remote DC to the main DC anyway, correct?
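
If so, MirrorMaker is the usual tool for that - a sketch, assuming 0.8
and hypothetical config file paths:

```
# Run from the main DC; the consumer config points at the remote DC's
# ZooKeeper, the producer config at the main DC's brokers.
# (Both .properties file paths are hypothetical.)
bin/kafka-run-class.sh kafka.tools.MirrorMaker \
    --consumer.config remote-dc-consumer.properties \
    --producer.config main-dc-producer.properties \
    --whitelist '.*'
```

That would let the remote DC's broker act purely as a local buffer,
with the main DC cluster as the system of record.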


On Wed, Nov 27, 2013 at 12:47:01PM -0500, Andrew Otto wrote: