Ah, good question; we really should add this to the documentation.

We run a cluster per data center. All writes always go to the data-center-local
cluster. Replication to the aggregate clusters that provide the "world-wide"
view is done with MirrorMaker.
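For reference, that aggregation setup can be sketched roughly like this (hostnames, group id, and the topic whitelist are placeholders, and the exact invocation varies by Kafka version; this is the 0.8-era form):

```properties
# consumer.properties: points at the local (source) cluster
zookeeper.connect=zk-local-dc1:2181
group.id=mirror-maker-dc1-to-aggregate

# producer.properties: points at the aggregate (target) cluster
metadata.broker.list=broker-agg1:9092,broker-agg2:9092
```

```shell
# Run MirrorMaker, copying every topic from local to aggregate
bin/kafka-run-class.sh kafka.tools.MirrorMaker \
  --consumer.config consumer.properties \
  --producer.config producer.properties \
  --whitelist '.*'
```

One MirrorMaker instance per source datacenter, all producing into the aggregate cluster, gives you the combined view without any cross-DC coupling in the local write path.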

It is also fine to write to or read from a Kafka cluster in a remote colo,
though obviously you have to handle the case where the cluster is not
reachable due to a network failure.
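If you do produce across the WAN, the producer's retry settings are the main knobs for riding out transient link problems (a sketch with illustrative values using the 0.8-era config names; tune them to your link):

```properties
# producer.properties fragment (values illustrative, not recommendations)
# Retry a failed send a few more times before giving up
message.send.max.retries=5
# Back off longer between retries to let a flaky WAN link recover
retry.backoff.ms=500
```

Anything that exhausts its retries will surface as an error to the application, so the caller still needs a plan (buffer, drop, or fail) for a prolonged outage.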

Kafka is not designed to run as a single cluster spread across geographically
disparate colos, and you would see a few problems in that scenario. The
first is that, as you noted, the latency will be terrible, since each produce
request will block on the slowest response across all datacenters. This could
be avoided by lowering request.required.acks to 1, but that would weaken the
durability guarantees. The second problem is that Kafka will not remain
available in the presence of network partitions, so if the inter-datacenter
link failed, one datacenter would lose its cluster. Finally, we have not done
anything to optimize partition placement by colo, so you would not actually
get redundancy between colos: we would often place all replicas in a single
colo.
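To make the latency/durability tradeoff concrete, this is the relevant producer setting (0.8-era name, as used above):

```properties
# Wait for acknowledgement from all in-sync replicas: full durability,
# but produce latency is bounded by the slowest replica, which across
# colos means WAN round-trip time on every request.
request.required.acks=-1

# Wait only for the partition leader: no cross-DC wait on the write path,
# but a write that has not yet replicated can be lost if the leader fails.
request.required.acks=1
```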

On Tue, Jul 9, 2013 at 9:34 PM, Calvin Lei <[EMAIL PROTECTED]> wrote: