Is the replication factor system in Kafka 0.8 suitable for creating a single cluster that spans data centers (up to 3)?

I am looking for a setup where I don't lose messages and can effectively fail over to a different data center for processing if/when the primary goes down. If I read correctly, any message delivered to a Kafka broker will be copied to its replicas, and a producer can deliver messages to any broker in the same replica set.

Is that correct?
I am aware there are several ZooKeeper issues around multi-DC support which I need to sort out, so this question is specific to the Kafka portion.
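For context, here is roughly the producer configuration I had in mind for the durability side. This is only a sketch against the 0.8 producer settings (`metadata.broker.list`, `request.required.acks`); the broker hostnames are hypothetical placeholders, and `-1` means the producer waits for acknowledgement from all in-sync replicas before a send is considered successful:

```java
import java.util.Properties;

public class ProducerDurabilityConfig {
    public static Properties durableConfig() {
        Properties props = new Properties();
        // Hypothetical brokers, one per data center
        props.put("metadata.broker.list", "dc1-broker:9092,dc2-broker:9092,dc3-broker:9092");
        // -1: wait for acks from all in-sync replicas (strongest
        // durability option in the 0.8 producer)
        props.put("request.required.acks", "-1");
        return props;
    }

    public static void main(String[] args) {
        System.out.println("acks=" + durableConfig().getProperty("request.required.acks"));
    }
}
```

My assumption is that with `request.required.acks=-1` a successful send implies the message exists on every in-sync replica, which is what the fail-over story depends on.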

Note: my main consumer from Kafka will be Storm.
