The second method (replication across DCs) is not recommended.
The first setup would work provided the set of topics you are
mirroring from A->B is disjoint from the set of topics you are
mirroring from B->A (i.e., to avoid a mirroring loop).
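To illustrate the disjoint-whitelist point, a bidirectional MirrorMaker deployment might look like the sketch below. The topic-name prefixes (`dcA.`, `dcB.`), config file names, and stream count are illustrative assumptions, not anything prescribed by Kafka:

```shell
# Run in DC B: mirror topics that originate in DC A into cluster B.
# consumer-a.properties points at cluster A; producer-b.properties at cluster B.
bin/kafka-run-class.sh kafka.tools.MirrorMaker \
    --consumer.config consumer-a.properties \
    --producer.config producer-b.properties \
    --num.streams 4 \
    --whitelist 'dcA\..*'     # only topics produced in DC A

# Run in DC A: mirror topics that originate in DC B into cluster A.
bin/kafka-run-class.sh kafka.tools.MirrorMaker \
    --consumer.config consumer-b.properties \
    --producer.config producer-a.properties \
    --num.streams 4 \
    --whitelist 'dcB\..*'     # disjoint from the A->B whitelist, so no loop
```

Because the two whitelists match disjoint topic sets, a message mirrored from A to B can never match the B->A whitelist and be mirrored back.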
On Fri, Jun 28, 2013 at 5:29 PM, Yu, Libo <[EMAIL PROTECTED]> wrote:
> I can think of two failover strategies. I am not sure which one is the right way to go.
> First method: set up Kafka server A on cluster 1 and set up another server B on cluster 2.
> The two clusters are in different data centers. Use a customized MirrorMaker to sync between
> the two servers. Use one server in production and the other as a contingency. If
> server A goes down, server B will be used (this can be transparent to publishers/consumers).
> There may be a lag between the two servers before server A goes down. But after A is back,
> the customized MirrorMaker can sync the two, and eventually B will have all the data A had
> before the failure.
> Second method: set up one Kafka deployment spanning cluster 1 and cluster 2. When creating a topic,
> always use two replicas. For each partition, assign one replica to a broker in cluster 1
> and the other replica to a broker in cluster 2, so Kafka handles the syncing and failover
> for the two clusters. Is that the right (expected) way to use Kafka?
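For reference, the per-partition placement the second method describes could be expressed with a manual replica assignment at topic creation time. The broker IDs (1-2 in cluster 1, 101-102 in cluster 2), topic name, and ZooKeeper address below are hypothetical, and the flags are from the `kafka-topics.sh` tool, which may differ from the 0.8-era scripts:

```shell
# Each comma-separated entry is one partition; each colon-separated list is
# that partition's replica set. Every partition here keeps one replica on a
# cluster-1 broker (1 or 2) and one on a cluster-2 broker (101 or 102).
bin/kafka-topics.sh --create \
    --zookeeper zk1:2181 \
    --topic events \
    --replica-assignment 1:101,2:102,1:102,2:101
```

As the reply above notes, stretching one cluster's replication across data centers this way is not recommended; cross-DC latency sits directly on the replication path.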