Hi all,

We are using Kafka 0.7.2 in our cluster, with a customized partition
function in the producer.
For example, we compute the partition id from the user id in our logs.
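For reference, our partitioning rule is roughly like the sketch below (the class and method names are illustrative, not our actual code; it only shows the "partition id from user id" idea, outside of the Kafka Partitioner interface):

```java
public class UserIdPartitionSketch {
    // Illustrative sketch: derive a stable partition id from the user id,
    // the way our custom producer partitioner does.
    static int partitionFor(String userId, int numPartitions) {
        // Mask the sign bit rather than using Math.abs, which can
        // overflow for Integer.MIN_VALUE hash codes.
        return (userId.hashCode() & 0x7fffffff) % numPartitions;
    }

    public static void main(String[] args) {
        System.out.println(partitionFor("user-42", 8));
    }
}
```

The point is that the same user id always maps to the same partition, which is the property we lose when the mirror maker re-partitions randomly.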

But when we use the mirror maker to pull logs, we find that it uses
random partitioning to push the logs to the destination brokers.

As far as I can tell, we would have to decompress the logs on the consumer
side and re-partition them ourselves. So I am wondering if there is a good
way to preserve the partitioning rule in the mirror maker.


