I'm wondering if there's a good way to run a heterogeneous Kafka cluster (specifically, one whose nodes have different-sized disks). For instance, we might want a larger node to receive more messages than a smaller node.
I expect we could do something with a partitioner that has specific knowledge of the hosts in the cluster, but that feels messy, since the config would have to live on every producer client....
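To make the "partitioner with host knowledge" idea concrete, here is a rough sketch (not real Kafka producer code) of weighted partition selection. The partition-to-capacity mapping is entirely made up, and as noted above it would have to be configured on every producer, which is the messy part.

```python
import random

# Hypothetical mapping: partition id -> relative disk capacity of the
# broker leading that partition. This is the per-producer config that
# makes the approach feel messy.
PARTITION_WEIGHTS = {0: 1, 1: 1, 2: 2, 3: 2}

def pick_partition(rng=random):
    """Choose a partition, biased toward partitions on bigger brokers."""
    parts = list(PARTITION_WEIGHTS)
    weights = [PARTITION_WEIGHTS[p] for p in parts]
    return rng.choices(parts, weights=weights, k=1)[0]

# Seeded demo: partitions 2 and 3 should receive roughly twice the
# traffic of partitions 0 and 1.
rng = random.Random(42)
counts = {p: 0 for p in PARTITION_WEIGHTS}
for _ in range(6000):
    counts[pick_partition(rng)] += 1
print(counts)
```

This only skews message volume, of course; replica placement would still follow whatever assignment the brokers have.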
Just resource allocation issues. E.g., imagine an existing Kafka cluster with one machine spec, and then getting access to a few more hosts to augment the cluster, hosts which are newer and therefore have twice the disk storage. I'd like to add them to the cluster seamlessly, without having to replace everything en masse. So it would be nice for the newer machines to take proportionally more load, based on their relative storage.
Jason

On Fri, May 17, 2013 at 2:34 PM, Neha Narkhede <[EMAIL PROTECTED]> wrote:
In 0.8, you can create topics manually and explicitly specify the replica to broker mapping. Post 0.8, we can think of some more automated ways to deal with this (e.g., let each broker carry some kind of weight).
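The manual replica-to-broker mapping mentioned above can be scripted. Below is a rough Python sketch (broker ids and capacities are made up) that builds a disk-weighted assignment and prints it in the comma/colon format that the topic-creation tool's `--replica-assignment` flag accepts; treat the exact flag and tool name as an assumption, since they have varied across Kafka versions.

```python
from itertools import cycle

def weighted_assignment(capacities, num_partitions, replication_factor):
    """capacities: {broker_id: disk_size}. Returns a list of replica
    lists, one per partition, favoring brokers with bigger disks."""
    assert replication_factor <= len(capacities)
    # Build a broker ring where larger brokers appear proportionally
    # more often, then deal replicas around the ring.
    smallest = min(capacities.values())
    ring = []
    for broker, cap in sorted(capacities.items()):
        ring.extend([broker] * round(cap / smallest))
    ring = cycle(ring)
    assignment = []
    for _ in range(num_partitions):
        replicas = []
        while len(replicas) < replication_factor:
            b = next(ring)
            if b not in replicas:  # replicas of a partition must sit on distinct brokers
                replicas.append(b)
        assignment.append(replicas)
    return assignment

# Brokers 0 and 1 have 1 TB disks; broker 2 is a newer 2 TB host.
assign = weighted_assignment({0: 1, 1: 1, 2: 2},
                             num_partitions=8, replication_factor=2)
# Comma-separated partitions, colon-separated replicas per partition.
print(",".join(":".join(map(str, r)) for r in assign))
```

Broker 2 ends up hosting roughly twice as many replicas as either smaller broker, which approximates the "broker weight" idea until something automated exists.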
Jun

On Fri, May 17, 2013 at 2:29 PM, Jason Rosenberg <[EMAIL PROTECTED]> wrote:
Have you thought about integrating Kafka into a distributed resource management framework like Hadoop YARN (which would probably leverage HDFS) or Mesos?

On May 23, 2013 11:31 PM, "Neha Narkhede" <[EMAIL PROTECTED]> wrote: