Wanted to check if there is any known limit on the number of topics in a Kafka cluster? I want to design a system with, say, 5k topics and multi-threaded consumers reading messages from these topics. Does anyone have experience with such a large topic count? I see on Kafka's page a throughput test w.r.t. topic count, but the maximum topic count tested is 50.
We've seen big performance degradation when we tested 1024 topics, so we opted for a much smaller topic count (< 100).
On the read side, I think performance is largely driven by the ability of the operating system to effectively cache access to #partitions × #topics log files. Clearly, if you divide your available memory across 1024 topics you'll have less in the file cache per topic than you would with only 3 topics (for example).
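To make that cache math concrete, here's a minimal back-of-the-envelope sketch. The memory size (32 GB) and partitions-per-topic (8) are made-up example numbers, not measurements from any real cluster:

```python
# Rough estimate of OS page cache available per partition log file,
# assuming the cache is spread evenly across all partitions.
# All numbers below are hypothetical examples, not benchmarks.

def cache_per_partition_mb(page_cache_mb, topics, partitions_per_topic):
    """Evenly divide the available page cache across all partition logs."""
    total_partitions = topics * partitions_per_topic
    return page_cache_mb / total_partitions

# Say the broker has ~32 GB of RAM free for the page cache,
# and each topic has 8 partitions:
for topics in (3, 100, 1024, 5000):
    mb = cache_per_partition_mb(32 * 1024, topics, 8)
    print(f"{topics:>5} topics -> ~{mb:,.1f} MB of cache per partition")
```

With the same hardware, 5000 topics leaves under a megabyte of cache per partition, versus over a gigabyte at 3 topics, which is consistent with the degradation reported at 1024 topics above.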
Matthew Rathbone 2013-03-27, 16:49