I use the Kafka 0.8 high-level consumer to read messages from a topic stream, with 3 replicas and 10 partitions.
When I read the stream with 10 threads and run for some time (an hour or a day),
some threads block at m_stream.iterator().hasNext(), even though the partition still has lots of messages.
I checked the consumer's fetch.message.max.bytes and the broker's message.max.bytes, and no
message is bigger than these values.
The consumer configuration is:

props.put("zookeeper.session.timeout.ms", "4000");
props.put("zookeeper.sync.time.ms", "200");
props.put("auto.commit.interval.ms", "1000");

Please give me some suggestions on how to avoid the consumer blocking.
Is there a configuration parameter that can fix this problem?
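One parameter worth knowing about here is consumer.timeout.ms: by default the 0.8 high-level consumer blocks indefinitely in hasNext() when no message is available, but setting this property makes hasNext() throw kafka.consumer.ConsumerTimeoutException after the given time, so a stuck thread can at least notice and react. This is not a guaranteed fix for the underlying fetch stall, just a way to avoid blocking forever. A minimal sketch of the configuration (the 10000 ms value is only an example):

```java
import java.util.Properties;

public class ConsumerProps {
    public static void main(String[] args) {
        // Configuration from the question above, plus consumer.timeout.ms.
        Properties props = new Properties();
        props.put("zookeeper.session.timeout.ms", "4000");
        props.put("zookeeper.sync.time.ms", "200");
        props.put("auto.commit.interval.ms", "1000");
        // With this set, m_stream.iterator().hasNext() throws
        // kafka.consumer.ConsumerTimeoutException instead of blocking forever,
        // so the consuming thread can log, retry, or rebalance.
        props.put("consumer.timeout.ms", "10000"); // example value: 10 s

        System.out.println(props.getProperty("consumer.timeout.ms"));
    }
}
```

In the consuming loop you would then wrap the hasNext() call in a try/catch for ConsumerTimeoutException and decide whether to continue polling or restart the consumer.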
Since you have a cluster, why not distribute the consumers across different nodes instead of using threads? I think that's the only way to scale up with Kafka. A question here: if there are more and more high-level consumers, does ZooKeeper become a bottleneck? On Tue, Nov 12, 2013 at 9:27 PM, Jun Rao <[EMAIL PROTECTED]> wrote: