I then opened a console producer and a console consumer. After I typed a few lines into the producer, the two Kafka brokers that should hold the two replicas started throwing errors to their logs. The only way I can get Kafka back to normal is to delete all of the topic data in Kafka and in ZooKeeper and restart.

The errors are:

broker1:
2014-06-17/01:40:32.137/PDT ERROR [kafka-processor-9092-5]: kafka.network.Processor - Closing socket for /10.101.4.218 because of error
kafka.common.KafkaException: This operation cannot be completed on a complete request.
        at java.lang.Thread.run(Thread.java:744)

broker2:
2014-06-17/01:40:29.127/PDT WARN [ReplicaFetcherThread-0-215]: kafka.consumer.SimpleConsumer - Reconnect due to socket error: null
The network is 10GigE and so far has given no issues; I think it is extremely unlikely to be a network problem (all ports are open and all communication happens on an internal LAN).
I normally run consumers and producers on the nodes where the brokers are running, and they consume and produce data at high volume between the nodes. While doing this test, however, I was not running any producers or consumers other than the test kafka-console-producer and kafka-console-consumer.

On Tue, Jun 17, 2014 at 4:28 PM, Jun Rao <[EMAIL PROTECTED]> wrote:
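For reference, the test setup described above can be sketched with the stock console tools. This is a hypothetical reconstruction: the topic name, host names, and partition count are placeholders, not taken from the report, and the flags follow the Kafka 0.8.x-era CLI:

```shell
# Create a topic with a replication factor of 2, so two brokers each hold a
# replica (topic name, hosts, and partition count are assumed placeholders):
bin/kafka-topics.sh --create --zookeeper zk1:2181 \
  --topic test --partitions 1 --replication-factor 2

# Terminal 1: console producer; type a few lines, then watch the broker logs:
bin/kafka-console-producer.sh --broker-list broker1:9092,broker2:9092 --topic test

# Terminal 2: console consumer reading the same topic:
bin/kafka-console-consumer.sh --zookeeper zk1:2181 --topic test --from-beginning
```

With this setup, the errors above would appear on the two brokers holding the replicas shortly after the first messages are produced.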