I run 'kafka-0.7.2' on a single node and today I got some error logs below.
...
[2012-12-26 08:51:18,496] INFO End registering broker topic /brokers/topics/jnits/1 (kafka.server.KafkaZooKeeper)
[2012-12-26 08:51:19,103] INFO Closing socket connection to /172.18.116.52. (kafka.network.Processor)
[2012-12-26 08:51:18,416] ERROR OOME with size 313705 (kafka.network.BoundedByteBufferReceive)
java.lang.OutOfMemoryError: Java heap space
[2012-12-26 08:51:18,341] ERROR OOME with size 317477 (kafka.network.BoundedByteBufferReceive)
java.lang.OutOfMemoryError: Java heap space
[2012-12-26 08:51:19,178] ERROR Closing socket for /172.18.0.34 because of error (kafka.network.Processor)
java.lang.OutOfMemoryError: Java heap space
[2012-12-26 08:51:19,178] ERROR Closing socket for /172.18.116.46 because of error (kafka.network.Processor)
java.lang.OutOfMemoryError: Java heap space
[2012-12-26 08:51:19,375] ERROR OOME with size 269336 (kafka.network.BoundedByteBufferReceive)
java.lang.OutOfMemoryError: Java heap space
[2012-12-26 08:51:19,375] ERROR Closing socket for /172.18.113.38 because of error (kafka.network.Processor)
java.lang.OutOfMemoryError: Java heap space
...
The size of each message is about 400KB. Could it be that the messages are produced too fast, or that the disks cannot be written at such a rate (about 33 MB/s per disk)? Can anyone help me explain this? Thanks a lot. *Xingcan*
What happens if you change the consumer to write to disk or to the console (if the data is text)? Do you still see the issue? I'm wondering whether it's your algorithms that are taking up so much memory. Before you start tweaking options randomly, I would start with the simplest case.
________________________________________
From: Jun Rao [[EMAIL PROTECTED]]
Sent: Thursday, December 27, 2012 1:25 AM
To: [EMAIL PROTECTED]
Cc: Kafka-users
Subject: Re: A "Java heap space" question
400KB messages are relatively large. How many messages are you sending per sec? How big is your JVM heap? You may need a bigger heap size.
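For a rough sense of the numbers involved, here is a back-of-the-envelope sketch. Each produce request is read fully into a heap buffer before it is handled (that is what the "OOME with size ..." lines are reporting), so many in-flight ~400KB requests add up quickly. The in-flight request count below is an assumption for illustration, not something measured in this thread.

    // Back-of-the-envelope heap estimate (illustrative assumptions, not measurements).
    public class HeapEstimate {
        public static void main(String[] args) {
            long messageBytes = 400L * 1024;   // ~400KB per message, as described above
            int inFlightRequests = 200;        // assumed concurrent in-flight produce requests
            long requestBuffers = messageBytes * inFlightRequests;
            System.out.printf("Request buffers alone: ~%d MB%n", requestBuffers / (1024 * 1024));
            // ~80 MB just for request buffers; add log and socket buffers plus GC headroom,
            // and a small default heap (a few hundred MB) can hit OutOfMemoryError.
        }
    }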
On Wed, Dec 26, 2012 at 5:41 PM, xingcan <[EMAIL PROTECTED]> wrote:
Jun, the peak sending rate is about 200 messages/s (there are 3 SAS disks for 3 partitions). We use a sync producer because we cannot tolerate losing messages. The truth is that our message production rate has been increasing recently, and Kafka had been running perfectly for a long time before this. I'll use a bigger heap size and monitor it for a while. At this production rate and message size, is there any advice on the config files (e.g. socket buffer) that you can give me? Thanks.
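(For reference, a minimal sketch of what a 0.7-style sync producer setup might look like in Java; the property names, values, and topic below are assumptions from memory to verify against the producer config docs, not the actual settings used here.)

    import java.util.Properties;
    import kafka.javaapi.producer.Producer;
    import kafka.javaapi.producer.ProducerData;
    import kafka.producer.ProducerConfig;

    // Minimal sync-producer sketch (Kafka 0.7-style API; names are assumptions to verify).
    public class SyncProducerExample {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("zk.connect", "localhost:2181");            // 0.7 producers discover brokers via ZooKeeper
            props.put("serializer.class", "kafka.serializer.StringEncoder");
            props.put("producer.type", "sync");                   // synchronous send, no async buffer drops
            props.put("buffer.size", String.valueOf(512 * 1024)); // socket buffer, sized above the ~400KB messages
            props.put("max.message.size", String.valueOf(1024 * 1024));

            Producer<String, String> producer = new Producer<String, String>(new ProducerConfig(props));
            producer.send(new ProducerData<String, String>("jnits", "payload-of-about-400KB"));
            producer.close();
        }
    }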
Then, compression won't help. Try increasing the heap size. If that doesn't help, you may need to use more brokers.
On Thu, Dec 27, 2012 at 10:26 PM, xingcan <[EMAIL PROTECTED]> wrote: