Kafka >> mail # user >> A "Java heap space" question


Earlier messages in this thread (collapsed):
  xingcan   2012-12-27, 01:42
  Jun Rao   2012-12-27, 06:25
  xingcan   2012-12-28, 02:12

Re: A "Java heap space" question
Are messages compressed? If not, compressing them could save Java heap
space.

Thanks,

Jun
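
If the ~400KB payloads are compressible, gzip shrinks both the request
buffers the broker allocates and the bytes written to the three disks. In
the 0.7-era producer this is a config-only change; a minimal sketch,
assuming the 0.7.x property names (verify them against the producer config
shipped with your release):

    # sketch of a 0.7.x sync-producer config (names assumed from the
    # 0.7-era defaults; check your own producer.properties)
    broker.list=0:localhost:9092      # broker_id:host:port, or use zk.connect=...
    serializer.class=kafka.serializer.DefaultEncoder
    producer.type=sync                # matches the sync producer used in this thread
    compression.codec=1               # 1 = gzip, 0 = none (the default)
    compressed.topics=jnits           # compress only this topic; leave empty to compress all
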
On Thu, Dec 27, 2012 at 6:11 PM, xingcan <[EMAIL PROTECTED]> wrote:

> Jun,
> The peak sending rate is about 200 messages/s (there are 3 SAS disks for 3
> partitions). We use a sync producer because we cannot tolerate dropping
> messages. The truth is that our message producing rate has been increasing
> these days, and Kafka had been running perfectly for a long time before. I'll
> use a bigger heap size and monitor it for a period. At such a producing rate
> and message size, is there any advice on config settings (e.g. socket buffer)
> you can give me? Thanks.
>
>
>
> 2012/12/27 Jun Rao <[EMAIL PROTECTED]>
>
> > 400KB messages are relatively large. How many messages are you sending
> > per sec? How big is your JVM heap? You may need a bigger heap size.
> >
> > Thanks,
> >
> > Jun
> >
> > On Wed, Dec 26, 2012 at 5:41 PM, xingcan <[EMAIL PROTECTED]> wrote:
> >
> > > Hi,
> > >
> > > I run 'kafka-0.7.2' on a single node and today I got some error logs below.
> > >
> > > ...
> > > [2012-12-26 08:51:18,496] INFO End registering broker topic /brokers/topics/jnits/1 (kafka.server.KafkaZooKeeper)
> > > 161557 [2012-12-26 08:51:19,103] INFO Closing socket connection to /172.18.116.52. (kafka.network.Processor)
> > > 161558 [2012-12-26 08:51:18,416] ERROR OOME with size 313705 (kafka.network.BoundedByteBufferReceive)
> > > 161559 java.lang.OutOfMemoryError: Java heap space
> > > 161560 [2012-12-26 08:51:18,341] ERROR OOME with size 317477 (kafka.network.BoundedByteBufferReceive)
> > > 161561 java.lang.OutOfMemoryError: Java heap space
> > > 161562 [2012-12-26 08:51:19,178] ERROR Closing socket for /172.18.0.34 because of error (kafka.network.Processor)
> > > 161563 java.lang.OutOfMemoryError: Java heap space
> > > 161564 [2012-12-26 08:51:19,178] ERROR Closing socket for /172.18.116.46 because of error (kafka.network.Processor)
> > > 161565 java.lang.OutOfMemoryError: Java heap space
> > > 161566 [2012-12-26 08:51:19,375] ERROR OOME with size 269336 (kafka.network.BoundedByteBufferReceive)
> > > 161567 java.lang.OutOfMemoryError: Java heap space
> > > 161568 [2012-12-26 08:51:19,375] ERROR Closing socket for /172.18.113.38 because of error (kafka.network.Processor)
> > > 161569 java.lang.OutOfMemoryError: Java heap space
> > > ...
> > >
> > > The size of each message is about 400KB. Maybe the messages are produced
> > > too fast, or the disks cannot be written to at such a rate (about
> > > 33MB/s/disk)? Can anyone help me explain this? Thanks a lot.
> > > --
> > > *Xingcan*
> > >
> >
>
>
>
> --
> *Xingcan*
>
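
On the bigger-heap and socket-buffer questions quoted above: in the 0.7.x
distribution the broker heap is normally set through the JVM options in
bin/kafka-run-class.sh (a default in the -Xmx512M range), or by exporting
KAFKA_OPTS before running kafka-server-start.sh; the exact line varies, so
check your own scripts. A rough sketch of the broker-side settings that
matter for ~400KB messages, using the 0.7-era property names (verify them
against the config/server.properties shipped with your release):

    # server.properties sketch (0.7-era names assumed; values illustrative)
    socket.send.buffer=1048576           # per-socket TCP send buffer
    socket.receive.buffer=1048576        # per-socket TCP receive buffer
    max.socket.request.bytes=104857600   # cap on a single request; must exceed the largest produce request
    max.message.size=1000000             # largest single message the broker accepts

    # heap: raise the -Xmx value in bin/kafka-run-class.sh, e.g. -Xmx3G, or
    # export KAFKA_OPTS="-Xmx3G -server ..." before starting the broker

Each incoming request is buffered on the broker heap before it is handled
(that appears to be what the "OOME with size ..." lines from
kafka.network.BoundedByteBufferReceive report), so the heap has to cover
roughly message size times the number of requests in flight at once. For
scale: 200 messages/s x ~400KB is about 80 MB/s aggregate, or roughly
26-27 MB/s per disk across the three partitions, in the same ballpark as
the ~33MB/s/disk figure quoted above.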

 
Later messages in this thread (collapsed):
  xingcan   2012-12-28, 06:27
  Jun Rao   2012-12-28, 22:32