Kafka user mailing list - Out of memory exception


Re: Out of memory exception
This is a common JVM tuning scenario. You should adjust the values based on
empirical data. See the heap size section of
http://docs.oracle.com/cd/E21764_01/web.1111/e13814/jvm_tuning.htm
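As a minimal sketch of how to gather that empirical data (the log and dump paths below are placeholders, not settings from this thread; the flags themselves are standard HotSpot options on the JDK 6/7 JVMs current at the time):

    # GC logging plus a heap dump on OOM, to be appended to the broker JVM
    # options (for example via KAFKA_OPTS when using the stock start scripts).
    GC_DIAG_OPTS="-verbose:gc -Xloggc:/var/log/kafka/gc.log \
      -XX:+PrintGCDetails -XX:+PrintGCTimeStamps \
      -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/log/kafka"

The GC log shows how full the heap actually gets between collections, and the heap dump shows what was occupying it when the OutOfMemoryError fired.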
On Aug 30, 2013 10:40 PM, "Vadim Keylis" <[EMAIL PROTECTED]> wrote:

> I followed the LinkedIn setup example in the docs and allocated 3G for the heap size.
>
> java -Xmx3G -Xms3G -server -XX:+UseCompressedOops -XX:+UseParNewGC
> -XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled
> -XX:+CMSScavengeBeforeRemark -XX:+DisableExplicitGC -
>
>  After a day of normal operation I found the following errors flooding the
> error log. I can increase the heap size, that's not a problem, but I want to
> be able to properly estimate how much memory Kafka will use so that I can
> predict system limits as we add topics, consumers, etc.
>
> Thanks so much in advance,
> Vadim
>
> [2013-08-29 23:57:14,072] ERROR [ReplicaFetcherThread--1-6], Error due to  (kafka.server.ReplicaFetcherThread)
> java.lang.OutOfMemoryError: Java heap space
>         at sun.nio.cs.StreamEncoder.write(StreamEncoder.java:114)
>         at java.io.OutputStreamWriter.write(OutputStreamWriter.java:203)
>         at java.io.Writer.write(Writer.java:140)
>         at org.apache.log4j.helpers.QuietWriter.write(QuietWriter.java:48)
>         at org.apache.log4j.WriterAppender.subAppend(WriterAppender.java:302)
>         at com.ordersets.utils.logging.CustodianDailyRollingFileAppender.subAppend(CustodianDailyRollingFileAppender.java:299)
>         at org.apache.log4j.WriterAppender.append(WriterAppender.java:160)
>         at org.apache.log4j.AppenderSkeleton.doAppend(AppenderSkeleton.java:251)
>         at org.apache.log4j.helpers.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:66)
>         at org.apache.log4j.Category.callAppenders(Category.java:206)
>         at org.apache.log4j.Category.forcedLog(Category.java:391)
>         at org.apache.log4j.Category.warn(Category.java:1060)
>         at kafka.utils.Logging$class.warn(Logging.scala:88)
>         at kafka.utils.ShutdownableThread.warn(ShutdownableThread.scala:23)
>         at kafka.server.AbstractFetcherThread.processFetchRequest(AbstractFetcherThread.scala:100)
>         at kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:88)
>         at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:51)
>
>
> [2013-08-30 10:21:44,932] ERROR [Kafka Request Handler 4 on Broker 5], Exception when handling request (kafka.server.KafkaRequestHandler)
> java.lang.NullPointerException
>         at kafka.api.FetchResponsePartitionData.<init>(FetchResponse.scala:46)
>         at kafka.api.FetchRequest$$anonfun$2.apply(FetchRequest.scala:158)
>         at kafka.api.FetchRequest$$anonfun$2.apply(FetchRequest.scala:156)
>         at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:233)
>         at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:233)
>         at scala.collection.immutable.HashMap$HashMap1.foreach(HashMap.scala:178)
>         at scala.collection.immutable.HashMap$HashTrieMap.foreach(HashMap.scala:347)
>         at scala.collection.immutable.HashMap$HashTrieMap.foreach(HashMap.scala:347)
>         at scala.collection.TraversableLike$class.map(TraversableLike.scala:233)
>         at scala.collection.immutable.HashMap.map(HashMap.scala:38)
>         at kafka.api.FetchRequest.handleError(FetchRequest.scala:156)
>         at kafka.server.KafkaApis.handle(KafkaApis.scala:78)
>         at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:42)
>         at java.lang.Thread.run(Thread.java:662)
> [2013-08-30 10:05:17,214] ERROR [Kafka Request Handler 6 on Broker 5], Exception when handling request (kafka.server.KafkaRequestHandler)
> java.lang.NullPointerException
>         at kafka.api.FetchResponsePartitionData.<init>(FetchResponse.scala:46)
>         at kafka.api.FetchRequest$$anonfun$2.apply(FetchRequest.scala:158)

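The question above asks how to predict memory use as topics and consumers are added. A rough back-of-the-envelope sketch follows; the partition count and fetch size are illustrative assumptions, not numbers from this thread, and the only point is that replica-fetcher buffers scale with the number of partitions a broker follows, each bounded by replica.fetch.max.bytes:

    # Hypothetical sizing sketch -- both values below are assumptions.
    PARTITIONS_FOLLOWED=1000                 # partitions this broker replicates as a follower
    REPLICA_FETCH_MAX_BYTES=$((1024 * 1024)) # replica.fetch.max.bytes (1 MB)
    echo "replica fetch buffers ~ $(( PARTITIONS_FOLLOWED * REPLICA_FETCH_MAX_BYTES / 1024 / 1024 )) MB"

On top of such fetch buffers sit index mmaps, socket buffers, and ordinary JVM overhead, which is why measuring with GC logs, as suggested in the reply, remains the reliable way to size the heap.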