Re: seeing poor consumer performance in 0.7.2
Thanks Jun, your suggestion helped me quite a bit.

Since earlier this week I've been able to work out the issues (at least it
seems that way for now). My consumer is now processing messages at roughly
the rate they are being produced, with an acceptable amount of end-to-end
lag. Here is an overview of the issues I had; let me know if the way I
resolved them makes sense:

   - Many serialization errors in the producers. Fixing these eliminated
   what we had previously perceived as lost or delayed messages.
   - One of the producers was not accessible through the VIP we were
   sending messages to. There was also a bug in the health check that caused
   the NetScaler to drop one of the producers. Both of these contributed to
   sending too many messages to one producer, which filled up its blocking
   queues.
   - I had to increase queue.size on the producers several times (currently
   at 320k). This may now be unnecessarily high given my next point.
   - Increased batch.size on the producers several times. The last increase
   (batch.size=1600) is what finally got things going at the rate I am happy
   with (see the producer sketch after this list).
   - Decreased num.partitions and log.flush.interval on the brokers from
   64/10k to 32/100 in order to lower the average flush time (we were
   previously always hitting the default flush interval, since no partition
   ever accumulated 10k messages). The flush times are currently under 100 ms
   (not sure if this is too low, but everything seems to be working); the
   average flush time was previously 1 second.
   - Increased fetch.size and queuedchunks.max on the consumers several
   times and ended up at 80MB/100k (see the consumer sketch after this
   list). This was before I made a bunch of the changes on the producer
   side, so these may be unnecessarily high as well.

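For reference, here is roughly what the producer configuration looks like
now, sketched in Java with the 0.7-style property names. The zk.connect
string, the serializer class, and the class name are placeholders for our
environment; queue.size and batch.size are the values mentioned above, and
the remaining property names are from memory, so double-check them against
the 0.7 docs. The broker-side changes (num.partitions=32,
log.flush.interval=100) live in server.properties, so they are not shown
here.

    import java.util.Properties;

    public class AsyncProducerSettings {
        public static void main(String[] args) {
            Properties props = new Properties();
            // Placeholders for our environment.
            props.put("zk.connect", "zk1:2181,zk2:2181,zk3:2181");
            props.put("serializer.class", "kafka.serializer.StringEncoder");
            // Async mode so messages are buffered and sent in batches.
            props.put("producer.type", "async");
            // The values I ended up with; possibly higher than necessary.
            props.put("queue.size", "320000");
            props.put("batch.size", "1600");
            // These properties would then be wrapped in a ProducerConfig
            // when constructing the producer.
        }
    }
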
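The consumer side looks roughly like this, with the same caveat that the
zk.connect string and group id are placeholders and the sizes may be higher
than they need to be:

    import java.util.Properties;

    public class ConsumerSettings {
        public static void main(String[] args) {
            Properties props = new Properties();
            // Placeholders for our environment.
            props.put("zk.connect", "zk1:2181,zk2:2181,zk3:2181");
            props.put("groupid", "my-consumer-group");
            // The values I ended up with (80MB fetches, 100k queued chunks);
            // these predate the producer-side fixes, so they may be too high.
            props.put("fetch.size", String.valueOf(80 * 1024 * 1024));
            props.put("queuedchunks.max", "100000");
            // These properties would then be wrapped in a ConsumerConfig
            // when creating the consumer connector.
        }
    }
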
Once again, thanks for all of the help. I'm curious to know which, if any,
of the changes I made were unnecessary.

Andrew
On Tue, Apr 23, 2013 at 7:53 AM, Jun Rao <[EMAIL PROTECTED]> wrote:
 