Yes exactly.

Lowering queuedchunks.max shouldn't help if the problem is what I
described. That option controls how many chunks the consumer has ready in
memory for processing. But we are hypothesizing that your problem is
actually that the individual chunks are just too large, leading to the
consumer spending a long time processing one partition before it gets
the next chunk.
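If the diagnosis above is right, the knob to turn would be the fetch size rather than the queue depth. A minimal sketch of what that could look like for an old-style (0.7-era) high-level consumer config; the ZooKeeper address, group id, and the 64 KB value are illustrative assumptions, not recommendations:

```java
import java.util.Properties;

public class ConsumerTuning {
    // Build consumer properties with a smaller fetch.size so each
    // chunk pulled from a partition is smaller, letting the consumer
    // rotate between partitions more often.
    public static Properties tunedConsumerProps() {
        Properties props = new Properties();
        props.put("zk.connect", "localhost:2181"); // hypothetical ZooKeeper address
        props.put("groupid", "example-group");     // hypothetical consumer group
        // Illustrative value: shrink each fetch request to 64 KB.
        props.put("fetch.size", String.valueOf(64 * 1024));
        return props;
    }

    public static void main(String[] args) {
        System.out.println(tunedConsumerProps().getProperty("fetch.size"));
    }
}
```

The tradeoff is more fetch requests per message consumed, so going too small hurts throughput.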

On Mon, Aug 26, 2013 at 11:18 AM, Ian Friedman <[EMAIL PROTECTED]> wrote: