We are looking at using Kafka 0.8-beta1 and the high-level consumer.

The Kafka 0.7 consumer supported backoff.increment.ms to avoid repeatedly
polling a broker that has no new data. It appears this property is no
longer supported in 0.8. What is the reason?

Instead, there is fetch.wait.max.ms, which is the maximum amount of time
the server will block before answering a fetch request if there isn't
sufficient data to immediately satisfy fetch.min.bytes.
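For reference, a minimal sketch of how these two properties might be set on a 0.8 high-level consumer config. The ZooKeeper address, group id, and the property values themselves are illustrative assumptions, not recommendations:

```java
import java.util.Properties;

public class FetchConfigSketch {
    // Illustrative 0.8 high-level consumer properties; host, group id,
    // and values are assumptions for the sake of the example.
    public static Properties consumerProps() {
        Properties props = new Properties();
        props.put("zookeeper.connect", "localhost:2181");
        props.put("group.id", "minute-topic-group");
        // The broker answers a fetch as soon as fetch.min.bytes of data
        // is available, or after fetch.wait.max.ms, whichever comes first.
        props.put("fetch.min.bytes", "1");
        props.put("fetch.wait.max.ms", "100");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(consumerProps().getProperty("fetch.wait.max.ms"));
    }
}
```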

We have different use cases where different producers produce messages at
regular intervals, e.g. every minute, every 20 minutes, once daily, or
once weekly. But once messages are produced, they need to be consumed and
processed as soon as possible.

In order to support these use cases, and to avoid frequent polling, it
feels like we need a very large value for fetch.wait.max.ms for the daily
and weekly topic consumers. I am looking for a best-practice tip here.

Will this keep the connection open between the consumer connector and the
broker for the full fetch.wait.max.ms duration? How will this affect other
consumers on the same machine that expect to consume messages on a
per-minute basis?

Secondly, from other discussions I have read that it's best to keep
consumer.timeout.ms=-1 for the high-level consumer. I was wondering in
which situations it is beneficial to handle ConsumerTimeoutException?
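For context, one situation where handling ConsumerTimeoutException seems useful is a batch-style consumer that should finish up and release resources once its topic goes quiet. Below is a sketch against the 0.8 high-level consumer API; the topic name, group id, and timeout value are assumptions, and running it requires the Kafka client jar plus a live ZooKeeper/broker:

```java
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.Properties;

import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.ConsumerIterator;
import kafka.consumer.ConsumerTimeoutException;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;

public class DailyTopicConsumer {
    public static void main(String[] args) {
        // Illustrative settings; host, group id, and timeout are assumptions.
        Properties props = new Properties();
        props.put("zookeeper.connect", "localhost:2181");
        props.put("group.id", "daily-report-group");
        // With consumer.timeout.ms > 0, iterator.hasNext() throws
        // ConsumerTimeoutException instead of blocking indefinitely.
        props.put("consumer.timeout.ms", "30000");

        ConsumerConnector connector =
                Consumer.createJavaConsumerConnector(new ConsumerConfig(props));
        Map<String, List<KafkaStream<byte[], byte[]>>> streams =
                connector.createMessageStreams(
                        Collections.singletonMap("daily-topic", 1));
        ConsumerIterator<byte[], byte[]> it =
                streams.get("daily-topic").get(0).iterator();
        try {
            while (it.hasNext()) {
                byte[] payload = it.next().message();
                // process the message here
            }
        } catch (ConsumerTimeoutException e) {
            // The topic has gone quiet: finish the batch, commit offsets,
            // and shut down cleanly.
            connector.commitOffsets();
            connector.shutdown();
        }
    }
}
```

With consumer.timeout.ms=-1 the iterator blocks forever, which suits long-lived streaming consumers; the exception-handling pattern above is for consumers that need a "no more data for now" signal.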
