Just wanted to clarify: topic.metadata.refresh.interval.ms applies to
producers, and matters mainly with ack = 0. (If ack = 1, a metadata
request is issued when this exception occurs; that said, even with ack > 0
the periodic metadata refresh is useful for picking up changes in the
number of available partitions.)
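To make the above concrete, here is a sketch of the relevant 0.8 producer
settings; the values are illustrative, not recommendations, and the default
shown for the refresh interval is my recollection of the 0.8 default:

```properties
# 0.8 producer config sketch (values illustrative)
metadata.broker.list=broker1:9092,broker2:9092

# ack semantics discussed above: 0 = fire-and-forget (no metadata
# refresh on error), 1 = wait for the leader's ack
request.required.acks=0

# periodic topic metadata refresh; 600000 ms (10 min) is, I believe,
# the default in 0.8
topic.metadata.refresh.interval.ms=600000
```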

For replica fetchers (Vadim's case) the exceptions will persist until the
new leader for the replica in question is elected, which should not take
long. Once the leader is elected, the controller sends an RPC to the new
leaders and followers and the above exceptions will go away.

Also, to answer your question: the "right" way to shut down a 0.8 cluster
is to use controlled shutdown. That will not eliminate the exceptions, but
they are mostly informative and non-fatal (i.e., the logging can probably
be improved a bit).
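For reference, a sketch of how controlled shutdown is invoked on 0.8; which
mechanism is available depends on the exact 0.8 build you are running, so
treat the flags and config names below as assumptions to verify against your
release:

```properties
# Option 1 (admin tool shipped with 0.8.0; zookeeper address and broker
# id here are placeholders):
#   bin/kafka-run-class.sh kafka.admin.ShutdownBroker \
#       --zookeeper zkhost:2181 --broker 1

# Option 2 (broker-side setting in later 0.8.x releases): enable
# controlled shutdown in server.properties so a normal SIGTERM
# triggers leadership migration before the broker exits
controlled.shutdown.enable=true
```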

On Fri, Jun 28, 2013 at 11:47 AM, David DeMaagd <[EMAIL PROTECTED]> wrote: