Re: Blocking Behavior of Metadata Refresh on New Producer
Yeah this is a good one to discuss.
Currently, send() will block under two conditions:
1. You are beginning a new batch and have run out of buffer memory, or
2. Regardless of block.on.buffer.full, the first request for each topic will
block on fetching the metadata that contains partition info for that topic.
The blocking for (2) is bounded by metadata.fetch.timeout.ms, so it won't
actually block forever (though it may block for a while).
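To make (2) concrete, here is a minimal sketch of a bounded metadata wait. The class and method names (TopicMetadata, awaitUpdate) are hypothetical stand-ins, not the real client internals; the point is just the timed-wait loop that keeps the first send from blocking past the configured timeout.

```java
// Sketch of a bounded metadata wait (hypothetical class names),
// mirroring how metadata.fetch.timeout.ms bounds the first send().
public class TopicMetadata {
    private boolean hasPartitions = false;

    // Called when a metadata response arrives with partition info.
    public synchronized void update() {
        hasPartitions = true;
        notifyAll();
    }

    // Block until partition info arrives or timeoutMs elapses.
    public synchronized boolean awaitUpdate(long timeoutMs) throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (!hasPartitions) {
            long remaining = deadline - System.currentTimeMillis();
            if (remaining <= 0) {
                return false; // timed out; the real send() would fail here
            }
            wait(remaining);
        }
        return true;
    }

    public static void main(String[] args) throws InterruptedException {
        TopicMetadata md = new TopicMetadata();
        // No fetch ever completes, so the wait gives up after ~100 ms.
        boolean ok = md.awaitUpdate(100);
        System.out.println(ok ? "metadata ready" : "timed out");
    }
}
```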
Let me describe the rationale for (2). Basically we want to avoid having the
client fetch and maintain the full set of partition info because it may be
biggish for very large clusters. We also want to avoid pre-configuring the
set of topics a client will use. This means we have to fetch dynamically.
We could introduce a separate queue to hold these requests while we fetch
metadata, but that would mess up our memory bounds and would be fairly
complex.
A user who wants truly non-blocking behavior at send time can avoid this by
calling partitionsFor(topic) at producer initialization time to fetch the
metadata for the topics it will use. That call will block, but it ensures
that subsequent sends to those topics won't.
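The warm-up pattern looks roughly like this. Since the real partitionsFor call needs a live cluster, the sketch below takes the fetch as a function argument; in real use you would pass producer::partitionsFor right after new KafkaProducer(...). The topic names and the stub fetch are made up for illustration.

```java
import java.util.Arrays;
import java.util.List;
import java.util.function.Function;

public class Warmup {
    // Fetch metadata for each topic up front so later send() calls
    // don't block on the first-send metadata fetch.
    static void warmUp(Function<String, List<Integer>> partitionsFor,
                       List<String> topics) {
        for (String topic : topics) {
            // Blocks here, once per topic, at init time instead of send time.
            List<Integer> partitions = partitionsFor.apply(topic);
            System.out.println(topic + " -> " + partitions.size() + " partitions");
        }
    }

    public static void main(String[] args) {
        // Stub standing in for a live producer's metadata fetch.
        Function<String, List<Integer>> stub = t -> Arrays.asList(0, 1, 2);
        warmUp(stub, Arrays.asList("events", "logs"));
    }
}
```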
For the case where all entries in metadata.broker.list are wrong, one option
we had discussed was sanity-checking the list when new KafkaProducer is
called, forcing the establishment of a connection to at least one broker in
the list so we could fail fast. The only downside is that in a test
environment you might bring up the client and server simultaneously, and
this check would enforce an ordering between the two.
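That fail-fast check could be sketched as a simple TCP reachability probe over the bootstrap list. This is a hypothetical illustration of the idea discussed above, not part of the real client; the demo uses a local ServerSocket to stand in for a live broker.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.Arrays;
import java.util.List;

public class BootstrapCheck {
    // Return true if at least one broker in the bootstrap list accepts
    // a TCP connection; a constructor could throw if this returns false.
    static boolean anyReachable(List<InetSocketAddress> brokers, int timeoutMs) {
        for (InetSocketAddress addr : brokers) {
            try (Socket s = new Socket()) {
                s.connect(addr, timeoutMs);
                return true; // one live broker is enough to proceed
            } catch (IOException e) {
                // unreachable; try the next entry in the list
            }
        }
        return false;
    }

    public static void main(String[] args) throws IOException {
        try (ServerSocket server = new ServerSocket(0)) { // stands in for a broker
            InetSocketAddress live =
                new InetSocketAddress("127.0.0.1", server.getLocalPort());
            InetSocketAddress dead =
                new InetSocketAddress("127.0.0.1", 1); // nothing listens here
            System.out.println("all wrong: "
                + anyReachable(Arrays.asList(dead), 200));
            System.out.println("one right: "
                + anyReachable(Arrays.asList(dead, live), 200));
        }
    }
}
```

The ordering problem mentioned above follows directly: this probe would fail if the client is constructed before any broker is listening.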
On Thu, Feb 13, 2014 at 11:00 AM, Guozhang Wang <[EMAIL PROTECTED]> wrote: