This should happen when there is a backlog of data larger than the
fetch size the consumer is using.
Also, just to be clear, this is something the client implementation
needs to handle, not something the user of the client needs to worry
about.
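To make the client-side handling concrete, here is a minimal sketch of trimming a trailing partial message from a 0.8 fetch response message set. The wire layout assumed here (an 8-byte offset followed by a 4-byte message size per entry) matches the 0.8 protocol docs; the function name itself is just illustrative:

```python
import struct

def strip_partial_message(message_set: bytes) -> bytes:
    """Return only the complete messages from a fetch response
    message set, dropping any trailing partial message.

    Each entry is: offset (int64) + message_size (int32) + message bytes.
    """
    pos = 0
    while True:
        # Need at least the 12-byte offset + size header.
        if len(message_set) - pos < 12:
            break
        (size,) = struct.unpack_from(">i", message_set, pos + 8)
        if len(message_set) - pos - 12 < size:
            # The last message was cut off at the fetch size limit.
            break
        pos += 12 + size
    return message_set[:pos]
```

A consumer would parse the returned bytes as usual and simply re-fetch from the offset of the dropped partial message on the next request.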
On Thu, Jun 27, 2013 at 12:29 PM, Vadim Keylis <[EMAIL PROTECTED]> wrote:
> Jay. I assume this problem exists in the consumer. How can this problem be triggered so I could test my high-level consumer?
> On Jun 26, 2013, at 9:21 AM, Jay Kreps <[EMAIL PROTECTED]> wrote:
>> Yeah, that is true. I thought I documented that, but looking at the
>> protocol docs, it looks like I didn't.
>> I agree this is kind of a pain in the ass. It was an important
>> optimization in 0.7 because we didn't know where the message
>> boundaries were, but in 0.8 we have a fast way to compute message
>> boundaries and in fact we normally don't give out partial messages. I
>> think this happens when you hit the size threshold of your fetch
>> request (e.g. 1MB): instead of searching for the nearest message
>> boundary we give you that chunk of log. I think we should consider
>> just fixing it entirely in the next release--the perf hit is pretty
>> minor and it is an annoyance and source of bugs for clients.
>> For now you have to handle it, so I added documentation to the protocol wiki.
>> On Wed, Jun 26, 2013 at 8:59 AM, Bob Potter <[EMAIL PROTECTED]> wrote:
>>> I'm developing a client for Kafka 0.8. It looks like a fetch response will
>>> sometimes end with a partial message. I understand why this might be the
>>> case, but it was unexpected and, as far as I can tell, undocumented.
>>> Is my understanding correct, or am I missing something?