I enabled TRACE logging on the producer and verified that it
successfully wrote the bytes out to the server; that write happened
after the last log flush on the server (where trace logging was also
enabled). So I'm quite certain the server is receiving the lost data
and then shutting down before flushing the last few messages it
received.
It seems like it would be an easy fix to update KafkaServer.shutdown to
first close all sockets, then do one final log flush that doesn't wait
for the flush interval, and then shut down.
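To make the ordering concrete, here is a toy model of that proposed
shutdown sequence. It is not Kafka's actual API; the class and method
names below are hypothetical and the "log" is just an in-memory buffer.
The point is only the ordering: once sockets are closed no new data can
arrive, so one unconditional flush before close persists the tail of
the log instead of waiting for the next flush interval.

```java
import java.io.BufferedWriter;
import java.io.IOException;
import java.io.StringWriter;

public class ShutdownSketch {
    // Stand-ins for the on-disk log segment and the broker's write buffer.
    static final StringWriter disk = new StringWriter();
    static final BufferedWriter log = new BufferedWriter(disk, 1 << 16);

    // Models the broker appending a message without syncing it; the
    // periodic flush thread would normally persist this later.
    static void append(String msg) throws IOException {
        log.write(msg);
        log.newLine();
    }

    // Models the proposed fix: by this point sockets are already closed,
    // so flush unconditionally before closing rather than waiting for
    // the flush interval to elapse.
    static void shutdown() throws IOException {
        log.flush();   // final flush: the buffered tail reaches "disk"
        log.close();
    }

    public static void main(String[] args) throws IOException {
        append("last message before shutdown");
        // Without the explicit flush in shutdown(), this message could
        // still be sitting in the buffer when the process exits.
        shutdown();
        System.out.println(disk.toString().contains("last message before shutdown"));
    }
}
```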
On Thu, Mar 28, 2013 at 6:45 AM, Neha Narkhede <[EMAIL PROTECTED]> wrote:
> How do you know the server had written the lost data to its log? In Kafka
> 0.7, data could be lost from the producer's or server's socket buffer. You
> can verify this by running DumpLogSegments before and after shutdown.
> On Thursday, March 28, 2013, SuoNayi wrote:
>> You may want to resubscribe to the list, or the list may be removed in the future.
>> >We are managing our kafka clusters by doing rolling restarts (e.g. we
>> >cleanly shutdown and restart each broker one at a time).
>> >I'm working through an issue whereby we are losing messages sent to a
>> >Kafka broker right before the broker is shut down cleanly.
>> >I'm still using 0.7.2, so not sure if this is also the behavior in 0.8.
>> >I am trying to cleanly shut down the Kafka server by calling
>> >kafka.server.KafkaServer.shutdown(). I then wait for the shutdown to
>> >complete by calling awaitShutdown().
>> >However, it looks like this does not attempt to flush all logs before
>> >quitting, as I had assumed was the case. Is there a way to make sure
>> >this happens before the server stops?
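For the DumpLogSegments check Neha suggests above, the tool can be
pointed at a partition's segment files before and after shutdown, and
the last offsets compared. A sketch of the invocation (the exact
arguments differ between Kafka versions, and the paths below are
illustrative, not real):

```
# Run from the Kafka distribution directory; /tmp/kafka-logs/mytopic-0
# is a hypothetical partition directory.
bin/kafka-run-class.sh kafka.tools.DumpLogSegments \
    /tmp/kafka-logs/mytopic-0/00000000000000000000.log
```

If the final offset is the same before and after shutdown, the data was
lost upstream (e.g. in a socket buffer) rather than dropped from the
log during shutdown.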