Slowly trying to understand it; I have to ramp up on my Scala.
When the flush/sync occurs, does it pull items off the collection one by one,
or does it do this in bulk somehow while locking the collection?
On Mon, May 7, 2012 at 3:14 PM, Neha Narkhede <[EMAIL PROTECTED]> wrote:
> The related code is in kafka.log.*. The message to file persistence is
> inside FileMessageSet.scala.
> On Mon, May 7, 2012 at 12:12 PM, S Ahmed <[EMAIL PROTECTED]> wrote:
> > I can barely read Scala, but I'm curious where the application performs
> > the operation of taking the in-memory log and persisting it to disk, all
> > the while accepting new log messages and removing the keys of the
> > messages that have already been persisted.
> > I'm guessing you have used a ConcurrentHashMap where the key is a
> > and once the flush timeout has been reached a background thread will
> > somehow persist and remove the keys.
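The mechanism guessed at above can be sketched in plain Java (since Scala is the sticking point). This is a hypothetical illustration, not Kafka's actual code — per Neha's pointer, the real persistence path is in FileMessageSet.scala, which appends to a file rather than draining a map. Here, a concurrent sorted map stands in for the in-memory buffer, and a flush drains it one entry at a time: each per-key removal is atomic, so producers can keep putting new entries while the flush runs, with no lock held on the whole collection.

```java
import java.util.Map;
import java.util.TreeMap;
import java.util.concurrent.ConcurrentSkipListMap;

// Hypothetical sketch of the guessed flush mechanism: an in-memory map
// keyed by offset, drained one entry at a time by a background flush.
public class FlushSketch {
    // ConcurrentSkipListMap keeps keys sorted, so the flush walks offsets in order.
    static final ConcurrentSkipListMap<Long, byte[]> buffer = new ConcurrentSkipListMap<>();
    static final TreeMap<Long, byte[]> persisted = new TreeMap<>(); // stand-in for disk

    static void flushOnce() {
        // pollFirstEntry() atomically removes the smallest key, so each
        // message is handed off exactly once even while writers keep putting.
        Map.Entry<Long, byte[]> e;
        while ((e = buffer.pollFirstEntry()) != null) {
            persisted.put(e.getKey(), e.getValue()); // "write to disk"
        }
    }

    public static void main(String[] args) {
        buffer.put(2L, "msg-2".getBytes());
        buffer.put(1L, "msg-1".getBytes());
        flushOnce();
        System.out.println(persisted.keySet()); // prints [1, 2]
        System.out.println(buffer.isEmpty());   // prints true
    }
}
```

In a real system the flush would be scheduled on a timer (the flush interval) and the "write to disk" step would be an actual file append plus fsync; the point of the sketch is only that a concurrent map lets the drain proceed key by key without blocking writers.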