Re: default producer to retro-fit existing log files collection process?
As Neha says, the best thing we currently provide is the console producer.
Providing a more flexible framework specifically targeted at log slurping
would be a cool open source project.
On Wed, Sep 4, 2013 at 7:34 AM, Neha Narkhede <[EMAIL PROTECTED]> wrote:
> A quick and dirty solution would be to tail the logs and use the console
> producer to send the data to Kafka.
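That tail-plus-console-producer idea can be sketched as a one-liner. The log path, topic name, and broker address below are placeholders, and the flags match the Kafka 0.8 console producer; adjust for your version:

```shell
# Follow the rotated access log (tail -F re-opens the file after
# rotation, unlike tail -f) and pipe each line into Kafka's bundled
# console producer. /var/log/tomcat/access.log, the "weblogs" topic,
# and localhost:9092 are assumptions -- substitute your own.
tail -F /var/log/tomcat/access.log \
  | bin/kafka-console-producer.sh \
      --broker-list localhost:9092 \
      --topic weblogs
```

Note the capital -F: it keeps following across log rotation, which matters here since the whole point is that these files get rotated.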
> On Sep 3, 2013 2:09 PM, "Maxime Petazzoni" <[EMAIL PROTECTED]> wrote:
> > Tomcat uses commons-logging for logging. You might be able to write an
> > adapter to Kafka, similar to the log4j-kafka appender. I think this
> > would be cleaner than writing something Tomcat-specific that
> > intercepts your requests and logs them through Kafka.
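For reference, the log4j-kafka appender Max mentions ships with Kafka itself (kafka.producer.KafkaLog4jAppender in the 0.8 line). A minimal log4j.properties sketch might look roughly like the following; the topic name and broker address are placeholders, and exact property names can vary between Kafka versions:

```properties
# Route log4j output to a Kafka topic via the appender bundled with Kafka.
# "weblogs" and localhost:9092 are assumptions -- substitute your own.
log4j.appender.KAFKA=kafka.producer.KafkaLog4jAppender
log4j.appender.KAFKA.topic=weblogs
log4j.appender.KAFKA.brokerList=localhost:9092
log4j.appender.KAFKA.layout=org.apache.log4j.PatternLayout
log4j.appender.KAFKA.layout.ConversionPattern=%d %p %m%n
log4j.rootLogger=INFO, KAFKA
```

A commons-logging adapter for Tomcat would play the same role as this appender does for log4j: log lines go straight to Kafka without a tail/scp step in between.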
> > /Max
> > --
> > Maxime Petazzoni
> > Sr. Platform Engineer
> > m 408.310.0595
> > www.turn.com
> > ________________________________________
> > From: Yang [[EMAIL PROTECTED]]
> > Sent: Tuesday, September 03, 2013 10:09 AM
> > To: [EMAIL PROTECTED]
> > Subject: default producer to retro-fit existing log files collection
> > process?
> > In many setups we have production web server logs rotated on local
> > disks and then collected by some sort of scp process.
> > I guess the ideal way to use Kafka is to write a module for Tomcat that
> > catches the requests and sends them through the Kafka API. But is there
> > a "quick and dirty" producer included with Kafka to just read the
> > existing rotated files and send them through the Kafka API? This would
> > avoid having to touch the existing Java code.
> > thanks
> > Yang