Kafka, mail # user - default producer to retro-fit existing log files collection process?


Re: default producer to retro-fit existing log files collection process?
Jay Kreps 2013-09-04, 16:45
As Neha says, the best thing we currently provide is the console producer.
Providing a more flexible framework specifically targeted at log slurping
would be a cool open source project.

-Jay
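The tail-plus-console-producer approach discussed in this thread can be sketched as a small shell function. The log path, broker address, and topic name below are placeholder assumptions, and the producer flags shown are from the 0.8-era tooling; adjust for your deployment and Kafka version.

```shell
# Sketch: follow a rotated web-server access log and pipe each new line
# into the console producer shipped with Kafka.
ship_logs() {
  # tail -F (capital F) keeps following the file *name* across log rotation,
  # which matters for rotated web-server logs.
  tail -F "${1:-/var/log/tomcat/access.log}" |
    bin/kafka-console-producer.sh --broker-list localhost:9092 --topic weblogs
}
```

Run it on the box that owns the logs, e.g. `ship_logs /var/log/tomcat/access.log`. Note that lines written while the producer is down are lost, so this is best-effort shipping rather than a replacement for the delivery guarantees of an scp-based pipeline.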
On Wed, Sep 4, 2013 at 7:34 AM, Neha Narkhede <[EMAIL PROTECTED]> wrote:

> A quick and dirty solution would be to somehow tail the logs and use the
> console producer to send the data to Kafka.
>
> Thanks,
> Neha
> On Sep 3, 2013 2:09 PM, "Maxime Petazzoni" <[EMAIL PROTECTED]>
> wrote:
>
> > Tomcat uses commons-logging for logging. You might be able to write an
> > adapter towards Kafka, in a similar way to the log4j-kafka appender. I
> > think this would be cleaner than writing something Tomcat-specific that
> > intercepts your requests and logs them through Kafka.
> >
> > /Max
> > --
> > Maxime Petazzoni
> > Sr. Platform Engineer
> > m 408.310.0595
> > www.turn.com
> >
> > ________________________________________
> > From: Yang [[EMAIL PROTECTED]]
> > Sent: Tuesday, September 03, 2013 10:09 AM
> > To: [EMAIL PROTECTED]
> > Subject: default producer to retro-fit existing log files collection
> > process?
> >
> > In many setups we have production web server logs rotated on local disks
> > and then collected by some sort of scp process.
> >
> > I guess the ideal way to use Kafka is to write a module for Tomcat that
> > catches the request and sends it through the Kafka API. But is there a
> > "quick and dirty" producer included with Kafka to just read the existing
> > rotated logs and send them through the Kafka API? This would avoid having
> > to touch the existing Java code.
> >
> > thanks
> > Yang
> >
>
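The log4j-kafka appender route that Max mentions could be wired in through the application's log4j configuration rather than new Java code. The fragment below is a sketch only: the appender class path, property names, broker address, and topic shown are assumptions that vary across Kafka versions, so verify them against the release you run.

```properties
# Sketch: route log4j output to Kafka via the Kafka log4j appender.
# Class name and property names below are version-dependent assumptions.
log4j.rootLogger=INFO, KAFKA
log4j.appender.KAFKA=kafka.producer.KafkaLog4jAppender
log4j.appender.KAFKA.BrokerList=localhost:9092
log4j.appender.KAFKA.Topic=weblogs
log4j.appender.KAFKA.layout=org.apache.log4j.PatternLayout
log4j.appender.KAFKA.layout.ConversionPattern=%d %-5p %c - %m%n
```

Because Tomcat logs through commons-logging, which can delegate to log4j, this keeps the application code untouched, which is the cleaner path the thread converges on.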