Re: flume-ng failure recovery
Moreover, the reader part can also keep a list of hosts and act as a load
balancer as well as a failover mechanism.
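
For instance, Flume NG's RPC client factory can build a failover or load-balancing
client from a small Properties object, so the reader only needs the list of Avro
source hosts. A rough sketch (the class name, host names, and ports below are just
placeholders):

import java.nio.charset.Charset;
import java.util.Properties;

import org.apache.flume.Event;
import org.apache.flume.api.RpcClient;
import org.apache.flume.api.RpcClientFactory;
import org.apache.flume.event.EventBuilder;

public class FailoverSender {
  public static void main(String[] args) throws Exception {
    // Two downstream Avro sources; the client picks whichever one is reachable.
    Properties props = new Properties();
    props.put("client.type", "default_failover");   // or "default_loadbalance"
    props.put("hosts", "h1 h2");
    props.put("hosts.h1", "collector1.example.com:41414");
    props.put("hosts.h2", "collector2.example.com:41414");

    RpcClient client = RpcClientFactory.getInstance(props);
    try {
      // Wrap a log line in a Flume event and send it to the active collector.
      Event event = EventBuilder.withBody("sample log line", Charset.forName("UTF-8"));
      client.append(event);
    } finally {
      client.close();
    }
  }
}

Switching between failover and load balancing is then just a matter of changing the
client.type property.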

Regards,
Som

On Wed, Jul 18, 2012 at 12:13 PM, shekhar sharma <[EMAIL PROTECTED]> wrote:

> Don't use the tail source, since it does not maintain the state of where it
> left off, so I would suggest something like this:
>
> (1) Implement a reader part which reads the events from the file and also
> maintains its state if something goes wrong.
> (2) Convert the events to the Flume Event type and, using an RPC client, send
> the events to the Flume Avro Source (see the sketch after the quoted thread
> below).
>
> Regards,
> Som
>
>  On Wed, Jul 18, 2012 at 11:39 AM, Justin Workman <
> [EMAIL PROTECTED]> wrote:
>
>> We use a tail -F -n0. This will result in the tail command starting at
>> the beginning of the file and replaying all events.
>>
>> This will, however, result in duplicate events that you will need to deal
>> with.
>>
>> Sent from my iPhone
>>
>> On Jul 17, 2012, at 11:53 PM, Jagadish Bihani <
>> [EMAIL PROTECTED]> wrote:
>>
>> > Hi
>> >
>> > We want to deploy flume-ng in the production environment in our
>> > organization. Here is a scenario for which I am not able to find the
>> > answer:
>> >
>> > 1. We receive logs using a 'tail -f' source.
>> > 2. Now the agent process gets killed.
>> > 3. We restart it.
>> > 4. How will the restarted agent know the correct state of the file?
>> > In the meantime the log file would have been modified, and the agent
>> > has no way of knowing from where to resume.
>> >
>> > Could you please help me in identifying how to tackle this scenario?
>> >
>> > P.S. Instead of tail -f, any other command can be used which doesn't
>> > modify the log file.
>> >
>> > Regards,
>> > Jagadish
>>
>
>
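
To make the reader part in (1) and (2) above concrete: the idea is simply to
remember how far into the log file we have read, persist that offset to a small
checkpoint file, and on restart reopen the file, seek to the saved position, and
resume from there. This is only a rough sketch (the class name, checkpoint handling,
and error handling are placeholders), but it shows the shape of it:

import java.io.BufferedReader;
import java.io.File;
import java.io.FileReader;
import java.io.FileWriter;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.charset.Charset;

import org.apache.flume.Event;
import org.apache.flume.api.RpcClient;
import org.apache.flume.event.EventBuilder;

public class CheckpointingReader {

  private final File logFile;
  private final File checkpointFile;

  public CheckpointingReader(File logFile, File checkpointFile) {
    this.logFile = logFile;
    this.checkpointFile = checkpointFile;
  }

  // Read everything after the last checkpointed offset and ship it to Flume.
  public void run(RpcClient client) throws Exception {
    RandomAccessFile raf = new RandomAccessFile(logFile, "r");
    try {
      raf.seek(loadOffset());                // resume where we left off last time
      String line;
      while ((line = raf.readLine()) != null) {
        Event event = EventBuilder.withBody(line, Charset.forName("UTF-8"));
        client.append(event);                // deliver to the Avro source
        saveOffset(raf.getFilePointer());    // checkpoint only after delivery succeeds
      }
    } finally {
      raf.close();
    }
  }

  private long loadOffset() throws IOException {
    if (!checkpointFile.exists()) {
      return 0L;                             // first run: start at the beginning
    }
    BufferedReader in = new BufferedReader(new FileReader(checkpointFile));
    try {
      String saved = in.readLine();
      return saved == null ? 0L : Long.parseLong(saved.trim());
    } finally {
      in.close();
    }
  }

  private void saveOffset(long offset) throws IOException {
    FileWriter out = new FileWriter(checkpointFile, false);  // overwrite previous offset
    try {
      out.write(Long.toString(offset));
    } finally {
      out.close();
    }
  }
}

Checkpointing after every event is the simplest behaviour to reason about; batching
the offset writes would mean fewer disk writes at the cost of a few duplicate events
replayed after a crash.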