Re: [jira] [Issue Comment Deleted] (FLUME-2173) Exactly once semantics for Flume
Hi Gabriel,  

Thanks for your input. For the case where we use a replicating channel selector to purposefully replicate, we can easily make it configurable whether or not to delete duplicate events. That should not be difficult to do.

The second point, where multiple agents/sinks could write the same event, can be solved by partitioning the events into namespaces. Each sink checks one namespace for the event, and multiple sinks can belong to the same namespace. This way, if multiple sinks are writing to the same HDFS cluster and a duplicate occurs, we can easily drop it. Unfortunately, this also does not work around the whole HDFS-writing-but-throwing issue.
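As a minimal sketch of the namespacing idea (Java; the "namespace" and "eventId" header names and the DedupStore interface are assumptions for illustration, not existing Flume APIs):

    import org.apache.flume.Event;

    // Assumed store with an atomic check-and-record operation, e.g.
    // backed by ZK or a database; not an existing Flume interface.
    interface DedupStore {
        /** Records the key; returns false if it was already present. */
        boolean putIfAbsent(String key);
    }

    public final class NamespacedDedup {
        // All sinks writing to the same destination (e.g. one HDFS
        // cluster) share a namespace, so a copy already written by a
        // sibling sink is detected and dropped.
        public static String dedupKey(Event event) {
            String namespace = event.getHeaders().get("namespace");
            String eventId = event.getHeaders().get("eventId");
            return namespace + "/" + eventId;
        }

        public static boolean isDuplicate(DedupStore store, Event event) {
            return !store.putIfAbsent(dedupKey(event));
        }
    }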

I agree that updating ZK will hurt latency, but that is the cost of building exactly-once semantics on a highly flexible system. If you look at the algorithm, we actually go to ZK only once per event (a single create; there are no updates). This could even happen per batch, if needed, to reduce ZK round trips (though I am not sure whether ZK provides a batch API).
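To make the one-create-per-event step concrete, here is a sketch against the real ZooKeeper client API; the znode path layout is an assumption. The atomicity of create() is what lets exactly one deliverer "win" for a given event ID:

    import org.apache.zookeeper.CreateMode;
    import org.apache.zookeeper.KeeperException;
    import org.apache.zookeeper.ZooDefs;
    import org.apache.zookeeper.ZooKeeper;

    public final class ZkEventRegistry {
        private final ZooKeeper zk;

        public ZkEventRegistry(ZooKeeper zk) {
            this.zk = zk;
        }

        /**
         * Returns true if this is the first delivery of the event.
         * Parent znodes are assumed to exist already.
         */
        public boolean markFirstDelivery(String namespace, String eventId)
                throws KeeperException, InterruptedException {
            String path = "/flume/dedup/" + namespace + "/" + eventId;
            try {
                zk.create(path, new byte[0],
                        ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
                return true;   // node created: first delivery
            } catch (KeeperException.NodeExistsException e) {
                return false;  // node existed: duplicate, drop the event
            }
        }
    }

(On the batch question: ZooKeeper 3.4+ does expose a multi() call that applies a list of operations in a single round trip, which would cover the per-batch variant.)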

The two-phase commit approach sounds good, but it might require interface changes, which can now only be made in Flume 2.x. Also, if we use a single UUID combined with several flags, we might be able to work around the duplicates caused by this replication.
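To make the interface-change concern concrete, here is one hypothetical shape a two-phase channel transaction could take; the names are invented for illustration and this is not an existing or proposed Flume API:

    import org.apache.flume.Event;

    // Hypothetical two-phase variant of Flume's Transaction. prepare()
    // must stage the events durably enough that a subsequent commit()
    // cannot fail, so a channel processor can prepare all channels
    // first and only then commit them all.
    public interface TwoPhaseTransaction {
        void begin();
        void put(Event event);
        /** Phase 1: may fail; failure rolls back every channel. */
        void prepare();
        /** Phase 2: must succeed once prepare() has returned. */
        void commit();
        void rollback();
        void close();
    }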
Thanks,
Hari
On Sunday, August 25, 2013 at 7:24 AM, Gabriel Commeau wrote:

> Hi Hari,
>  
>  
> I deleted my comment (again). The mailing list is probably a better avenue
> to discuss this - sorry about that! :)
>  
> I can find at least one other way duplicate events can occur, so what
> I provided helps to reduce duplicates but is not sufficient to
> guarantee exactly-once semantics. However, I still think that using a
> 2-phase commit when writing to multiple channels would benefit Flume. This
> should probably be a different ticket though.
>  
> Concerning the algorithm you offered, the case of the replicating
> channel selector should probably be handled by creating a new UUID for
> each duplicate message.
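A minimal sketch of that suggestion, assuming events carry their ID in an "eventId" header; the header name and the helper are illustrative, not part of Flume:

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.UUID;
    import org.apache.flume.Event;
    import org.apache.flume.event.EventBuilder;

    public final class ReplicaIds {
        // When the replicating selector fans an event out to N channels,
        // give each copy its own UUID so downstream dedup does not
        // collapse the intentional replicas.
        public static List<Event> withFreshIds(Event original, int copies) {
            List<Event> out = new ArrayList<Event>(copies);
            for (int i = 0; i < copies; i++) {
                Event copy = EventBuilder.withBody(original.getBody(),
                        new HashMap<String, String>(original.getHeaders()));
                copy.getHeaders().put("eventId",
                        UUID.randomUUID().toString());
                out.add(copy);
            }
            return out;
        }
    }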
> I hope this helps.
>  
>  
> Regards,
>  
> Gabriel
>  
>  
> On 8/25/13 7:27 AM, "Gabriel Commeau (JIRA)" <[EMAIL PROTECTED]> wrote:
>  
> >  
> > [ https://issues.apache.org/jira/browse/FLUME-2173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
> >  
> > Gabriel Commeau updated FLUME-2173:
> > -----------------------------------
> >  
> > Comment: was deleted
> >  
> > (was: I would approach the problem from a different angle. The way I see
> > it, there are two main places where duplicates can occur: when using
> > multiple channels for one source (using a replicating channel selector),
> > and when the "output" of a sink cannot guarantee whether the event has
> > truly been committed or not (as you pointed out, for example, HDFS
> > writing the event but throwing an exception).
> > Actually, I don't think there is a general solution to the problem of
> > output systems (e.g. HDFS) that do not guarantee whether the event is
> > truly committed or not, because we'd need to enforce this requirement on
> > 3rd party systems (relative to Flume). I see it as a problem to be solved
> > on a case-by-case basis for each sink.
> >  
> > However, I would like to suggest a solution to the first problem. Here is
> > an example to illustrate it: Pretend an agent has a source that writes to
> > two (required) channels. As part of a transaction, the channel processor
> > will commit to the first channel, which succeeds, and then to the second
> > channel, which fails. The whole transaction will fail, but the event has
> > already been committed once to the first channel. When the transaction is
> > retried, the event will be duplicated.
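In code, the failure mode looks roughly like this; a simplified illustration, not Flume's actual ChannelProcessor:

    import java.util.List;
    import org.apache.flume.Channel;
    import org.apache.flume.ChannelException;
    import org.apache.flume.Event;
    import org.apache.flume.Transaction;

    public final class SequentialCommitExample {
        // Each required channel gets its own transaction, committed in
        // sequence: a failure on the second channel cannot undo the
        // commit that already succeeded on the first, and the source's
        // retry then re-delivers the event to the first channel.
        static void process(List<Channel> requiredChannels, Event event) {
            for (Channel channel : requiredChannels) {
                Transaction tx = channel.getTransaction();
                tx.begin();
                try {
                    channel.put(event);
                    tx.commit();   // succeeds for channel 1, throws for 2
                } catch (Throwable t) {
                    tx.rollback(); // rolls back only the failing channel
                    throw new ChannelException("Unable to commit", t);
                } finally {
                    tx.close();
                }
            }
        }
    }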
> > The solution I discussed a few months back with Mike P. was to use a
> > two-phase commit when writing to channels. This ensures that the events
> > are not actually committed to a channel if the following ones fail. This
> > will, however, require an API change on the Channel interface. I would