Re: [jira] [Issue Comment Deleted] (FLUME-2173) Exactly once semantics for Flume
Hi Hari,

Thanks for bringing this up for discussion. I think it will be tremendously
beneficial to Flume users if we can extend a once-only guarantee. Your
initial suggestion of having a Sink trap the events and reference a global
state to drop duplicates seems reasonable. Rather than pushing this
functionality to Sinks, is there another way we can make it more generally
available? I raise this concern because otherwise it becomes a feature of a
particular sink, and not every sink implementation will have the opportunity
to support it.

Alternatively, what do you think about doing this at the channel level?
Since we rarely see custom channel implementations, an implementation that
works with the channel will likely be more useful to the broader community
of Flume users.
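
Whichever component ends up hosting it, the check itself reduces to an atomic
membership test against shared global state. A minimal sketch of that hook,
assuming a hypothetical DedupStore abstraction and an "eventId" header
(neither is part of Flume); the same call could sit in a sink before delivery
or in a channel (or channel decorator) before put():

```java
import org.apache.flume.Event;

public class Dedup {

  /** Hypothetical abstraction over shared global state (e.g. a ZK ensemble). */
  public interface DedupStore {
    /** Atomically records the id; returns false if it was already present. */
    boolean putIfAbsent(String eventId);
  }

  /** Returns true if the event should go through, false if it is a duplicate. */
  public static boolean shouldDeliver(Event event, DedupStore store) {
    String id = event.getHeaders().get("eventId"); // assumed header name
    if (id == null) {
      return true; // untagged events cannot be deduplicated; pass them through
    }
    return store.putIfAbsent(id);
  }
}
```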

Regards,
Arvind
On Sun, Aug 25, 2013 at 9:07 AM, Hari Shreedharan <[EMAIL PROTECTED]> wrote:

> Hi Gabriel,
>
> Thanks for your input. For the case where we use a replicating channel
> selector to purposefully replicate, we can easily make it configurable
> whether to delete duplicate events or not. That should not be difficult to do.
>
> The second point, where multiple agents/sinks could write the same event,
> can be solved by namespacing the events. Each sink checks one namespace for
> the event, and multiple sinks can belong to the same namespace. This way, if
> multiple sinks are going to write to the same HDFS cluster and a duplicate
> occurs, we can easily drop it. Unfortunately, this still does not work around
> the whole HDFS-writing-but-throwing issue.
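
A minimal sketch of the namespacing idea against ZooKeeper, assuming an
illustrative /flume/dedup/<namespace>/<eventId> znode layout (not anything
Flume ships) and that the parent znodes already exist:

```java
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class NamespacedDedup {

  private final ZooKeeper zk;
  private final String namespace; // e.g. one namespace per HDFS cluster

  public NamespacedDedup(ZooKeeper zk, String namespace) {
    this.zk = zk;
    this.namespace = namespace;
  }

  /** Returns true if some sink in this namespace already claimed the event. */
  public boolean isDuplicate(String eventId)
      throws KeeperException, InterruptedException {
    String path = "/flume/dedup/" + namespace + "/" + eventId;
    try {
      // create() is atomic, so exactly one writer wins the race per event id.
      zk.create(path, new byte[0], ZooDefs.Ids.OPEN_ACL_UNSAFE,
          CreateMode.PERSISTENT);
      return false; // first time this namespace has seen the event
    } catch (KeeperException.NodeExistsException e) {
      return true; // already claimed: drop the event
    }
  }
}
```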
>
> I agree that updating ZK will add latency, but that is the cost of building
> once-only semantics on a highly flexible system. If you look at the
> algorithm, we actually go to ZK only once per event (to create; there are no
> updates). This could even happen per batch if needed, to reduce ZK round
> trips (though I am not sure if ZK provides a batch API).
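
For what it is worth, ZooKeeper 3.4+ does expose a batch primitive, multi().
One caveat for this use case: a multi() is atomic, so a single pre-existing
node fails the whole batch and the caller has to fall back to claiming those
events one by one. A sketch, reusing the illustrative znode layout from above:

```java
import java.util.ArrayList;
import java.util.List;
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.Op;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class BatchedDedup {

  /** Claims a whole batch of event ids in one ZK round trip. */
  public static void claimBatch(ZooKeeper zk, String namespace,
      List<String> eventIds) throws KeeperException, InterruptedException {
    List<Op> ops = new ArrayList<Op>();
    for (String id : eventIds) {
      ops.add(Op.create("/flume/dedup/" + namespace + "/" + id, new byte[0],
          ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT));
    }
    // Atomic: throws (e.g. on a duplicate id) and applies nothing, in which
    // case the caller should retry the events individually.
    zk.multi(ops);
  }
}
```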
>
> The two-phase commit approach sounds good, but it might require interface
> changes, which can now only be made in Flume 2.x. Also, if we use a single
> UUID combined with several flags, we might be able to work around duplicates
> caused by this replication.
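
A rough sketch of that idea over Flume's existing Transaction API: stage the
event on every channel, then commit everywhere only if every put succeeded.
Without a durable prepare record this is merely 2PC-like, which is presumably
where the interface changes would come in:

```java
import java.util.ArrayList;
import java.util.List;
import org.apache.flume.Channel;
import org.apache.flume.Event;
import org.apache.flume.Transaction;

public class TwoPhasePut {

  /** Stages an event on every channel, then commits all or rolls back all. */
  public static void putToAll(List<Channel> channels, Event event) {
    List<Transaction> txns = new ArrayList<Transaction>();
    boolean staged = false;
    try {
      // Phase 1 ("prepare"): open a transaction per channel and stage the put.
      for (Channel channel : channels) {
        Transaction tx = channel.getTransaction();
        txns.add(tx);
        tx.begin();
        channel.put(event);
      }
      staged = true;
    } finally {
      // Phase 2: commit everywhere only if every put succeeded. A crash
      // between two commits is exactly the window a durable prepare record
      // in true two-phase commit would close.
      for (Transaction tx : txns) {
        if (staged) {
          tx.commit();
        } else {
          tx.rollback();
        }
        tx.close();
      }
    }
  }
}
```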
>
>
> Thanks,
> Hari
>
>
> On Sunday, August 25, 2013 at 7:24 AM, Gabriel Commeau wrote:
>
> > Hi Hari,
> >
> >
> > I deleted my comment (again). The mailing list is probably a better
> > avenue to discuss this; sorry about that! :)
> >
> > I can find at least one other way duplicate events can occur, so what I
> > provided helps to reduce duplicate events but is not sufficient to
> > guarantee exactly-once semantics. However, I still think that using a
> > two-phase commit when writing to multiple channels would benefit Flume.
> > This should probably be a different ticket, though.
> >
> > Concerning the algorithm you offered, the case of the replicating channel
> > selector should probably be handled by creating a new UUID for each
> > duplicated message.
> > I hope this helps.
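
A small sketch of that suggestion, again assuming a hypothetical "eventId"
header: when the replicating selector fans an event out to several channels,
each copy gets a fresh UUID so downstream deduplication does not collapse the
intentional replicas:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;
import org.apache.flume.Event;
import org.apache.flume.event.EventBuilder;

public class ReplicaIds {

  /** Returns a copy of the event carrying a fresh id in its headers. */
  public static Event withFreshId(Event original) {
    Map<String, String> headers = new HashMap<String, String>(
        original.getHeaders());
    headers.put("eventId", UUID.randomUUID().toString()); // assumed header
    return EventBuilder.withBody(original.getBody(), headers);
  }
}
```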
> >
> >
> > Regards,
> >
> > Gabriel
> >
> >
> > On 8/25/13 7:27 AM, "Gabriel Commeau (JIRA)" <[EMAIL PROTECTED]> wrote:
> >
> > >
> > > [ https://issues.apache.org/jira/browse/FLUME-2173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
> > >
> > > Gabriel Commeau updated FLUME-2173:
> > > -----------------------------------
> > >
> > > Comment: was deleted
> > >
> > > (was: I would approach the problem from a different angle. The way I
> > > see it, there are two main places where duplicates can occur: when using
> > > multiple channels for one source (using a replicating channel selector),
> > > and when the "output" of a sink cannot guarantee whether the event has
> > > truly been committed or not (as you pointed out, for example, HDFS
> > > writing the event but throwing an exception).
> > > Actually, I don't think there is a general solution to the problem of
> > > output systems (e.g. HDFS) that do not guarantee whether the event is