Kafka >> mail # user >> producer exceptions when broker dies


Re: producer exceptions when broker dies
Thanks Guozhang, it makes sense if it's by design. Just wanted to ensure
I'm not doing something wrong.
On Fri, Oct 25, 2013 at 5:57 PM, Guozhang Wang <[EMAIL PROTECTED]> wrote:

> As we have said, the timeout exception does not actually mean the message
> is not committed to the broker. When message.send.max.retries is 0, Kafka
> does guarantee "at-most-once", which means that you will not have
> duplicates, but it does not mean that all your exceptions can be treated as
> "message not delivered". In your case, the 1480 - 1450 = 30 messages are the
> ones that were actually sent, not duplicates.
>
> Guozhang
>
>
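For concreteness, here is a minimal sketch of the at-most-once setup Guozhang describes, assuming the Kafka 0.8 producer API that the exception class names quoted later in this thread point to; the broker address and topic name are taken from the quoted logs, and the serializer and payload are illustrative. The point is in the catch block: with retries disabled, a failed send() only tells you the producer got no acknowledgement, not that the broker failed to commit the message.

// Sketch only: assumes the Kafka 0.8 producer API (kafka.javaapi.producer.*).
// Broker address and topic come from the logs quoted in this thread; the
// serializer choice and payload are illustrative.
import java.util.Properties;

import kafka.common.FailedToSendMessageException;
import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

public class AtMostOnceSendExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("metadata.broker.list", "10.80.42.156:9092");
        props.put("serializer.class", "kafka.serializer.StringEncoder");
        props.put("request.required.acks", "1");      // leader must ack the write
        props.put("message.send.max.retries", "0");   // no retries => at-most-once
        props.put("producer.type", "sync");

        Producer<String, String> producer =
                new Producer<String, String>(new ProducerConfig(props));
        try {
            producer.send(new KeyedMessage<String, String>("benchmark", "some-payload"));
        } catch (FailedToSendMessageException e) {
            // With retries disabled this does NOT prove the message was lost:
            // a RequestTimedOutException can surface after the broker has
            // already committed the write, as discussed above.
            System.err.println("send failed; message may or may not be committed: " + e);
        } finally {
            producer.close();
        }
    }
}
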
> On Fri, Oct 25, 2013 at 5:00 PM, Kane Kane <[EMAIL PROTECTED]> wrote:
>
> > There are a lot of exceptions, I will try to pick an example of each:
> > ERROR async.DefaultEventHandler - Failed to send requests for topics
> > benchmark with correlation ids in [879,881]
> > WARN  async.DefaultEventHandler - Produce request with correlation id 874
> > failed due to [benchmark,43]: kafka.common.RequestTimedOutException
> > WARN  client.ClientUtils$ - Fetching topic metadata with correlation id 876
> > for topics [Set(benchmark)] from broker [id:2,host:10.80.42.156,port:9092]
> > failed
> > ERROR producer.SyncProducer - Producer connection to 10.80.42.156:9092 unsuccessful
> > kafka.common.FailedToSendMessageException: Failed to send messages after 0
> > tries.
> > WARN  async.DefaultEventHandler - Failed to send producer request with
> > correlation id 270 to broker 0 with data for partitions [benchmark,42]
> >
> > I think these are all the types of exceptions I see there.
> > Thanks.
> >
> >
> > On Fri, Oct 25, 2013 at 2:45 PM, Guozhang Wang <[EMAIL PROTECTED]> wrote:
> >
> > > Kane,
> > >
> > > If you set message.send.max.retries to 0 it should be at-most-once, and I
> > > saw your props have the right config. What are the exceptions you got from
> > > the send() call?
> > >
> > > Guozhang
> > >
> > >
> > > On Fri, Oct 25, 2013 at 12:54 PM, Steve Morin <[EMAIL PROTECTED]>
> > > wrote:
> > >
> > > > Kane and Aniket,
> > > >   I am interested in knowing what pattern/solution people usually
> > > > use to implement exactly once as well.
> > > > -Steve
> > > >
> > > >
> > > > > On Fri, Oct 25, 2013 at 11:39 AM, Kane Kane <[EMAIL PROTECTED]> wrote:
> > > >
> > > > > Guozhang, but I've posted a piece from the Kafka documentation above:
> > > > > "So effectively Kafka guarantees at-least-once delivery by default and
> > > > > allows the user to implement at most once delivery by disabling retries
> > > > > on the producer."
> > > > >
> > > > > What I want is at-most-once, and the docs claim it's possible with
> > > > > certain settings. Did I miss anything here?
> > > > >
> > > > >
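The distinction in the documentation excerpt Kane quotes boils down to one producer property. A hedged illustration, using the 0.8 producer config names seen in this thread (the ack value is illustrative, and the retry default of 3 is from memory of the 0.8 defaults):

// Illustrative only: the same producer Properties under the two delivery
// modes being discussed (Kafka 0.8 old-producer config names).
import java.util.Properties;

public class DeliverySemanticsConfig {
    public static void main(String[] args) {
        Properties atLeastOnce = new Properties();
        atLeastOnce.put("request.required.acks", "1");
        atLeastOnce.put("message.send.max.retries", "3"); // retries on (the 0.8 default)
        // -> a timed-out request may be retried after the broker already
        //    committed it, so duplicates are possible but acked data is not lost.

        Properties atMostOnce = new Properties();
        atMostOnce.put("request.required.acks", "1");
        atMostOnce.put("message.send.max.retries", "0");  // retries off
        // -> no duplicates, but a failed or timed-out send is never retried,
        //    so the message may be lost (or may in fact have been committed).

        System.out.println("at-least-once: " + atLeastOnce);
        System.out.println("at-most-once:  " + atMostOnce);
    }
}
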
> > > > > On Fri, Oct 25, 2013 at 11:35 AM, Guozhang Wang <[EMAIL PROTECTED]> wrote:
> > > > >
> > > > > > Aniket is exactly right. In general, Kafka provides an "at least once"
> > > > > > guarantee instead of "exactly once".
> > > > > >
> > > > > > Guozhang
> > > > > >
> > > > > >
> > > > > > On Fri, Oct 25, 2013 at 11:13 AM, Aniket Bhatnagar <
> > > > > > [EMAIL PROTECTED]> wrote:
> > > > > >
> > > > > > > As per my understanding, if the broker says the msg is committed,
> > > > > > > it's guaranteed to have been committed as per your ack config. If it
> > > > > > > says it did not get committed, then it's very hard to figure out if
> > > > > > > this was just a false error. Since there is no concept of unique ids
> > > > > > > for messages, a replay of the same message will result in duplication.
> > > > > > > I think it's a reasonable behaviour considering Kafka prefers to
> > > > > > > append data to partitions for performance reasons.
> > > > > > > The best way right now to deal with duplicate msgs is to build the
> > > > > > > processing engine (layer where your consumer sits) to deal with
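The message is cut off here in the archive, but what is being described is essentially an idempotent, deduplicating consumer. A minimal sketch, assuming the application can derive a stable identifier for each message; the id scheme and the in-memory set are assumptions for illustration, and a real deployment would want a durable store that survives restarts:

// Hypothetical sketch of consumer-side deduplication as suggested above.
// Assumes each message carries a stable application-level id; the HashSet is
// illustrative only and would not survive a consumer restart.
import java.util.HashSet;
import java.util.Set;

public class DedupingProcessor {
    private final Set<String> seenIds = new HashSet<String>();

    /** Returns true if the message was processed, false if it was a duplicate. */
    public boolean process(String messageId, String payload) {
        if (!seenIds.add(messageId)) {
            return false;          // a producer retry replayed this message; skip it
        }
        handle(payload);           // application-specific processing goes here
        return true;
    }

    private void handle(String payload) {
        System.out.println("processing: " + payload);
    }
}

With retries left on (at-least-once) plus this kind of idempotent processing layer, the pipeline behaves effectively exactly-once, which is the usual answer to the question Steve raised earlier in the thread.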

 