Kafka >> mail # user >> producer exceptions when broker dies


Re: producer exceptions when broker dies
There are a lot of exceptions; I'll try to pick an example of each:
ERROR async.DefaultEventHandler - Failed to send requests for topics benchmark with correlation ids in [879,881]
WARN  async.DefaultEventHandler - Produce request with correlation id 874 failed due to [benchmark,43]: kafka.common.RequestTimedOutException
WARN  client.ClientUtils$ - Fetching topic metadata with correlation id 876 for topics [Set(benchmark)] from broker [id:2,host:10.80.42.156,port:9092] failed
ERROR producer.SyncProducer - Producer connection to 10.80.42.156:9092 unsuccessful
kafka.common.FailedToSendMessageException: Failed to send messages after 0 tries.
WARN  async.DefaultEventHandler - Failed to send producer request with correlation id 270 to broker 0 with data for partitions [benchmark,42]

I think these are all the types of exceptions I see there.
Thanks.
On Fri, Oct 25, 2013 at 2:45 PM, Guozhang Wang <[EMAIL PROTECTED]> wrote:

> Kane,
>
> If you set message.send.max.retries to 0 it should be at-most-once, and I
> saw your props have the right config. What are the exceptions you got from
> the send() call?
>
> Guozhang
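
For reference, Guozhang's suggestion corresponds to producer settings along these lines. This is a sketch for the 0.8-era producer; the broker address is taken from the logs earlier in the thread, and the other values are illustrative, not a recommendation:

```properties
# At-most-once: fail immediately rather than retrying a possibly
# committed request (retries are what can introduce duplicates).
message.send.max.retries=0
# Wait for the leader's acknowledgement before considering a send done.
request.required.acks=1
producer.type=sync
metadata.broker.list=10.80.42.156:9092
```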
>
>
> On Fri, Oct 25, 2013 at 12:54 PM, Steve Morin <[EMAIL PROTECTED]>
> wrote:
>
> > Kane and Aniket,
> >   I am interested in knowing what pattern/solution people usually use
> > to implement exactly-once as well.
> > -Steve
> >
> >
> > On Fri, Oct 25, 2013 at 11:39 AM, Kane Kane <[EMAIL PROTECTED]>
> wrote:
> >
> > > Guozhang, but I've posted a piece from the Kafka documentation above:
> > > "So effectively Kafka guarantees at-least-once delivery by default and
> > > allows the user to implement at-most-once delivery by disabling retries
> > > on the producer."
> > >
> > > What I want is at-most-once, and the docs claim it's possible with
> > > certain settings. Did I miss anything here?
> > >
> > >
> > > On Fri, Oct 25, 2013 at 11:35 AM, Guozhang Wang <[EMAIL PROTECTED]>
> > > wrote:
> > >
> > > > Aniket is exactly right. In general, Kafka provides "at least once"
> > > > guarantee instead of "exactly once".
> > > >
> > > > Guozhang
> > > >
> > > >
> > > > On Fri, Oct 25, 2013 at 11:13 AM, Aniket Bhatnagar <
> > > > [EMAIL PROTECTED]> wrote:
> > > >
> > > > > As per my understanding, if the broker says the message is committed,
> > > > > it is guaranteed to have been committed as per your ack config. If it
> > > > > says it did not get committed, then it's very hard to figure out
> > > > > whether this was just a false error. Since there is no concept of
> > > > > unique IDs for messages, a replay of the same message will result in
> > > > > duplication. I think that's reasonable behaviour, considering Kafka
> > > > > prefers to append data to partitions for performance reasons.
> > > > > The best way right now to deal with duplicate messages is to build
> > > > > the processing engine (the layer where your consumer sits) to deal
> > > > > with the at-least-once semantics of the broker.
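
Aniket's advice (handle the broker's at-least-once semantics in the processing layer) can be sketched as a dedup step in front of the consumer logic. This is only an illustration: it assumes the application attaches its own unique ID to each message, since as noted above Kafka itself has no such concept, and a real system would persist the seen-ID set rather than hold it in memory:

```python
# Sketch of consumer-side deduplication on top of at-least-once delivery.
# Assumes the application assigns each message a unique id of its own;
# a real deployment would persist seen ids instead of keeping them in memory.
seen_ids = set()

def handle_once(message_id, payload, process):
    """Invoke `process` at most once per message id, skipping replays."""
    if message_id in seen_ids:
        return False          # duplicate from a producer retry: skip it
    process(payload)          # if this raises, the id is not marked seen
    seen_ids.add(message_id)
    return True
```

With a layer like this, a producer retry that redelivers the same message is harmless to downstream state.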
> > > > > On 25 Oct 2013 23:23, "Kane Kane" <[EMAIL PROTECTED]> wrote:
> > > > >
> > > > > > Or, to rephrase it more generally: is there a way to know for
> > > > > > sure whether a message was committed or not?
> > > > > >
> > > > > >
> > > > > > On Fri, Oct 25, 2013 at 10:43 AM, Kane Kane <
> [EMAIL PROTECTED]
> > >
> > > > > wrote:
> > > > > >
> > > > > > > Hello Guozhang,
> > > > > > >
> > > > > > > My partitions are split almost evenly between brokers, so,
> > > > > > > yes, the broker that I shut down is the leader for some of
> > > > > > > them. Does that mean I can get an exception while the data is
> > > > > > > still being written? Is there any setting on the broker where
> > > > > > > I can control this? I.e., can I make the broker replication
> > > > > > > timeout shorter than the producer timeout, so I can ensure
> > > > > > > that if I get an exception the data is not being committed?
> > > > > > >
> > > > > > > Thanks.
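
The situation Kane describes is the classic ambiguous outcome: on a timeout the producer sees an exception, but the broker may still have committed the write, because it can be the acknowledgement rather than the request that was lost. A toy model of why the exception cannot be read as "not committed" (illustrative Python, not a Kafka API):

```python
class SendTimeout(Exception):
    """Producer-side timeout: the outcome on the broker is unknown."""

def send(log, msg, ack_lost=False):
    """Toy model: the broker appends the message, then the ack may be
    lost in flight; the producer sees an error either way."""
    log.append(msg)                # broker commits the write first
    if ack_lost:
        raise SendTimeout(msg)     # producer only learns the send "failed"
    return len(log) - 1            # offset of the committed message

log = []
send(log, "a")
try:
    send(log, "b", ack_lost=True)  # looks like a failure to the producer...
except SendTimeout:
    pass
# ...yet "b" was committed, so a blind retry here would duplicate it
```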
> > > > > > >
> > > > > > >
> > > > > > > On Fri, Oct 25, 2013 at 10:36 AM, Guozhang Wang <

 