Re: custom kafka consumer - strangeness
What are the offsets used in the fetch requests in steps g and i that both
returned messages at offsets 10 and 11?

Thanks,

Jun
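
For reference: an 0.8 fetch request names an explicit offset, and the broker
returns messages from that offset onward, so two fetches carrying the same
offset will return the same messages. Below is a minimal sketch of the
fetch-and-advance loop using the 0.8 Java SimpleConsumer API; the broker
address, topic, partition, and client id are placeholders, not anything from
this thread.

import kafka.api.FetchRequest;
import kafka.api.FetchRequestBuilder;
import kafka.javaapi.FetchResponse;
import kafka.javaapi.consumer.SimpleConsumer;
import kafka.message.MessageAndOffset;

public class FetchLoopSketch {
    public static void main(String[] args) {
        // placeholders: point these at your own broker/topic/partition
        String topic = "test";
        int partition = 0;
        SimpleConsumer consumer =
            new SimpleConsumer("localhost", 9092, 100000, 64 * 1024, "fetch-sketch");

        long readOffset = 0L; // the position is tracked entirely client-side
        while (true) {
            FetchRequest req = new FetchRequestBuilder()
                .clientId("fetch-sketch")
                .addFetch(topic, partition, readOffset, 100000)
                .build();
            FetchResponse resp = consumer.fetch(req);
            for (MessageAndOffset mao : resp.messageSet(topic, partition)) {
                // ... decode and process mao.message() here ...
                // nextOffset() is this message's offset + 1; re-sending the
                // old readOffset would re-fetch the same messages
                readOffset = mao.nextOffset();
            }
        }
    }
}

The key point is the nextOffset() call: nothing server-side moves the
position for a simple fetch, so the client must send the advanced offset in
its next request.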
On Sat, Jan 11, 2014 at 3:19 AM, Gerrit Jansen van Vuuren <
[EMAIL PROTECTED]> wrote:

> Hi,
>
>
> No, the offsets are not the same. I've printed out the values to check,
> and that's not the case.
>
>
>
> On Fri, Jan 10, 2014 at 5:02 PM, Jun Rao <[EMAIL PROTECTED]> wrote:
>
> > Are the offsets used in the 2 fetch requests the same? If so, you will get
> > the same messages twice. Your consumer is responsible for advancing the
> > offsets after consumption.
> >
> > Thanks,
> >
> > Jun
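
One wrinkle worth ruling out when advancing offsets by hand: an 0.8 fetch
response can include messages from before the requested offset (a compressed
message set is returned whole), and the published SimpleConsumer example
skips those explicitly. A sketch of that guard, reusing the names from the
loop above:

long requestOffset = readOffset; // the offset we put in this fetch request
for (MessageAndOffset mao : resp.messageSet(topic, partition)) {
    if (mao.offset() < requestOffset) {
        // the broker may return messages from before the requested offset
        // (e.g. a compressed set comes back whole); skip them so they are
        // not mistaken for fresh data
        continue;
    }
    // ... process mao.message() ...
    readOffset = mao.nextOffset();
}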
> >
> >
> > On Thu, Jan 9, 2014 at 1:00 PM, Gerrit Jansen van Vuuren <
> > [EMAIL PROTECTED]> wrote:
> >
> > > Hi,
> > >
> > > I'm writing a custom consumer for Kafka 0.8.
> > > Everything works except for the following:
> > >
> > > a. connect, send fetch, read all results
> > > b. send fetch
> > > c. send fetch
> > > d. send fetch
> > > e. via the console publisher, publish 2 messages
> > > f. send fetch :corr-id 1
> > > g. read 2 messages published :offsets [10 11] :corr-id 1
> > > h. send fetch :corr-id 2
> > > i. read 2 messages published :offsets [10 11] :corr-id 2
> > > j.  send fetch ...
> > >
> > > The problem is that I get the same messages twice, in response to two
> > > separate fetch requests. The correlation ids are distinct, so it cannot
> > > be that I read the same response twice. The offsets of the 2 messages
> > > are the same, so they are duplicates, and it's not the producer sending
> > > the messages twice.
> > >
> > > Note: the same connection is kept open the whole time, and I send,
> > > block, receive, then send again. After the first 2 messages are read,
> > > the offsets are incremented, and the next fetch asks Kafka for
> > > messages from the new offsets.
> > >
> > > Any ideas why Kafka would send the messages again on the second
> > > fetch request?
> > >
> > > Regards,
> > >  Gerrit
> > >
> >
>
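
Since the consumer here is hand-rolled, another useful sanity check when
pipelining several fetches on one connection (steps f-i above) is pairing
each response with its request: in the 0.8 wire protocol every response
begins with a 4-byte size followed by the correlation id of the request it
answers. A minimal, self-contained sketch; the helper and class names are
illustrative, not part of any Kafka API.

import java.io.EOFException;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.ReadableByteChannel;

public class ResponseCheckSketch {
    // read exactly buf.remaining() bytes or fail
    static void readFully(ReadableByteChannel ch, ByteBuffer buf) throws IOException {
        while (buf.hasRemaining()) {
            if (ch.read(buf) < 0) {
                throw new EOFException("connection closed mid-response");
            }
        }
        buf.flip();
    }

    // read one response and make sure it answers the request we sent
    static ByteBuffer readResponse(ReadableByteChannel ch, int expectedCorrelationId)
            throws IOException {
        ByteBuffer sizeBuf = ByteBuffer.allocate(4);
        readFully(ch, sizeBuf);
        ByteBuffer body = ByteBuffer.allocate(sizeBuf.getInt());
        readFully(ch, body);
        int correlationId = body.getInt(); // first int32 of every response
        if (correlationId != expectedCorrelationId) {
            throw new IllegalStateException(
                "got response for correlation id " + correlationId
                + " but expected " + expectedCorrelationId);
        }
        return body; // remainder is the api-specific response payload
    }
}

If the correlation ids line up, as reported in this thread, the next thing to
compare is the offset field actually serialized into each of the two fetch
requests.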

 