Re: custom kafka consumer - strangeness
Do you have the request log turned on? If so, what's the total time taken for
the corresponding fetch request?

Thanks,

Jun
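
For reference, on a stock 0.8 broker the request log is driven by
config/log4j.properties; the requestAppender below ships with the default
config, and raising kafka.request.logger to TRACE (an assumption: pick the
level you need) makes the broker log each completed request, with its total
time, to kafka-request.log:

    # config/log4j.properties (sketch; appender as shipped with Kafka 0.8)
    log4j.logger.kafka.request.logger=TRACE, requestAppender
    log4j.additivity.kafka.request.logger=false
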
On Sat, Jan 11, 2014 at 4:38 AM, Gerrit Jansen van Vuuren <
[EMAIL PROTECTED]> wrote:

> I'm also seeing the following.
>
> I consume the data in the queue.
> Then after 10 seconds I send another fetch request (with the incremented
> offset) and never receive a response from the broker; my code eventually
> times out (after 30 seconds).
>
> The broker writes Expiring fetch request Name: FetchRequest; Version: 0;
> CorrelationId: 1389443537; ClientId: 1; ReplicaId: -1; MaxWait: 1000 ms;
> MinBytes: 1 bytes; RequestInfo: [ping,0] ->
> PartitionFetchInfo(187,1048576).
>
> This corresponds with the timed out fetch request.
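
That "Expiring fetch request" line is the broker's fetch purgatory timing out
a long-poll: with MinBytes: 1 and MaxWait: 1000 ms the broker parks the fetch
until at least one byte is available or a second passes, and on expiry it
still sends back a (possibly empty) response, so the log line itself is
normal long-poll behaviour rather than an error. A minimal sketch of issuing
the same fetch with the 0.8 Java SimpleConsumer (broker host and client id
are placeholders; topic, partition, offset and sizes mirror the log line
above):

    import kafka.api.FetchRequest;
    import kafka.api.FetchRequestBuilder;
    import kafka.javaapi.FetchResponse;
    import kafka.javaapi.consumer.SimpleConsumer;

    public class FetchOnce {
        public static void main(String[] args) {
            SimpleConsumer consumer =
                new SimpleConsumer("broker-host", 9092, 30000, 64 * 1024, "1");
            FetchRequest req = new FetchRequestBuilder()
                    .clientId("1")                      // ClientId: 1
                    .addFetch("ping", 0, 187L, 1048576) // [ping,0] -> PartitionFetchInfo(187,1048576)
                    .maxWait(1000)                      // MaxWait: 1000 ms
                    .minBytes(1)                        // MinBytes: 1 byte
                    .build();
            FetchResponse resp = consumer.fetch(req);   // blocks up to maxWait on the broker
            System.out.println("error: " + resp.hasError());
            consumer.close();
        }
    }
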
>
> On Sat, Jan 11, 2014 at 12:19 PM, Gerrit Jansen van Vuuren <
> [EMAIL PROTECTED]> wrote:
>
> > Hi,
> >
> >
> > No, the offsets are not the same. I've printed out the values to check,
> > and it's not the case.
> >
> >
> >
> > On Fri, Jan 10, 2014 at 5:02 PM, Jun Rao <[EMAIL PROTECTED]> wrote:
> >
> >> Are the offsets used in the 2 fetch requests the same? If so, you will
> >> get the same messages twice. Your consumer is responsible for advancing
> >> the offsets after consumption.
> >>
> >> Thanks,
> >>
> >> Jun
> >>
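
With the 0.8 fetch API the broker keeps no per-consumer position; whatever
offset the client puts in the next FetchRequest is exactly where reading
resumes, so re-sending an old offset returns the same messages. A sketch of
draining a response and computing the next offset (the helper and its names
are illustrative, not from the thread):

    import kafka.javaapi.FetchResponse;
    import kafka.message.MessageAndOffset;

    public class Offsets {
        // Drain one fetch response and return the offset for the next fetch.
        // Messages below the requested offset can legitimately reappear when
        // a compressed message set straddles it, so skip them.
        static long drain(FetchResponse resp, String topic, int partition,
                          long requestedOffset) {
            long next = requestedOffset;
            for (MessageAndOffset mo : resp.messageSet(topic, partition)) {
                if (mo.offset() < requestedOffset) continue; // already consumed
                // ... hand mo.message().payload() to the application ...
                next = mo.nextOffset(); // offset to ask for next time
            }
            return next;
        }
    }
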
> >>
> >> On Thu, Jan 9, 2014 at 1:00 PM, Gerrit Jansen van Vuuren <
> >> [EMAIL PROTECTED]> wrote:
> >>
> >> > Hi,
> >> >
> >> > I'm writing a custom consumer for kafka 0.8.
> >> > Everything works except for the following:
> >> >
> >> > a. connect, send fetch, read all results
> >> > b. send fetch
> >> > c. send fetch
> >> > d. send fetch
> >> > e. via the console publisher, publish 2 messages
> >> > f. send fetch :corr-id 1
> >> > g. read 2 messages published :offsets [10 11] :corr-id 1
> >> > h. send fetch :corr-id 2
> >> > i. read 2 messages published :offsets [10 11] :corr-id 2
> >> > j.  send fetch ...
> >> >
> >> > The problem is I get the messages sent twice as a response to two
> >> > separate fetch requests. The correlation id is distinct, so it cannot
> >> > be that I read the response twice. The offsets of the 2 messages are
> >> > the same, so they are duplicates, and it's not the producer sending
> >> > the messages twice.
> >> >
> >> > Note: the same connection is kept open the whole time, and I send,
> >> > block, receive, then send again. After the first 2 messages are read,
> >> > the offsets are incremented and the next fetch asks kafka for messages
> >> > from the new offsets.
> >> >
> >> > Any ideas why kafka would be sending the messages again on the
> >> > second fetch request?
> >> >
> >> > Regards,
> >> >  Gerrit
> >> >
> >>
> >
> >
>
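
Putting Jun's point together with the a.-j. sequence above, the
send/block/receive loop described comes out roughly as below. A sketch under
stated assumptions, not Gerrit's actual code: broker host, client id,
starting offset and sizes are carried over from the broker log line earlier
in the thread.

    import kafka.api.FetchRequestBuilder;
    import kafka.javaapi.FetchResponse;
    import kafka.javaapi.consumer.SimpleConsumer;
    import kafka.message.MessageAndOffset;

    public class FetchLoop {
        public static void main(String[] args) {
            SimpleConsumer consumer =
                new SimpleConsumer("broker-host", 9092, 30000, 64 * 1024, "1");
            long offset = 187L; // in practice discovered via an OffsetRequest
            while (true) {
                FetchResponse resp = consumer.fetch(new FetchRequestBuilder()
                        .clientId("1")
                        .addFetch("ping", 0, offset, 1048576)
                        .maxWait(1000)
                        .minBytes(1)
                        .build());
                long requested = offset;
                for (MessageAndOffset mo : resp.messageSet("ping", 0)) {
                    if (mo.offset() < requested) continue; // skip re-sent older messages
                    System.out.println("got offset " + mo.offset());
                    offset = mo.nextOffset(); // advance before the next fetch
                }
            }
        }
    }

Step (e) in the sequence is just the stock console producer, e.g.
bin/kafka-console-producer.sh --broker-list localhost:9092 --topic ping.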

 