Re: custom kafka consumer - strangeness
Hi,

I've finally fixed this by closing the connection on timeout and creating a
new connection on the next send.
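
In outline, the workaround looks something like this (a rough sketch,
assuming Netty 4.x, not the actual client code; all names here are
illustrative):

import io.netty.bootstrap.Bootstrap;
import io.netty.channel.Channel;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioSocketChannel;

public class ReconnectingSender {
    private final EventLoopGroup group = new NioEventLoopGroup();
    private final String host;
    private final int port;
    private volatile Channel channel; // null means "no usable connection"

    public ReconnectingSender(String host, int port) {
        this.host = host;
        this.port = port;
    }

    // Called from the read path when a response does not arrive in time:
    // drop the connection rather than reuse a socket in an unknown state.
    public void onReadTimeout() {
        Channel ch = channel;
        channel = null;
        if (ch != null) {
            ch.close();
        }
    }

    // Every send goes through here; if the previous channel was closed by
    // a timeout, a fresh connection is created lazily.
    public Channel channelForSend() throws InterruptedException {
        Channel ch = channel;
        if (ch == null || !ch.isActive()) {
            Bootstrap b = new Bootstrap()
                    .group(group)
                    .channel(NioSocketChannel.class)
                    .handler(new ChannelInitializer<SocketChannel>() {
                        @Override
                        protected void initChannel(SocketChannel sc) {
                            // request/response codecs would be added here
                        }
                    });
            ch = b.connect(host, port).sync().channel();
            channel = ch;
        }
        return ch;
    }
}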

Thanks,
 Gerrit
On Tue, Jan 14, 2014 at 10:20 AM, Gerrit Jansen van Vuuren <[EMAIL PROTECTED]> wrote:

> Hi,
>
> Thanks, I will do this.
>
>
>
> On Tue, Jan 14, 2014 at 9:51 AM, Joe Stein <[EMAIL PROTECTED]> wrote:
>
>> Hi Gerrit, do you have a ticket already for this issue? Is it possible
>> to attach code that reproduces it? It would be great if you could run
>> it against a Kafka VM: you can grab one for 0.8.0 from this project,
>> https://github.com/stealthly/scala-kafka, launch it and add whatever
>> you need to reproduce the issue, or use the one from
>> https://issues.apache.org/jira/browse/KAFKA-1173 for 0.8.1. I think if
>> you can reproduce it comfortably in a controlled, isolated environment,
>> that would be helpful for folks to reproduce it and work towards a
>> resolution.... At least if it is a bug we can get a detailed capture of
>> what the bug is in the JIRA ticket and start discussing how to fix it.
>>
>> /*******************************************
>>  Joe Stein
>>  Founder, Principal Consultant
>>  Big Data Open Source Security LLC
>>  http://www.stealth.ly
>>  Twitter: @allthingshadoop <http://www.twitter.com/allthingshadoop>
>> ********************************************/
>>
>>
>> On Tue, Jan 14, 2014 at 3:38 AM, Gerrit Jansen van Vuuren <[EMAIL PROTECTED]> wrote:
>>
>> > Yes, I'm using my own client following:
>> > https://cwiki.apache.org/confluence/display/KAFKA/A+Guide+To+The+Kafka+Protocol
>> >
>> > Everything works except for this weirdness.
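>> >
>> > For reference, the v0 fetch request layout from that guide is roughly
>> > as below (an illustrative sketch of an encoder, not the actual code;
>> > the max-wait, min-bytes, and max-bytes values are placeholders):
>> >
>> > import java.nio.ByteBuffer;
>> > import java.nio.charset.Charset;
>> >
>> > // Size, ApiKey=1, ApiVersion=0, CorrelationId, ClientId, ReplicaId,
>> > // MaxWaitTime, MinBytes, [TopicName [Partition FetchOffset MaxBytes]]
>> > public class FetchRequestEncoder {
>> >     public static ByteBuffer encode(String clientId, String topic,
>> >                                     int partition, long offset) {
>> >         Charset utf8 = Charset.forName("UTF-8");
>> >         byte[] client = clientId.getBytes(utf8);
>> >         byte[] topicB = topic.getBytes(utf8);
>> >         int body = 2 + 2 + 4 + (2 + client.length) // request header
>> >                  + 4 + 4 + 4                       // replica/wait/min
>> >                  + 4 + (2 + topicB.length)         // topic array (1)
>> >                  + 4 + 4 + 8 + 4;                  // partition array (1)
>> >         ByteBuffer buf = ByteBuffer.allocate(4 + body);
>> >         buf.putInt(body);            // Size, excludes this field
>> >         buf.putShort((short) 1);     // ApiKey: fetch
>> >         buf.putShort((short) 0);     // ApiVersion
>> >         buf.putInt(1);               // CorrelationId
>> >         buf.putShort((short) client.length).put(client); // ClientId
>> >         buf.putInt(-1);              // ReplicaId: -1 for consumers
>> >         buf.putInt(10000);           // MaxWaitTime (ms), placeholder
>> >         buf.putInt(1);               // MinBytes, placeholder
>> >         buf.putInt(1);               // one topic
>> >         buf.putShort((short) topicB.length).put(topicB);
>> >         buf.putInt(1);               // one partition
>> >         buf.putInt(partition);
>> >         buf.putLong(offset);         // FetchOffset
>> >         buf.putInt(1024 * 1024);     // MaxBytes, placeholder
>> >         buf.flip();
>> >         return buf;
>> >     }
>> > }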
>> >
>> >
>> > On Tue, Jan 14, 2014 at 5:50 AM, Jun Rao <[EMAIL PROTECTED]> wrote:
>> >
>> > > So, you implemented your own consumer client using Netty?
>> > >
>> > > Thanks,
>> > >
>> > > Jun
>> > >
>> > >
>> > > On Mon, Jan 13, 2014 at 8:42 AM, Gerrit Jansen van Vuuren <[EMAIL PROTECTED]> wrote:
>> > >
>> > > > I'm using Netty with async write and read.
>> > > > For reads I used a timeout such that if I do not see anything on
>> > > > the read channel, my read function times out and returns null.
>> > > > I do not see any error on the socket, and the same socket is used
>> > > > throughout all of the fetches.
>> > > >
>> > > > I'm using the console producer, and messages are "11", "22",
>> > > > "abc", "iiii", etc.
>> > > >
>> > > > I can reliably reproduce it every time.
>> > > >
>> > > > It's weird, yes: no compression is used, and the timeout happens
>> > > > for the same scenario every time.
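>> > > >
>> > > > The read side is basically this shape (again just a sketch, not
>> > > > the actual code): the Netty handler pushes decoded responses onto
>> > > > a queue, and the blocking read polls it and returns null on
>> > > > timeout.
>> > > >
>> > > > import java.util.concurrent.BlockingQueue;
>> > > > import java.util.concurrent.LinkedBlockingQueue;
>> > > > import java.util.concurrent.TimeUnit;
>> > > >
>> > > > public class TimedReader<T> {
>> > > >     private final BlockingQueue<T> responses =
>> > > >             new LinkedBlockingQueue<T>();
>> > > >
>> > > >     // Called from the channel handler when a response is decoded.
>> > > >     public void onResponse(T response) {
>> > > >         responses.offer(response);
>> > > >     }
>> > > >
>> > > >     // Next response, or null if nothing arrives within timeoutMs.
>> > > >     public T read(long timeoutMs) throws InterruptedException {
>> > > >         return responses.poll(timeoutMs, TimeUnit.MILLISECONDS);
>> > > >     }
>> > > > }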
>> > > >
>> > > >
>> > > >
>> > > > On Mon, Jan 13, 2014 at 4:44 PM, Jun Rao <[EMAIL PROTECTED]> wrote:
>> > > >
>> > > > > I can't seem to find the log trace for the timed out fetch
>> > > > > request (every fetch request seems to have a corresponding
>> > > > > completed entry). For the timed out fetch request, is it that
>> > > > > the broker never completed the request, or is it that it just
>> > > > > took longer than the socket timeout to finish processing the
>> > > > > request? Do you use large messages in your test?
>> > > > >
>> > > > > If you haven't enabled compression, it's weird that you will
>> > > > > re-get 240 and 241 with an offset of 242 in the fetch request
>> > > > > (with compression that could happen, since the broker hands back
>> > > > > whole compressed message sets). Is that easily reproducible?
>> > > > >
>> > > > > Thanks,
>> > > > >
>> > > > > Jun
>> > > > >
>> > > > >
>> > > > > On Mon, Jan 13, 2014 at 1:26 AM, Gerrit Jansen van Vuuren <[EMAIL PROTECTED]> wrote:
>> > > > >
>> > > > > > Hi,
>> > > > > >
>> > > > > > The offset in g is 240, and in i 242; the last message read
>> > > > > > was at offset 239.
>> > > > > >
>> > > > > > After reading from 0 - 239, I make another request for 240;
>> > > > > > this request times out and never returns.
>> > > > > > I then manually add 2 entries via the console producer, all
>> > > > > > the time while making a request for 240 every 10 seconds; all
>> > > > > > subsequent

 