Kafka user mailing list: java.lang.IllegalArgumentException Buffer.limit on FetchResponse.scala + 33


Re: java.lang.IllegalArgumentException Buffer.limit on FetchResponse.scala + 33
If a broker is the leader of multiple partitions of a topic, the high level
consumer will fetch all those partitions in a single fetch request. Then
the aggregate of the fetched data from multiple partitions could be more
than 2GB.

You can try using more consumers in the same consumer group to reduce #
partitions fetched per consumer.

Thanks,

Jun
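
(A minimal sketch of the "more consumers in the same group" idea, assuming the Kafka 0.8 high-level consumer Java API; the ZooKeeper address, group id, topic name, and 64MB fetch size are illustrative assumptions, not values from this thread.)

    import java.util.Collections;
    import java.util.List;
    import java.util.Map;
    import java.util.Properties;

    import kafka.consumer.Consumer;
    import kafka.consumer.ConsumerConfig;
    import kafka.consumer.KafkaStream;
    import kafka.javaapi.consumer.ConsumerConnector;

    public class GroupMember {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("zookeeper.connect", "zk1:2181");        // assumed ZooKeeper address
            props.put("group.id", "big-topic-group");          // same group.id in every process
            props.put("fetch.message.max.bytes", "67108864");  // 64MB per partition (illustrative)

            ConsumerConnector connector =
                Consumer.createJavaConsumerConnector(new ConsumerConfig(props));

            // One stream in this process; start more processes with the same group.id
            // so each consumer owns fewer partitions and #partitions * fetch size
            // per broker stays well under 2GB.
            Map<String, List<KafkaStream<byte[], byte[]>>> streams =
                connector.createMessageStreams(Collections.singletonMap("big-topic", 1));

            for (KafkaStream<byte[], byte[]> stream : streams.get("big-topic")) {
                // stream.iterator() yields the messages for the partitions this consumer owns
            }
        }
    }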
On Thu, Jan 2, 2014 at 8:52 AM, Gerrit Jansen van Vuuren <
[EMAIL PROTECTED]> wrote:

> No, I can't :(. I upped it because some of the messages can be big. The
> question still remains: 600MB is far from the 2GB int limit, so is there
> any reason why a 600MB max size would cause the fetch buffer to overflow?
>
>
>
> On Thu, Jan 2, 2014 at 5:19 PM, Jun Rao <[EMAIL PROTECTED]> wrote:
>
> > Could you reduce the max message size? Do you really expect to have a
> > single message of 600MB? After that, you can reduce the fetch size.
> >
> > Thanks,
> >
> > Jun
> >
> >
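
(A settings sketch along the lines of this suggestion. message.max.bytes, replica.fetch.max.bytes, and fetch.message.max.bytes are the relevant Kafka 0.8 config names; the 64MB value is only an illustrative assumption.)

    # broker (server.properties): cap the largest accepted message well below the fetch size
    # (64MB here is only an illustrative value)
    message.max.bytes=67108864
    # keep the replica fetch size >= message.max.bytes so replication still works
    replica.fetch.max.bytes=67108864

    # consumer: per-partition fetch buffer; must be >= message.max.bytes
    fetch.message.max.bytes=67108864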
> > On Thu, Jan 2, 2014 at 8:06 AM, Gerrit Jansen van Vuuren <
> > [EMAIL PROTECTED]> wrote:
> >
> > > There is a particular topic that has a lot of data in each message;
> > > there is nothing I can do about that.
> > > Because I have so much data, I try to split it over 8-12 partitions;
> > > if I reduce the partitions, I won't have enough consumers to consume
> > > the data in time.
> > >
> > >
> > > On Thu, Jan 2, 2014 at 4:50 PM, Jun Rao <[EMAIL PROTECTED]> wrote:
> > >
> > > > 600MB for the fetch size is considerably larger than the default. Is
> > > > there a particular reason for this? Also, how many partitions do you
> > > > have? You may have to reduce the fetch size further if there are
> > > > multiple partitions.
> > > >
> > > > Thanks,
> > > >
> > > > Jun
> > > >
> > > >
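
(To put numbers on the partition multiplication, using the 600MB figure from this thread: the high-level consumer sends one fetch request per broker, covering every partition that broker leads.)

    600MB fetch size               =   629,145,600 bytes per partition
    x 4 partitions on one broker   = 2,516,582,400 bytes in one response
    signed 32-bit maximum          = 2,147,483,647 bytes

So with 8-12 partitions, any broker that leads 4 or more of them returns a response whose 4-byte size field has already wrapped, even though each individual partition stays under the limit.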
> > > > On Thu, Jan 2, 2014 at 2:42 AM, Gerrit Jansen van Vuuren <
> > > > [EMAIL PROTECTED]> wrote:
> > > >
> > > > > Hi,
> > > > >
> > > > > I just double-checked my configuration: the broker has
> > > > > message.max.bytes set to 1GB, and the consumers have the same
> > > > > setting for the max fetch size. I've lowered this to 600MB and
> > > > > still see the same error :(
> > > > >
> > > > > At the moment Kafka is unusable for me; the only other alternative
> > > > > is writing my own client (as I'm doing with the producer). What a
> > > > > pain!
> > > > >
> > > > > On Wed, Jan 1, 2014 at 6:45 PM, Jun Rao <[EMAIL PROTECTED]> wrote:
> > > > >
> > > > > > In our wire protocol, we expect the first 4 bytes of a response
> > > > > > to be its size. If the actual size is larger than 2GB, what's
> > > > > > stored in those 4 bytes is the overflowed value. This can cause
> > > > > > some buffer sizes to be smaller than they should be later on. If
> > > > > > #partitions * fetch_size is larger than 2GB in a single fetch
> > > > > > request, you could hit this problem. You can try reducing the
> > > > > > fetch size. Ideally, the sender should catch this and throw an
> > > > > > exception, which we don't do currently.
> > > > > >
> > > > > > Thanks,
> > > > > >
> > > > > > Jun
> > > > > >
> > > > > >
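
(A small standalone sketch of how a wrapped size field turns into the Buffer.limit failure from the subject line. This is not the Kafka code itself; the 12 x 600MB numbers are only illustrative.)

    import java.nio.ByteBuffer;

    public class SizeFieldOverflow {
        public static void main(String[] args) {
            // 12 partitions * 600MB per-partition fetch in a single response
            long actualSize = 12L * 600L * 1024L * 1024L;   // 7,549,747,200 bytes
            int sizeField = (int) actualSize;               // wraps to -1,040,187,392

            // Using the wrapped value as a buffer limit fails the same way as the
            // stack trace in the subject: Buffer.limit rejects a negative argument
            // with java.lang.IllegalArgumentException.
            ByteBuffer.allocate(0).limit(sizeField);
        }
    }

With fewer partitions the wrapped value can stay positive but too small (e.g. 8 x 600MB wraps to 738,197,504 bytes), which matches the "smaller than it should be" case described above.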
> > > > > > On Wed, Jan 1, 2014 at 9:27 AM, Gerrit Jansen van Vuuren <
> > > > > > [EMAIL PROTECTED]> wrote:
> > > > > >
> > > > > > > Mm... could be, though I'm not sure if it's in a single request.
> > > > > > > I am moving a lot of data. Any pointer to where in the code the
> > > > > > > overflow might start?
> > > > > > > On 1 Jan 2014 18:13, "Jun Rao" <[EMAIL PROTECTED]> wrote:
> > > > > > >
> > > > > > > > Are you fetching more than 2GB of data in a single fetch
> > > > > > > > response (across all partitions)? Currently, we don't handle
> > > > > > > > integer overflow properly.
> > > > > > > >
> > > > > > > > Thanks,
> > > > > > > >
> > > > > > > > Jun
> > > > > > > >
> > > > > > > >
> > > > > > > > On Wed, Jan 1, 2014 at 4:24 AM, Gerrit Jansen van Vuuren <
> > > > > > > > [EMAIL PROTECTED]> wrote:

 