Re: Consumer behavior when message exceeds fetch.message.max.bytes
Jun Rao 2013-08-01, 15:14
Yes. It's good to enforce that. Could you file a jira and attach your patch?
On Thu, Aug 1, 2013 at 7:39 AM, Sam Meder <[EMAIL PROTECTED]> wrote:
> Seems like a good idea to enforce this? Maybe something like this:
> diff --git a/core/src/main/scala/kafka/server/KafkaConfig.scala b/core/src/main/scala/kafka/server/KafkaConfig.scala
> index a64b210..1c3bfdd 100644
> --- a/core/src/main/scala/kafka/server/KafkaConfig.scala
> +++ b/core/src/main/scala/kafka/server/KafkaConfig.scala
> @@ -198,7 +198,7 @@ class KafkaConfig private (val props: VerifiableProperties) extends ZKConfig(pro
>    val replicaSocketReceiveBufferBytes =
>    /* the number of byes of messages to attempt to fetch */
> -  val replicaFetchMaxBytes = props.getInt(ReplicaFetchMaxBytesProp, ConsumerConfig.FetchSize)
> +  val replicaFetchMaxBytes = props.getIntInRange(ReplicaFetchMaxBytesProp, ConsumerConfig.FetchSize, (messageMaxBytes, Int.MaxValue))
>    /* max wait time for each fetcher request issued by follower replicas*/
>    val replicaFetchWaitMaxMs = props.getInt(ReplicaFetchWaitMaxMsProp, 500)
> Not sure if message.max.bytes counts only the payload or the whole
> message plus any headers, so it may be that it should be a bit larger even.
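The enforcement in the patch amounts to a range check at config-load time: the property must fall within given bounds or the broker refuses to start. A minimal Java sketch of the same idea (the class and helper names here are illustrative, not Kafka's actual API):

```java
import java.util.Properties;

// Sketch of range-checked config parsing, analogous in spirit to
// props.getIntInRange(name, default, (min, max)) in KafkaConfig.
// All names are illustrative, not Kafka's real API.
public class ConfigRangeCheck {
    static int getIntInRange(Properties props, String name, int dflt, int min, int max) {
        String raw = props.getProperty(name);
        int value = (raw == null) ? dflt : Integer.parseInt(raw.trim());
        if (value < min || value > max) {
            throw new IllegalArgumentException(name + " has value " + value
                + " which is not in the range [" + min + ", " + max + "]");
        }
        return value;
    }

    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty("message.max.bytes", "1000000");
        props.setProperty("replica.fetch.max.bytes", "500000");

        int messageMaxBytes =
            getIntInRange(props, "message.max.bytes", 1000000, 0, Integer.MAX_VALUE);
        try {
            // replica.fetch.max.bytes must be at least message.max.bytes;
            // otherwise a follower could never fetch the largest allowed message.
            getIntInRange(props, "replica.fetch.max.bytes", 1024 * 1024,
                          messageMaxBytes, Integer.MAX_VALUE);
            System.out.println("config ok");
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

With the values above the check rejects the config, since replica.fetch.max.bytes (500000) is below message.max.bytes (1000000).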
> On Aug 1, 2013, at 7:04 AM, Jun Rao <[EMAIL PROTECTED]> wrote:
> > On the server, replica.fetch.max.bytes should be >= message.max.bytes.
> > Otherwise, the follower will get stuck when replicating data from the leader.
> > Thanks,
> > Jun
> > On Wed, Jul 31, 2013 at 10:10 AM, Sam Meder <[EMAIL PROTECTED]> wrote:
> >> I also noticed that there are two properties related to messages size on
> >> the server: replica.fetch.max.bytes and message.max.bytes. What happens
> >> when replica.fetch.max.bytes is lower than message.max.bytes? Should
> >> there even be two properties?
> >> /Sam
> >> On Jul 31, 2013, at 5:25 PM, Sam Meder <[EMAIL PROTECTED]> wrote:
> >>> We're expecting to occasionally have to deal with pretty large messages
> >> being sent to Kafka. We will of course set the fetch size appropriately
> >> high, but are concerned about the behavior when the message exceeds the
> >> fetch size. As far as I can tell, the current behavior when a message
> >> that is too large is encountered is to pretend it is not there and not
> >> notify the consumer in any way. IMO it would be better to throw an
> >> exception than to silently ignore the issue (with the current code one
> >> can't really distinguish a large message from no data at all).
> >>> Thoughts?
> >>> /Sam