Kafka user mailing list - Re: Purgatory - 2013-11-10, 17:15
Marc, thanks much for documenting the guts!
There is one correction for the Fetch Request handling. The write-up currently says:

  When is it satisfied?

  The fetch size requested is reached - i.e., the amount of data the consumer
  wishes to receive in one response
  (consumer configuration: *fetch.message.max.bytes*)

Actually, a fetch request is satisfied once it has accumulated at least min_bytes
of data (consumer configuration: *fetch.min.bytes*), not when the requested fetch
size is reached. As per the code:

  /**
   * A holding pen for fetch requests waiting to be satisfied
   */
  class FetchRequestPurgatory(requestChannel: RequestChannel, purgeInterval: Int)
          extends RequestPurgatory[DelayedFetch, Int](brokerId, purgeInterval) {
    this.logIdent = "[FetchRequestPurgatory-%d] ".format(brokerId)

    /**
     * A fetch request is satisfied when it has accumulated enough data to meet the min_bytes field
     */
    def checkSatisfied(messageSizeInBytes: Int, delayedFetch: DelayedFetch): Boolean = {
      val accumulatedSize = delayedFetch.bytesAccumulated.addAndGet(messageSizeInBytes)
      accumulatedSize >= delayedFetch.fetch.minBytes
    }
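
To make the accumulate-and-check behaviour concrete outside of KafkaApis, here is a
minimal standalone sketch of the same idea (not Kafka code; PendingFetch, addAndCheck
and PendingFetchDemo are made-up names): each incoming message's size is added to an
atomic counter, and the request counts as satisfied once the running total reaches
minBytes.

  import java.util.concurrent.atomic.AtomicLong

  // A pending fetch that remembers how many bytes have arrived so far.
  case class PendingFetch(minBytes: Long) {
    val bytesAccumulated = new AtomicLong(0L)

    // Add the size of a newly appended message and report whether the
    // request has now accumulated at least minBytes.
    def addAndCheck(messageSizeInBytes: Long): Boolean =
      bytesAccumulated.addAndGet(messageSizeInBytes) >= minBytes
  }

  object PendingFetchDemo extends App {
    val fetch = PendingFetch(minBytes = 100L)
    println(fetch.addAndCheck(40L))  // false: only 40 bytes accumulated so far
    println(fetch.addAndCheck(70L))  // true: 110 >= 100, the fetch is satisfied
  }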

On Fri, Nov 8, 2013 at 1:01 PM, Joel Koshy <[EMAIL PROTECTED]> wrote:
 