Based on the list above, we may be able to clear up the remaining jiras in roughly 3 weeks, so we can plan for a final release in a month or so. We would appreciate contributions and patches to close out the remaining jiras.
Thanks, Neha On Thu, Sep 19, 2013 at 4:15 AM, Haithem Jarraya <[EMAIL PROTECTED]> wrote:
KAFKA-1008 has been checked into the 0.8 branch and needs to be manually double-committed to trunk. To avoid merging problems, I suggest that for all future changes in the 0.8 branch, we double commit them to trunk. Any objections?
Jun On Mon, Oct 7, 2013 at 5:33 PM, Jun Rao <[EMAIL PROTECTED]> wrote:
Will the 0.8 release come from the HEAD of the 0.8 branch? I'd like to experiment with it to see if it solves some of the issues I'm seeing with consumers refusing to consume new messages. We've been using the beta1 version.
I remember mention of a Jira issue along these lines, which was fixed post 0.8-beta1. Which issue was that? (I'd like to see if it matches what I'm seeing.)
Jason On Wed, Oct 9, 2013 at 8:04 PM, Jay Kreps <[EMAIL PROTECTED]> wrote:
The latest HEAD does seem to solve one issue, where a new topic created after the consumer started would not be consumed.
But the bigger issue is that we have a couple of different consumers, both consuming the same set of topics (under different group ids), and both hang after a while (at about the same point). The topics in each case are selected with a filter (actually a relatively large number of topics, some of which are newly created over time). I'm still not sure whether the new version solves this issue (since it was a rare, transient thing anyway).
Jason On Sat, Oct 19, 2013 at 2:03 AM, Jun Rao <[EMAIL PROTECTED]> wrote:
So here's an outline of what seems to have happened.
I have a consumer, that uses a filter to consume a large number of topics (e.g. several hundred). Each topic has only a single partition.
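As a reference point for what filter-based topic selection means here, Kafka's whitelist filters select topics by regular expression, so a single pattern can cover hundreds of topics, including ones created later. A minimal sketch of that matching logic in Python (the topic names and pattern are hypothetical; this is not Kafka's actual implementation):

```python
import re

def whitelist_matches(pattern, topics):
    """Return the topic names that match the whitelist regex,
    the way a filter-based consumer selects its topics."""
    rx = re.compile(pattern)
    return [t for t in topics if rx.match(t)]

# Hypothetical topic names; a filter like "metrics\..*" selects
# every current and future topic under the "metrics." prefix.
topics = ["metrics.cpu", "metrics.disk", "logs.app", "metrics.net"]
print(whitelist_matches(r"metrics\..*", topics))
# → ['metrics.cpu', 'metrics.disk', 'metrics.net']
```

Because the filter is re-evaluated as topics appear, newly created topics that match the pattern are picked up without restarting the consumer.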
It normally has no trouble keeping up with all messages on all topics. However, a couple of days ago it seemed to hang and consumed nothing for several hours. I restarted the consumer (and updated it from 0.8-beta1 to the latest 0.8 HEAD). Data is flowing again, but some topics seem to be taking much longer than others to catch up. The slow ones appear to be the topics with more data than the others (a loose theory at present).
Does that make sense? If I understand things correctly, the consumer will fetch chunks of data from each topic/partition, in order, in a big loop? So if it has caught up with most of the topics, will it waste time re-polling all those (and getting nothing) before coming back to the topics that are lagging? Perhaps having a larger fetch size would help here?
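The intuition behind that question can be sketched with a toy simulation (this is not Kafka's actual fetcher, just an illustration of a fixed-order polling loop): if each pass of the loop touches every partition once, the caught-up partitions are re-polled for nothing on every pass, and a larger fetch size drains a lagging partition's backlog in proportionally fewer passes:

```python
def passes_to_drain(backlogs, fetch_size):
    """Toy round-robin loop: each pass polls every partition once,
    taking at most fetch_size messages from each. Returns how many
    full passes it takes until every backlog is empty."""
    backlogs = list(backlogs)
    passes = 0
    while any(b > 0 for b in backlogs):
        passes += 1
        backlogs = [max(0, b - fetch_size) for b in backlogs]
    return passes

# One lagging partition with 1000 queued messages among 299
# caught-up ones: a 10x larger fetch size means 10x fewer passes,
# each of which also wastes time re-polling the empty partitions.
backlog = [1000] + [0] * 299
print(passes_to_drain(backlog, 10))   # → 100
print(passes_to_drain(backlog, 100))  # → 10
```

If this model is roughly right, raising the consumer fetch size (the `fetch.message.max.bytes` consumer property in 0.8, if memory serves) should help the lagging topics catch up faster, at the cost of more memory per fetch.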
Jason On Sat, Oct 19, 2013 at 6:24 PM, Jason Rosenberg <[EMAIL PROTECTED]> wrote:
Sounds good, yup! /******************************************* Joe Stein Founder, Principal Consultant Big Data Open Source Security LLC http://www.stealth.ly Twitter: @allthingshadoop ********************************************/ On Oct 24, 2013, at 1:12 PM, Jun Rao <[EMAIL PROTECTED]> wrote: