Kafka >> mail # dev >> Re: [jira] Subscription: outstanding kafka patches

Re: [jira] Subscription: outstanding kafka patches
Please note that https://issues.apache.org/jira/browse/KAFKA-296 is not yet
committed, so I am removing it from the 0.7.1 release notes I am about to
send out with the release vote.

Should we move it to 0.7.2, or have it go into 0.8, which is going to
require changes for the new wire protocol anyway?

On Mon, Jun 11, 2012 at 3:00 PM, <[EMAIL PROTECTED]> wrote:

> Issue Subscription
> Filter: outstanding kafka patches (32 issues)
> The list of outstanding kafka patches
> Subscriber: kafka-mailing-list
> Key         Summary
> KAFKA-346   Don't call commitOffsets() during rebalance
>            https://issues.apache.org/jira/browse/KAFKA-346
> KAFKA-345   Add a listener to ZookeeperConsumerConnector to get notified
> on rebalance events
>            https://issues.apache.org/jira/browse/KAFKA-345
> KAFKA-341   Create a new single host system test to validate all replicas
> on 0.8 branch
>            https://issues.apache.org/jira/browse/KAFKA-341
> KAFKA-337   upgrade ZKClient to allow conditional updates in ZK
>            https://issues.apache.org/jira/browse/KAFKA-337
> KAFKA-335   Implement an embedded controller
>            https://issues.apache.org/jira/browse/KAFKA-335
> KAFKA-329   Remove the watches/broker for new topics and partitions and
> change create topic admin API to send start replica state change to all
> brokers
>            https://issues.apache.org/jira/browse/KAFKA-329
> KAFKA-323   Add the ability to use the async producer in the Log4j appender
>            https://issues.apache.org/jira/browse/KAFKA-323
> KAFKA-319   compression support added to php client does not pass unit
> tests
>            https://issues.apache.org/jira/browse/KAFKA-319
> KAFKA-318   update zookeeper dependency to 3.3.5
>            https://issues.apache.org/jira/browse/KAFKA-318
> KAFKA-314   Go Client Multi-produce
>            https://issues.apache.org/jira/browse/KAFKA-314
> KAFKA-313   Add JSON output and looping options to ConsumerOffsetChecker
>            https://issues.apache.org/jira/browse/KAFKA-313
> KAFKA-312   Add 'reset' operation for AsyncProducerDroppedEvents
>            https://issues.apache.org/jira/browse/KAFKA-312
> KAFKA-306   broker failure system test broken on replication branch
>            https://issues.apache.org/jira/browse/KAFKA-306
> KAFKA-298   Go Client support max message size
>            https://issues.apache.org/jira/browse/KAFKA-298
> KAFKA-297   Go Client Publisher Improvements
>            https://issues.apache.org/jira/browse/KAFKA-297
> KAFKA-296   Update Go Client to new version of Go
>            https://issues.apache.org/jira/browse/KAFKA-296
> KAFKA-291   Add builder to create configs for consumer and broker
>            https://issues.apache.org/jira/browse/KAFKA-291
> KAFKA-273   Occasional GZIP errors on the server while writing compressed
> data to disk
>            https://issues.apache.org/jira/browse/KAFKA-273
> KAFKA-267   Enhance ProducerPerformance to generate unique random Long
> value for payload
>            https://issues.apache.org/jira/browse/KAFKA-267
> KAFKA-260   Add audit trail to kafka
>            https://issues.apache.org/jira/browse/KAFKA-260
> KAFKA-251   The ConsumerStats MBean's PartOwnerStats  attribute is a string
>            https://issues.apache.org/jira/browse/KAFKA-251
> KAFKA-246   log configuration values used
>            https://issues.apache.org/jira/browse/KAFKA-246
> KAFKA-242   Subsequent calls of ConsumerConnector.createMessageStreams
> cause Consumer offset to be incorrect
>            https://issues.apache.org/jira/browse/KAFKA-242
> KAFKA-196   Topic creation fails on large values
>            https://issues.apache.org/jira/browse/KAFKA-196
> KAFKA-191   Investigate removing the synchronization in Log.flush
>            https://issues.apache.org/jira/browse/KAFKA-191
> KAFKA-175   Add helper scripts to wrap the current perf tools
>            https://issues.apache.org/jira/browse/KAFKA-175
> KAFKA-173   Support encoding for non ascii characters
>            https://issues.apache.org/jira/browse/KAFKA-173
Joe Stein
Twitter: @allthingshadoop <http://www.twitter.com/allthingshadoop>