As the tools in 0.8 are not stable and we don't want to take the risk, we want to skip 0.8 and upgrade directly from beta1 to 0.8.1. So my question is whether we can do an in-place upgrade and let 0.8.1 reuse beta1's ZooKeeper and Kafka data. Assume that we will disable log compaction. Thanks.
We migrated from 0.8.0 to 0.8.1 last week. We have a 15-broker cluster, so it took a while to roll through the brokers one by one. Once I finished, I was finally able to complete a partition reassignment. I also had to do some manual cleanup, but Neha says it will be fixed soon:
Until then, if you have done any partition reassignment, you will have to watch your brokers as they come up. They may fail, and you will have to go delete the empty partition directories. On Thu, Dec 19, 2013 at 11:07 AM, Guozhang Wang <[EMAIL PROTECTED]> wrote:
This is the exception I ran into. I was able to fix it by deleting the /data/kafka/logs/Events2-124/ directory. That directory contained a non-zero-size index file and a zero-size log file. I had a bunch of these directories scattered around the cluster.
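Since these directories were scattered around the cluster, a small script can help find them. This is only a sketch under the assumptions described above (a zero-length .log segment sitting next to a non-empty .index in each partition directory); the segment file names and the log dir path are illustrative, and any directory it flags should be reviewed before deleting anything by hand:

```shell
# Flag partition directories whose first log segment is empty but whose
# index file is non-empty -- the corrupt state described in this thread.
# Assumes the base-offset-0 segment naming shown in the stack trace.
find_suspect_dirs() {
  log_dir="$1"
  for dir in "$log_dir"/*/; do
    log="${dir}00000000000000000000.log"
    idx="${dir}00000000000000000000.index"
    # non-empty index file + zero-length log file => cleanup candidate
    if [ -s "$idx" ] && [ -f "$log" ] && [ ! -s "$log" ]; then
      printf '%s\n' "$dir"
    fi
  done
}

# Example (hypothetical path): find_suspect_dirs /data/kafka/logs
```

Running this on each broker before a restart would surface the same directories the broker would otherwise crash on at startup.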
[2013-12-18 02:40:37,163] FATAL Fatal error during KafkaServerStable startup. Prepare to shutdown (kafka.server.KafkaServerStartable)
java.lang.IllegalArgumentException: requirement failed: Corrupt index found, index file (/data/kafka/logs/Events2-124/00000000000000000000.index) has non-zero size but the last offset is 0 and the base offset is 0
        at scala.Predef$.require(Predef.scala:145)
        at kafka.log.Log$$anonfun$loadSegments$5.apply(Log.scala:160)
        at kafka.log.Log$$anonfun$loadSegments$5.apply(Log.scala:159)
        at scala.collection.Iterator$class.foreach(Iterator.scala:631)
        at scala.collection.JavaConversions$JIteratorWrapper.foreach(JavaConversions.scala:474)
        at scala.collection.IterableLike$class.foreach(IterableLike.scala:79)
        at scala.collection.JavaConversions$JCollectionWrapper.foreach(JavaConversions.scala:495)
        at kafka.log.Log.loadSegments(Log.scala:159)
        at kafka.log.Log.<init>(Log.scala:64)
        at kafka.log.LogManager$$anonfun$loadLogs$1$$anonfun$apply$3.apply(LogManager.scala:120)
        at kafka.log.LogManager$$anonfun$loadLogs$1$$anonfun$apply$3.apply(LogManager.scala:115)
        at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:34)
        at scala.collection.mutable.ArrayOps.foreach(ArrayOps.scala:34)
        at kafka.log.LogManager$$anonfun$loadLogs$1.apply(LogManager.scala:115)
        at kafka.log.LogManager$$anonfun$loadLogs$1.apply(LogManager.scala:107)
        at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:34)
        at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:32)
        at kafka.log.LogManager.loadLogs(LogManager.scala:107)
        at kafka.log.LogManager.<init>(LogManager.scala:59)

On Fri, Dec 20, 2013 at 9:06 AM, Jun Rao <[EMAIL PROTECTED]> wrote:
This release does include the patch for KAFKA-1112, which means the issue is not fixed.
Drew, could you comment on KAFKA-1112 about how you can reproduce this issue so we can re-open it?
Guozhang On Mon, Dec 23, 2013 at 11:03 AM, Drew Goya <[EMAIL PROTECTED]> wrote: