Kafka >> mail # dev >> [jira] [Commented] (KAFKA-695) Broker shuts down due to attempt to read a closed index file


Jun Rao 2013-01-24, 19:31
[jira] [Commented] (KAFKA-695) Broker shuts down due to attempt to read a closed index file

    [ https://issues.apache.org/jira/browse/KAFKA-695?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13552342#comment-13552342 ]

Jun Rao commented on KAFKA-695:
-------------------------------

The weird thing is that the file actually exists on disk (this is a low volume topic that has only 1 segment) and there was no logging that shows the segment has been deleted.
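
For background (not part of the original thread): `java.nio` throws `ClosedChannelException` whenever `read` is invoked on a `FileChannel` that has been closed, regardless of whether the backing file still exists on disk — which is consistent with the observation above that the segment file is present but unreadable. A minimal sketch reproducing the failure mode (the temp-file name is illustrative, not a real Kafka path):

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.ClosedChannelException;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class ClosedChannelDemo {
    // Reproduces the failure mode: reading from a FileChannel that was
    // closed while the backing file is still present on disk.
    static String demo() throws IOException {
        Path index = Files.createTempFile("00000000000000000000", ".index");
        Files.write(index, new byte[]{1, 2, 3, 4});

        FileChannel channel = FileChannel.open(index, StandardOpenOption.READ);
        channel.close(); // some other code path closed the channel

        try {
            channel.read(ByteBuffer.allocate(4), 0); // throws: channel is closed
            return "read succeeded";
        } catch (ClosedChannelException e) {
            // The file itself was never deleted -- only the channel was closed.
            return "ClosedChannelException thrown; file exists = " + Files.exists(index);
        } finally {
            Files.deleteIfExists(index);
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(demo());
    }
}
```

So the exception alone does not imply the segment was deleted; it only proves the channel was closed before the read, e.g. by a concurrent close of the segment.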
                
> Broker shuts down due to attempt to read a closed index file
> ------------------------------------------------------------
>
>                 Key: KAFKA-695
>                 URL: https://issues.apache.org/jira/browse/KAFKA-695
>             Project: Kafka
>          Issue Type: Bug
>          Components: log
>    Affects Versions: 0.8
>            Reporter: Neha Narkhede
>            Priority: Blocker
>
> Broker shuts down with the following error message -
> 2013/01/11 01:43:51.320 ERROR [KafkaApis] [request-expiration-task] [kafka] []  [KafkaApi-277] error when processing request (service_metrics,2,39192,2000000)
> java.nio.channels.ClosedChannelException
>         at sun.nio.ch.FileChannelImpl.ensureOpen(FileChannelImpl.java:88)
>         at sun.nio.ch.FileChannelImpl.read(FileChannelImpl.java:613)
>         at kafka.log.FileMessageSet.searchFor(FileMessageSet.scala:82)
>         at kafka.log.LogSegment.translateOffset(LogSegment.scala:76)
>         at kafka.log.LogSegment.read(LogSegment.scala:106)
>         at kafka.log.Log.read(Log.scala:386)
>         at kafka.server.KafkaApis.kafka$server$KafkaApis$$readMessageSet(KafkaApis.scala:369)
>         at kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$readMessageSets$1.apply(KafkaApis.scala:327)
>         at kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$readMessageSets$1.apply(KafkaApis.scala:323)
>         at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:206)
>         at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:206)
>         at scala.collection.immutable.Map$Map1.foreach(Map.scala:105)
>         at scala.collection.TraversableLike$class.map(TraversableLike.scala:206)
>         at scala.collection.immutable.Map$Map1.map(Map.scala:93)
>         at kafka.server.KafkaApis.kafka$server$KafkaApis$$readMessageSets(KafkaApis.scala:323)
>         at kafka.server.KafkaApis$FetchRequestPurgatory.expire(KafkaApis.scala:519)
>         at kafka.server.KafkaApis$FetchRequestPurgatory.expire(KafkaApis.scala:501)
>         at kafka.server.RequestPurgatory$ExpiredRequestReaper.run(RequestPurgatory.scala:222)
>         at java.lang.Thread.run(Thread.java:619)
> 2013/01/11 01:43:52.815 INFO [Processor] [kafka-processor-10251-2] [kafka] []  Closing socket connection to /172.20.72.244.
> 2013/01/11 01:43:54.286 INFO [Processor] [kafka-processor-10251-3] [kafka] []  Closing socket connection to /172.20.72.243.
> 2013/01/11 01:43:54.385 ERROR [LogManager] [kafka-logflusher-1] [kafka] []  [Log Manager on Broker 277] Error flushing topic service_metrics
> java.nio.channels.ClosedChannelException
>         at sun.nio.ch.FileChannelImpl.ensureOpen(FileChannelImpl.java:88)
>         at sun.nio.ch.FileChannelImpl.force(FileChannelImpl.java:349)
>         at kafka.log.FileMessageSet$$anonfun$flush$1.apply$mcV$sp(FileMessageSet.scala:154)
>         at kafka.log.FileMessageSet$$anonfun$flush$1.apply(FileMessageSet.scala:154)
>         at kafka.log.FileMessageSet$$anonfun$flush$1.apply(FileMessageSet.scala:154)
>         at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
>         at kafka.log.FileMessageSet.flush(FileMessageSet.scala:153)
>         at kafka.log.LogSegment.flush(LogSegment.scala:151)
>         at kafka.log.Log.flush(Log.scala:493)
>         at kafka.log.LogManager$$anonfun$kafka$log$LogManager$$flushDirtyLogs$2.apply(LogManager.scala:319)
>         at kafka.log.LogManager$$anonfun$kafka$log$LogManager$$flushDirtyLogs$2.apply(LogManager.scala:310)
>         at scala.collection.Iterator$class.foreach(Iterator.scala:631)

This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

 
Jay Kreps 2013-01-28, 04:21
Neha Narkhede 2013-02-01, 18:16
Jun Rao 2013-02-05, 16:48