Kafka user mailing list >> Socket timeouts in 0.8


Bob Jervis 2013-03-21, 19:46
Jun Rao 2013-03-22, 04:17
Bob Jervis 2013-03-22, 16:38
Jun Rao 2013-03-22, 17:08

Bob Jervis 2013-03-22, 18:01
Re: Socket timeouts in 0.8
I'm also seeing, in the midst of the chaos (our app is generating 15GB of
logs), the following event on one of our brokers:

2013-03-22 17:43:39,257 FATAL kafka.server.KafkaApis: [KafkaApi-1] Halting due to unrecoverable I/O error while handling produce request:
kafka.common.KafkaStorageException: I/O exception in append to log 'v1-english-8-0'
        at kafka.log.Log.append(Log.scala:218)
        at kafka.server.KafkaApis$$anonfun$appendToLocalLog$2.apply(KafkaApis.scala:249)
        at kafka.server.KafkaApis$$anonfun$appendToLocalLog$2.apply(KafkaApis.scala:242)
        at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:206)
        at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:206)
        at scala.collection.immutable.HashMap$HashMap1.foreach(HashMap.scala:125)
        at scala.collection.immutable.HashMap$HashTrieMap.foreach(HashMap.scala:344)
        at scala.collection.immutable.HashMap$HashTrieMap.foreach(HashMap.scala:344)
        at scala.collection.TraversableLike$class.map(TraversableLike.scala:206)
        at scala.collection.immutable.HashMap.map(HashMap.scala:35)
        at kafka.server.KafkaApis.appendToLocalLog(KafkaApis.scala:242)
        at kafka.server.KafkaApis.handleProducerRequest(KafkaApis.scala:182)
        at kafka.server.KafkaApis.handle(KafkaApis.scala:59)
        at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:41)
        at java.lang.Thread.run(Thread.java:662)
Caused by: java.nio.channels.ClosedChannelException
        at sun.nio.ch.FileChannelImpl.ensureOpen(FileChannelImpl.java:88)
        at sun.nio.ch.FileChannelImpl.write(FileChannelImpl.java:184)
        at kafka.message.ByteBufferMessageSet.writeTo(ByteBufferMessageSet.scala:128)
        at kafka.log.FileMessageSet.append(FileMessageSet.scala:191)
        at kafka.log.LogSegment.append(LogSegment.scala:64)
        at kafka.log.Log.append(Log.scala:210)
        ... 14 more
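
[Editor's note: for context on the "Caused by" line, java.nio raises ClosedChannelException whenever a write reaches a FileChannel that has already been closed, which is what FileChannelImpl.ensureOpen checks in the trace above. Below is a minimal, self-contained Scala sketch of that JDK failure mode; the object and file names are illustrative only and this is not Kafka code.]

import java.io.RandomAccessFile
import java.nio.ByteBuffer
import java.nio.channels.ClosedChannelException

// Illustrative only: reproduces the JDK behaviour behind the "Caused by" above.
object ClosedChannelSketch extends App {
  val file = java.io.File.createTempFile("segment-sketch", ".log")
  val channel = new RandomAccessFile(file, "rw").getChannel

  // Simulate the segment's channel being closed out from under the writer,
  // e.g. during shutdown or after an earlier I/O error.
  channel.close()

  try channel.write(ByteBuffer.wrap("message".getBytes("UTF-8")))
  catch {
    case e: ClosedChannelException =>
      // Same exception type that FileChannelImpl.ensureOpen throws in the trace.
      println("append failed: " + e)
  }
}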

On Fri, Mar 22, 2013 at 11:00 AM, Bob Jervis <[EMAIL PROTECTED]> wrote:

> I am getting the logs and I am trying to make sense of them.  I see a
> 'Received Request' log entry that appears to be what is coming in from our
> app.  I don't see any 'Completed Request' entries that correspond to those.
>  The only completed entries I see for the logs in question are from the
> replica-fetcher.
>
> It is as if our app is asking the wrong broker and getting no answer, but
> for some reason reporting it as a socket timeout.
>
> Broker 0 is getting and completing TopicMetadata requests in about 600
> milliseconds each.
> Broker 1 is not reporting ANY TopicMetadataRequests in the TRACE logs.
>
> Our app logs don't make any sense when I compare them to the broker logs:
> how can we be getting timeouts in less than 1000 milliseconds?
>
> Our app is reporting this:
>
> 2013-03-22 17:42:23,047 WARN kafka.producer.async.DefaultEventHandler:
> failed to send to broker 1 with data Map([v1-english-5,0] ->
> ByteBufferMessageSet(MessageAndOffset(Message(magic = 0, attributes = 0,
> crc = 2606857931, key = null, payload = java.nio.HeapByteBuffer[pos=0
> lim=1700 cap=1700]),0), MessageAndOffset(Message(magic = 0, attributes = 0,
> crc = 735213417, key = null, payload = java.nio.HeapByteBuffer[pos=0
> lim=1497 cap=1497]),1), MessageAndOffset(Message(magic = 0, attributes = 0,
> crc = 2435755724, key = null, payload = java.nio.HeapByteBuffer[pos=0
> lim=1494 cap=1494]),2), MessageAndOffset(Message(magic = 0, attributes = 0,
> crc = 202370440, key = null, paylo.....
> java.net.SocketTimeoutException
>         at sun.nio.ch.SocketAdaptor$SocketInputStream.read(SocketAdaptor.java:201)
>         at sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:86)
>         at java.nio.channels.Channels$ReadableByteChannelImpl.read(Channels.java:221)
>         at kafka.utils.Utils$.read(Utils.scala:372)
>         at kafka.network.BoundedByteBufferReceive.readFrom(BoundedByteBufferReceive.scala:54)
>         at kafka.network.Receive$class.readCompletely(Transmission.scala:56)
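
[Editor's note on the sub-second timeouts asked about in the quoted message: in the 0.8 producer, request.timeout.ms (default 10000 ms) is, as far as I can tell, also used as the socket read timeout on the producer's blocking channel, so a SocketTimeoutException arriving in well under a second usually means that setting has been overridden somewhere. Below is a minimal Scala sketch of the relevant producer properties, assuming the old kafka.producer.Producer API; the broker list and values are placeholders, not taken from this thread.]

import java.util.Properties
import kafka.producer.{KeyedMessage, Producer, ProducerConfig}

// Sketch of 0.8 producer settings that bear on send timeouts and retries.
object ProducerTimeoutSketch extends App {
  val props = new Properties()
  props.put("metadata.broker.list", "broker0:9092,broker1:9092") // placeholder hosts
  props.put("serializer.class", "kafka.serializer.StringEncoder")
  props.put("request.required.acks", "1")
  // Believed to double as the producer's socket read timeout in 0.8;
  // default is 10000 ms, so sub-second SocketTimeoutExceptions suggest an override.
  props.put("request.timeout.ms", "10000")
  // The DefaultEventHandler retries a failed send this many times,
  // pausing retry.backoff.ms between attempts.
  props.put("message.send.max.retries", "3")
  props.put("retry.backoff.ms", "100")

  val producer = new Producer[String, String](new ProducerConfig(props))
  producer.send(new KeyedMessage[String, String]("v1-english-5", "test payload"))
  producer.close()
}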

 
Neha Narkhede 2013-03-22, 19:54
Bob Jervis 2013-03-22, 23:25
Bob Jervis 2013-03-22, 16:44