Kafka >> mail # user >> Socket timeouts in 0.8


Re: Socket timeouts in 0.8
We've made some progress in our testing.  While I do not have a good
explanation for all of the better behavior, we have been able to move a
substantial number of messages (> 800K) through the system today without
any exceptions.

The big changes between last night's mess and today were: 1. I moved the
Kafka log dir (the segment files) to a separate drive from the system
drive, and 2. I reduced the number of network and I/O threads back down to
2 each.
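
For reference, both changes correspond to standard broker settings in
server.properties.  The snippet below only illustrates where they live; the
log path is an example, not our real mount point:

    # server.properties (illustrative values)
    log.dirs=/data/kafka-logs      # segment files on a dedicated drive, away from the system drive
    num.network.threads=2          # threads handling socket reads/writes
    num.io.threads=2               # threads handling disk appends/reads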

We also found a (probably) unrelated bug where we were getting the broker 0
and broker 1 host name mappings swapped (ZooKeeper returns the children of
a node in no guaranteed order), so we weren't asking for topic offsets from
the correct broker.  The code worked fine when there was only one broker,
but in a multi-broker cluster we got bogus results.
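
In other words, the host name mapping needs to be keyed by broker ID rather
than by the position of an entry in the children list.  A minimal sketch of
that idea in Java, assuming Kafka 0.8's /brokers/ids layout and the plain
ZooKeeper client (illustrative only, not our actual code):

    import java.util.HashMap;
    import java.util.Map;
    import org.apache.zookeeper.ZooKeeper;

    public class BrokerMap {
        // Build broker-id -> registration data by reading each /brokers/ids/<id>
        // znode explicitly; getChildren() makes no ordering guarantee, so the
        // position of an id in the returned list must not be relied on.
        public static Map<Integer, String> brokerInfo(ZooKeeper zk) throws Exception {
            Map<Integer, String> brokers = new HashMap<Integer, String>();
            for (String id : zk.getChildren("/brokers/ids", false)) {
                byte[] data = zk.getData("/brokers/ids/" + id, false, null);
                // In 0.8 the data is a small JSON blob with the broker's host and port.
                brokers.put(Integer.parseInt(id), new String(data, "UTF-8"));
            }
            return brokers;
        }
    }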

Thanks for all the help,
Bob
On Fri, Mar 22, 2013 at 11:27 AM, Bob Jervis <[EMAIL PROTECTED]> wrote:

> I'm also seeing, in the midst of the chaos (our app is generating 15GB of
> logs), the following event on one of our brokers:
>
> 2013-03-22 17:43:39,257 FATAL kafka.server.KafkaApis: [KafkaApi-1] Halting due to unrecoverable I/O error while handling produce request:
> kafka.common.KafkaStorageException: I/O exception in append to log 'v1-english-8-0'
>         at kafka.log.Log.append(Log.scala:218)
>         at kafka.server.KafkaApis$$anonfun$appendToLocalLog$2.apply(KafkaApis.scala:249)
>         at kafka.server.KafkaApis$$anonfun$appendToLocalLog$2.apply(KafkaApis.scala:242)
>         at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:206)
>         at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:206)
>         at scala.collection.immutable.HashMap$HashMap1.foreach(HashMap.scala:125)
>         at scala.collection.immutable.HashMap$HashTrieMap.foreach(HashMap.scala:344)
>         at scala.collection.immutable.HashMap$HashTrieMap.foreach(HashMap.scala:344)
>         at scala.collection.TraversableLike$class.map(TraversableLike.scala:206)
>         at scala.collection.immutable.HashMap.map(HashMap.scala:35)
>         at kafka.server.KafkaApis.appendToLocalLog(KafkaApis.scala:242)
>         at kafka.server.KafkaApis.handleProducerRequest(KafkaApis.scala:182)
>         at kafka.server.KafkaApis.handle(KafkaApis.scala:59)
>         at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:41)
>         at java.lang.Thread.run(Thread.java:662)
> Caused by: java.nio.channels.ClosedChannelException
>         at sun.nio.ch.FileChannelImpl.ensureOpen(FileChannelImpl.java:88)
>         at sun.nio.ch.FileChannelImpl.write(FileChannelImpl.java:184)
>         at kafka.message.ByteBufferMessageSet.writeTo(ByteBufferMessageSet.scala:128)
>         at kafka.log.FileMessageSet.append(FileMessageSet.scala:191)
>         at kafka.log.LogSegment.append(LogSegment.scala:64)
>         at kafka.log.Log.append(Log.scala:210)
>         ... 14 more
>
>
>
> On Fri, Mar 22, 2013 at 11:00 AM, Bob Jervis <[EMAIL PROTECTED]> wrote:
>
>> I am getting the logs and I am trying to make sense of them.  I see a
>> 'Received Request' log entry that appears to be what is coming in from our
>> app.  I don't see any 'Completed Request' entries that correspond to those.
>>  The only completed entries I see for the logs in question are from the
>> replica-fetcher.
>>
>> It is as if our app is asking the wrong broker and getting no answer, but
>> for some reason reporting it as a socket timeout.
>>
>> Broker 0 is getting and completing TopicMetadata requests in about 600
>> milliseconds each.
>> Broker 1 is not reporting ANY TopicMetadataRequests in the TRACE logs.
>>
>> Our app logs don't make any sense when I compare them to the broker logs:
>> how can we be getting timeouts in less than 1000 milliseconds?
>>
>> Our app is reporting this:
>>
>> 2013-03-22 17:42:23,047 WARN kafka.producer.async.DefaultEventHandler:
>> failed to send to broker 1 with data Map([v1-english-5,0] ->

 