Kafka dev mailing list: Log4jAppender backoff if server is down


David Arthur 2012-07-31, 20:09
Neha Narkhede 2012-07-31, 21:48
Re: Log4jAppender backoff if server is down
Here are some log snippets:

Kafka server logs: https://gist.github.com/c440ada8daa629e337e2
Solr logs: https://gist.github.com/42624c901fc7967fd137

In this case, I am sending all of the "org.apache.solr" logs to Kafka, so each document update in Solr produces a log message. Each update to Solr produced an exception like the ones in the gists above, which caused things to slow way down.
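For reference, the wiring is roughly this in log4j.properties (a sketch; the Host/Port/Topic property names are assumed from the 0.7-era kafka.producer.KafkaLog4jAppender, and the topic name is made up):

    # Route all org.apache.solr.* log events to a Kafka topic
    log4j.logger.org.apache.solr=INFO, KAFKA
    log4j.appender.KAFKA=kafka.producer.KafkaLog4jAppender
    log4j.appender.KAFKA.Host=localhost
    log4j.appender.KAFKA.Port=9092
    log4j.appender.KAFKA.Topic=solr-logs
    log4j.appender.KAFKA.layout=org.apache.log4j.PatternLayout
    log4j.appender.KAFKA.layout.ConversionPattern=%d %p %c - %m%n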

My earlier statement about Kafka being up but unable to write logs was incorrect. During this period, it seems Kafka was simply down (our supervisor that restarts Kafka gave up after a few tries). So when Kafka is down, what should the client behavior be?

Ideally, to me, the client would know which brokers are available through ZK watches and simply refuse to attempt a send/produce if none are available.
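A minimal sketch of that idea, assuming the ZooKeeper path /brokers/ids where brokers register ephemeral nodes (BrokerAvailabilityTracker is a hypothetical helper, not an existing Kafka class):

    import java.util.List;
    import org.apache.zookeeper.WatchedEvent;
    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.ZooKeeper;

    // Tracks live brokers via a watch on /brokers/ids and lets callers
    // skip sends entirely when no broker is registered.
    public class BrokerAvailabilityTracker implements Watcher {
        private static final String BROKER_IDS_PATH = "/brokers/ids";
        private final ZooKeeper zk;
        private volatile boolean brokersAvailable;

        public BrokerAvailabilityTracker(String zkConnect) throws Exception {
            zk = new ZooKeeper(zkConnect, 30000, this);
            refresh();
        }

        // Re-reads the broker list and re-arms the watch.
        private void refresh() {
            try {
                List<String> ids = zk.getChildren(BROKER_IDS_PATH, this);
                brokersAvailable = !ids.isEmpty();
            } catch (Exception e) {
                brokersAvailable = false; // if we can't tell, assume down
            }
        }

        // Fires on children changes and session events; either way, re-check.
        public void process(WatchedEvent event) {
            refresh();
        }

        // The appender would consult this before attempting a send.
        public boolean brokersAvailable() {
            return brokersAvailable;
        }
    }

The appender's append() would then drop (or buffer) events while brokersAvailable() is false, instead of paying for a failed send on every log message.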

What do you guys think? I'm not saying this is necessarily a Kafka issue; I'm just not sure what the best thing to do here is.

Cheers
-David

On Jul 31, 2012, at 5:48 PM, Neha Narkhede wrote:

> David,
>
> Would you mind sending around the error stack traces ? That will help
> determine the right fix.
>
> Thanks,
> Neha
>
> On Tue, Jul 31, 2012 at 1:09 PM, David Arthur <[EMAIL PROTECTED]> wrote:
>> Greetings all,
>>
>> I'm using the KafkaLog4jAppender with Solr and ran into an interesting issue recently. The disk filled up on my Kafka broker (just a single broker; this is a dev environment) and Solr slowed to a near halt. My best guess is that each log4j message was incurring quite a bit of overhead dealing with exceptions coming back from the Kafka broker.
>>
>> So I'm wondering, would it make sense to implement some backoff strategy for this client if it starts getting exceptions from the server (see the sketch after this quote)? Alternatively, the Kafka broker could mark itself as "down" in ZooKeeper if it gets into certain situations (like a full disk). I guess this really could apply to any client, not just the log4j appender.
>>
>> Thanks!
>> -David
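A minimal sketch of the backoff idea from the quoted message, as a hypothetical gate the appender could check around its send call (class name and constants are made up):

    // Exponential backoff gate: after a failed send, drop log events until
    // a cool-off window has elapsed; double the window on each new failure.
    public class SendBackoff {
        private static final long INITIAL_BACKOFF_MS = 500;
        private static final long MAX_BACKOFF_MS = 60000;

        private long backoffMs = INITIAL_BACKOFF_MS;
        private long retryAtMs = 0; // earliest time the next attempt is allowed

        // Called before each send; false means "skip this event".
        public synchronized boolean mayAttempt() {
            return System.currentTimeMillis() >= retryAtMs;
        }

        // Called when a send throws; pushes the next attempt further out.
        public synchronized void onFailure() {
            retryAtMs = System.currentTimeMillis() + backoffMs;
            backoffMs = Math.min(backoffMs * 2, MAX_BACKOFF_MS);
        }

        // Called after a successful send; resets the window.
        public synchronized void onSuccess() {
            backoffMs = INITIAL_BACKOFF_MS;
            retryAtMs = 0;
        }
    }

With this, a dead broker costs at most one failed send per backoff window rather than one per log message, which is what appears to have dragged Solr down.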

Neha Narkhede 2012-08-01, 16:07