Re: Review Request 20240: Follow-up KAFKA-1352: Standardize stack trace printing in logs

This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/20240/

(Updated April 11, 2014, 12:05 a.m.)
Review request for kafka.
Bugs: KAFKA-1352
    https://issues.apache.org/jira/browse/KAFKA-1352
Repository: kafka
Description (updated)

Standardize stack trace printing in logs:

This is the criterion I followed whenever we need to log a thrown exception (a sketch of these rules follows the list below):

1. No stack trace below WARN

2. For WARN,

a) Print the stack trace when calling library functions (i.e., non-Kafka functions) or when we caught a Throwable that could be one of various exception types.

b) Print e.toString (i.e., the exception class name + ": " + message) when we caught a Throwable but can be sure about the possible exception types.

c) Only print e.getMessage otherwise.

3. For ERROR, always print stack trace.
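
To make the rules concrete, here is a minimal Scala sketch, assuming an SLF4J-style logger named log and a hypothetical callLibrary() helper; it only illustrates the criterion above and is not code from the patch:

  import org.slf4j.LoggerFactory

  object LoggingCriterionSketch {
    private val log = LoggerFactory.getLogger(getClass)

    // Hypothetical stand-in for a third-party (non-Kafka) library call.
    private def callLibrary(): Unit = ()

    def examples(): Unit = {
      // 2a) WARN with the full stack trace: we call a non-Kafka library,
      //     so the caught Throwable could be one of many exception types.
      try callLibrary()
      catch { case t: Throwable => log.warn("library call failed", t) }

      // 2b) WARN with e.toString (class name + ": " + message) when we catch
      //     Throwable but the possible exception types are known.
      try callLibrary()
      catch { case t: Throwable => log.warn("library call failed: " + t.toString) }

      // 2c) WARN with only the message for a well-understood exception.
      try callLibrary()
      catch { case e: IllegalStateException => log.warn("unexpected state: " + e.getMessage) }

      // 3) ERROR always carries the full stack trace.
      try callLibrary()
      catch { case t: Throwable => log.error("fatal failure", t) }
    }
  }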

Also, there are some misuses of swallow, which should only be used when "we do not throw any more exceptions but just log the whole thing with its stack trace", not when "we really do not care whether it throws any exceptions".
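
As a hedged illustration (the real helper lives in kafka.utils; this is only an approximation of its intent), a swallow-style helper catches everything, logs the whole stack trace, and deliberately does not re-throw:

  import org.slf4j.LoggerFactory

  object SwallowSketch {
    private val log = LoggerFactory.getLogger(getClass)

    // Run the action; if it throws, log the full stack trace at WARN
    // and swallow the exception instead of propagating it.
    def swallowWarn(action: => Unit): Unit = {
      try action
      catch { case t: Throwable => log.warn("swallowed exception", t) }
    }
  }

So swallow is appropriate when the caller must keep going but the failure should still be fully visible in the log, not as a way of saying the failure does not matter.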
Moving forward, I would like to suggest some exception handling conventions (a sketch follows this list):

1. Only catch Throwable when 1) calling a library function whose thrown exceptions are not clearly defined, or 2) we REALLY want to "swallow" any exceptions thrown. In the second case, we should not re-throw any exceptions.

2. For our own Scala classes, document the possible checked Kafka exceptions in the comments as often as possible (for Java classes, do that in the method signatures). Then, whenever possible, case-enumerate all the possible exceptions and just log their error messages.

3. Only re-throw the same or another exception after logging when the function needs a return value and the thrown exception leaves it unable to return anything. In that case, throw a specific exception and catch it in the caller without double-logging.
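
A small Scala sketch of suggestions 2 and 3, with a hypothetical remoteLookup() call and made-up exception types; the point is the shape of the handling, not the specific names:

  import org.slf4j.LoggerFactory

  object RethrowSketch {
    private val log = LoggerFactory.getLogger(getClass)

    // Possible exceptions (documented here as suggestion 2 recommends):
    //   IllegalArgumentException - bad topic name
    //   java.io.IOException      - remote lookup failed
    def fetchMetadata(topic: String): String = {
      try remoteLookup(topic)
      catch {
        case e: IllegalArgumentException =>
          // Enumerated, well-understood case: the message alone is enough.
          log.warn("invalid topic name %s: %s".format(topic, e.getMessage))
          throw e
        case e: java.io.IOException =>
          // The method must return a value and cannot, so log with the stack
          // trace and re-throw a specific exception; the caller catches it
          // without logging the same failure a second time.
          log.error("failed to fetch metadata for topic " + topic, e)
          throw new RuntimeException("metadata fetch failed for " + topic, e)
      }
    }

    // Hypothetical remote call used only for this sketch.
    private def remoteLookup(topic: String): String = "metadata(" + topic + ")"
  }

The caller would then catch that specific exception and handle it (retry, fail the request, etc.) without logging it again.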
Diffs

  core/src/main/scala/kafka/client/ClientUtils.scala fc9e08423a4127e1d64be1e62def567ea9eb80a3
  core/src/main/scala/kafka/consumer/ConsumerFetcherManager.scala b9e2bea7b442a19bcebd1b350d39541a8c9dd068
  core/src/main/scala/kafka/consumer/ZookeeperConsumerConnector.scala 1dde4fcdd7004af798e9eac8dde289575e99fd11
  core/src/main/scala/kafka/controller/ControllerChannelManager.scala c95c650cffbeed27e837e7c2d628f9026feb2c17
  core/src/main/scala/kafka/controller/KafkaController.scala 933de9dd324c7086efe6aa610335ef370d9e9c12
  core/src/main/scala/kafka/network/SocketServer.scala 4976d9c3a66bc965f5870a0736e21c7b32650bab
  core/src/main/scala/kafka/producer/Producer.scala 4798481d573bbdce0ba39035c50f4c4411ad0469
  core/src/main/scala/kafka/producer/SyncProducer.scala 489f0077512d9a69be81649c490274964290fa40
  core/src/main/scala/kafka/producer/async/DefaultEventHandler.scala d8ac915de31a26d7aa67760d69373975cacd0c9d
  core/src/main/scala/kafka/producer/async/ProducerSendThread.scala 42e9c741c2dcef756416832f11d37678cb7710ee
  core/src/main/scala/kafka/server/AbstractFetcherThread.scala 3b15254f32252cf824d7a292889ac7662d73ada1
  core/src/main/scala/kafka/server/KafkaApis.scala d96229e2d4aa7006b0dbd81055ce5a2459d8758c
  core/src/main/scala/kafka/utils/Utils.scala 6bfbac16e2f8d68b8c711a0336c698aa6f610ae8

Diff: https://reviews.apache.org/r/20240/diff/
Testing
Thanks,

Guozhang Wang