Re: Review Request 18102: New Producer Integration Test Phase II

This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/18102/#review35370

clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java
<https://reviews.apache.org/r/18102/#comment65847>

    Why are we using instanceof instead of catching the individual exception types?
    
    Also why are we wrapping RuntimeExceptions in KafkaException?
    
    Basically I don't think I follow the rationale here.
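
    For illustration, a minimal sketch of the alternative being suggested, with stand-in names (doSend, record, callback are not the patch code): catch the expected types directly and let unexpected RuntimeExceptions propagate unwrapped.

    import org.apache.kafka.common.errors.ApiException

    try {
      doSend(record) // hypothetical send path
    } catch {
      case e: ApiException =>
        callback.onCompletion(null, e) // expected client-side error, surfaced via the callback
      // no catch-all case: an unexpected RuntimeException propagates as-is
    }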

clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java
<https://reviews.apache.org/r/18102/#comment65848>

    You seem to have removed the close on the accumulator. Is this an accident?

clients/src/main/java/org/apache/kafka/clients/producer/internals/FutureRecordMetadata.java
<https://reviews.apache.org/r/18102/#comment65849>

    Hmm, so we need to make a new atomic reference now? This seems much worse than just recreating the RecordMetadata instance, no?
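
    A rough sketch of the recreate-on-read alternative (class shape, fields, and the RecordMetadata constructor are approximated, not the actual code):

    class FutureRecordMetadata(result: ProduceRequestResult, relativeOffset: Long) {
      // build a fresh immutable instance per call; no shared mutable cell to manage
      def value(): RecordMetadata =
        new RecordMetadata(result.topicPartition, result.baseOffset + relativeOffset)
    }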

clients/src/main/java/org/apache/kafka/clients/producer/internals/RecordBatch.java
<https://reviews.apache.org/r/18102/#comment65850>

    I'm not sold that this refactor actually made things better. Now everything goes through future.get, which is needlessly synchronized, right?
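
    To sketch the concern (illustrative names only, not the patch code): a volatile flag plus a plain field read is lock-free once the batch completes, whereas routing every read through a synchronized get serializes callers.

    class BatchResult {
      @volatile private var done = false
      private var baseOffset = -1L // written once, before done is set

      def complete(offset: Long): Unit = { baseOffset = offset; done = true }
      def offsetIfDone: Option[Long] = if (done) Some(baseOffset) else None
    }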

clients/src/main/java/org/apache/kafka/common/errors/FatalException.java
<https://reviews.apache.org/r/18102/#comment65851>

    The current logic is that all exceptions that don't implement RetriableException are fatal, so I don't think we need this. Originally I had thought to use RetriableException to mark an idempotent exception, but the current usage is actually for any non-fatal exception. The rationale was (1) we shouldn't end up accidentally retrying if we throw an unexpected error (out of memory or null pointer or something), and (2) when we have the idempotent producer, retry will always be idempotent, so modeling that in the exception hierarchy is premature.
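
    As a sketch, the retry rule described above amounts to something like this (method and parameter names illustrative):

    import org.apache.kafka.common.errors.RetriableException

    // only exceptions marked RetriableException are retried; anything
    // unexpected (null pointer, out of memory, ...) is treated as fatal
    def canRetry(e: Throwable, attemptsLeft: Int): Boolean =
      attemptsLeft > 0 && e.isInstanceOf[RetriableException]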

clients/src/main/java/org/apache/kafka/common/protocol/Errors.java
<https://reviews.apache.org/r/18102/#comment65852>

    How come you are removing this? Is it not an exception that can occur on the server?

core/src/main/scala/kafka/server/KafkaApis.scala
<https://reviews.apache.org/r/18102/#comment65854>

    Not sure I grok the changes here, may need to chat about it...

core/src/test/scala/integration/kafka/api/ProducerFailureHandlingTest.scala
<https://reviews.apache.org/r/18102/#comment65855>

    This package needs to match the directory. Same with the other integration test. You have it in kafka.api.* but declare kafka.test.*
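
    For example, the file under core/src/test/scala/integration/kafka/api/ should declare:

    package kafka.api // matches the kafka/api directory, not kafka.test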

core/src/test/scala/integration/kafka/api/ProducerFailureHandlingTest.scala
<https://reviews.apache.org/r/18102/#comment65862>

    Does it make sense to combine these all? Wouldn't it be easier to just have 4 test cases?
    
    That is usually easier to reason about and understand when they fail.
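
    Something like this, as a sketch (method names are illustrative, assuming a JUnit3-style harness that picks up test* methods):

    def testTooLargeRecord(): Unit   = { /* ... */ }
    def testNonExistentTopic(): Unit = { /* ... */ }
    def testWrongBrokerList(): Unit  = { /* ... */ }
    def testNoResponse(): Unit       = { /* ... */ }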

core/src/test/scala/integration/kafka/api/ProducerFailureHandlingTest.scala
<https://reviews.apache.org/r/18102/#comment65856>

    Can you make a method for this? It can be a local method for now.
      def makeProducer(brokers, acks, fetchTimeout) or whatever...
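
    For instance, a sketch along those lines (the config key names and the Properties-based constructor are assumptions and may differ on trunk):

    import java.util.Properties
    import org.apache.kafka.clients.producer.KafkaProducer

    def makeProducer(brokerList: String, acks: Int, timeoutMs: Int) = {
      val props = new Properties()
      props.put("bootstrap.servers", brokerList)  // assumed key name
      props.put("acks", acks.toString)            // assumed key name
      props.put("timeout.ms", timeoutMs.toString) // assumed key name
      new KafkaProducer(props)
    }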

core/src/test/scala/integration/kafka/api/ProducerFailureHandlingTest.scala
<https://reviews.apache.org/r/18102/#comment65858>

    There is no need to catch the exception and fail; an uncaught exception will always fail the test.
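
    That is, the bare blocking call is enough (producer and record standing in for the test's own names):

    producer.send(record).get() // if this throws, the test fails with the full stack trace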

core/src/test/scala/integration/kafka/api/ProducerFailureHandlingTest.scala
<https://reviews.apache.org/r/18102/#comment65859>

    If you expect a particular exception, use the ScalaTest utility intercept. It looks like this:
    
    intercept[ExecutionException] {
      // do something that should throw that exception
    }
    
    Much easier to read.
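
    Filled in for this test it would look something like this (tooLargeRecord is a hypothetical name):

    intercept[ExecutionException] {
      producer.send(tooLargeRecord).get() // the get() surfaces the send failure
    }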

core/src/test/scala/integration/kafka/api/ProducerFailureHandlingTest.scala
<https://reviews.apache.org/r/18102/#comment65860>

    Ditto

core/src/test/scala/integration/kafka/api/ProducerFailureHandlingTest.scala
<https://reviews.apache.org/r/18102/#comment65861>

    Ditto

core/src/test/scala/integration/kafka/api/ProducerFailureHandlingTest.scala
<https://reviews.apache.org/r/18102/#comment65857>

    It might be simpler to do something like
    if(producer1 != null) producer1.close()
    in the tearDown() method of the test to avoid the nested try/catch.
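
    Sketched out (producer1 as in the comment; producer2 shown on the assumption the test creates more than one):

    override def tearDown() {
      if (producer1 != null) producer1.close()
      if (producer2 != null) producer2.close()
      super.tearDown()
    }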

core/src/test/scala/integration/kafka/api/ProducerFailureHandlingTest.scala
<https://reviews.apache.org/r/18102/#comment65863>

    Would it make sense to break these into separate test methods?
- Jay Kreps