Re: Review Request 24621: Patch for KAFKA-1507

This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/24621/#review50500

clients/src/main/java/org/apache/kafka/common/errors/TopicExistsException.java
<https://reviews.apache.org/r/24621/#comment88323>

    This comment is misleading since the exception complains about an existing topic.
    
    Also, not sure why this should be a RetriableException since the only way to get out of this is to delete the topic, which is an admin operation.
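
    For illustration, a minimal sketch of a non-retriable version of this exception, extending ApiException directly rather than RetriableException; the class body and Javadoc below are assumptions, not the patch's actual code:

        package org.apache.kafka.common.errors;

        /**
         * Sketch: thrown when a create-topic request names a topic that already
         * exists. Extending ApiException (rather than RetriableException) reflects
         * that a retry cannot succeed until an admin deletes the topic.
         */
        public class TopicExistsException extends ApiException {
            private static final long serialVersionUID = 1L;

            public TopicExistsException(String message) {
                super(message);
            }
        }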

clients/src/main/java/org/apache/kafka/common/errors/UnableToCreateTopicException.java
<https://reviews.apache.org/r/24621/#comment88324>

    How about renaming this to TopicCreationFailedException?
    
    It would also be helpful for the comment to provide details about which kind of failures can occur during topic creation.
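
    As a sketch of the suggested rename, with a Javadoc that spells out possible creation failures (the specific causes listed are illustrative assumptions, not taken from the patch):

        package org.apache.kafka.common.errors;

        /**
         * Sketch: thrown when a create-topic request cannot be fulfilled, for
         * example because the requested partition count is invalid or the
         * replication factor exceeds the number of live brokers (the causes
         * named here are illustrative assumptions).
         */
        public class TopicCreationFailedException extends ApiException {
            private static final long serialVersionUID = 1L;

            public TopicCreationFailedException(String message) {
                super(message);
            }
        }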

clients/src/main/java/org/apache/kafka/common/protocol/Errors.java
<https://reviews.apache.org/r/24621/#comment88385>

    The doc here is incorrect.

clients/src/main/java/org/apache/kafka/common/protocol/Protocol.java
<https://reviews.apache.org/r/24621/#comment88337>

    The doc string for "topics" can be improved -> "The list of topics to be created"
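
    For reference, a sketch of how the field could carry the suggested doc string, assuming the usual Field/ArrayOf types from the clients' protocol type system; the enclosing class and schema name are hypothetical:

        import org.apache.kafka.common.protocol.types.ArrayOf;
        import org.apache.kafka.common.protocol.types.Field;
        import org.apache.kafka.common.protocol.types.Schema;
        import static org.apache.kafka.common.protocol.types.Type.STRING;

        // Hypothetical holder class; the "topics" field and its doc string are the
        // only point of this sketch.
        public class CreateTopicRequestSchemaSketch {
            public static final Schema CREATE_TOPIC_REQUEST_V0 = new Schema(
                    new Field("topics", new ArrayOf(STRING),
                              "The list of topics to be created"));
        }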

clients/src/main/java/org/apache/kafka/common/protocol/Protocol.java
<https://reviews.apache.org/r/24621/#comment88339>

    Same here.

core/src/main/scala/kafka/common/UnableToCreateTopicException.scala
<https://reviews.apache.org/r/24621/#comment88329>

    Why is this one required if we already have one of these defined under clients?

core/src/main/scala/kafka/server/KafkaApis.scala
<https://reviews.apache.org/r/24621/#comment88383>

    We cannot have metadata request handling access ZooKeeper. In the past, we've been bitten by the kind of slowdown that ZooKeeper access causes.
    
    Besides, I think this check is not required. In the case where the metadata cache has not been updated yet but the topic does exist in ZooKeeper, we could just return UnknownTopicOrPartitionCode and explicitly state that topic creation is not a synchronous operation.
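
    To make the suggested semantics concrete, a minimal sketch of how a caller could treat that error code (since topic creation would not be synchronous, an UnknownTopicOrPartition response right after a create request means "not visible yet", not a hard failure); the class and method names are hypothetical:

        import org.apache.kafka.common.protocol.Errors;

        // Hypothetical helper: decide whether a metadata error code means the topic
        // simply is not visible in the cache yet, so the caller should back off and
        // retry the metadata request rather than fail.
        public class MetadataErrorHandlingSketch {
            public static boolean topicNotVisibleYet(short errorCode) {
                return errorCode == Errors.UNKNOWN_TOPIC_OR_PARTITION.code();
            }
        }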

core/src/main/scala/kafka/server/KafkaApis.scala
<https://reviews.apache.org/r/24621/#comment88387>

    Also, I think we have to maintain the old config behavior for backwards compatibility and avoid making any changes to the old producer. We can turn auto.create off by default for the new producer. That way folks using the old producer can continue using auto create as is, and those using the new producer will have it turned off. I'm guessing we have to maintain this behavior for at least 1-2 releases.

core/src/main/scala/kafka/tools/GetOffsetShell.scala
<https://reviews.apache.org/r/24621/#comment88384>

    Same comment as in SimpleConsumerShell

core/src/main/scala/kafka/tools/SimpleConsumerShell.scala
<https://reviews.apache.org/r/24621/#comment88334>

    I think there are 2 kinds of error messages that could be useful depending on whether the topic exists or the topic metadata was incomplete/incorrect.
    
    I think it is useful to give an explicit error message saying "The topic %s doesn't exist" in case the error code is UnknownTopic...
- Neha Narkhede
On Aug. 13, 2014, 1:09 a.m., Sriharsha Chintalapani wrote: