Kafka user mailing list: kafka.common.FailedToSendMessageException - 0.8


Ran RanUser 2013-06-20, 07:09
Marc Labbe 2013-06-20, 12:49
Jun Rao 2013-06-20, 15:02
Ran RanUser 2013-06-23, 07:14
Yogesh Sangvikar 2013-06-25, 04:21
Jun Rao 2013-06-25, 04:36
Yogesh Sangvikar 2013-06-25, 05:19
Jun Rao 2013-06-25, 16:39
Markus Roder 2013-06-25, 05:56
Florin Trofin 2013-06-25, 07:12
Jonathan Hodges 2013-06-25, 10:58
Re: kafka.common.FailedToSendMessageException: Failed to send messages after 3 tries.
We are able to telnet to each of the Kafka nodes from the producer, so it
doesn't appear to be a connectivity issue.

DNVCOML-2D3FFT3:~ uhodgjo$ telnet x.x.x.168 9092
Trying x.x.x.168...
Connected to x.x.x.168.
Escape character is '^]'.
^CConnection closed by foreign host.
DNVCOML-2D3FFT3:~ uhodgjo$ telnet x.x.x.48 9092
Trying x.x.x.48...
Connected to x.x.x.48.
Escape character is '^]'.
^CConnection closed by foreign host.
DNVCOML-2D3FFT3:~ uhodgjo$ telnet x.x.x.234 9092
Trying x.x.x.234...
Connected to x.x.x.234.
Escape character is '^]'.
^CConnection closed by foreign host.
DNVCOML-2D3FFT3:~ uhodgjo$ telnet x.x.x.121 9092
Trying x.x.x.121...
Connected to x.x.x.121.
Escape character is '^]'.
^CConnection closed by foreign host.
DNVCOML-2D3FFT3:~ uhodgjo$ telnet x.x.x.236 9092
Trying x.x.x.236...
Connected to x.x.x.236.
Escape character is '^]'.
^CConnection closed by foreign host.
DNVCOML-2D3FFT3:~ uhodgjo$
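
For completeness, the same reachability check can be scripted from the producer
host. Below is a minimal sketch using java.net.Socket; the broker addresses are
placeholders standing in for the masked IPs above, and a successful connect only
proves the TCP port is open, not that the producer can complete a metadata
request against that broker.

import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class BrokerReachabilityCheck {
    public static void main(String[] args) {
        // Placeholder broker addresses; substitute the real host:port pairs.
        String[] brokers = {"x.x.x.168:9092", "x.x.x.48:9092", "x.x.x.234:9092"};
        for (String broker : brokers) {
            String[] parts = broker.split(":");
            try (Socket socket = new Socket()) {
                // 3-second connect timeout; mirrors the manual telnet test above.
                socket.connect(new InetSocketAddress(parts[0], Integer.parseInt(parts[1])), 3000);
                System.out.println("Connected to " + broker);
            } catch (IOException e) {
                System.out.println("Failed to reach " + broker + ": " + e.getMessage());
            }
        }
    }
}
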
On Tue, Jun 25, 2013 at 4:57 AM, Jonathan Hodges <[EMAIL PROTECTED]> wrote:

> Hi Florin,
>
> I work with Yogesh, so it is interesting that you mention the
> 'metadata.broker.list' property, as this was the first error message we saw.
> Consider the following producer code.
>
> Properties props = new Properties();
> props.put("broker.list", "x.x.x.x:9092, x.x.x.x:9092, x.x.x.x:9092, x.x.x.x:9092, x.x.x.x:9092");
> props.put("producer.type", "sync");
> props.put("compression.codec", "2");  // snappy
> ProducerConfig config = new ProducerConfig(props);
> producer = new Producer<byte[], byte[]>(config);
>
> This returns the following exception for the required property
> 'metadata.broker.list'.
>
> java.lang.IllegalArgumentException: requirement failed: Missing required property 'metadata.broker.list'
>         at scala.Predef$.require(Predef.scala:145)
>         at kafka.utils.VerifiableProperties.getString(VerifiableProperties.scala:158)
>         at kafka.producer.ProducerConfig.<init>(ProducerConfig.scala:66)
>         at kafka.producer.ProducerConfig.<init>(ProducerConfig.scala:56)
>         at com.pearson.firehose.KafkaProducer.<init>(KafkaProducer.java:21)
>         at com.pearson.firehose.KafkaProducer.main(KafkaProducer.java:40)
>
> So we just added the 'metadata' prefix to the above 'broker.list' property,
> which fixed that exception. However, this is where we start to see the
> producer retry errors in the logs. Could there be some problem with the value
> we are using for 'metadata.broker.list' that is preventing the producer from
> connecting? (See the configuration sketch after the quoted thread below.)
>
> Thanks,
> Jonathan
>
>
>
> On Tue, Jun 25, 2013 at 1:12 AM, Florin Trofin <[EMAIL PROTECTED]> wrote:
>
>> I got the same error, but I think I had a different issue than you: my code
>> was written for Kafka 0.7, and when I switched to 0.8 I changed the
>> "zk.connect" property to "metadata.broker.list" but left it with the same
>> value (which was, of course, ZooKeeper's host and port). In other words, a
>> "pilot error" :-) The snippet you provided doesn't seem to have this
>> problem, but it is interesting that we got the same error (it would be nice
>> if it could be customized depending on the actual problem: host unreachable,
>> not responding, etc.).
>>
>> F.
>>
>> On 6/24/13 10:55 PM, "Markus Roder" <[EMAIL PROTECTED]> wrote:
>>
>> >We had this issue as well, but nevertheless the message was enqueued
>> >four times into the cluster. It would be great to get any hint on this
>> >issue.
>> >
>> >regards
>> >
>> >--
>> >Markus Roder
>> >
>> >Am 25.06.2013 um 07:18 schrieb Yogesh Sangvikar
>> ><[EMAIL PROTECTED]>:
>> >
>> >> Hi Jun,
>> >>
>> >> The stack trace we found is as follows:
>> >>
>> >> log4j:WARN No appenders could be found for logger
>> >> (kafka.utils.VerifiableProperties).
>> >> log4j:WARN Please initialize the log4j system properly.
>> >> kafka.common.FailedToSendMessageException: Failed to send messages after 3 tries.
>> >>         at kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:90)
>> >>         at kafka.producer.Producer.send(Producer.scala:74)
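
For readers hitting the same exception, here is a minimal sketch of a Kafka 0.8
synchronous producer configured with the required 'metadata.broker.list'
property, as worked out in the thread above. The broker addresses, topic name,
serializer, and acks settings are illustrative placeholders rather than values
taken from this thread; note that the property must list the brokers themselves,
not ZooKeeper (the 0.7-style 'zk.connect' value Florin mentions).

import java.util.Properties;

import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

public class SyncProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Broker host:port pairs (placeholders), not the ZooKeeper connection string.
        props.put("metadata.broker.list", "broker1:9092,broker2:9092,broker3:9092");
        props.put("producer.type", "sync");
        props.put("compression.codec", "2");      // 2 = snappy
        props.put("request.required.acks", "1");  // wait for the partition leader to ack
        props.put("serializer.class", "kafka.serializer.DefaultEncoder");  // raw byte[] payloads

        ProducerConfig config = new ProducerConfig(props);
        Producer<byte[], byte[]> producer = new Producer<byte[], byte[]>(config);

        // "test-topic" is a placeholder; the topic must exist or auto-creation must be enabled.
        producer.send(new KeyedMessage<byte[], byte[]>("test-topic", "hello".getBytes()));
        producer.close();
    }
}

If the producer still cannot fetch metadata or reach a partition leader after
this change, it retries message.send.max.retries times (3 by default) before
raising the FailedToSendMessageException seen above.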

 
Yogesh Sangvikar 2013-06-25, 14:38
Florin Trofin 2013-06-25, 17:12
Yogesh Sangvikar 2013-06-26, 09:13
Jun Rao 2013-06-25, 16:42
Kalpa 1977 2014-07-07, 16:19