Kafka >> mail # user >> Heartbeat btw producer and broker


Re: Heartbeat btw producer and broker
You are probably right. Though we introduced that reconnect functionality
to get around the VIP idle connection issue, it may not solve the problem
entirely. Your fix makes sense.

Thanks,
Neha
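
To make the fix under discussion concrete, here is a minimal, hypothetical sketch of the reordering described in the quoted messages below: the idle/staleness check moves into getOrMakeConnection(), so a connection closed by the firewall is re-established before the write instead of being discovered by a failed SocketChannel.write. The class name, the staleness check, and the helper signatures are invented for illustration; this is not the actual SyncProducer.scala code.

    import java.nio.ByteBuffer
    import java.nio.channels.SocketChannel

    // Hypothetical, simplified model of the producer send path; names loosely
    // follow the thread's description of SyncProducer.scala, not the real source.
    class IdleAwareSender(connect: () => SocketChannel, maxIdleMs: Long) {
      private var channel: SocketChannel = null
      private var lastUsedMs: Long = 0L

      // Proposed fix: check connection health here, *before* any write,
      // in the method responsible for handing out the connection.
      private def getOrMakeConnection(): SocketChannel = {
        val now = System.currentTimeMillis
        val stale = channel == null || !channel.isConnected ||
          (now - lastUsedMs) > maxIdleMs
        if (stale) {
          if (channel != null) channel.close()
          channel = connect()
        }
        channel
      }

      def send(payload: ByteBuffer): Unit = {
        val ch = getOrMakeConnection() // reconnect, if needed, happens before the write
        ch.write(payload)              // so the write no longer hits a firewall-killed socket
        lastUsedMs = System.currentTimeMillis
      }
    }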
On Thu, Sep 26, 2013 at 12:00 AM, Rhapsody <[EMAIL PROTECTED]> wrote:

> Thank you for the reply, Neha.
>
> Kafka runs the reconnect logic, governed by the 'reconnect.time.interval.ms'
> property, only after running 'send.writeCompletely(channel)' (see the 'send'
> method in SyncProducer.scala, around line 88).
>
> The exception occurs at SocketChannel.write (in
> BoundedByteBufferSend.writeTo).
>
> So I will hit the problem regardless of the 'reconnect.time.interval.ms'
> value.
>
>
> PS: I think the reconnection logic should be placed before sending a
> message, i.e. in the getOrMakeConnection() method, which is responsible for
> managing the connection.
>
>
>
> On Mon, Sep 23, 2013 at 11:17 PM, Neha Narkhede <[EMAIL PROTECTED]
> >wrote:
>
> > You can configure the producer to reconnect to the brokers and set the
> > reconnect interval to less than an hour. The config that controls this is
> > reconnect.time.interval.ms.
> >
> > Thanks,
> > Neha
> > On Sep 23, 2013 12:14 AM, "Rhapsody" <[EMAIL PROTECTED]> wrote:
> >
> > > Hi everyone,
> > > I'm using Kafka 0.7.2
> > >
> > > My firewall forcibly closes a TCP session when there has been no
> > > transmission between the two endpoints for one hour.
> > >
> > > When a producer in that network doesn't send any message to the Kafka
> > > broker for one hour, this causes a problem.
> > >
> > > I can't touch that firewall configuration.
> > >
> > > Alternatively, I could send dummy logs to Kafka and ignore them in the
> > > consumer. However, I don't think that is a good way to handle an
> > > architectural issue in business logic.
> > >
> > > Does anyone have a good idea?
> > > (Actually, I hope Kafka will support a heartbeat feature.)
> > >
> > >
> > > Thanks,
> > >
> > > Cnulwoo Choi
> > >
> >
>
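
For reference, a minimal sketch of the configuration suggested in the quoted reply above, assuming the 0.7-era producer properties named in the thread; the zk.connect and serializer.class values are placeholders. As the rest of the thread points out, lowering reconnect.time.interval.ms alone may not avoid the first failed write after an idle period, because the check runs only after a send.

    import java.util.Properties
    import kafka.producer.ProducerConfig

    val props = new Properties()
    props.put("zk.connect", "zk1:2181")                              // placeholder
    props.put("serializer.class", "kafka.serializer.StringEncoder")  // placeholder
    // Reconnect well inside the firewall's one-hour idle cutoff, e.g. every 10 minutes.
    props.put("reconnect.time.interval.ms", "600000")

    val config = new ProducerConfig(props)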

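And a minimal sketch of the dummy-message workaround mentioned (and rightly disliked) in the original question: a scheduled task that publishes a small keepalive record which consumers simply skip. The topic name, the 30-minute interval, and the 0.7-style Producer/ProducerData usage are assumptions for illustration, not a recommended design.

    import java.util.Properties
    import java.util.concurrent.{Executors, TimeUnit}
    import kafka.producer.{Producer, ProducerConfig, ProducerData}

    object KafkaKeepalive {
      def main(args: Array[String]): Unit = {
        val props = new Properties()
        props.put("zk.connect", "zk1:2181")                              // placeholder
        props.put("serializer.class", "kafka.serializer.StringEncoder")
        val producer = new Producer[String, String](new ProducerConfig(props))

        val ping = new Runnable {
          def run(): Unit =
            // "keepalive" is a hypothetical topic that consumers ignore.
            producer.send(new ProducerData[String, String]("keepalive", "ping"))
        }
        // Send well inside the firewall's one-hour idle window.
        Executors.newSingleThreadScheduledExecutor()
          .scheduleAtFixedRate(ping, 0L, 30L, TimeUnit.MINUTES)
      }
    }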
 