I assume this is for Kafka 0.7. One option is to use a VIP in front of the
brokers for load balancing.
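A minimal sketch of such a VIP, assuming HAProxy in TCP mode (the broker hostnames and port are hypothetical; Kafka itself has no HTTP layer, so plain TCP balancing is what applies here):

```
# haproxy.cfg (sketch -- broker hosts are placeholders)
listen kafka
    bind *:9092
    mode tcp
    balance roundrobin
    server broker1 kafka1.internal:9092 check
    server broker2 kafka2.internal:9092 check
```

Producers then point at the single VIP address instead of a broker list, which sidesteps broker discovery on the client side.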
On Thu, Aug 29, 2013 at 1:39 PM, Mark <[EMAIL PROTECTED]> wrote:
> We are thinking about using Kafka to collect events from our Rails
> application and I was hoping to get some input from the Kafka community.
> Currently the only gems available are:
> https://github.com/bpot/poseidon (Can't use since we are only running
> Now neither of these integrates with ZooKeeper, so we are missing quite a
> few features:
> - Auto-discovery of brokers
> - Auto-discovery of partitions for consumers
> - … fill in the rest here, new to Kafka so don't know everything that is missing
> I was wondering what my best options are going forward with Kafka? I think
> we have the following choices:
> A) Instead of writing directly to Kafka from our application we can write
> our events/messages to some other source (Syslog, File, ?) and then have a
> separate Java process that reads these sources and writes to Kafka. This is
> a little annoying since we now have to worry about every machine also
> running the above separate process to write to Kafka.
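Option A can be sketched roughly as follows, assuming the Rails app spools events as JSON lines to a local file and a separate forwarder process (not shown) tails that file and publishes each line to Kafka. The file path and event shape are illustrative assumptions, not anything from the thread:

```ruby
require "json"
require "tmpdir"

# Hypothetical spool file the forwarder process would tail.
SPOOL = File.join(Dir.tmpdir, "kafka_spool.log")

# Append one event per line; the forwarder reads these and
# writes them to Kafka, so the Rails app never talks to Kafka directly.
def emit_event(name, payload)
  File.open(SPOOL, "a") do |f|
    f.flock(File::LOCK_EX) # avoid interleaved writes from concurrent workers
    f.puts JSON.dump(event: name, payload: payload, at: Time.now.to_i)
  end
end

emit_event("signup", user_id: 42)
```

The decoupling is the point: the app only needs a working local filesystem, at the cost of running and monitoring the forwarder on every machine, which is the annoyance described above.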
> B) Work around the above limitations. The lack of broker auto-discovery
> isn't terrible since we don't foresee adding/removing brokers that
> frequently. The lack of partition auto-discovery is definitely a loss since
> we now have to know which broker/partition to read from at all times. Of
> course we can
> just write to Kafka using the above Gems and have our consumers written in
> another language.
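Working around the missing ZooKeeper integration in option B amounts to hard-coding the broker/partition layout in the client. A minimal sketch of what that might look like, where the static map and the hash-based partition choice are purely illustrative stand-ins for the metadata ZooKeeper would otherwise provide:

```ruby
# Hypothetical static topology, standing in for ZooKeeper-based discovery.
# Any broker change requires editing this map and redeploying.
BROKERS = {
  "events" => [
    { broker: "kafka1.internal:9092", partition: 0 },
    { broker: "kafka1.internal:9092", partition: 1 },
    { broker: "kafka2.internal:9092", partition: 0 },
  ],
}.freeze

# Pick a partition for a message key; hashing keeps the choice
# stable for a given key within a process.
def partition_for(topic, key)
  parts = BROKERS.fetch(topic)
  parts[key.hash % parts.size]
end
```

This illustrates the loss described above: the topology is frozen into the client, so every broker or partition change means a code change rather than being picked up automatically.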
> Any thoughts/opinions?
> - M