The other issue with this model is that your Kafka servers are exposed on the internet ... so anyone else can push data into them. At least, that's the case if you are running in a cross-cloud/cross-datacenter environment. We're struggling with the same design issues right now.

What we'd hoped for in 0.8.x is that Kafka would allow our producers to connect to ANY Kafka server and submit data, and have that data dynamically routed to the right servers. That way we could put the Kafka servers behind an ELB, throw up Stunnel clients on our producers and Stunnel servers on the Kafka machines. This would offload the SSL encryption and authentication to Stunnel, and let Kafka concentrate on what it's good at.
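For the curious, a minimal sketch of what that Stunnel setup might look like. All hostnames, ports, and cert paths here are hypothetical examples, not a tested deployment:

```ini
; ---- producer side (stunnel client) ----
; The producer app writes plaintext to localhost:9093; stunnel
; wraps it in TLS and forwards it to the ELB in front of Kafka.
[kafka-out]
client  = yes
accept  = 127.0.0.1:9093
connect = kafka-elb.example.com:9094   ; hypothetical ELB DNS name
verify  = 2
CAfile  = /etc/stunnel/ca.pem

; ---- broker side (stunnel server) ----
; stunnel terminates TLS on 9094 and hands plaintext to the
; local Kafka broker listening on the usual 9092.
[kafka-in]
accept  = 0.0.0.0:9094
connect = 127.0.0.1:9092
cert    = /etc/stunnel/kafka.pem
key     = /etc/stunnel/kafka.key
```

The producer would then be configured to talk to 127.0.0.1:9093 instead of the broker directly, so Kafka itself never sees the TLS layer.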

For now though, that doesn't seem possible. It looks like we may end up going down the Flume route, because it's easier to encrypt and authenticate the data streams through Flume. :/
On Apr 23, 2013, at 11:02 AM, Jason Rosenberg <[EMAIL PROTECTED]> wrote:
