Re: Using Stunnel to encrypt/authenticate Kafka producers and consumers...
Wouldn't it make more sense to do something like an encrypted tunnel
between your core routers in each facility? Like IPsec over a GRE tunnel.
This concept would need adjustment for those in the cloud, but when you want
to build an encrypted tunnel between one bunch of hosts and another bunch of
hosts, it doesn't seem like a giant pile of stunnels is the best method.
Something at the networking level would make more sense, I think.
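To sketch what that router-level approach could look like, here is a minimal strongSwan-style site-to-site tunnel config. All hostnames, addresses, and subnets below are placeholders for illustration, not a tested setup:

```ini
# /etc/ipsec.conf (strongSwan) -- hypothetical site-to-site tunnel
# between the core routers in two facilities.
conn dc1-to-dc2
    left=203.0.113.1          ; router in facility 1 (public IP, made up)
    leftsubnet=10.1.0.0/16    ; hosts behind facility 1
    right=198.51.100.1        ; router in facility 2 (made up)
    rightsubnet=10.2.0.0/16   ; hosts behind facility 2
    type=tunnel
    authby=secret             ; pre-shared key, for brevity only
    auto=start
```

With something like this in place, every host-to-host flow between the two subnets is encrypted once at the edge, instead of per-service stunnels on each box.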
On Mon, Apr 22, 2013 at 12:46 PM, Matt Wise <[EMAIL PROTECTED]> wrote:
> Unfortunately 'stunneling everything' is not really possible. Stunnel acts
> like a proxy service ... in the sense that the Stunnel client (on your log
> producer or log consumer) has to be explicitly configured to connect to an
> exact endpoint (ie, kafka1.mydomain.com:1234) -- or multiple endpoints
> that are randomly selected by stunnel.
> In a few cases you can use Stunnel as an SSL offloader for certain
> protocols, but that's done on the server-side... ie, in front of a Postgres
> server, so that Stunnel can do the encryption rather than Postgres itself.
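For reference, the two modes described above might look roughly like this in stunnel's config format. All section names, hostnames, ports, and cert paths here are made-up placeholders, just to illustrate the shape:

```ini
; Hypothetical stunnel config illustrating both modes.

[kafka-client]              ; on a producer/consumer host
client  = yes               ; originate SSL connections
accept  = 127.0.0.1:9093    ; local apps connect here in plaintext
connect = kafka1.mydomain.com:1234  ; one fixed, explicit endpoint

[kafka-offload]             ; server-side, like the Postgres example
cert    = /etc/stunnel/kafka.pem    ; terminate SSL here
accept  = 1234                      ; encrypted traffic from clients
connect = 127.0.0.1:9092            ; plaintext to the local service
```

Note that the client side has to name its endpoint(s) explicitly, which is exactly why "stunneling everything" gets unwieldy as the broker list grows or changes.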
> It would make a bit of a difference I think if our log producers were the
> only ones that needed to be able to talk to 'all' of the Kafka nodes. We
> could do something where we ship logs via an encrypted TCP session to some
> group of Kafka "log funnel" machines, where they can reach the Kafka
> servers directly and dump the log data. Maybe.
> I'm still digging around, but I'm really surprised this hasn't been a
> larger topic of discussion. If Kafka natively allowed a single connection
> through a single server to reach all of the other servers in the farm, it
> would be far easier to secure and encrypt the communication. ElasticSearch
> and RabbitMQ are good examples of this model.
> On Apr 22, 2013, at 12:21 PM, Scott Clasen <[EMAIL PROTECTED]> wrote:
> > I think you are right, even if you did put an ELB in front of kafka, it
> > would only be used for getting the initial broker list afaik. Producers and
> > consumers need to be able to talk to each broker directly, and
> > consumers also need to be able to talk to zookeeper to store offsets.
> > Probably have to stunnel all the things. I'd be interested in hearing how
> > it works out. IMO this would be a great thing to have in kafka-contrib.
> > On Mon, Apr 22, 2013 at 11:31 AM, Matt Wise <[EMAIL PROTECTED]> wrote:
> >> Hi there... we're currently looking into using Kafka as a pipeline for
> >> passing around log messages. We like its use of Zookeeper for coordination
> >> (as we already make heavy use of Zookeeper at Nextdoor), but I'm running
> >> into one big problem. Everything we do is a) in the cloud, b) secure,
> >> c) cross-region/datacenter/cloud-provider.
> >> We make use of SSL for both encryption and authentication of most of our
> >> services. My understanding is that Kafka 0.7.x producers and consumers
> >> connect to Zookeeper to retrieve a list of the current Kafka servers,
> >> then make direct TCP connections to the individual servers that they need
> >> to talk to to publish or subscribe to a stream. In 0.8.x that's changed, so
> >> now clients can connect to a single Kafka server and get a list of these
> >> servers via an API?
> >> What I'm wondering is whether we can actually put an ELB in front of all
> >> of our Kafka servers, throw stunnel on them, and give our producers and
> >> clients a single endpoint to connect to (through the ELB) rather than
> >> having them connect directly to the individual Kafka servers. This would
> >> provide us both encryption of the data during transport, as well as
> >> authentication of the producers and subscribers. Lastly, if it works, it
> >> would provide these features without impacting our ability to use existing
> >> kafka producers/consumers that people have written.
> >> My concern is that the Kafka clients (producers or consumers?) would
> >> connect once through the ELB, then get the list of servers via the API,
> >> and then try to connect directly to the individual brokers, bypassing the
> >> ELB entirely.
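To make that concern concrete, here is a toy model of the bootstrap-then-connect-directly flow. This is plain Python with made-up hostnames and a simplified metadata shape, not a real Kafka client; it only illustrates why a single load-balanced endpoint doesn't cover the follow-up connections:

```python
# Toy model: the first request can go through one load-balanced
# endpoint, but the metadata it returns names individual brokers,
# and all subsequent traffic goes to them directly.

# What a metadata response conceptually contains: the individual
# broker endpoints, not the load balancer (hostnames are made up).
BROKER_METADATA = [
    ("kafka1.internal", 9092),
    ("kafka2.internal", 9092),
    ("kafka3.internal", 9092),
]

def bootstrap(endpoint):
    """First request: any single endpoint (e.g. an ELB) works,
    because it only has to reach *some* broker."""
    connections = [endpoint]
    brokers = BROKER_METADATA  # broker list returned by the API
    return brokers, connections

def produce(brokers, connections):
    """Subsequent traffic: the client dials each broker directly,
    bypassing the load-balanced endpoint entirely."""
    for host, port in brokers:
        connections.append((host, port))
    return connections

brokers, conns = bootstrap(("elb.mydomain.com", 9092))
conns = produce(brokers, conns)
# Only the very first connection went through the ELB; the other
# three are direct to the brokers the metadata named.
print(conns[0])   # ('elb.mydomain.com', 9092)
print(conns[1:])
```

In other words, the ELB would see exactly one connection per client, and everything after that would need the individual broker endpoints to be reachable (and encrypted) on their own.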
*Jonathan Creasy* | Sr. Ops Engineer
e: [EMAIL PROTECTED] | t: 314.580.8909