Kafka >> mail # user >> Kafka versus classic central HTTP(s) services for logs transmission


Jean Bic 2012-10-20, 20:27
Re: Kafka versus classic central HTTP(s) services for logs transmission
You could move the producer code to the "site" and expose that as a REST interface.

You can then benefit from the scale and consumer functionality that come with Kafka, without the issues you are bringing up.
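To make the suggestion concrete, here is a minimal sketch of that kind of REST facade: the Kafka producer runs on the consolidation side next to the brokers, and each site only makes outbound HTTPS calls, so no broker or ZooKeeper ports need to be opened. Everything here is hypothetical illustration, not a real API: `RestLogFacade`, the `LogSink` interface, `InMemorySink`, the `/logs` path, and the `site-logs` topic name are all made up, and `InMemorySink` stands in for the real Kafka producer (TLS would be terminated by a reverse proxy in front of the server).

```java
import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class RestLogFacade {

    /** Abstraction over the real producer, so the facade can be exercised without a broker. */
    public interface LogSink {
        void send(String topic, String message);
    }

    /** Hypothetical stand-in for a real Kafka producer; records messages in memory. */
    public static class InMemorySink implements LogSink {
        public final java.util.List<String> received = new java.util.ArrayList<>();
        @Override public void send(String topic, String message) {
            received.add(topic + ":" + message);
        }
    }

    /** Forward one POSTed log line to the sink; returns the HTTP status to reply with. */
    public static int handleLogLine(String topic, String body, LogSink sink) {
        if (body == null || body.isEmpty()) {
            return 400; // reject empty payloads
        }
        sink.send(topic, body);
        return 202; // accepted for asynchronous delivery to Kafka
    }

    /** Wire the handler into the JDK's built-in HTTP server. */
    public static HttpServer start(int port, LogSink sink) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/logs", (HttpExchange ex) -> {
            byte[] raw = ex.getRequestBody().readAllBytes();
            String body = new String(raw, StandardCharsets.UTF_8);
            int status = handleLogLine("site-logs", body, sink);
            ex.sendResponseHeaders(status, -1); // -1: no response body
            ex.close();
        });
        server.start();
        return server;
    }
}
```

The point of the `LogSink` indirection is that swapping `InMemorySink` for the real producer changes nothing on the site side: sites still speak plain HTTPS, and all Kafka/ZooKeeper connectivity stays inside the consolidation network.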

On Oct 20, 2012, at 4:27 PM, Jean Bic <[EMAIL PROTECTED]> wrote:

> Hello,
>
> We have started to build a solution to gather logs from many machines
> located in various “sites” into a so-called “Consolidation server”, whose
> role is to persist the logs and generate alerts based on certain criteria
> (patterns in logs, triggers on some values, etc.).
>
>
> We are challenged by our future users to clarify why Kafka is the best
> possible communication solution for this need. They argue that it would be
> better to choose a more classic HTTP(S)-based solution, with producers
> calling REST services on a pool of Node.js servers behind a load balancer.
>
>
> One of the main issues they see with Kafka is that it requires connections
> from the Consolidation Server to Kafka brokers and to ZooKeeper daemons
> located in each “site”, versus connections from log producers in all sites
> to the Consolidation servers.
> Here Kafka is seen as a burden for each site’s IT team, requiring some
> special firewall setup, versus no firewall setup with the service-based
> solution:
>
> 1. Kafka requires each site’s IT team to create firewall rules
> accepting incoming connections for a “non-standard” protocol from the
> “Collector server” site.
>
> 2. The IT team must expose all ZooKeeper and broker machines/ports to the
> “Collector server” site.
>
> 3. Kafka has no built-in encryption for data, whereas a classic
> service-oriented solution can rely on HTTPS (reverse) proxies.
>
> 4. Kafka is not commonly known by IT people, who do not know how to
> scale it: when should they add broker machines versus when should they add
> ZooKeeper machines?
>
> With the services-based solution, the IT teams of each site are free of
> scalability concerns; only the “Consolidation server” site has to add
> Node.js machines to scale up.
>
> I agree that these IT concerns can't be taken lightly.
>
> I need help from Kafka community to find rock solid assets for using Kafka
> over classic services-based solution.
>
> How would you “defend” Kafka against the above “attacks”?
>
>
> Regards,
>
> Jean
Other messages in this thread:
Jean Bic 2012-10-21, 08:44
Jun Rao 2012-10-21, 21:44
Sybrandy, Casey 2012-10-22, 12:17
Neha Narkhede 2012-10-22, 17:49
Jean Bic 2012-10-22, 20:12