Re: Broker to consumer compression
Kafka already supports end-to-end compression, which means data
transferred between brokers and consumers is compressed. There are two
supported compression codecs, GZIP and Snappy; the latter is lighter
on CPU. See this blog post for a comparison:
http://geekmantra.wordpress.com/2013/03/28/compression-in-kafka-gzip-or-snappy/
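
As a concrete illustration, here is a minimal sketch of turning that on from
the producer side using the 0.8-era Java producer API; the broker address,
topic name, and serializer below are placeholders, and property names may
differ in 0.7:

import java.util.Properties;

import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

public class CompressedProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Placeholder broker list and serializer for this sketch.
        props.put("metadata.broker.list", "broker1:9092");
        props.put("serializer.class", "kafka.serializer.StringEncoder");
        // End-to-end compression: messages are compressed by the producer,
        // stored compressed on the broker, and decompressed by the consumer.
        props.put("compression.codec", "snappy"); // or "gzip"

        Producer<String, String> producer =
            new Producer<String, String>(new ProducerConfig(props));
        producer.send(new KeyedMessage<String, String>("my-topic", "hello"));
        producer.close();
    }
}

The consumer side needs no extra configuration; the consumer iterator
decompresses message sets transparently.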

Thanks,
Neha

On Fri, Apr 12, 2013 at 10:56 AM, Pablo Barrera González
<[EMAIL PROTECTED]> wrote:
> Hi
>
> Is it possible to enable compression between the broker and the consumer?
>
> We are thinking of developing this feature in Kafka 0.7, but first I would
> like to check if there is something out there already.
>
> Our scenario is like this:
>
> - the producer is a CPU-bound machine, so we want to keep CPU
> consumption as low as possible and can't enable compression there
> - the consumers can fetch data either from the same data center (no
> compression needed) or from a remote data center
> - inter-site bandwidth is limited, so compression would be worthwhile
>
> Our approach is to compress the connection between broker and consumer
> at the Kafka level, inside Kafka, so the end user can read plain
> data.
>
> Regards
>
> Pablo
