This sounds like an interesting idea. But in the current state of Kafka
this would mean that I would have to extend the Kafka Producer and Consumer
classes on my own to support that kind of message/file transfer, wouldn't it?
This is a bit too much effort for me at the moment.
But of course it would be nice if Kafka Producers or Consumers supported
zero-copy file transfer natively.

At the moment I'm thinking more about sending a message to the consumer
with the URL of the huge binary file, and letting the consumer fetch the
file from that URL directly. That way we would use Kafka only for sending
a notification that a new file exists at the source; the real file transfer
would bypass the Kafka queue.
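A minimal sketch of that claim-check pattern. Note this is illustrative only: an in-memory queue stands in for the Kafka topic (in production this would be a KafkaProducer/KafkaConsumer pair), and the fetch function is a stub for the real HTTP or filesystem transfer:

```python
import json
import queue

# In-memory queue standing in for a Kafka topic; in a real setup this
# would be KafkaProducer.send() on one side and KafkaConsumer.poll()
# on the other.
topic = queue.Queue()

def notify(file_url):
    # Producer side: publish only a small notification message with
    # the file's URL, not the file contents themselves.
    topic.put(json.dumps({"event": "file_created", "url": file_url}))

def fetch(url):
    # Stub for the real transfer (e.g. an HTTP GET against the source);
    # the large payload bypasses the Kafka queue entirely.
    return b"<contents of %s>" % url.encode()

def consume():
    # Consumer side: read the notification, then pull the file
    # directly from the source URL.
    msg = json.loads(topic.get())
    return fetch(msg["url"])

notify("http://fileserver.example/huge.bin")
data = consume()
```

The key property is that the broker only ever sees the small JSON notification, so message size limits and broker disk usage are unaffected by the size of the binary file.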

Andreas Maier

On 05.09.13 12:28, "Magnus Edenhill" <[EMAIL PROTECTED]> wrote:
