I am working on using Kafka to build a highly scalable system. As I understand it, and have observed, the Kafka broker has a very impressive and scalable file-handling mechanism for providing guaranteed delivery. However, in one scenario I am facing a different challenge.
In this scenario the message payload is buffered, and its delivery guaranteed, by an external system, so there is no compelling need for guaranteed delivery from Kafka; there is, however, a need to process the message streams in parallel. This made me wonder whether there is some way in Kafka to avoid creating files and instead stream the messages in memory as they arrive, while still taking advantage of Kafka message streams and avoiding the small overhead of file management (i.e., saving some disk-level IOPS).
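For what it's worth, Kafka has no supported in-memory mode (the log is file-backed by design), but one workaround, assuming a Linux broker, is to point `log.dirs` at a tmpfs mount so that segment files are still created but live in RAM rather than on disk. The mount point, size, and retention values below are illustrative assumptions, not a recommended configuration:

```properties
# server.properties -- hypothetical settings; the tmpfs path is an assumption.
# First create the RAM-backed mount on the broker host, e.g.:
#   mount -t tmpfs -o size=2g tmpfs /mnt/kafka-tmpfs
log.dirs=/mnt/kafka-tmpfs

# Keep retention tight so the RAM-backed log cannot grow beyond the tmpfs size.
log.retention.minutes=10
log.segment.bytes=67108864
```

Note that data in tmpfs is lost on reboot, which is only acceptable precisely because the external system already guarantees delivery.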
I would greatly appreciate the community's response.
Thanks & Regards,
Pankaj Misra
For real-time consumers, the overhead from the file system should be small, since the requested data is likely in the pagecache and we use zero-copy transfer.
Jun

On Mon, Apr 22, 2013 at 11:07 PM, Pankaj Misra <[EMAIL PROTECTED]> wrote: