I was going through the hadoop-consumer in the contrib folder. There is a property that asks for the Kafka server URI. This might sound silly, but from looking at it, it seems to support only a single Kafka broker.
If we have multiple brokers, how do we implement the hadoop-consumer for them?
For one of our projects, the data comes in a binary protobuf format. Camus is based on Avro, so what would be the best way to handle the protobuf data files?

On Tue, Jun 4, 2013 at 8:40 PM, Jun Rao <[EMAIL PROTECTED]> wrote:

*Samir Madhavan* | Data Scientist | Flutura Business Solutions Pvt. Ltd | 4th Floor, 'Geetanjali', #693, 15th Cross, J.P Nagar 2nd Phase, Bangalore, India - 560078 | Mobile: +91 9886139631 | email: *[EMAIL PROTECTED]* | www.fluturasolutions.com
It should be possible to make the serde format pluggable. You could start that discussion on the Camus mailing list.
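To make the suggestion concrete, here is a minimal sketch of what a "pluggable serde" could look like. The interface and class names below are hypothetical illustrations, not the real Camus API (Camus's actual decoder classes may differ); the stand-in decoder handles UTF-8 text so the example is self-contained, with a comment marking where a protobuf `parseFrom` call would go instead.

```java
import java.nio.charset.StandardCharsets;

// Hypothetical pluggable-decoder interface; not the actual Camus class.
interface MessageDecoder<T> {
    T decode(byte[] payload);
}

// Stand-in implementation: decodes UTF-8 text. A protobuf decoder would
// instead call YourProtoMessage.parseFrom(payload) here.
class Utf8Decoder implements MessageDecoder<String> {
    public String decode(byte[] payload) {
        return new String(payload, StandardCharsets.UTF_8);
    }
}

public class DecoderSketch {
    // The consumer depends only on the interface, so swapping Avro for
    // protobuf becomes a matter of configuring a different implementation.
    static <T> T consume(byte[] raw, MessageDecoder<T> decoder) {
        return decoder.decode(raw);
    }

    public static void main(String[] args) {
        byte[] raw = "hello".getBytes(StandardCharsets.UTF_8);
        System.out.println(consume(raw, new Utf8Decoder()));
    }
}
```

The point of the indirection is that the ETL job never needs to know the wire format; a configuration property could name the decoder class to load at runtime.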
Thanks,
Neha

On Jun 4, 2013 8:51 AM, "Samir Madhavan" <[EMAIL PROTECTED]> wrote: