Hi,

Thanks for the link, Hari!

It looks like the only way to avoid having Flume first write data destined
for a Sink to disk is
https://issues.apache.org/jira/browse/FLUME-1227 , once it's committed.

I have a few related questions:

* How/when does Flume delete data from FileChannel?
* Does it delete individual "records" as soon as a "record" is sent out?
* Does it periodically purge batches of data?
* Is there a notion of TTL, as in Kafka, where data is not removed
explicitly by its consumer but is deleted by the Kafka broker after some TTL?
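
For reference, this is the sort of FileChannel setup I have in mind -- a
minimal sketch, with the agent/channel names and paths being placeholders
(checkpointDir and dataDirs are the standard FileChannel properties, and
those directories are where the on-disk data I'm asking about would live):

```properties
# Hypothetical agent "a1" with a FileChannel "c1" (names are placeholders)
a1.channels = c1
a1.channels.c1.type = file
# Checkpoint directory: tracks channel state (e.g., which events are taken)
a1.channels.c1.checkpointDir = /var/flume/checkpoint
# Comma-separated directories holding the event data log files
a1.channels.c1.dataDirs = /var/flume/data
# Maximum number of events the channel will hold
a1.channels.c1.capacity = 1000000
```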

* What happens with data that could not be sent?
* I know there is a retry and backoff mechanism.  But does Flume at some
point give up on trying to send an (old) piece of data because it has been
tried more than N times or for longer than M seconds?
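
To make the backoff question concrete, the only knobs I know of are on the
sink-processor side -- e.g. a failover sink group like the sketch below
(agent/sink names are made up; maxpenalty caps the per-sink backoff in
milliseconds) -- and none of them seem to say anything about dropping data:

```properties
# Hypothetical sink group failing over between two sinks k1/k2 (placeholders)
a1.sinkgroups = g1
a1.sinkgroups.g1.sinks = k1 k2
a1.sinkgroups.g1.processor.type = failover
# Higher priority sink is tried first
a1.sinkgroups.g1.processor.priority.k1 = 10
a1.sinkgroups.g1.processor.priority.k2 = 5
# Upper bound (ms) on the backoff applied to a failed sink
a1.sinkgroups.g1.processor.maxpenalty = 10000
```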

Thanks,
Otis
Performance Monitoring * Log Analytics * Search Analytics
Solr & Elasticsearch Support * http://sematext.com/
On Wed, Feb 26, 2014 at 2:15 PM, Hari Shreedharan <[EMAIL PROTECTED]
 