Hi,

In our current data ingestion system, producers are resilient in the sense
that if data cannot be reliably published (e.g., the network is down), it is
spilled onto local disk.  A separate process runs asynchronously and
attempts to publish the spilled data.  I am curious to hear what other
people do in this case.
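
To make the pattern concrete, here is a minimal sketch of the idea (not
our actual code) against the Java producer API.  The class name, the
one-record-per-line spill file format, and the 30-second replay interval
are all made up for illustration:

import org.apache.kafka.clients.producer.*;

import java.io.Closeable;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.*;
import java.util.Properties;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class SpillingProducer implements Closeable {

    private final Producer<String, String> producer;
    private final Path spillFile;
    private final ScheduledExecutorService replayer =
            Executors.newSingleThreadScheduledExecutor();

    // config must include the usual bootstrap.servers and
    // key/value serializer settings.
    public SpillingProducer(Properties config, Path spillFile) {
        this.producer = new KafkaProducer<>(config);
        this.spillFile = spillFile;
        // Periodically try to republish anything spilled earlier.
        replayer.scheduleWithFixedDelay(this::replaySpilled,
                30, 30, TimeUnit.SECONDS);
    }

    public void send(String topic, String value) {
        producer.send(new ProducerRecord<>(topic, value),
                (metadata, exception) -> {
                    if (exception != null) {
                        // Broker unreachable, retries exhausted, etc.
                        spill(topic, value);
                    }
                });
    }

    // Append the failed record to a local file, one "topic<TAB>value"
    // per line (assumes values contain no newlines).
    private synchronized void spill(String topic, String value) {
        try {
            Files.write(spillFile,
                    (topic + "\t" + value + "\n")
                            .getBytes(StandardCharsets.UTF_8),
                    StandardOpenOption.CREATE, StandardOpenOption.APPEND);
        } catch (IOException e) {
            e.printStackTrace();  // at this point data loss is possible
        }
    }

    // Re-send spilled records; anything that fails again is
    // simply spilled once more by the send callback.
    private synchronized void replaySpilled() {
        if (!Files.exists(spillFile)) return;
        try {
            Path batch = Files.move(spillFile,
                    spillFile.resolveSibling(
                            spillFile.getFileName() + ".replay"),
                    StandardCopyOption.REPLACE_EXISTING);
            for (String line : Files.readAllLines(batch,
                    StandardCharsets.UTF_8)) {
                int tab = line.indexOf('\t');
                send(line.substring(0, tab), line.substring(tab + 1));
            }
            Files.delete(batch);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    @Override
    public void close() throws IOException {
        replayer.shutdown();
        producer.close();
    }
}

A real implementation would of course need more care around file
rotation, ordering guarantees, and duplicates introduced by replay.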
Is there a plan to have something similar integrated into Kafka?  (AFAIK,
the current implementation gives up after a configurable number of retries,
along the lines of the settings below.)
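
These are the 0.8-style producer properties I have in mind; the exact
names may differ in other versions:

# give up after 3 attempts, waiting 100 ms between them
message.send.max.retries=3
retry.backoff.ms=100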

Thanks,

stan
