In our current data ingestion system, producers are resilient in the sense
that if data cannot be published reliably (e.g., the network is down), it is
spilled onto local disk.
A separate process runs asynchronously and attempts to publish spilled
data. I am curious to hear what other people do in this case.
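To make the pattern concrete, here is a minimal sketch of the spill-and-drain
approach described above. It is not Kafka client code; `publish` stands in for
whatever send call the producer uses, and the JSONL spill file, file names,
and single-file layout are illustrative assumptions, not our actual format.

```python
import json
import os

def publish_or_spill(record, publish, spill_dir):
    """Try to publish a record; on failure, append it to a spill file on disk."""
    try:
        publish(record)
        return True
    except Exception:
        # Broker unreachable / network down: persist locally for later replay.
        path = os.path.join(spill_dir, "spill.jsonl")
        with open(path, "a") as f:
            f.write(json.dumps(record) + "\n")
        return False

def drain_spill(publish, spill_dir):
    """Separate async process: re-attempt publishing spilled records.

    Records that still fail are kept on disk for the next pass.
    Returns the number of records successfully published."""
    path = os.path.join(spill_dir, "spill.jsonl")
    if not os.path.exists(path):
        return 0
    with open(path) as f:
        records = [json.loads(line) for line in f if line.strip()]
    remaining, published = [], 0
    for rec in records:
        try:
            publish(rec)
            published += 1
        except Exception:
            remaining.append(rec)
    # Rewrite the spill file with only the records that still failed.
    with open(path, "w") as f:
        for rec in remaining:
            f.write(json.dumps(rec) + "\n")
    return published
```

A real implementation would also need to think about ordering guarantees and
at-least-once duplicates when spilled data is replayed after fresh data has
already been published.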
Is there a plan to integrate something similar into Kafka? (AFAIK, the
current implementation gives up after a configurable number of retries.)