In our current data ingestion system, producers are resilient in the sense
that if data cannot be reliably published (e.g., the network is down), it is
spilled onto local disk.
A separate process runs asynchronously and attempts to publish spilled
data. I am curious to hear what other people do in this case.
Is there a plan to integrate something similar into Kafka? (AFAIK, the
current implementation gives up after a configurable number of retries.)
Replies:
Corbin Hoenes 2013-01-15, 20:13
Stan Rosenberg 2013-01-15, 22:32
Jay Kreps 2013-01-15, 20:19
Stan Rosenberg 2013-01-15, 22:30
Fernando O. 2015-01-28, 18:42