Flume user mailing list: SpoolDir marks item as completed, when sink fails


Other messages in this thread:
  Tzur Turkenitz 2013-01-31, 22:35
  Mike Percy 2013-02-01, 09:56
  Tzur Turkenitz 2013-02-01, 15:44

Re: SpoolDir marks item as completed, when sink fails
Tzur,
The source and sink are decoupled completely. The source will fill the
channel until there is no more work or the channel is full. So the data is
sitting buffered in the channel until the sink removes it.
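
For illustration, a first-tier agent along those lines might be configured
roughly like this in Flume 1.3 properties syntax (the agent name, hosts,
ports, and paths below are examples only, not taken from this thread):

  # spooling-directory source -> file channel -> two Avro sinks behind a
  # load-balancing sink group
  tier1.sources = spool
  tier1.channels = fileCh
  tier1.sinks = avro1 avro2
  tier1.sinkgroups = lb

  tier1.sources.spool.type = spooldir
  tier1.sources.spool.spoolDir = /var/flume/spool
  tier1.sources.spool.channels = fileCh

  # the file channel persists committed events to disk, so they survive an
  # agent restart and are still delivered to the sinks afterwards
  tier1.channels.fileCh.type = file
  tier1.channels.fileCh.checkpointDir = /var/flume/tier1/checkpoint
  tier1.channels.fileCh.dataDirs = /var/flume/tier1/data

  tier1.sinks.avro1.type = avro
  tier1.sinks.avro1.hostname = collector1.example.com
  tier1.sinks.avro1.port = 4141
  tier1.sinks.avro1.channel = fileCh

  tier1.sinks.avro2.type = avro
  tier1.sinks.avro2.hostname = collector2.example.com
  tier1.sinks.avro2.port = 4141
  tier1.sinks.avro2.channel = fileCh

  tier1.sinkgroups.lb.sinks = avro1 avro2
  tier1.sinkgroups.lb.processor.type = load_balance

A matching sketch of the second-tier (Avro source to HDFS sink) agent
appears after the quoted thread below.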

Hope that explains things. Let me know if anything is unclear.

Regards,
Mike

On Friday, February 1, 2013, Tzur Turkenitz wrote:

> Mike, so when the data is committed to the channel, and the channel is of
> type "File", then when the agent is restarted will the data continue to
> flow on to the sink?
> And if only 20% of the data passed on to the sink before it crashed, will
> a "replay" be done to resend all of the data?
>
> Just trying to grasp the basics....
>
>
>
>
> On Fri, Feb 1, 2013 at 4:56 AM, Mike Percy <[EMAIL PROTECTED]> wrote:
>
>> Tzur, that is expected, because the data is committed by the source onto
>> the channel. Sources and sinks are decoupled; they interact only via the
>> channel, which buffers the data and serves to mitigate impedance mismatches.
>>
>>
>>
>> On Thu, Jan 31, 2013 at 2:35 PM, Tzur Turkenitz <[EMAIL PROTECTED]> wrote:
>>
>>> Hello all,
>>>
>>> I am running HDP 1.2 and Flume 1.3. I have a Flume setup which includes:
>>> (1) a Load Balancer agent that uses the SpoolDir source and sends events
>>> to Avro sinks, and
>>> (2) agents which consume the data using an Avro source and write to
>>> HDFS.
>>>
>>> During testing I noticed that there's a dissonance between the Load
>>> Balancer and the Consumers:
>>> when the Load Balancer processes a file it marks it as COMPLETED, even if
>>> the consumer has crashed while writing to HDFS.
>>>
>>> A preferred behavior would be for the Load Balancer to wait until the
>>> consumer commits its transaction and reports it as successful before the
>>> file is marked as COMPLETED. The current behavior does not allow me to
>>> verify which files have been loaded successfully if an agent has crashed
>>> and recovery is in progress.
>>>
>>> Have I misconfigured my agents, or is this actually the desired
>>> behavior?
>>>
>>>
>>> Kind Regards,
>>> Tzur
>>>
>>
>>
>
>
> --
> Regards,
> Tzur Turkenitz
> Vision.BI
> http://www.vision.bi/
>
> "*Facts are stubborn things, but statistics are more pliable*"
> -Mark Twain
>
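
For reference, the second-tier agent described in this thread might be
configured roughly as follows; again, the bind address, port, and paths are
illustrative only, not taken from the thread:

  # Avro source -> file channel -> HDFS sink
  collector.sources = avroIn
  collector.channels = fileCh
  collector.sinks = hdfsOut

  collector.sources.avroIn.type = avro
  collector.sources.avroIn.bind = 0.0.0.0
  collector.sources.avroIn.port = 4141
  collector.sources.avroIn.channels = fileCh

  # on restart the file channel replays its checkpoint and data files, so
  # events committed by the source but not yet taken by the sink are kept
  collector.channels.fileCh.type = file
  collector.channels.fileCh.checkpointDir = /var/flume/collector/checkpoint
  collector.channels.fileCh.dataDirs = /var/flume/collector/data

  collector.sinks.hdfsOut.type = hdfs
  collector.sinks.hdfsOut.channel = fileCh
  collector.sinks.hdfsOut.hdfs.path = /flume/events
  collector.sinks.hdfsOut.hdfs.fileType = DataStream

Note that the first-tier agent has no visibility into this agent's HDFS
writes: as discussed above, the spooling directory source marks a file
COMPLETED once its events are committed to the first tier's own channel.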
Further messages in this thread:
  Mike Percy 2013-02-05, 08:31
  Tzur Turkenitz 2013-02-05, 15:25