Rahul Ravindran 2012-11-16, 21:45
Alexander Alten-Lorenz 2012-11-17, 08:56
I'm guessing that was just a slip with ROUND_ROBIN_BACKOFF => RANDOM.
Rahul: Both RANDOM and ROUND_ROBIN now have backoff semantics in them.
So RANDOM_BACKOFF became RANDOM and ROUND_ROBIN_BACKOFF became
ROUND_ROBIN. As sinks fail, they will be temporarily blacklisted,
and the random/round_robin semantics will be carried out on the
remaining sinks, as one would expect.
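The blacklist-and-continue behavior described above can be sketched as follows. This is a minimal, hypothetical Python illustration of the semantics (round-robin over non-blacklisted sinks, with a temporary backoff window after a failure), not Flume's actual Java implementation; all class and method names here are invented for the sketch.

```python
import time


class RoundRobinBackoffSelector:
    """Sketch of round-robin sink selection with backoff blacklisting,
    illustrating (not reproducing) Flume's load_balance processor
    semantics when processor.backoff = true."""

    def __init__(self, sinks, backoff_seconds=2.0):
        self.sinks = list(sinks)
        self.backoff_seconds = backoff_seconds
        self.blacklist = {}  # sink name -> time at which it may be retried
        self.index = 0

    def _available(self, now):
        # A sink is available if it was never blacklisted, or its
        # backoff window has expired.
        return [s for s in self.sinks if self.blacklist.get(s, 0) <= now]

    def next_sink(self, now=None):
        now = time.monotonic() if now is None else now
        available = self._available(now)
        if not available:
            raise RuntimeError("all sinks are currently backed off")
        # Round-robin over only the remaining (non-blacklisted) sinks.
        sink = available[self.index % len(available)]
        self.index += 1
        return sink

    def inform_failure(self, sink, now=None):
        now = time.monotonic() if now is None else now
        # Temporarily blacklist the failed sink.
        self.blacklist[sink] = now + self.backoff_seconds
```

With two sinks, a failure on one makes the selector rotate over the remaining sink alone until the backoff window elapses, after which the failed sink rejoins the rotation.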
On 11/17/2012 05:56 PM, Alexander Alten-Lorenz wrote:
> Yes, that's the correct configuration. We had such a parameter in Flume, but removed it later when we cleaned up the code. RANDOM is working like ROUND_ROBIN_BACKOFF.
> Please use our wiki for the newest guides:
> and the current user guide:
> On Nov 16, 2012, at 10:45 PM, Rahul Ravindran <[EMAIL PROTECTED]> wrote:
>> Hi ,
>> The documentation at http://archive.cloudera.com/cdh/3/flume-ng/FlumeUserGuide.html indicates that there is a ROUND_ROBIN_BACKOFF selector, but using it threw an error. It looks like a constant of this type is defined in the code, but it is not used anywhere. The code seemed to indicate that the configuration below should achieve backoff for round robin, though there is no documentation for the processor.backoff parameter. Is the below the right way to perform round robin with backoff?
>> agent1.sinkgroups = group1
>> agent1.sinkgroups.group1.sinks = avroSink1 avroSink2
>> agent1.sinkgroups.group1.processor.type = load_balance
>> agent1.sinkgroups.group1.processor.selector = ROUND_ROBIN
>> agent1.sinkgroups.group1.processor.backoff = true
> Alexander Alten-Lorenz
> German Hadoop LinkedIn Group: http://goo.gl/N8pCF