You should consider how your system will behave if there is a downstream
failure. Even a capacity of 500 is, in my opinion, orders of magnitude too
small.
Consider setting a channel capacity equal to (average events per second
ingested * number of seconds of downtime you want to tolerate). So if you
are ingesting 1000 events/sec and you want to tolerate 1 hour of downtime
without dropping events, you would want a channel capacity of 1000 * (60 *
60) = 3,600,000. Don't forget that the channel is a buffer that is intended
to smooth out the latencies inherent in a complex network of storage
systems. Even HDFS and HBase have latency hiccups sometimes, so try to
avoid running close to your buffer capacity.
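As a rough sketch, that sizing might look like this in the agent config
(the agent and channel names here are placeholders, and the
transactionCapacity value is just an assumed batch ceiling):

  agent1.channels.ch1.type = memory
  # one hour of buffer at 1000 events/sec
  agent1.channels.ch1.capacity = 3600000
  # must be at least as large as your biggest source/sink batch
  agent1.channels.ch1.transactionCapacity = 1000

Keep in mind that a memory channel that large needs a correspondingly
large heap, and its contents are lost on a crash; if you really need to
ride out hour-long outages, a durable channel such as the file channel may
be a better fit.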
On Thu, Oct 11, 2012 at 11:37 AM, Harish Mandala <[EMAIL PROTECTED]> wrote:
> I've noticed in general that capacity = 100*transactionCapacity (or
> 10*transactionCapacity) works well for me.
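> For example (channel name illustrative), with a transactionCapacity of
> 1000 that rule of thumb would give:
>
>   agent1.channels.ch1.transactionCapacity = 1000
>   agent1.channels.ch1.capacity = 100000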
> On Thu, Oct 11, 2012 at 2:34 PM, Cochran, David M (Contractor) <
> [EMAIL PROTECTED]> wrote:
>> Trying that now... set to 500 for each channel... we'll see how it goes.
>> For some reason 'channel capacity' didn't connect with that error message
>> for me; the part about the sinks not keeping up with the sources led me
>> in another direction... maybe I wasn't holding my head just right :)
>> Thanks for the quick response!
>> -----Original Message-----
>> From: Brock Noland [mailto:[EMAIL PROTECTED]]
>> Sent: Thursday, October 11, 2012 1:14 PM
>> To: [EMAIL PROTECTED]
>> Subject: Re: Errors
>> Basically the channel is filling up. Have you increased the capacity of
>> the channel?
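>> That is, the capacity setting on the channel definition in your agent
>> config, e.g. agent1.channels.ch1.capacity (channel name illustrative).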
>> On Thu, Oct 11, 2012 at 1:08 PM, Cochran, David M (Contractor)
>> <[EMAIL PROTECTED]> wrote:
>> > This error insists on making an appearance at least daily on my test
>> > systems.
>> > Unable to put batch on required channel:
>> > org.apache.flume.channel.MemoryChannel@555c07d8
>> > Caused by: org.apache.flume.ChannelException: Space for commit to
>> > queue couldn't be acquired Sinks are likely not keeping up with
>> > sources, or the buffer size is too tight
>> > Changing batch-size and batchSize from the defaults to values in the
>> > hundreds or 1000 doesn't seem to help, and increasing the heap with
>> > JAVA_OPTS="-Xms256m -Xmx512m" made no difference either.
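>> > For reference, those knobs live in the agent config (the sink names
>> > below are just illustrative) and in conf/flume-env.sh:
>> >
>> >   agent1.sinks.avro1.batch-size = 100        # Avro sink
>> >   agent1.sinks.hdfs1.hdfs.batchSize = 1000   # HDFS sink
>> >
>> > with JAVA_OPTS set in conf/flume-env.sh.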
>> > This shows up very intermittently, but daily. The logs being tailed
>> > are not very big and are not growing very quickly; in fact, they grow
>> > very slowly in the grand scheme of things.
>> > Am I missing something to help balance things out here?
>> > Thanks
>> > Dave