Flume OG Choke Limit Not Working
Hello all,

I'm using Flume OG (I'm unable to upgrade to NG at this stage) and I am having trouble with the choke decorator.

I am aggregating the data flows from several logical nodes at a single 'aggregator' logical node. The data flows should be batched, zipped, choked and then sent on to another 'collector' logical node. I am using the following config to achieve this:

exec setChokeLimit aggregator.mydomain.com mychoke 150
exec config aggregator.mydomain.com 'collectorSource(35853)' 'batch(100, 1000) gzip choke("mychoke") agentBESink("collector.mydomain.com", 35853)'
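For context, the kind of throughput throttling a choke-style decorator performs is commonly implemented as a token bucket. The sketch below is a generic illustration of that idea, not Flume's actual implementation; the class and parameter names are made up:

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter: a generic sketch of choke-style
    throttling. Names and structure are illustrative only, not Flume's code."""

    def __init__(self, rate_bytes_per_sec):
        self.rate = rate_bytes_per_sec
        # Start with one second's worth of budget, so a 1s burst is allowed.
        self.tokens = float(rate_bytes_per_sec)
        self.last = time.monotonic()

    def consume(self, nbytes):
        """Block until nbytes of budget is available, then spend it."""
        while True:
            now = time.monotonic()
            # Refill in proportion to elapsed time, capped at 1s of budget.
            self.tokens = min(self.rate, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= nbytes:
                self.tokens -= nbytes
                return
            # Sleep just long enough for the deficit to refill.
            time.sleep((nbytes - self.tokens) / self.rate)
```

One consequence of this design is that short bursts up to the bucket size pass through at full line speed; only the average rate is limited, which is one generic reason a rate-limited flow can still show spikes on a network graph.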

The choke decorator should limit transfer to 150 KB/sec, which equates to 1.2 Mbit/sec. However, I am regularly recording Flume traffic spikes of 5 Mbit/sec and more.
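The unit conversion above can be sanity-checked directly (this assumes the choke limit is expressed in kilobytes per second, with 1 KB = 1000 bytes, which matches the 1.2 Mbit figure):

```python
# Convert the configured choke limit to line rate and compare with the
# observed spikes. Assumes the limit is in KB/sec (1 KB = 1000 bytes).
choke_limit_kb_per_sec = 150
expected_mbit_per_sec = choke_limit_kb_per_sec * 8 / 1000  # 1.2 Mbit/sec
observed_mbit_per_sec = 5

print(expected_mbit_per_sec)                           # 1.2
print(observed_mbit_per_sec / expected_mbit_per_sec)   # spikes are ~4x the limit
```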

Can anybody suggest what I might be doing wrong? Is it OK to chain the batch, gzip, and choke decorators like this, or should they each be in a separate logical node?

Thanks,

James
James Stewart 2013-03-14, 02:35
Jeong-shik Jang 2013-03-14, 14:15