Flume >> mail # user >> speeding up replay log


Re: speeding up replay log
If fast replay doesn't help, then you don't have enough RAM. I'd suggest you
use the new dual-checkpoint feature. Note the dual and backup checkpoint
configs here:

http://flume.apache.org/FlumeUserGuide.html#file-channel
http://issues.apache.org/jira/browse/FLUME-1516
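
For reference, a dual-checkpoint file channel section in flume.conf might look
like the sketch below. The agent/channel names and paths are illustrative, not
from this thread; note that useDualCheckpoints and backupCheckpointDir were
added by FLUME-1516 (Flume 1.4.0+), and backupCheckpointDir must be a separate
directory from checkpointDir and the dataDirs:

```properties
# Hypothetical agent "agent1" and channel "ch1" for illustration.
agent1.channels.ch1.type = file
agent1.channels.ch1.checkpointDir = /var/flume/checkpoint
# Maintain a second copy of the checkpoint; on restart the channel can
# recover from the backup instead of replaying the entire data log.
agent1.channels.ch1.useDualCheckpoints = true
agent1.channels.ch1.backupCheckpointDir = /var/flume/checkpoint-backup
agent1.channels.ch1.dataDirs = /var/flume/data
agent1.channels.ch1.capacity = 1000000
agent1.channels.ch1.transactionCapacity = 10000
```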

Brock

On Thu, Aug 8, 2013 at 2:48 PM, Edwin Chiu <[EMAIL PROTECTED]> wrote:

> Hi there!
>
> I'm using flume-ng 1.3.1 (Hortonworks' latest production-stable version as
> of now) on CentOS 6 with JDK 1.6.
>
> I'm wondering how to speed up the replay of logs after changing file
> channel parameters in flume.conf -- capacity and transactionCapacity.
>
> It takes hours for the node to catch up and be able to receive and send
> events again.
>
> Setting use-fast-replay = true with a ridiculously large max heap doesn't
> speed things up.
>
> Any recommendations to avoid the downtime?
>
> thanks!
>
> Ed
>

--
Apache MRUnit - Unit testing MapReduce - http://mrunit.apache.org