Why does the used space of the file channel buffer directory keep increasing?
Hi all,

I am testing flume-ng on my local machine. The data flow is:

  tail -F file | nc 127.0.0.1 4444 > flume agent > hdfs
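
The sending side is just a shell pipeline along these lines (a rough sketch; the log file name is a placeholder, and the host/port have to match whatever the netcat source r1 binds to in the configuration below):

  # placeholder log file; host/port must match source r1's bind/port below
  tail -F /path/to/app.log | nc 192.168.201.197 44444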

My configuration file is:

  a1.sources = r1
  a1.channels = c2

  a1.sources.r1.type = netcat
  a1.sources.r1.bind = 192.168.201.197
  a1.sources.r1.port = 44444
  a1.sources.r1.max-line-length = 1000000

  a1.sinks.k1.type = logger

  a1.channels.c1.type = memory
  a1.channels.c1.capacity = 10000
  a1.channels.c1.transactionCapacity = 10000

  a1.channels.c2.type = file
  a1.sources.r1.channels = c2

  a1.sources.r1.interceptors = i1
  a1.sources.r1.interceptors.i1.type = timestamp

  a1.sinks = k2
  a1.sinks.k2.type = hdfs
  a1.sinks.k2.channel = c2
  a1.sinks.k2.hdfs.path = hdfs://127.0.0.1:9000/flume/events/%Y-%m-%d
  a1.sinks.k2.hdfs.writeFormat = Text
  a1.sinks.k2.hdfs.rollInterval = 10
  a1.sinks.k2.hdfs.rollSize = 10000000
  a1.sinks.k2.hdfs.rollCount = 0

  a1.sinks.k2.hdfs.filePrefix = app
  a1.sinks.k2.hdfs.fileType = DataStream
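
I start the agent with the usual flume-ng command, roughly like this (the config file name a1.conf and the conf directory are placeholders, not the exact paths I use):

  # start agent a1 with the configuration above
  bin/flume-ng agent --conf conf --conf-file conf/a1.conf --name a1 -Dflume.root.logger=INFO,console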

It seems that events are collected correctly.

But there is a problem bothering me: the used space of the file channel directory (~/.flume) keeps
increasing, even when there are no new events.
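
The growth can be seen with something like this (these are the file channel's default checkpoint and data directories, since the configuration above does not set checkpointDir or dataDirs):

  # see which part of the file channel keeps growing
  du -sh ~/.flume/file-channel/checkpoint ~/.flume/file-channel/data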

Is my configuration wrong, or is there some other problem?

Thanks.
Best regards.

Zhiwen Sun