Flume >> mail # user >> Why does used space of file channel buffer directory increase?


Re: Why does used space of file channel buffer directory increase?
Hey,

What does debug say? Can you gather logs and attach them?
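For example, the agent can be relaunched with DEBUG logging so checkpoint and log-roll activity shows up in the output (a sketch; the `-Dflume.root.logger` override is the standard flume-ng mechanism, and the config paths shown are illustrative):

```shell
# Relaunch the agent with DEBUG logging to the console and capture it to a
# file; -Dflume.root.logger is the standard flume-ng log4j override.
# Guarded so this snippet is a no-op on hosts without flume-ng on the PATH.
if command -v flume-ng >/dev/null 2>&1; then
  flume-ng agent --name a1 --conf ./conf --conf-file ./conf/flume.conf \
    -Dflume.root.logger=DEBUG,console 2>&1 | tee flume-debug.log
fi
```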

- Alex

On Mar 19, 2013, at 5:27 PM, "Kenison, Matt" <[EMAIL PROTECTED]> wrote:

> Check the JMX counter first, to make sure you really are not sending new events. If not, is it your checkpoint directory or data directory that is increasing in size?
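One way to read those counters without attaching a JMX console is Flume's built-in JSON metrics reporting, a sketch (the `flume.monitoring.*` properties are standard flume-ng options; the port 34545 is an arbitrary example):

```shell
# Start the agent with Flume's built-in HTTP/JSON metrics server; guarded
# as a no-op on hosts without flume-ng on the PATH.
if command -v flume-ng >/dev/null 2>&1; then
  flume-ng agent -n a1 -c ./conf -f ./conf/flume.conf \
    -Dflume.monitoring.type=http -Dflume.monitoring.port=34545 &
  sleep 5
  # In the JSON: a rising EventPutSuccessCount on the channel means new
  # events are still arriving; ChannelSize is the current backlog.
  curl -s http://localhost:34545/metrics
fi
```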
>
>
> From: Zhiwen Sun <[EMAIL PROTECTED]>
> Reply-To: "[EMAIL PROTECTED]" <[EMAIL PROTECTED]>
> Date: Tue, 19 Mar 2013 01:19:19 -0700
> To: "[EMAIL PROTECTED]" <[EMAIL PROTECTED]>
> Subject: Why does used space of file channel buffer directory increase?
>
> Hi all,
>
> I test flume-ng in my local machine. The data flow is :
>
>   tail -F file | nc 127.0.0.1 4444 > flume agent > hdfs
>
> My configuration file is here :
>
>> a1.sources = r1
>> a1.channels = c2
>>
>> a1.sources.r1.type = netcat
>> a1.sources.r1.bind = 192.168.201.197
>> a1.sources.r1.port = 44444
>> a1.sources.r1.max-line-length = 1000000
>>
>> a1.sinks.k1.type = logger
>>
>> a1.channels.c1.type = memory
>> a1.channels.c1.capacity = 10000
>> a1.channels.c1.transactionCapacity = 10000
>>
>> a1.channels.c2.type = file
>> a1.sources.r1.channels = c2
>>
>> a1.sources.r1.interceptors = i1
>> a1.sources.r1.interceptors.i1.type = timestamp
>>
>> a1.sinks = k2
>> a1.sinks.k2.type = hdfs
>> a1.sinks.k2.channel = c2  
>> a1.sinks.k2.hdfs.path = hdfs://127.0.0.1:9000/flume/events/%Y-%m-%d
>> a1.sinks.k2.hdfs.writeFormat = Text
>> a1.sinks.k2.hdfs.rollInterval = 10
>> a1.sinks.k2.hdfs.rollSize = 10000000
>> a1.sinks.k2.hdfs.rollCount = 0
>>
>> a1.sinks.k2.hdfs.filePrefix = app
>> a1.sinks.k2.hdfs.fileType = DataStream
>
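Note that this config never sets the file channel's directories, so they fall back to defaults under ~/.flume. They can be pinned explicitly with the standard FileChannel properties; a sketch (the paths shown are illustrative, not required values):

```properties
# Explicit FileChannel locations (defaults live under ~/.flume/file-channel);
# the /var/flume paths below are examples only
a1.channels.c2.checkpointDir = /var/flume/checkpoint
a1.channels.c2.dataDirs = /var/flume/data
```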
>
>
> It seems that events are being collected correctly.
>
> But one problem bothers me: the used space of the file channel directory (~/.flume) keeps increasing, even when there are no new events.
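One quick check is to see whether the checkpoint or the data directory is the one growing, a sketch (~/.flume/file-channel is the FileChannel default base directory; FLUME_CH_DIR is a variable introduced only for this snippet):

```shell
# Compare checkpoint vs data directory growth. ~/.flume/file-channel is the
# FileChannel default base dir; FLUME_CH_DIR exists only in this sketch.
FLUME_CH_DIR="${FLUME_CH_DIR:-$HOME/.flume/file-channel}"
du -sh "$FLUME_CH_DIR/checkpoint" "$FLUME_CH_DIR/data" 2>/dev/null || true
# The data dir holds log-N files; old ones are deleted only once a
# checkpoint no longer references any event in them.
ls -lh "$FLUME_CH_DIR/data" 2>/dev/null || true
```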
>
> Is my configuration wrong, or is there some other problem?
>
> thanks.
>
>
> Best regards.
>
> Zhiwen Sun
>

--
Alexander Alten-Lorenz
http://mapredit.blogspot.com
German Hadoop LinkedIn Group: http://goo.gl/N8pCF