Take list full error after 1.3 upgrade
I have a 2-tier Flume setup, with 4 agents feeding into 2 'collector' agents that write to HDFS.
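
For context, the collector agents follow the usual avro-source -> file-channel -> HDFS-sink layout. A minimal sketch of the tier-2 side (the hostname binding and port are illustrative; only the channel and sink names match my actual config, which is further below):

# illustrative sketch of the collector layout, not my exact config
tier2.sources = avro_WebLogs
tier2.channels = fc_WebLogs
tier2.sinks = hdfs_WebLogs
tier2.sources.avro_WebLogs.type = avro
tier2.sources.avro_WebLogs.bind = 0.0.0.0
tier2.sources.avro_WebLogs.port = 4545
tier2.sources.avro_WebLogs.channels = fc_WebLogs
tier2.channels.fc_WebLogs.type = file
tier2.sinks.hdfs_WebLogs.channel = fc_WebLogs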
 
One of the data flows is hung up after an upgrade and restart with the following error:

3:54:13.497 PM ERROR org.apache.flume.sink.hdfs.HDFSEventSink process failed
org.apache.flume.ChannelException: Take list for FileBackedTransaction, capacity 1000 full, consider committing more frequently, increasing capacity, or increasing thread count. [channel=fc_WebLogs]
at org.apache.flume.channel.file.FileChannel$FileBackedTransaction.doTake(FileChannel.java:481)
at org.apache.flume.channel.BasicTransactionSemantics.take(BasicTransactionSemantics.java:113)
at org.apache.flume.channel.BasicChannelSemantics.take(BasicChannelSemantics.java:95)
at org.apache.flume.sink.hdfs.HDFSEventSink.process(HDFSEventSink.java:386)
at org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:68)
at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147)
at java.lang.Thread.run(Thread.java:662)


3:54:13.498 PM ERROR org.apache.flume.SinkRunner Unable to deliver event. Exception follows.
org.apache.flume.EventDeliveryException: org.apache.flume.ChannelException: Take list for FileBackedTransaction, capacity 1000 full, consider committing more frequently, increasing capacity, or increasing thread count. [channel=fc_WebLogs]
at org.apache.flume.sink.hdfs.HDFSEventSink.process(HDFSEventSink.java:461)
at org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:68)
at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147)
at java.lang.Thread.run(Thread.java:662)
Caused by: org.apache.flume.ChannelException: Take list for FileBackedTransaction, capacity 1000 full, consider committing more frequently, increasing capacity, or increasing thread count. [channel=fc_WebLogs]
at org.apache.flume.channel.file.FileChannel$FileBackedTransaction.doTake(FileChannel.java:481)
at org.apache.flume.channel.BasicTransactionSemantics.take(BasicTransactionSemantics.java:113)
at org.apache.flume.channel.BasicChannelSemantics.take(BasicChannelSemantics.java:95)
at org.apache.flume.sink.hdfs.HDFSEventSink.process(HDFSEventSink.java:386)
... 3 more

The relevant part of the config is here:
tier2.sinks.hdfs_WebLogs.type = hdfs
tier2.sinks.hdfs_WebLogs.channel = fc_WebLogs
tier2.sinks.hdfs_WebLogs.hdfs.path = /flume/WebLogs/%Y%m%d/%H%M
tier2.sinks.hdfs_WebLogs.hdfs.round = true
tier2.sinks.hdfs_WebLogs.hdfs.roundValue = 15
tier2.sinks.hdfs_WebLogs.hdfs.roundUnit = minute
tier2.sinks.hdfs_WebLogs.hdfs.rollSize = 67108864
tier2.sinks.hdfs_WebLogs.hdfs.rollCount = 0
tier2.sinks.hdfs_WebLogs.hdfs.rollInterval = 30
tier2.sinks.hdfs_WebLogs.hdfs.batchSize = 10000
tier2.sinks.hdfs_WebLogs.hdfs.fileType = DataStream
tier2.sinks.hdfs_WebLogs.hdfs.writeFormat = Text
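
One thing that stands out comparing the config to the error: the sink's hdfs.batchSize is 10000, while the error reports a take-list capacity of 1000. As I understand it, the take-list size comes from the channel's transactionCapacity, which I believe defaults to 1000 on the file channel in 1.3, so a 10000-event batch could never fit in a single transaction. If that's the mismatch, the fix would presumably be either lowering hdfs.batchSize to 1000 or raising the channel setting, along these lines:

# assumed channel-side fix: transactionCapacity must be >= the sink's
# batchSize (and <= the channel's overall capacity, default 1000000)
tier2.channels.fc_WebLogs.transactionCapacity = 10000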

The channel is full, and the metrics page shows many take attempts with no successes. I've seen the channel fill up before (usually due to lease issues on HDFS files), but I've never hit this particular error; usually just an agent restart gets things going again.
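
For reference, the metrics page I'm looking at is Flume's built-in HTTP JSON reporting, enabled with flags along these lines (the port here is arbitrary, not a default):

flume-ng agent -n tier2 -c conf -f tier2.conf \
    -Dflume.monitoring.type=http -Dflume.monitoring.port=34545

Hitting http://<collector>:34545/metrics then shows CHANNEL.fc_WebLogs with EventTakeAttemptCount climbing while EventTakeSuccessCount stays flat.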

Any help appreciated.

Thanks,
Paul Chavez