Flume, mail # user - Changing capacity configuration of File channel throws IllegalStateException


Earlier messages in this thread:
  Deepesh Malviya    2013-09-06, 10:29
  Jeff Lord          2013-09-13, 15:31
  Hari Shreedharan   2013-09-13, 16:28
  Deepesh Malviya    2013-09-13, 18:15
  Hari Shreedharan   2013-09-13, 18:32
Re: Changing capacity configuration of File channel throws IllegalStateException
Deepesh Malviya 2013-09-14, 14:10
Basically, we are migrating from Scribe to Flume. In Scribe, we had around
45 different categories for which we were receiving data, with HDFS
configured as the primary store and local files as the secondary store.
Scribe uses a common file store for all categories, so we didn't have to
configure the size of each store, only the size of each file it would
create. In failure scenarios, we had seen Scribe store around 2+ GB for
most of the categories. So we are just trying to figure out how the Flume
channel should be configured to handle cases where HDFS is not available.
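
For concreteness, the kind of stanza in question would be a file channel
whose capacity (counted in events, not bytes) covers the expected backlog
while HDFS is down. This is a minimal sketch only; the agent/channel names,
paths, and numbers below are placeholders, not values from this thread:

    agent.channels = fc
    agent.channels.fc.type = file
    # checkpoint and data directories; the checkpoint file is mmap-ed to a
    # fixed size derived from capacity
    agent.channels.fc.checkpointDir = /data/flume/file-channel/checkpoint
    agent.channels.fc.dataDirs = /data/flume/file-channel/data
    # maximum number of events the channel may hold while the sink is down;
    # work this out from expected backlog bytes / average event size
    agent.channels.fc.capacity = 5000000
    # events moved per transaction by the source and the sink
    agent.channels.fc.transactionCapacity = 10000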

Thanks,
Deepesh

On Saturday, September 14, 2013, Hari Shreedharan wrote:

>  What is the channel size you are trying to use?
>
>
> Thanks,
> Hari
>
> On Friday, September 13, 2013 at 11:15 AM, Deepesh Malviya wrote:
>
> Hari,
>
> I am using Flume 1.4.0.
>
> Jeff,
>
> Regarding this fixed-size checkpoint file, what should its ideal size be?
> In other words, what should be considered when defining the checkpoint
> file?
>
> Thanks,
> Deepesh
>
> On Friday, September 13, 2013, Hari Shreedharan wrote:
>
> Also, which version of Flume are you running? It looks like you are hitting
> https://issues.apache.org/jira/browse/FLUME-1918 as well, due to an
> unsupported channel size in a previous version. This was fixed in Flume
> 1.4.0.
>
>
> Thanks,
> Hari
>
> On Friday, September 13, 2013 at 8:31 AM, Jeff Lord wrote:
>
> Deepesh,
>
> The FileChannel uses a fixed-size checkpoint file, so it is not possible to
> set it to an unlimited size (the checkpoint file is mmap-ed into a fixed-size
> buffer). To change the capacity of the channel, use the following procedure:
>
> 1. Shut down the agent.
> 2. Delete all files in the file channel's checkpoint directory (not the
>    data directories; to be safe, you may want to move them out rather than
>    delete them).
> 3. Change your configuration to increase the capacity of the channel.
> 4. Restart the agent.
>
> Hope this helps.
>
> -Jeff
>
>
> On Fri, Sep 6, 2013 at 3:29 AM, Deepesh Malviya <[EMAIL PROTECTED]> wrote:
>
> Hi,
>
> When I try to increase the capacity of the File channel from the default
> value, it results in the following exception. What could be the issue?
>
> 06 Sep 2013 10:27:01,086 ERROR [SinkRunner-PollingRunner-DefaultSinkProcessor]
> (org.apache.flume.SinkRunner$PollingRunner.run:160) - Unable to deliver event. Exception follows.
> java.lang.IllegalStateException: Channel closed [channel=flumeChannel]. Due to java.lang.NegativeArraySizeException: null
>   at org.apache.flume.channel.file.FileChannel.createTransaction(FileChannel.java:352)
>   at org.apache.flume.channel.BasicChannelSemantics.getTransaction(BasicChannelSemantics.java:122)
>   at org.apache.flume.sink.hdfs.HDFSEventSink.process(HDFSEventSink.java:344)
>   at org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:68)
>   at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147)
>   at java.lang.Thread.run(Thread.java:679)
> Caused by: java.lang.NegativeArraySizeException
>   at org.apache.flume.channel.file.EventQueueBackingStoreFile.allocate(EventQueueBackingStoreFile.java:366)
>   at org.apache.flume.channel.file.EventQueueBackingStoreFile.<init>(EventQueueBackingStoreFile.java:87)
>   at org.apache.flume.channel.file.EventQueueBackingStoreFileV3.<init>(EventQueueBackingStoreFileV3.java:49)
>   at org.apache.flume.channel.file.EventQueueBackingStoreFactory.get(EventQueueBackingStoreFactory.java:70)
>   at org.apache.flume.channel.file.Log.replay(Log.java:412)
>   at org.apache.flume.channel.file.FileChannel.start(FileChannel.java:302)
>   at org.apache.flume.lifecycle.LifecycleSupervisor$MonitorRunnable.run(LifecycleSupervisor.java:251)
>   at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>   at java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.j
>
>

--
Deepesh
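
Taking Jeff's procedure (quoted above) together with Hari's pointer to
FLUME-1918, the configuration change itself is just the channel's capacity
property, edited while the agent is stopped and after the contents of the
checkpoint directory have been moved aside. A rough sketch, with the same
placeholder names as in the earlier sketch:

    # change only while the agent is stopped, after moving out (or deleting)
    # everything under checkpointDir (the dataDirs contents stay in place)
    agent.channels.fc.capacity = 5000000
    # on releases before 1.4.0, a very large capacity can produce the
    # NegativeArraySizeException shown above (FLUME-1918), so either keep the
    # value within a supported range or upgrade before raising it

On restart the file channel should replay the data files and rebuild the
checkpoint at the new size.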
Later in this thread:
  Jeff Lord          2013-09-16, 23:52