Flume >> mail # user >> Changing capacity configuration of File channel throws IllegalStateException


Re: Changing capacity configuration of File channel throws IllegalStateException
What is the channel size you are trying to use?  
Thanks,
Hari
On Friday, September 13, 2013 at 11:15 AM, Deepesh Malviya wrote:

> Hari,
>
> I am using Flume 1.4.0.
>
> Jeff,
>
> Regarding the fixed-size checkpoint file: what would be an ideal size for it? In other words, what should be considered when sizing the checkpoint file?
>
> Thanks,
> Deepesh
>
> On Friday, September 13, 2013, Hari Shreedharan wrote:
> > Also, which version of Flume are you running? It looks like you are also hitting https://issues.apache.org/jira/browse/FLUME-1918, due to an unsupported channel size in a previous version. This was fixed in Flume 1.4.0.
> >
> >
> > Hari
> >
> >
> > Thanks,
> > Hari
> >
> >
> > On Friday, September 13, 2013 at 8:31 AM, Jeff Lord wrote:
> >
> > > Deepesh,
> > >
> > > The FileChannel uses a fixed-size checkpoint file, so it is not possible to set it to an unlimited size (the checkpoint file is mmap-ed into a fixed-size buffer). To change the capacity of the channel, use the following procedure:
> > >
> > > 1. Shut down the agent.
> > > 2. Delete all files in the file channel's checkpoint directory (not the data directories; to be safe, move the files out rather than deleting them).
> > > 3. Change your configuration to increase the capacity of the channel.
> > > 4. Restart the agent.
> > >
> > > Hope this helps.
> > >
> > > -Jeff
> > >
> > >
> > > On Fri, Sep 6, 2013 at 3:29 AM, Deepesh Malviya <[EMAIL PROTECTED]> wrote:
> > > > Hi,
> > > >
> > > > When I try to increase the capacity of the File channel from its default value, it results in the following exception. What could be the issue?
> > > >
> > > > 06 Sep 2013 10:27:01,086 ERROR [SinkRunner-PollingRunner-DefaultSinkProcessor] (org.apache.flume.SinkRunner$PollingRunner.run:160)  - Unable to deliver event. Exception follows.
> > > > java.lang.IllegalStateException: Channel closed [channel=flumeChannel]. Due to java.lang.NegativeArraySizeException: null
> > > > at org.apache.flume.channel.file.FileChannel.createTransaction(FileChannel.java:352)
> > > > at org.apache.flume.channel.BasicChannelSemantics.getTransaction(BasicChannelSemantics.java:122)
> > > > at org.apache.flume.sink.hdfs.HDFSEventSink.process(HDFSEventSink.java:344)
> > > > at org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:68)
> > > > at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147)
> > > > at java.lang.Thread.run(Thread.java:679)
> > > > Caused by: java.lang.NegativeArraySizeException
> > > > at org.apache.flume.channel.file.EventQueueBackingStoreFile.allocate(EventQueueBackingStoreFile.java:366)
> > > > at org.apache.flume.channel.file.EventQueueBackingStoreFile.<init>(EventQueueBackingStoreFile.java:87)
> > > > at org.apache.flume.channel.file.EventQueueBackingStoreFileV3.<init>(EventQueueBackingStoreFileV3.java:49)
> > > > at org.apache.flume.channel.file.EventQueueBackingStoreFactory.get(EventQueueBackingStoreFactory.java:70)
> > > > at org.apache.flume.channel.file.Log.replay(Log.java:412)
> > > > at org.apache.flume.channel.file.FileChannel.start(FileChannel.java:302)
> > > > at org.apache.flume.lifecycle.LifecycleSupervisor$MonitorRunnable.run(LifecycleSupervisor.java:251)
> > > > at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> > > > at java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:351)
> > > > at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:178)
> > > > at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:165)
> > > > at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:267)
> > > > at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
> > > > at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> > > > ... 1 more
> > > >
> > > > --
> > > > _Deepesh
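
The NegativeArraySizeException in the trace above is consistent with Hari's FLUME-1918 pointer: with a very large channel capacity, the computed size of the fixed checkpoint buffer can overflow a signed 32-bit integer (a Java `int`) and come out negative, at which point allocating the backing array fails. A minimal Python sketch of that arithmetic follows; the 8-bytes-per-slot figure and the capacity value are illustrative assumptions, not taken from the Flume source.

```python
import ctypes

# Hypothetical numbers: suppose each queue slot costs 8 bytes in the
# checkpoint file, and the channel capacity is set to 300 million events.
capacity = 300_000_000
bytes_per_slot = 8

# With full-width arithmetic the required size is well-defined...
true_size = capacity * bytes_per_slot          # 2,400,000,000 bytes

# ...but truncated to a signed 32-bit int, as in a Java `int`,
# it wraps around and becomes negative.
as_int32 = ctypes.c_int32(true_size).value

print(true_size)   # 2400000000
print(as_int32)    # -1894967296 -- a Java `new byte[as_int32]` (or an
                   # mmap of that length) would throw
                   # NegativeArraySizeException
```

This is why upgrading to 1.4.0 (where FLUME-1918 is fixed) and keeping the capacity within a supported range resolves the error, while the stale checkpoint written by the old version still has to be moved aside per Jeff's procedure.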