Flume >> mail # user >> Exceptions after reloading configuration


Yatchmenoff, Sam 2013-01-16, 20:41

Brock Noland 2013-01-16, 21:46
Re: Exceptions after reloading configuration
Switching configuration on a running node is pretty buggy; I would
recommend just restarting Flume. While a reload will sometimes work,
there are known issues, such as components not being properly shut
down even after they are removed from the config.
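[Editor's note: a rough Python analogue of the lifecycle bug described above. All names (`Agent`, `FileSink`, `configure`) are hypothetical stand-ins, not Flume code: a naive reload drops a component from the registry without ever calling its `stop()`, so its underlying stream stays open and any thread still holding it keeps writing.]

```python
# Hypothetical sketch of the reload bug: a buggy reconfiguration
# replaces the component registry without stopping removed components,
# leaving their resources live. Not Flume code.
import io


class FileSink:
    """Stand-in for a sink that owns an open output stream."""

    def __init__(self, name):
        self.name = name
        self.stream = io.BytesIO()  # stands in for a real file handle

    def stop(self):
        self.stream.close()


class Agent:
    def __init__(self):
        self.sinks = {}

    def configure(self, sink_names):
        # BUGGY reload: forgets to stop sinks that vanished from config.
        self.sinks = {n: FileSink(n) for n in sink_names}

    def configure_safely(self, sink_names):
        # Correct reload: stop anything no longer configured.
        for name, sink in list(self.sinks.items()):
            if name not in sink_names:
                sink.stop()
                del self.sinks[name]
        for name in sink_names:
            self.sinks.setdefault(name, FileSink(name))


agent = Agent()
agent.configure(["sink1"])
old_sink = agent.sinks["sink1"]
agent.configure([])            # sink1 removed from config...
print(old_sink.stream.closed)  # False: the old stream was never closed
```

A full restart, as recommended above, sidesteps the problem entirely because every component's resources die with the process.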

On 01/17/2013 05:41 AM, Yatchmenoff, Sam wrote:
> I have Flume 1.2.0 running in a production system with 3 collectors
> fed by ~30 agents running on our application servers. If I make a
> change to the node configuration on the collectors, when the
> configuration is reloaded automatically, the collectors will
> occasionally fail and repeatedly report the following exception:
>
> 2013-01-16 20:31:22,353 ERROR flume.SinkRunner: Unable to deliver event. Exception follows.
> org.apache.flume.EventDeliveryException: Failed to process transaction
> at org.apache.flume.sink.RollingFileSink.process(RollingFileSink.java:218)
> at org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:68)
> at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147)
> at java.lang.Thread.run(Thread.java:679)
> Caused by: java.io.IOException: Stream Closed
> at java.io.FileOutputStream.writeBytes(Native Method)
> at java.io.FileOutputStream.write(FileOutputStream.java:297)
> at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
> at java.io.BufferedOutputStream.write(BufferedOutputStream.java:126)
> at java.io.FilterOutputStream.write(FilterOutputStream.java:97)
> at org.apache.flume.serialization.BodyTextEventSerializer.write(BodyTextEventSerializer.java:71)
> at org.apache.flume.sink.RollingFileSink.process(RollingFileSink.java:195)
> ... 3 more
>
> After about a dozen of those, I will start seeing this exception:
>
> 2013-01-16 20:32:27,374 ERROR flume.SinkRunner: Unable to deliver event. Exception follows.
> org.apache.flume.EventDeliveryException: Unable to rotate file /mnt/rawlog/1358365369665-49 while delivering event
> at org.apache.flume.sink.RollingFileSink.process(RollingFileSink.java:155)
> at org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:68)
> at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147)
> at java.lang.Thread.run(Thread.java:679)
> Caused by: java.io.IOException: Stream Closed
> at java.io.FileOutputStream.writeBytes(Native Method)
> at java.io.FileOutputStream.write(FileOutputStream.java:297)
> at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
> at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
> at org.apache.flume.sink.RollingFileSink.process(RollingFileSink.java:149)
> ... 3 more
>
>
> Here is the configuration for that agent:
>
> agent1.sources = source1 source2
> agent1.sinks = sink1
> agent1.channels = channel1
>
> # Describe/configure source1
> agent1.sources.source1.type = avro
> agent1.sources.source1.bind = 0.0.0.0
> agent1.sources.source1.port = 35853
>
> agent1.sources.source2.type = netcat
> agent1.sources.source2.bind = localhost
> agent1.sources.source2.port = 35854
> agent1.sources.source2.max-line-length = 524288
>
> # Describe sink1
> agent1.sinks.sink1.type = FILE_ROLL
> agent1.sinks.sink1.sink.directory = /mnt/rawlog
> agent1.sinks.sink1.sink.rollInterval = 60
>
> # Use a file channel to buffer events on disk
> agent1.channels.channel1.type = file
> agent1.channels.channel1.checkpointDir = /mnt/flume-ng/file-channel1/checkpoint
> agent1.channels.channel1.dataDirs = /mnt/flume-ng/file-channel1/data
> agent1.channels.channel1.capacity = 100000
>
> # Bind the source and sink to the channel
> agent1.sources.source1.channels = channel1
> agent1.sources.source2.channels = channel1
> agent1.sinks.sink1.channel = channel1
>
> Any ideas about what's causing this exception would be greatly
> appreciated.
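[Editor's note: the "Stream Closed" traces above are consistent with exactly the half-torn-down component problem described in the reply: `RollingFileSink.process()` keeps writing to an output stream that the reconfiguration already closed. A minimal Python analogue follows; note that Java's `FileOutputStream` surfaces this as `java.io.IOException: Stream Closed`, whereas Python file objects raise `ValueError`. The `process` function is a hypothetical stand-in, not the Flume method.]

```python
# Minimal analogue of the failure in the traces: the sink runner's
# polling loop holds a reference to a stream that a config reload
# already closed.  (Java raises java.io.IOException: Stream Closed;
# Python raises ValueError on a closed file object.)
import tempfile


def process(stream, event):
    """Stand-in for RollingFileSink.process(): serialize one event."""
    stream.write(event + b"\n")
    stream.flush()


with tempfile.TemporaryFile() as f:
    process(f, b"event 1")     # works while the stream is open
    f.close()                  # reload tears the old sink down...
    try:
        process(f, b"event 2") # ...but the runner keeps polling it
    except ValueError as e:
        print("delivery failed:", e)
```

Every subsequent delivery attempt fails the same way until the process is restarted, which matches the repeated exceptions reported above.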

Brock Noland 2013-01-17, 02:18
Hari Shreedharan 2013-01-17, 02:28
Juhani Connolly 2013-01-17, 02:36
Brock Noland 2013-01-17, 02:53