Re: write-timeout value tuning
There is no harm in setting write-timeout to something like 30 seconds. In fact, it probably makes sense to increase the default to 30 seconds.
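For reference, a minimal sketch of what that change could look like in the file channel section of flume-conf.properties. Only a fragment is shown: the agent name "collector" is assumed here, while the channel name fc and the dataDirs path are taken from the log quoted below, and write-timeout is given in seconds.

# Fragment only -- "collector" is an assumed agent name; adjust to match your config.
collector.channels = fc
collector.channels.fc.type = file
collector.channels.fc.dataDirs = /opt/sponge/flume/file-channel/dataDirs
# Raise the file channel's write-timeout (seconds) so puts/takes wait longer
# for the log write lock instead of failing with ChannelException.
collector.channels.fc.write-timeout = 30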
On Mon, Apr 8, 2013 at 1:38 PM, Madhu Gmail <[EMAIL PROTECTED]> wrote:

>
> Hello,
>
> I am getting the ERROR below in a flume agent (acting as a collector) which is
> receiving log events from another flume agent.
>
> I have also copied my flume-conf.properties at the end of this mail.
>
> Any idea how to tune the write-timeout value?
>
> 2013-04-05 13:17:33,197 ERROR org.apache.flume.SinkRunner: Unable to deliver event. Exception follows.
> org.apache.flume.ChannelException: Failed to obtain lock for writing to the log. Try increasing the log write timeout value. [channel=fc]
>     at org.apache.flume.channel.file.FileChannel$FileBackedTransaction.doTake(FileChannel.java:434)
>     at org.apache.flume.channel.BasicTransactionSemantics.take(BasicTransactionSemantics.java:113)
>     at org.apache.flume.channel.BasicChannelSemantics.take(BasicChannelSemantics.java:91)
>     at org.apache.flume.sink.hbase.HBaseSink.process(HBaseSink.java:189)
>     at org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:68)
>     at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147)
>     at java.lang.Thread.run(Thread.java:662)
> 2013-04-05 13:17:33,427 INFO org.apache.flume.channel.file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1365169979081, queueSize: 0, queueHead: 362421
> 2013-04-05 13:17:34,233 INFO org.apache.flume.channel.file.LogFileV3: Updating log-14.meta currentPosition = 3818784, logWriteOrderID = 1365169979081
> 2013-04-05 13:17:34,294 INFO org.apache.flume.channel.file.Log: Updated checkpoint for file: /opt/sponge/flume/file-channel/dataDirs/log-14 position: 3818784 logWriteOrderID: 1365169979081
> 2013-04-05 13:17:34,294 DEBUG org.apache.flume.channel.file.Log: Rolling back 1365169950299
> 2013-04-05 13:17:34,296 ERROR org.apache.flume.source.AvroSource: Avro source S1: Unable to process event batch. Exception follows.
> org.apache.flume.ChannelException: Unable to put batch on required channel: FileChannel fc { dataDirs: [/opt/sponge/flume/file-channel/dataDirs] }
>     at org.apache.flume.channel.ChannelProcessor.processEventBatch(ChannelProcessor.java:200)
>     at org.apache.flume.source.AvroSource.appendBatch(AvroSource.java:237)
>     at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>     at java.lang.reflect.Method.invoke(Method.java:597)
>     at org.apache.avro.ipc.specific.SpecificResponder.respond(SpecificResponder.java:88)
>     at org.apache.avro.ipc.Responder.respond(Responder.java:149)
>     at org.apache.avro.ipc.NettyServer$NettyServerAvroHandler.messageReceived(NettyServer.java:188)
>     at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:80)
>     at org.apache.avro.ipc.NettyServer$NettyServerAvroHandler.handleUpstream(NettyServer.java:173)
>     at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
>     at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:783)
>     at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:302)
>     at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:321)