Re: FileChannel error
How large is /local/flume/file-channel/flume-log-sink-dev/data/log-884? Would
you be willing to share the file with me (off list) so I could take a look
at the corruption?
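
For what it's worth, the caused-by is the generic protobuf "truncated
message" error: the channel's transaction records are written as
length-delimited protobuf messages (that's the parseDelimitedFrom in
your trace), so a record that was only half-written when the process
died fails replay in exactly this way. A minimal sketch of the failure
mode, using protobuf-java's bundled DescriptorProto purely as a
stand-in message:

import com.google.protobuf.DescriptorProtos.DescriptorProto;
import com.google.protobuf.InvalidProtocolBufferException;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.util.Arrays;

// Illustration only: delimited protobuf parsing raises the same
// "input ended unexpectedly" error when the last record is cut short,
// which is what a crash mid-write leaves at the end of a data file.
public class TruncationDemo {
    public static void main(String[] args) throws Exception {
        DescriptorProto record = DescriptorProto.newBuilder()
                .setName("some-transaction-record").build();

        ByteArrayOutputStream out = new ByteArrayOutputStream();
        record.writeDelimitedTo(out); // varint length prefix + payload
        byte[] full = out.toByteArray();

        // Chop off the last few bytes to simulate a half-written record.
        byte[] cut = Arrays.copyOf(full, full.length - 3);

        try {
            DescriptorProto.parseDelimitedFrom(new ByteArrayInputStream(cut));
        } catch (InvalidProtocolBufferException e) {
            System.out.println(e.getMessage()); // "...ended unexpectedly..."
        }
    }
}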

Brock
On Fri, Mar 29, 2013 at 1:02 PM, Andrew Jones <andrew+[EMAIL PROTECTED]> wrote:

> Hi,
>
> I restarted my flume process, and I am now getting the following error in
> my logs:
>
> 29 Mar 2013 17:56:13,756 ERROR [lifecycleSupervisor-1-0]
> (org.apache.flume.channel.file.LogFile$SequentialReader.next:493)  -
> Encountered non op-record at 1357908629 3e in
> /var/run/flume/file-channel/flume-log-sink-dev/data/log-883
> 29 Mar 2013 17:56:13,760 ERROR [lifecycleSupervisor-1-0]
> (org.apache.flume.channel.file.Log.replay:410)  - Failed to initialize
> Log on [channel=channel]
> java.io.IOException: Unable to read next Transaction from log file
> /local/flume/file-channel/flume-log-sink-dev/data/log-884 at offset
> 720893818
>        at org.apache.flume.channel.file.LogFile$SequentialReader.next(LogFile.java:502)
>        at org.apache.flume.channel.file.ReplayHandler.next(ReplayHandler.java:364)
>        at org.apache.flume.channel.file.ReplayHandler.replayLog(ReplayHandler.java:264)
>        at org.apache.flume.channel.file.Log.doReplay(Log.java:435)
>        at org.apache.flume.channel.file.Log.replay(Log.java:382)
>        at org.apache.flume.channel.file.FileChannel.start(FileChannel.java:303)
>        at org.apache.flume.lifecycle.LifecycleSupervisor$MonitorRunnable.run(LifecycleSupervisor.java:236)
>        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>        at java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:351)
>        at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:178)
>        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:165)
>        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:267)
>        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
>        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
>        at java.lang.Thread.run(Thread.java:679)
> Caused by: com.google.protobuf.InvalidProtocolBufferException: While
> parsing a protocol message, the input ended unexpectedly in the middle
> of a field.  This could mean either than the input has been truncated
> or that an embedded message misreported its own length.
>        at com.google.protobuf.InvalidProtocolBufferException.truncatedMessage(InvalidProtocolBufferException.java:49)
>        at com.google.protobuf.CodedInputStream.readRawVarint32(CodedInputStream.java:402)
>        at com.google.protobuf.AbstractMessageLite$Builder.mergeDelimitedFrom(AbstractMessageLite.java:280)
>        at com.google.protobuf.AbstractMessage$Builder.mergeDelimitedFrom(AbstractMessage.java:760)
>        at com.google.protobuf.AbstractMessageLite$Builder.mergeDelimitedFrom(AbstractMessageLite.java:288)
>        at com.google.protobuf.AbstractMessage$Builder.mergeDelimitedFrom(AbstractMessage.java:752)
>        at org.apache.flume.channel.file.proto.ProtosFactory$TransactionEventFooter.parseDelimitedFrom(ProtosFactory.java:4559)
>        at org.apache.flume.channel.file.TransactionEventRecord.fromByteArray(TransactionEventRecord.java:203)
>        at org.apache.flume.channel.file.LogFileV3$SequentialReader.doNext(LogFileV3.java:344)
>        at org.apache.flume.channel.file.LogFile$SequentialReader.next(LogFile.java:498)
>        ... 14 more
>
> So it seems to have corrupted the log somehow and is unable to
> recover. How can I get past this, either by removing the offending
> transaction or by making it recover? I don't want to lose all the
> events in the log.
>
> Using Flume 1.3.1, with an Avro source and an HDFS sink.
>
> Thanks,
> Andrew
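
Assuming the problem is just a truncated tail (a half-written record at
the end of log-884) and you can live with losing whatever sits at or
after the reported offset, one way past it is to back up the data and
checkpoint directories and then truncate log-884 just before offset
720893818, so replay hits a clean end-of-file instead of the corrupt
record. This is not official Flume tooling, just a one-off sketch
(TruncateTail is a made-up name; run it with the agent stopped):

import java.io.IOException;
import java.io.RandomAccessFile;

// Use at your own risk: cut the corrupt tail off a file channel data
// file so replay stops cleanly at end-of-file. Everything at or after
// the offset is discarded -- back the directory up first.
public class TruncateTail {
    public static void main(String[] args) throws IOException {
        String path =
            "/local/flume/file-channel/flume-log-sink-dev/data/log-884";
        long badOffset = 720893818L; // offset from the replay error above

        RandomAccessFile raf = new RandomAccessFile(path, "rw");
        try {
            System.out.println("length before: " + raf.length());
            raf.setLength(badOffset); // drop bytes from badOffset onwards
        } finally {
            raf.close();
        }
    }
}

If replay still fails afterwards because the checkpoint references
transactions past the cut, deleting the checkpoint files should force a
full (slower) replay of the remaining data files. Either way I'd
rehearse the whole procedure on a copy of the directories first.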
--
Apache MRUnit - Unit testing MapReduce - http://mrunit.apache.org