Re: HDFSsink failover error
So you are able to write normally to the backup HDFS server? And the error
you got happened when you were trying to write to the normal server? Was
that error expected (it looks like it's due to how your Hadoop is set up)?

The log lines you pasted make it look like there was a problem with your
hdfs-sink1 (like I said above, maybe your Hadoop cluster is set up wrong);
what should have happened next is that the event got written to the backup
server. Below the stack trace there should probably have been another WARN
statement saying "Sink hdfs-sink1 failed and has been sent to the failover
list". And if hdfs-sink1-back was then also unable to write, you would see
an EventDeliveryException thrown in your log.
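
For reference, a failover sink group covering those two sinks is normally
declared along these lines (just a sketch; the group name sg1 and the
priority numbers here are made up):

agent1.sinkgroups = sg1
agent1.sinkgroups.sg1.sinks = hdfs-sink1 hdfs-sink1-back
agent1.sinkgroups.sg1.processor.type = failover
agent1.sinkgroups.sg1.processor.priority.hdfs-sink1 = 10
agent1.sinkgroups.sg1.processor.priority.hdfs-sink1-back = 5
agent1.sinkgroups.sg1.processor.maxpenalty = 10000

The higher-priority sink is tried first; when it fails it is penalized for
up to maxpenalty milliseconds and the next sink in priority order takes over.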

If there isn't anything else in the log, and the event wasn't written to
the backup server, then that would be a bug.

- Connor
On Mon, Jan 14, 2013 at 2:46 PM, Rahul Ravindran <[EMAIL PROTECTED]> wrote:

> Here is the full config. I swapped the priorities on the sink processor
> after performing the namenode failover, and the writes to the newly active
> namenode were then successful.
>
> agent1.channels.ch1.type = FILE
> agent1.channels.ch1.checkpointDir = /flume_runtime/checkpoint
> agent1.channels.ch1.dataDirs = /flume_runtime/data
>
>
> agent1.channels.ch2.type = FILE
> agent1.channels.ch2.checkpointDir = /flume_runtime/checkpoint2
> agent1.channels.ch2.dataDirs = /flume_runtime/data2
>
>
>
> # Define an Avro source called avro-source1 on agent1 and tell it
>
> # to bind to 0.0.0.0:41414. Connect it to channel ch1.
>
> agent1.sources.avro-source1.channels = ch1
> agent1.sources.avro-source1.type = avro
> agent1.sources.avro-source1.bind = 0.0.0.0
> agent1.sources.avro-source1.port = 4545
>
>
>
> agent1.sources.avro-source2.channels = ch2
> agent1.sources.avro-source2.type = avro
> agent1.sources.avro-source2.bind = 0.0.0.0
> agent1.sources.avro-source2.port = 4546
>
>
> agent1.sinks.hdfs-sink1.channel = ch1
> agent1.sinks.hdfs-sink1.type = hdfs
> agent1.sinks.hdfs-sink1.hdfs.path = hdfs://ip-10-4-71-187.ec2.internal/user/br/shim/eventstream/event/host101/
> agent1.sinks.hdfs-sink1.hdfs.filePrefix = event
> agent1.sinks.hdfs-sink1.hdfs.writeFormat = Text
> agent1.sinks.hdfs-sink1.hdfs.rollInterval = 120
> agent1.sinks.hdfs-sink1.hdfs.rollCount = 0
> agent1.sinks.hdfs-sink1.hdfs.rollSize = 0
> agent1.sinks.hdfs-sink1.hdfs.fileType = DataStream
> agent1.sinks.hdfs-sink1.hdfs.batchSize = 1000
> agent1.sinks.hdfs-sink1.hdfs.txnEventSize = 1000
>
> agent1.sinks.hdfs-sink2.channel = ch2
> agent1.sinks.hdfs-sink2.type = hdfs
> agent1.sinks.hdfs-sink2.hdfs.path = hdfs://ip-10-4-71-187.ec2.internal/user/br/shim/eventstream/event/host102/
> agent1.sinks.hdfs-sink2.hdfs.filePrefix = event
> agent1.sinks.hdfs-sink2.hdfs.writeFormat = Text
> agent1.sinks.hdfs-sink2.hdfs.rollInterval = 120
> agent1.sinks.hdfs-sink2.hdfs.rollCount = 0
> agent1.sinks.hdfs-sink2.hdfs.rollSize = 0
> agent1.sinks.hdfs-sink2.hdfs.fileType = DataStream
> agent1.sinks.hdfs-sink2.hdfs.batchSize = 1000
> agent1.sinks.hdfs-sink2.hdfs.txnEventSize = 1000
>
>
> agent1.sinks.hdfs-sink1-back.channel = ch1
> agent1.sinks.hdfs-sink1-back.type = hdfs
> agent1.sinks.hdfs-sink1-back.hdfs.path = hdfs://ip-10-110-69-240.ec2.internal/user/br/shim/eventstream/event/host101/
> agent1.sinks.hdfs-sink1-back.hdfs.filePrefix = event
> agent1.sinks.hdfs-sink1-back.hdfs.writeFormat = Text
> agent1.sinks.hdfs-sink1-back.hdfs.rollInterval = 120
> agent1.sinks.hdfs-sink1-back.hdfs.rollCount = 0
> agent1.sinks.hdfs-sink1-back.hdfs.rollSize = 0
> agent1.sinks.hdfs-sink1-back.hdfs.fileType = DataStream
> agent1.sinks.hdfs-sink1-back.hdfs.batchSize = 1000
> agent1.sinks.hdfs-sink1-back.hdfs.txnEventSize = 1000
>
> agent1.sinks.hdfs-sink2-back.channel = ch2
> agent1.sinks.hdfs-sink2-back.type = hdfs
> agent1.sinks.hdfs-sink2-back.hdfs.path = hdfs://ip-10-110-69-240.ec2.internal/user/br/shim/eventstream/event/host102/
> agent1.sinks.hdfs-sink2-back.hdfs.filePrefix = event
> agent1.sinks.hdfs-sink2-back.hdfs.writeFormat = Text