Flume >> mail # user >> HDFSsink failover error


Rahul Ravindran 2013-01-14, 21:42
Re: HDFSsink failover error
I assume that's only part of your config, as it's missing a source. If you
get rid of the sink processor, can you write to each HDFS sink
individually? (Comment one out at a time.)

- Connor
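For that isolation test, the trimmed agent config might look like the sketch below. This is a hypothetical fragment, not from the thread: the sink names and path are reused from the config quoted further down, while the source and channel definitions are assumed, since they are not shown in the thread.

```properties
# Sketch: run hdfs-sink1 alone by dropping the sink-group wiring.
# agent1.sinkgroups is left unset so the failover processor never engages.
agent1.sinks = hdfs-sink1
agent1.sinks.hdfs-sink1.channel = ch1
agent1.sinks.hdfs-sink1.type = hdfs
agent1.sinks.hdfs-sink1.hdfs.path = hdfs://ip-10-4-71-187.ec2.internal/user/br/shim/eventstream/event/host101/
# ...remaining hdfs-sink1.hdfs.* settings as in the original config...
# Then swap in hdfs-sink1-back (with its own path) to test the other namenode.
```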
On Mon, Jan 14, 2013 at 1:42 PM, Rahul Ravindran <[EMAIL PROTECTED]> wrote:

> Hi,
>    I am attempting to set up an HDFS sink such that when a namenode
> failover occurs (the active namenode is brought down and the standby
> namenode switches to active), the failover sink sends events to the new
> active namenode. Instead, I see an error that WRITE is not supported in
> standby state. Does this not count as a failure for the failover sink?
> Thanks,
> ~Rahul.
>
> My config is as follows:
>
> agent1.sinks.hdfs-sink1.channel = ch1
> agent1.sinks.hdfs-sink1.type = hdfs
> agent1.sinks.hdfs-sink1.hdfs.path = hdfs://ip-10-4-71-187.ec2.internal/user/br/shim/eventstream/event/host101/
> agent1.sinks.hdfs-sink1.hdfs.filePrefix = event
> agent1.sinks.hdfs-sink1.hdfs.writeFormat = Text
> agent1.sinks.hdfs-sink1.hdfs.rollInterval = 120
> agent1.sinks.hdfs-sink1.hdfs.rollCount = 0
> agent1.sinks.hdfs-sink1.hdfs.rollSize = 0
> agent1.sinks.hdfs-sink1.hdfs.fileType = DataStream
> agent1.sinks.hdfs-sink1.hdfs.batchSize = 1000
> agent1.sinks.hdfs-sink1.hdfs.txnEventSize = 1000
>
> agent1.sinks.hdfs-sink1-back.channel = ch1
> agent1.sinks.hdfs-sink1-back.type = hdfs
> agent1.sinks.hdfs-sink1-back.hdfs.path = hdfs://ip-10-110-69-240.ec2.internal/user/br/shim/eventstream/event/host101/
> agent1.sinks.hdfs-sink1-back.hdfs.filePrefix = event
> agent1.sinks.hdfs-sink1-back.hdfs.writeFormat = Text
> agent1.sinks.hdfs-sink1-back.hdfs.rollInterval = 120
> agent1.sinks.hdfs-sink1-back.hdfs.rollCount = 0
> agent1.sinks.hdfs-sink1-back.hdfs.rollSize = 0
> agent1.sinks.hdfs-sink1-back.hdfs.fileType = DataStream
> agent1.sinks.hdfs-sink1-back.hdfs.batchSize = 1000
> agent1.sinks.hdfs-sink1-back.hdfs.txnEventSize = 1000
>
> agent1.sinkgroups.failoverGroup1.sinks = hdfs-sink1 hdfs-sink1-back
> agent1.sinkgroups.failoverGroup1.processor.type = failover
> #higher number in priority is higher priority
> agent1.sinkgroups.failoverGroup1.processor.priority.hdfs-sink1 = 10
> agent1.sinkgroups.failoverGroup1.processor.priority.hdfs-sink1-back = 5
> #failover if failure detected for 10 seconds
> agent1.sinkgroups.failoverGroup1.processor.maxpenalty = 10000
>
>
> agent1.sinkgroups = failoverGroup1
>
> 14 Jan 2013 21:37:28,819 INFO  [hdfs-hdfs-sink2-call-runner-6]
> (org.apache.flume.sink.hdfs.BucketWriter.doOpen:208)  - Creating
> hdfs://ip-10-4-71-187.ec2.internal/....
> 14 Jan 2013 21:37:28,834 WARN
>  [SinkRunner-PollingRunner-FailoverSinkProcessor]
> (org.apache.flume.sink.hdfs.HDFSEventSink.process:456)  - HDFS IO error
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException):
> Operation category WRITE is not supported in state standby
>         at
> org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:87)
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.checkOperation(NameNode.java:1379)
>         at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOperation(FSNamesystem.java:762)
>         at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:1688)
>         at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:1669)
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:409)
>         at
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:205)
>         at
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44068)
>         at
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:898)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1693)
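Rahul's question is whether this StandbyException should trip the failover processor. As a rough mental model of the semantics his config asks for — a sketch only, not Flume's actual FailoverSinkProcessor code, and assuming any exception from a sink counts as a failure — the processor tries sinks in priority order and puts a failing sink into an exponentially growing cool-down, capped at maxpenalty:

```python
import time

class FailoverProcessor:
    """Sketch of failover-sink-processor semantics: try sinks in
    priority order; a failing sink is skipped for a backoff window
    that doubles per failure, capped at maxpenalty milliseconds."""

    def __init__(self, sinks, priorities, maxpenalty_ms=10000):
        # sinks: dict of name -> callable(event) that raises on failure
        self.sinks = sinks
        self.priorities = priorities
        self.maxpenalty = maxpenalty_ms / 1000.0
        self.retry_at = {name: 0.0 for name in sinks}  # earliest retry time
        self.penalty = {name: 1.0 for name in sinks}   # current backoff (s)

    def process(self, event):
        now = time.monotonic()
        # Highest priority first, skipping sinks still cooling down.
        for name in sorted(self.sinks, key=lambda n: -self.priorities[n]):
            if self.retry_at[name] > now:
                continue
            try:
                self.sinks[name](event)
                self.penalty[name] = 1.0  # success resets the backoff
                return name
            except Exception:
                self.retry_at[name] = now + self.penalty[name]
                self.penalty[name] = min(self.penalty[name] * 2,
                                         self.maxpenalty)
        raise RuntimeError("all sinks failed or are cooling down")
```

Under that model the RemoteException above would count as a failure and the processor would fall through to hdfs-sink1-back; whether Flume's HDFS sink actually surfaces this particular exception to the processor is exactly what the thread is probing.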
Further replies in this thread (collapsed in the archive):
Rahul Ravindran 2013-01-14, 22:46
Connor Woodson 2013-01-14, 23:05
Rahul Ravindran 2013-01-14, 23:13
Connor Woodson 2013-01-14, 23:51
Rahul Ravindran 2013-01-15, 00:00
Connor Woodson 2013-01-15, 00:25
Brock Noland 2013-01-15, 00:30