HDFSsink failover error (Flume user mailing list)


Rahul Ravindran 2013-01-14, 21:42
Connor Woodson 2013-01-14, 22:28
Rahul Ravindran 2013-01-14, 22:46
Connor Woodson 2013-01-14, 23:05

Rahul Ravindran 2013-01-14
Re: HDFSsink failover error
The writes to the backup were successful when I attempted to write to it directly, but not via the failover sink processor. I did not see the warning you mentioned about "Sink hdfs-sink1 failed".
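
For context, the relevant part of my sink group configuration looks roughly like this (the agent name, priorities, and maxpenalty values below are placeholders rather than my exact settings):

    # failover sink processor: hdfs-sink1 is preferred, hdfs-sink2 is the backup
    agent.sinkgroups = g1
    agent.sinkgroups.g1.sinks = hdfs-sink1 hdfs-sink2
    agent.sinkgroups.g1.processor.type = failover
    agent.sinkgroups.g1.processor.priority.hdfs-sink1 = 10
    agent.sinkgroups.g1.processor.priority.hdfs-sink2 = 5
    agent.sinkgroups.g1.processor.maxpenalty = 10000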

The full log trace is below:

14 Jan 2013 22:48:24,727 INFO  [hdfs-hdfs-sink2-call-runner-1] (org.apache.flume.sink.hdfs.BucketWriter.doOpen:208)  - Creating hdfs://ip-10-4-71-187.ec2.internal/user/br/shim/eventstream/event/host102//event.1358203448551.tmp
14 Jan 2013 22:48:24,739 WARN  [SinkRunner-PollingRunner-FailoverSinkProcessor] (org.apache.flume.sink.hdfs.HDFSEventSink.process:456)  - HDFS IO error
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException): Operation category WRITE is not supported in state standby
        at org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:87)
        at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.checkOperation(NameNode.java:1379)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOperation(FSNamesystem.java:762)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:1688)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:1669)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:409)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:205)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44068)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:898)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1693)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1689)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1332)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1687)

        at org.apache.hadoop.ipc.Client.call(Client.java:1160)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
        at $Proxy11.create(Unknown Source)
        at sun.reflect.GeneratedMethodAccessor8.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:616)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
        at $Proxy11.create(Unknown Source)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.create(ClientNamenodeProtocolTranslatorPB.java:192)
        at org.apache.hadoop.hdfs.DFSOutputStream.<init>(DFSOutputStream.java:1298)
        at org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:1317)
        at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1215)
        at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1173)
        at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:272)
        at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:261)
        at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:78)
        at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:805)
        at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:786)
        at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:685)
        at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:674)
        at org.apache.flume.sink.hdfs.HDFSDataStream.open(HDFSDataStream.java:60)
        at org.apache.flume.sink.hdfs.BucketWriter.doOpen(BucketWriter.java:209)
        at org.apache.flume.sink.hdfs.BucketWriter.access$000(BucketWriter.java:53)
        at org.apache.flume.sink.hdfs.BucketWriter$1.run(BucketWriter.java:172)
        at org.apache.flume.sink.hdfs.BucketWriter$1.run(BucketWriter.java:170)
        at org.apache.flume.sink.hdfs.BucketWriter.runPrivileged(BucketWriter.java:143)
        at org.apache.flume.sink.hdfs.BucketWriter.open(BucketWriter.java:170)
        at org.apache.flume.sink.hdfs.BucketWriter.append(BucketWriter.java:364)
        at org.apache.flume.sink.hdfs.HDFSEventSink$2.call(HDFSEventSink.java:729)
        at org.apache.flume.sink.hdfs.HDFSEventSink$2.call(HDFSEventSink.java:727)
        at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
        at java.util.concurrent.FutureTask.run(FutureTask.java:166)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
        at java.lang.Thread.run(Thread.java:679)
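
One thing I notice from the trace: the create call is going straight to ip-10-4-71-187, which appears to be the standby NameNode at that moment. If the HDFS client itself is supposed to fail over between NameNodes, my understanding is that the sink path should use a logical nameservice backed by an HA client config in hdfs-site.xml, along these lines (the nameservice and host names here are placeholders, not our actual values):

    <property>
      <name>dfs.nameservices</name>
      <value>mycluster</value>
    </property>
    <property>
      <name>dfs.ha.namenodes.mycluster</name>
      <value>nn1,nn2</value>
    </property>
    <property>
      <name>dfs.namenode.rpc-address.mycluster.nn1</name>
      <value>nn1-host.example.com:8020</value>
    </property>
    <property>
      <name>dfs.namenode.rpc-address.mycluster.nn2</name>
      <value>nn2-host.example.com:8020</value>
    </property>
    <property>
      <name>dfs.client.failover.proxy.provider.mycluster</name>
      <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>

The sink would then write to hdfs://mycluster/... rather than to a specific host.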
________________________________
 From: Connor Woodson <[EMAIL PROTECTED]>
To: Rahul Ravindran <[EMAIL PROTECTED]>
Cc: "[EMAIL PROTECTED]" <[EMAIL PROTECTED]>
Sent: Monday, January 14, 2013 3:05 PM
Subject: Re: HDFSsink failover error
 

So you are able to write normally to the backup HDFS servers? And the error you got occurred when you were trying to write to the primary server? Was that error expected (it looks like it's due to how your Hadoop cluster is set up)?

The log lines you pasted make it look like there was a problem with your hdfs-sink1 (like I said above, maybe your Hadoop cluster is set up wrong); what should have happened is that the event was then written to the backup server. Below the stack trace there should probably have been another WARN statement saying "Sink h
Connor Woodson 2013-01-14, 23:51
Rahul Ravindran 2013-01-15, 00:00
Connor Woodson 2013-01-15, 00:25
Brock Noland 2013-01-15, 00:30