Flume >> mail # user >> HDFSsink failover error


Thread:
Rahul Ravindran 2013-01-14, 21:42
Connor Woodson 2013-01-14, 22:28
Rahul Ravindran 2013-01-14, 22:46
Connor Woodson 2013-01-14, 23:05
Rahul Ravindran 2013-01-14, 23:13
Connor Woodson 2013-01-14, 23:51
Re: HDFSsink failover error
Here is the entire log file after I restart Flume.
________________________________
 From: Connor Woodson <[EMAIL PROTECTED]>
To: "[EMAIL PROTECTED]" <[EMAIL PROTECTED]>; Rahul Ravindran <[EMAIL PROTECTED]>
Sent: Monday, January 14, 2013 3:51 PM
Subject: Re: HDFSsink failover error
 

Can you look at the full log file and post the above section as well as 5-10 lines above/below it (you don't have to post that stack trace if you don't want)? That error, while it should definitely be logged, should be followed by some error lines giving context as to what is going on. And if that is the end of the log file then...well, that just shouldn't happen, as there are several different places that would have produced log messages as that exception propagates.

- Connor

On Mon, Jan 14, 2013 at 3:13 PM, Rahul Ravindran <[EMAIL PROTECTED]> wrote:

The writes to the backup were successful when I attempted to write to it directly, but not via the failover sink processor. I did not see the warning that you mentioned about "Sink hdfs-sink1 failed".
>
>
>The full log trace is below:
>
>
>14 Jan 2013 22:48:24,727 INFO  [hdfs-hdfs-sink2-call-runner-1] (org.apache.flume.sink.hdfs.BucketWriter.doOpen:208)  - Creating hdfs://ip-10-4-71-187.ec2.internal/user/br/shim/eventstream/event/host102//event.1358203448551.tmp
>14 Jan 2013 22:48:24,739 WARN  [SinkRunner-PollingRunner-FailoverSinkProcessor] (org.apache.flume.sink.hdfs.HDFSEventSink.process:456)  - HDFS IO error
>org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException): Operation category WRITE is not supported in state standby
>        at org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:87)
>        at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.checkOperation(NameNode.java:1379)
>        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOperation(FSNamesystem.java:762)
>        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:1688)
>        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:1669)
>        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:409)
>        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:205)
>        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44068)
>        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:898)
>        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1693)
>        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1689)
>        at java.security.AccessController.doPrivileged(Native Method)
>        at javax.security.auth.Subject.doAs(Subject.java:396)
>        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1332)
>        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1687)
>
>
>        at org.apache.hadoop.ipc.Client.call(Client.java:1160)
>        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
>        at $Proxy11.create(Unknown Source)
>        at sun.reflect.GeneratedMethodAccessor8.invoke(Unknown Source)
>        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>        at java.lang.reflect.Method.invoke(Method.java:616)
>        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
>        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
>        at $Proxy11.create(Unknown Source)
>        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.create(ClientNamenodeProtocolTranslatorPB.java:192)
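
[Editor's note] The RemoteException above means the HDFS client issued a write to a NameNode that is in standby state; with HDFS HA, clients normally target the logical nameservice (configured via dfs.nameservices in hdfs-site.xml) rather than a specific NameNode host, so the failover proxy provider can retry against the active NameNode. A minimal sketch of a Flume failover sink group along those lines is below — the agent name a1, the sink names, and the mycluster nameservice are placeholders for illustration, not values from this thread:

```properties
# Group the two HDFS sinks under a failover processor:
# events go to the higher-priority sink, and fall back to
# the lower-priority one when it fails.
a1.sinkgroups = g1
a1.sinkgroups.g1.sinks = hdfs-sink1 hdfs-sink2
a1.sinkgroups.g1.processor.type = failover
a1.sinkgroups.g1.processor.priority.hdfs-sink1 = 10
a1.sinkgroups.g1.processor.priority.hdfs-sink2 = 5
# Max backoff (ms) before a failed sink is retried.
a1.sinkgroups.g1.processor.maxpenalty = 10000

# Point the sink at the HA nameservice, not a single NameNode
# host, so a standby NameNode does not cause write failures.
a1.sinks.hdfs-sink1.hdfs.path = hdfs://mycluster/user/br/shim/eventstream/event/%{host}
```

Note that the path in the log above (hdfs://ip-10-4-71-187.ec2.internal/...) names one NameNode directly, which is consistent with the StandbyException seen here when that node is not the active one.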
Connor Woodson 2013-01-15, 00:25
Brock Noland 2013-01-15, 00:30