Re: hbase coprocessor output error
Hello,

With only this information to go on, it appears your coprocessor took a
long time to respond (64.7 seconds) and the client disconnected (the
ChannelClosedException on ensureWriteOpen means this). Did you set the
long timeout in your client-side configuration? I'd guess not, since the
client went away.

Anyway, a coprocessor should return a timely answer. 60 seconds is very
long.
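
For reference, here is a minimal sketch of what I mean by setting the long
timeout on the client side (0.94-era Java client API; the table name and the
commented-out coprocessor call are placeholders, not taken from your code):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;

public class LongRpcTimeoutClient {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    // Raise the client-side RPC timeout (in milliseconds) so a slow
    // coprocessor call is not abandoned after the 60 second default.
    conf.setInt("hbase.rpc.timeout", 600000); // 10 minutes
    HTable table = new HTable(conf, "mytable"); // placeholder table name
    try {
      // ... invoke the coprocessor endpoint here with this long-timeout
      // conf, e.g. via table.coprocessorExec(...) ...
    } finally {
      table.close();
    }
  }
}

Setting hbase.rpc.timeout only in the region server's hbase-site.xml does not
help if the Configuration the client builds its connection from still carries
the 60 second default.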

On Monday, October 8, 2012, 夜半琴声 wrote:

> Hi Andrew,
>    when I worked with the coprocessor, I got the following exception:
> 2012-10-08 18:06:43,543 WARN org.apache.hadoop.ipc.HBaseServer:
> (responseTooSlow):
> {"processingtimems":64709,"call":"execCoprocessor([B@8b739c,
> getAggregationModel(), rpc version=1, client version=29,
> methodsFingerPrint=54742778","client":"10.1.1.192:34126
> ","starttimems":1349690738830,"queuetimems":0,"class":"HRegionServer","responsesize":0,"method":"execCoprocessor"}
> 2012-10-08 18:06:43,599 WARN org.apache.hadoop.ipc.HBaseServer: IPC Server
> Responder, call execCoprocessor([B@8b739c, getAggregationModel(), rpc
> version=1, client version=0, methodsFingerPrint=0), rpc version=1, client
> version=29, methodsFingerPrint=54742778 from 10.1.1.192:34126: output
> error
> 2012-10-08 18:06:43,600 WARN org.apache.hadoop.ipc.HBaseServer: IPC Server
> handler 20 on 60020 caught: java.nio.channels.ClosedChannelException
>         at
> sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:126)
>         at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>         at
> org.apache.hadoop.hbase.ipc.HBaseServer.channelIO(HBaseServer.java:1710)
>         at
> org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
>         at
> org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
>         at
> org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
>         at
> org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
>         at
> org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)
>
> and I've searched a lot on the Internet. It didn't help even though I set
> the configuration like the following:
>
> <property>
>   <name>hbase.rpc.timeout</name>
>   <value>3600000</value>
> </property>
> <property>
>   <name>hbase.regionserver.lease.period</name>
>   <value>3600000</value>
> </property>
> <property>
>   <name>ipc.socket.timeout</name>
>   <value>3600000</value>
> </property>
>
> I've read some pieces of the source code:
>
>   HBaseClient {
>     NetUtils.connect(this.socket, remoteId.getAddress(),
>         getSocketTimeout(conf));
>   }
>
> I don't know whether this is the cause. I'd appreciate your answer, thank
> you!
>
>
>

--
Best regards,

   - Andy

Problems worthy of attack prove their worth by hitting back. - Piet Hein
(via Tom White)