HDFS >> mail # user >> Re: DataXceiver error processing WRITE_BLOCK operation src: /x.x.x.x:50373 dest: /x.x.x.x:50010


Thread:
  Dhanasekaran Anbalagan 2013-03-08, 09:42
  Pablo Musa 2013-03-08, 17:28
  Abdelrahman Shettia 2013-03-09, 00:57
Re: DataXceiver error processing WRITE_BLOCK operation src: /x.x.x.x:50373 dest: /x.x.x.x:50010
I am having some GC pauses (around 70 seconds), but I don't think those
could cause a 480-second timeout. And it's even stranger that it happens
from one datanode to ITSELF.
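For reference, the 480-second figure matches the DataNode's default socket write timeout, which is configurable. A minimal sketch of raising it in hdfs-site.xml while debugging, assuming a stock Hadoop 2.x configuration (verify the property name against your version's hdfs-default.xml):

```xml
<!-- hdfs-site.xml: the DataNode write timeout defaults to 480000 ms
     (8 minutes); the value is in milliseconds. Doubling it here is only
     a diagnostic measure, not a fix for the underlying stall. -->
<property>
  <name>dfs.datanode.socket.write.timeout</name>
  <value>960000</value>
</property>
```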

 > Socket is ready for receiving, but client closed abnormally. so you
generally got this error.

What would "abnormally" mean in this case?

 > xcievers : 4096 is enough, and I don't think you pasted a full stack
exception.

The full stack trace follows below.
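For context, the xcievers limit discussed above is set in hdfs-site.xml. A minimal sketch, assuming Hadoop 2.x, where the historically misspelled dfs.datanode.max.xcievers key was superseded by dfs.datanode.max.transfer.threads:

```xml
<!-- hdfs-site.xml: cap on concurrent DataXceiver transfer threads per
     DataNode. Older releases read this as dfs.datanode.max.xcievers. -->
<property>
  <name>dfs.datanode.max.transfer.threads</name>
  <value>4096</value>
</property>
```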

Thanks very much for the help,
Pablo Musa

2013-03-12 09:41:52,779 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src:
/172.17.2.18:50010, dest: /172.17.2.18:43364, bytes: 66564, op:
HDFS_READ, cliID: DFSClient_NONMAPREDUCE_1549283955_26, offset:
66393088, srvID: DS-229334310-172.17.2.18-50010-1328651636364, blockid:
BP-43236042-172.17.2.10-1362490844340:blk_7228654423351524558_25176577,
duration: 24309480
2013-03-12 09:41:52,810 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src:
/172.17.2.18:50010, dest: /172.17.2.18:43364, bytes: 66564, op:
HDFS_READ, cliID: DFSClient_NONMAPREDUCE_1549283955_26, offset:
66458624, srvID: DS-229334310-172.17.2.18-50010-1328651636364, blockid:
BP-43236042-172.17.2.10-1362490844340:blk_7228654423351524558_25176577,
duration: 24791908

...

2013-03-12 11:57:54,176 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src:
/172.17.2.18:50010, dest: /172.17.2.18:45037, bytes: 66564, op:
HDFS_READ, cliID: DFSClient_NONMAPREDUCE_1549283955_26, offset: 2755072,
srvID: DS-229334310-172.17.2.18-50010-1328651636364, blockid:
BP-43236042-172.17.2.10-1362490844340:blk_7228654423351524558_25176577,
duration: 26533296

...

2013-03-12 12:12:56,524 INFO
org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner:
Verification succeeded for
BP-43236042-172.17.2.10-1362490844340:blk_6121120387190865802_12522001
2013-03-12 12:12:56,844 INFO
org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner:
Verification succeeded for
BP-43236042-172.17.2.10-1362490844340:blk_7798078179913116741_9709757
2013-03-12 12:12:57,412 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: exception:
java.net.SocketTimeoutException: 480000 millis timeout while waiting for
channel to be ready for write. ch :
java.nio.channels.SocketChannel[connected local=/172.17.2.18:50010
remote=/172.17.2.18:45063]
         at
org.apache.hadoop.net.SocketIOWithTimeout.waitForIO(SocketIOWithTimeout.java:247)
         at
org.apache.hadoop.net.SocketOutputStream.waitForWritable(SocketOutputStream.java:166)
         at
org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:214)
         at
org.apache.hadoop.hdfs.server.datanode.BlockSender.sendPacket(BlockSender.java:510)
         at
org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:673)
         at
org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:344)
         at
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:92)
         at
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:64)
         at
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:221)
         at java.lang.Thread.run(Thread.java:722)
2013-03-12 12:12:57,412 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src:
/172.17.2.18:50010, dest: /172.17.2.18:45063, bytes: 594432, op:
HDFS_READ, cliID: DFSClient_NONMAPREDUCE_1549283955_26, offset: 2886144,
srvID: DS-229334310-172.17.2.18-50010-1328651636364, blockid:
BP-43236042-172.17.2.10-1362490844340:blk_7228654423351524558_25176577,
duration: 480311786486
2013-03-12 12:12:57,412 WARN
org.apache.hadoop.hdfs.server.datanode.DataNode:
DatanodeRegistration(172.17.2.18,
storageID=DS-229334310-172.17.2.18-50010-1328651636364, infoPort=50075,
ipcPort=50020,
storageInfo=lv=-40;cid=CID-26cd999e-460a-4dbc-b940-9250a76930a8;nsid=276058127;c=1362491004838):Got
exception while serving
BP-43236042-172.17.2.10-1362490844340:blk_7228654423351524558_25176577
to /172.17.2.18:45063
java.net.SocketTimeoutException: 480000 millis timeout while waiting for
channel to be ready for write. ch :
java.nio.channels.SocketChannel[connected local=/172.17.2.18:50010
remote=/172.17.2.18:45063]
         at
org.apache.hadoop.net.SocketIOWithTimeout.waitForIO(SocketIOWithTimeout.java:247)
         at
org.apache.hadoop.net.SocketOutputStream.waitForWritable(SocketOutputStream.java:166)
         at
org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:214)
         at
org.apache.hadoop.hdfs.server.datanode.BlockSender.sendPacket(BlockSender.java:510)
         at
org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:673)
         at
org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:344)
         at
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:92)
         at
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:64)
         at
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:221)
         at java.lang.Thread.run(Thread.java:722)
2013-03-12 12:12:57,412 ERROR
org.apache.hadoop.hdfs.server.datanode.DataNode:
PSLBHDN002:50010:DataXceiver error processing READ_BLOCK operation src:
/172.17.2.18:45063 dest: /172.17.2.18:50010
java.net.SocketTimeoutException: 480000 millis timeout while waiting for
channel to be ready for write. ch :
java.nio.channels.SocketChannel[connected local=/172.17.2.18:50010
remote=/172.17.2.18:45063]
         at
org.apache.hadoop.net.SocketIOWithTimeout.waitForIO(SocketIOWithTimeout.java:247)
         at
org.apache.hadoop.net.SocketOutputStream.waitForWritable(SocketOutputStream.java:166)
         at
org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:214)
         at
org.apache.hadoop.hdfs.server.datanode.BlockSender.sendPacket(BlockSender.java:510)
         at
org.apache.hadoop.hdfs.server.data
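One detail worth noting in the clienttrace line just before the ERROR: its duration field appears to be in nanoseconds (an assumption based on its magnitude), and converting it lines up almost exactly with the 480000 ms timeout from the exception, i.e. the read stalled for the full timeout window rather than failing partway. A quick sanity check in plain Python:

```python
# Values copied from the log above:
duration_ns = 480311786486   # "duration: 480311786486" (clienttrace line)
timeout_ms = 480000          # "480000 millis timeout" (the exception)

# Assuming the duration is nanoseconds, convert both to seconds and compare.
duration_s = duration_ns / 1e9
timeout_s = timeout_ms / 1000
print(f"{duration_s:.1f} s elapsed vs {timeout_s:.1f} s timeout")
# prints "480.3 s elapsed vs 480.0 s timeout"
```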