Re: DataXceiver error processing WRITE_BLOCK operation src: /x.x.x.x:50373 dest: /x.x.x.x:50010
I am also having this issue and have tried a lot of solutions, but could not
solve it.

]# ulimit -n    # same result when running as root and as hdfs (the datanode user)
32768

]# cat /proc/sys/fs/file-nr
2080    0    8047008

]# lsof | wc -l
5157
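
For what it's worth, the file-nr numbers above read as allocated / allocated-but-unused / system-wide maximum, so the global descriptor table is nowhere near exhausted here. The limit the running DataNode actually inherited can also differ from what a shell reports; a quick way to check it (the pgrep pattern is an assumption about how the DataNode JVM is named):

]# DN_PID=$(pgrep -f 'datanode.DataNode' | head -1)   # locate the DataNode JVM
]# grep 'open files' /proc/$DN_PID/limits             # the ceiling the daemon really runs with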

Sometimes this issue even happens when the source and destination are the same node :(

I also think this issue is affecting my regionservers, which are
crashing all day long!!
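
Since regionservers are in the picture, one knob beyond ulimit may be worth checking: the DataNode's transceiver thread cap, dfs.datanode.max.transfer.threads (named dfs.datanode.max.xcievers before Hadoop 2.x), which HBase clusters commonly raise in hdfs-site.xml. A sketch with an illustrative value; whether it is the culprit here is only a guess:

<property>
  <name>dfs.datanode.max.transfer.threads</name>
  <value>8192</value>  <!-- each active DataXceiver consumes one of these threads -->
</property>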

Thanks,
Pablo
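
One pitfall relevant to the limits.conf advice quoted below: those limits are applied by PAM at login, so a DataNode restarted by an init script that bypasses PAM can keep running with the old ceiling. A fresh login session shows what the daemon user would actually get (assuming the daemon user is hdfs):

]# su - hdfs -c 'ulimit -Sn; ulimit -Hn'   # soft and hard nofile for a new login shell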

On 03/08/2013 06:42 AM, Dhanasekaran Anbalagan wrote:
> Hi Varun
>
> I believe this is not a ulimit issue.
>
>
> /etc/security/limits.conf
> # End of file
> *               -      nofile          1000000
> *               -      nproc           1000000
>
>
> Please guide me, guys, I want to fix this. Please share your thoughts on this
> DataXceiver error.
>
> Did I learn something today? If not, I wasted it.
>
>
> On Fri, Mar 8, 2013 at 3:50 AM, varun kumar <[EMAIL PROTECTED]> wrote:
>
>     Hi Dhana,
>
>     Increase the ulimit for all the datanodes.
>
>     If you are starting the service as the hadoop user, increase the ulimit
>     value for the hadoop user.
>
>     Make the changes in the following file.
>
>     /etc/security/limits.conf
>
>     Example:-
>     hadoop          soft    nofile          35000
>     hadoop          hard    nofile          35000
>
>     Regards,
>     Varun Kumar.P
>
>     On Fri, Mar 8, 2013 at 1:15 PM, Dhanasekaran Anbalagan
>     <[EMAIL PROTECTED]> wrote:
>
>         Hi Guys
>
>         I am frequently getting this error on my datanodes.
>
>         Please help me figure out what the exact problem is.
>
>         dvcliftonhera138:50010:DataXceiver error processing WRITE_BLOCK operation src: /172.16.30.138:50373 dest: /172.16.30.138:50010
>
>         java.net.SocketTimeoutException: 70000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/172.16.30.138:34280 remote=/172.16.30.140:50010]
>         at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:164)
>         at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:154)
>         at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:127)
>         at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:115)
>         at java.io.FilterInputStream.read(FilterInputStream.java:66)
>         at java.io.FilterInputStream.read(FilterInputStream.java:66)
>         at org.apache.hadoop.hdfs.protocol.HdfsProtoUtil.vintPrefixed(HdfsProtoUtil.java:160)
>         at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:405)
>         at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:98)
>         at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:66)
>         at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:189)
>         at java.lang.Thread.run(Thread.java:662)
>
>
>         dvcliftonhera138:50010:DataXceiver error processing WRITE_BLOCK operation src: /172.16.30.138:50531 dest: /172.16.30.138:50010
>
>         java.io.EOFException: while trying to read 65563 bytes
>         at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.readToBuf(BlockReceiver.java:408)
>         at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.readNextPacket(BlockReceiver.java:452)
>         at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:511)
>         at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:748)
>         at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:462)
>         at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java
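
On the numbers in the first trace: 70000 millis is consistent with HDFS's default 60-second read timeout plus a 5-second extension per downstream node in a two-node replication pipeline (60000 + 2 x 5000 ms). If the root cause turns out to be slow disks or a congested network rather than descriptors, the socket timeouts can be raised in hdfs-site.xml; a sketch, assuming Hadoop 2.x property names and purely illustrative values:

<property>
  <name>dfs.client.socket-timeout</name>
  <value>120000</value>   <!-- read timeout in ms; dfs.socket.timeout in older releases -->
</property>
<property>
  <name>dfs.datanode.socket.write.timeout</name>
  <value>120000</value>   <!-- write timeout in ms -->
</property>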