Re: Strange error on Datanodes
Jitendra Yadav 2013-12-03, 09:47
I did some analysis on the provided logs and confs.

Instead of one issue, I believe you have two issues going on.

1. java.net.SocketTimeoutException: 65000 millis timeout while waiting
   for channel to be ready for read. ch :
   java.nio.channels.SocketChannel[connected

2. 2013-12-02 13:12:06,586 ERROR
   org.apache.hadoop.hdfs.server.datanode.DataNode:
   brtlvlts0088co:50010:DataXceiver error processing WRITE_BLOCK
   operation  src: /10.238.10.43:54040 dest: /10.238.10.43:50010
   java.io.IOException: Premature EOF from inputStream
   at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:194)
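
For issue 1, the usual first step is to raise the HDFS socket timeouts;
for issue 2, DataXceiver WRITE_BLOCK failures ending in "Premature EOF"
are often a symptom of the datanode running out of transfer threads or
file descriptors. A minimal hdfs-site.xml sketch for the datanode side,
assuming stock defaults are in effect (the values are illustrative, not
tuned for this cluster):

<!-- hdfs-site.xml (datanodes); illustrative values, assuming defaults -->
<property>
  <name>dfs.socket.timeout</name>
  <!-- socket read timeout; default is 60000 ms -->
  <value>180000</value>
</property>
<property>
  <name>dfs.datanode.socket.write.timeout</name>
  <!-- datanode write timeout; default is 480000 ms -->
  <value>960000</value>
</property>
<property>
  <name>dfs.datanode.max.transfer.threads</name>
  <!-- DataXceiver thread cap; default is 4096. Older releases name
       this property dfs.datanode.max.xcievers. -->
  <value>8192</value>
</property>

It is also worth checking the datanodes' open-file limit (ulimit -n),
since an exhausted descriptor table produces the same Premature EOF
signature.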

On Mon, Dec 2, 2013 at 9:30 PM, Siddharth Tiwari
<[EMAIL PROTECTED]> wrote:

>
> Hi Jeet
> I am using CDH 4, but I have manually installed the NN and JT with HA, not
> using the CDH manager. I am attaching the NN logs here; I sent a mail just
> before this with the other files. This is frustrating. Why is it happening?
>
>
> ------------------------
> Cheers !!!
> Siddharth Tiwari
> Have a refreshing day !!!
> "Every duty is holy, and devotion to duty is the highest form of worship
> of God."
> "Maybe other people will try to limit me but I don't limit myself"
>
>
> ------------------------------
> Date: Mon, 2 Dec 2013 21:24:43 +0530
>
> Subject: Re: Strange error on Datanodes
> From: [EMAIL PROTECTED]
> To: [EMAIL PROTECTED]
>
> Which Hadoop distro are you using? It would be good if you could share the
> logs from the datanode on which the data block (blk_-2927699636194035560_63092)
> exists, and from the namenodes as well.
>
> Regards
> Jitendra
>
>
> On Mon, Dec 2, 2013 at 9:13 PM, Siddharth Tiwari <
> [EMAIL PROTECTED]> wrote:
>
> Hi Jeet
>
> I have a cluster of 25 nodes: 4 admin nodes and 21 datanodes, with
> 2 NNs, 2 JTs, 3 ZooKeepers, and 3 QJNs.
>
> If you could help me understand what kind of logs you want, I will
> provide them. Do you need hdfs-site.xml, core-site.xml, and
> mapred-site.xml?
>
>
> ------------------------------
> Date: Mon, 2 Dec 2013 21:09:03 +0530
> Subject: Re: Strange error on Datanodes
> From: [EMAIL PROTECTED]
> To: [EMAIL PROTECTED]
>
>
> Hi,
>
> Can you share some more logs from the datanodes? Could you please also
> share the conf and the cluster size?
>
> Regards
> Jitendra
>
>
> On Mon, Dec 2, 2013 at 8:49 PM, Siddharth Tiwari <
> [EMAIL PROTECTED]> wrote:
>
> Hi team
>
> I see the following errors on the datanodes. What is the reason for this,
> and how can it be resolved?
>
> 2013-12-02 13:11:36,441 WARN org.apache.hadoop.hdfs.DFSClient: DFSOutputStream ResponseProcessor exception  for block BP-1854340821-10.238.9.151-1385733655875:blk_-2927699636194035560_63092
> java.net.SocketTimeoutException: 65000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/10.238.10.43:54040 remote=/10.238.10.43:50010]
> at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:165)
> at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:156)
> at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:129)
> at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:117)
> at java.io.FilterInputStream.read(FilterInputStream.java:83)
> at java.io.FilterInputStream.read(FilterInputStream.java:83)
> at org.apache.hadoop.hdfs.protocol.HdfsProtoUtil.vintPrefixed(HdfsProtoUtil.java:169)
> at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:114)
> at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSOutputStream.java:694)
> 2013-12-02 13:12:06,572 INFO org.apache.hadoop.mapred.TaskLogsTruncater: Initializing logs' truncater with mapRetainSize=-1 and reduceRetainSize=-1
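
A note on the 65000 ms figure above: assuming stock DFSClient behavior,
the pipeline ack-reader timeout is the base 60 s client read timeout plus
a 5 s extension per datanode in the pipeline, so 60000 ms + 5000 ms × 1 =
65000 ms is consistent with a single-node pipeline (note that src and dest
are the same host, 10.238.10.43). If the datanode-side changes are not
enough, the read timeout can also be raised on the client side; a minimal
sketch for the client's hdfs-site.xml (dfs.socket.timeout is the legacy
name; newer releases also accept dfs.client.socket-timeout):

<!-- hdfs-site.xml (client side); illustrative value, default is 60000 ms -->
<property>
  <name>dfs.socket.timeout</name>
  <value>180000</value>
</property>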