MapReduce, mail # user - Strange error on Datanodes


Re: Strange error on Datanodes
Jitendra Yadav 2013-12-03, 15:19
Set the parameter below in mapred-site.xml:

<property>
  <name>mapred.task.timeout</name>
  <value>1800000</value>
</property>
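
For reference (not stated in the thread): the value is in milliseconds, so 1800000 corresponds to a 30-minute task timeout. A minimal sketch with that spelled out, assuming MRv1 (JobTracker); on MRv2/YARN the equivalent key is mapreduce.task.timeout:

<!-- Task timeout in milliseconds; 1800000 ms = 30 minutes.
     Key shown is for MRv1; on MRv2/YARN use mapreduce.task.timeout. -->
<property>
  <name>mapred.task.timeout</name>
  <value>1800000</value>
</property>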

Thanks

On Tue, Dec 3, 2013 at 8:16 PM, Siddharth Tiwari
<[EMAIL PROTECTED]> wrote:

> Thanks Jeet
>
> Can you suggest the parameter that controls the timeout value?
>
>
> ------------------------
> Cheers !!!
> Siddharth Tiwari
> Have a refreshing day !!!
> "Every duty is holy, and devotion to duty is the highest form of worship
> of God."
> "Maybe other people will try to limit me but I don't limit myself"
>
>
> ------------------------------
> Date: Tue, 3 Dec 2013 15:38:50 +0530
>
> Subject: Re: Strange error on Datanodes
> From: [EMAIL PROTECTED]
> To: [EMAIL PROTECTED]; [EMAIL PROTECTED]
>
>
> Sorry for the incomplete mail.
>
> Instead of one issue, I think you may have two issues going on. I'm also adding the CDH mailing list for more input on this.
>
> 1.
> 2013-12-02 13:11:36,441 WARN org.apache.hadoop.hdfs.DFSClient: DFSOutputStream ResponseProcessor exception  for block BP-1854340821-10.238.9.151-1385733655875:blk_-2927699636194035560_63092 java.net.SocketTimeoutException: 65000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected
>
> <> This error can occur when your DN process has long GC pauses; increasing the timeout value may resolve the issue. Alternatively, your client connection may have been disconnected abnormally.
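>
> The thread does not name the exact key; a minimal sketch, assuming the DFS socket read timeout in hdfs-site.xml is what is meant (the 65000 ms in the log is likely the 60000 ms default plus a per-pipeline-node extension):
>
> <!-- Assumed example, not from this thread: raise the DFS socket read
>      timeout (milliseconds). Newer releases also accept the key
>      dfs.client.socket-timeout. -->
> <property>
>   <name>dfs.socket.timeout</name>
>   <value>180000</value>
> </property>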
>
> 2.
>
> 2013-12-02 13:12:06,586 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: brtlvlts0088co:50010:DataXceiver error processing WRITE_BLOCK operation  src: /10.238.10.43:54040 dest: /10.238.10.43:50010 java.io.IOException: Premature EOF from inputStream at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:194)
>
> <> Try increasing the dfs.datanode.max.xcievers value in the DataNode's hdfs-site.xml.
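>
> A minimal sketch of that setting; the value 4096 is a common starting point, not taken from this thread (newer Hadoop releases spell the key dfs.datanode.max.transfer.threads):
>
> <property>
>   <name>dfs.datanode.max.xcievers</name>
>   <value>4096</value>
> </property>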
>
>
> Regards
>
> Jitendra
>
>
>
>
> On Tue, Dec 3, 2013 at 3:17 PM, Jitendra Yadav <[EMAIL PROTECTED]
> > wrote:
>
> I did some analysis on the provided logs and confs.
>
> Instead of one issue, I believe you may have two issues going on.
>
> 1.
>
> java.net.SocketTimeoutException: 65000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected
>
>
> 2.
>
> 2013-12-02 13:12:06,586 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: brtlvlts0088co:50010:DataXceiver error processing WRITE_BLOCK operation  src: /10.238.10.43:54040 dest: /10.238.10.43:50010
> java.io.IOException: Premature EOF from inputStream
> at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:194)
>
>
>
>
>
>
>
> On Mon, Dec 2, 2013 at 9:30 PM, Siddharth Tiwari <
> [EMAIL PROTECTED]> wrote:
>
>
> Hi Jeet
> I am using CDH 4, but I have manually installed the NN and JT with HA, not
> using Cloudera Manager. I am attaching the NN logs here; I sent a mail just
> before this with the other files. This is frustrating; why is it happening?
>
>
> ------------------------
> Cheers !!!
> Siddharth Tiwari
> Have a refreshing day !!!
> "Every duty is holy, and devotion to duty is the highest form of worship
> of God."
> "Maybe other people will try to limit me but I don't limit myself"
>
>
> ------------------------------
> Date: Mon, 2 Dec 2013 21:24:43 +0530
>
> Subject: Re: Strange error on Datanodes
> From: [EMAIL PROTECTED]
> To: [EMAIL PROTECTED]
>
> Which Hadoop distro are you using? It would be good if you shared the logs
> from the DataNode on which the data block (blk_-2927699636194035560_63092)
> exists, and from the NameNodes as well.
>
> Regards
> Jitendra
>
>
> On Mon, Dec 2, 2013 at 9:13 PM, Siddharth Tiwari <
> [EMAIL PROTECTED]> wrote:
>
> Hi Jeet
>
> I have a cluster of size 25: 4 admin nodes and 21 datanodes,
> with 2 NNs, 2 JTs, 3 ZooKeepers, and 3 QJNs.
>
> If you could help me understand what kind of logs you want, I will
> provide them to you. Do you need hdfs-site.xml, core-site.xml, and
> mapred-site.xml?