MapReduce, mail # user - HDFS write failures!


Rahul Bhattacharjee 2013-05-17, 17:10
Ravi Prakash 2013-05-17, 18:14
Re: HDFS write failures!
Rahul Bhattacharjee 2013-05-18, 11:54
Thanks Ravi for the pointer. I will look into the source you pointed out.

Rahul
On Fri, May 17, 2013 at 11:44 PM, Ravi Prakash <[EMAIL PROTECTED]> wrote:

> Hi,
>
> I couldn't find any code that would relay this failure to the NN. The
> relevant code is in DFSOutputStream:DataStreamer:processDatanodeError()
>
> For trunk:
> https://svn.apache.org/repos/asf/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
> For 0.20:
> http://javasourcecode.org/html/open-source/hadoop/hadoop-0.20.203.0/org/apache/hadoop/hdfs/DFSClient.DFSOutputStream.java.html
>
> I believe the assumption here is that the NN should independently discover
> the failed node. Also, some failures might not be worthy of being reported
> because the DN is expected to recover from them.
>
> Ravi.
>
>
>   ------------------------------
>  *From:* Rahul Bhattacharjee <[EMAIL PROTECTED]>
> *To:* "[EMAIL PROTECTED]" <[EMAIL PROTECTED]>
> *Sent:* Friday, May 17, 2013 12:10 PM
> *Subject:* HDFS write failures!
>
> Hi,
>
> I was going through some documentation on the HDFS write path. It looks
> like the write pipeline is closed when an error is encountered, the faulty
> node is taken out of the pipeline, and the write continues. Among the other
> intermediate steps, the un-acked packets are moved from the ack queue back
> to the data queue.
>
> My question is: is this faulty datanode reported to the NN? Would the NN
> continue to treat it as a valid DN when serving other write requests in the
> future, or would it mark it as faulty?
>
> Thanks,
> Rahul
>
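The recovery steps discussed above (requeue un-acked packets, drop the faulty datanode, continue writing) can be sketched as a toy model. This is an illustrative simplification, not the actual Hadoop code: the class name, the String-based packets, and the helper methods are all hypothetical stand-ins for what `DFSOutputStream`'s `DataStreamer.processDatanodeError()` does with real packet and datanode objects. Note that, as Ravi points out, the client-side recovery shown here does not involve notifying the NameNode.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Toy model (hypothetical, not Hadoop source) of client-side pipeline recovery:
// on a write error, un-acked packets move from the ack queue back to the front
// of the data queue, and the faulty datanode is removed from the pipeline.
public class PipelineRecoverySketch {
    final Deque<String> dataQueue = new ArrayDeque<>(); // packets waiting to be sent
    final Deque<String> ackQueue = new ArrayDeque<>();  // packets sent but not yet acked
    final List<String> pipeline = new ArrayList<>();    // datanodes in the write pipeline

    PipelineRecoverySketch(List<String> datanodes) {
        pipeline.addAll(datanodes);
    }

    // Simulate sending one packet: it moves from the data queue to the ack queue.
    void sendOnePacket() {
        String pkt = dataQueue.pollFirst();
        if (pkt != null) {
            ackQueue.addLast(pkt);
        }
    }

    // Rough analogue of processDatanodeError(): requeue all un-acked packets in
    // their original order and drop the bad node. No message goes to the NN here;
    // the NN is assumed to discover the dead datanode on its own (via heartbeats).
    void processDatanodeError(String badNode) {
        while (!ackQueue.isEmpty()) {
            dataQueue.addFirst(ackQueue.pollLast()); // pollLast + addFirst keeps order
        }
        pipeline.remove(badNode);
    }
}
```

For example, if packets p1 and p2 are queued and p1 has been sent but not acked when a datanode fails, recovery puts p1 back ahead of p2 and shrinks the pipeline by one node before the write resumes.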