Re: HDFS write failures!
Thanks Ravi for the pointer. I will look into the source you pointed out.

On Fri, May 17, 2013 at 11:44 PM, Ravi Prakash <[EMAIL PROTECTED]> wrote:

> Hi,
> I couldn't find any code that would relay this failure to the NN. The
> relevant code is in DFSOutputStream:DataStreamer:processDatanodeError()
> For trunk:
> https://svn.apache.org/repos/asf/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
> For 0.20:
> http://javasourcecode.org/html/open-source/hadoop/hadoop-
> I believe the assumption here is that the NN should independently discover
> the failed node. Also, some failures might not be worthy of being reported
> because the DN is expected to recover from them.
> Ravi.
>   ------------------------------
> From: Rahul Bhattacharjee <[EMAIL PROTECTED]>
> Sent: Friday, May 17, 2013 12:10 PM
> Subject: HDFS write failures!
> Hi,
> I was going through some documents about the HDFS write pattern. It looks
> like the write pipeline is closed when an error is encountered, the faulty
> node is taken out of the pipeline, and the write continues. A few other
> intermediate steps include moving the un-acked packets from the ack queue
> back to the data queue.
> My question is: is this faulty data node reported to the NN, and would the
> NN continue to use it as a valid DN while serving other write requests in
> the future, or would it mark it as faulty?
> Thanks,
> Rahul
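
For readers following Ravi's pointer, a minimal sketch of the recovery step
described in the question may help. The names here (Packet, dataQueue,
ackQueue, pipeline, errorIndex) are illustrative stand-ins, not the actual
DFSOutputStream.DataStreamer internals: un-acked packets are moved back to
the data queue and the faulty datanode is dropped from the pipeline, and
nothing in this path reports the failure to the NN.

import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Illustrative sketch only; not the real DFSOutputStream.DataStreamer code.
class PipelineRecoverySketch {
    static class Packet { long seqno; }

    private final Deque<Packet> dataQueue = new ArrayDeque<>(); // packets waiting to be sent
    private final Deque<Packet> ackQueue  = new ArrayDeque<>(); // packets sent, not yet acked
    private final List<String> pipeline   = new ArrayList<>();  // DNs in the write pipeline

    // Handle a write error reported for the datanode at position errorIndex.
    void processDatanodeError(int errorIndex) {
        // Move un-acked packets back to the front of the data queue so they
        // are re-sent once the pipeline is rebuilt.
        while (!ackQueue.isEmpty()) {
            dataQueue.addFirst(ackQueue.pollLast());
        }
        // Drop the faulty datanode; the write continues on the remaining
        // replicas. Note: the failure is not relayed to the NameNode here.
        String bad = pipeline.remove(errorIndex);
        System.out.println("Excluding " + bad + " from this write pipeline");
        // (Not shown) reopen the stream to the remaining datanodes and resume
        // draining the data queue.
    }
}

As Ravi notes, the exclusion above is local to this one write; the NN is left
to discover a truly dead DN on its own, e.g. through missed heartbeats.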