MapReduce >> mail # user >> HDFS write failures!

Rahul Bhattacharjee 2013-05-17, 17:10
Re: HDFS write failures!

I couldn't find any code that relays this failure to the NN. The relevant code is in DFSOutputStream.DataStreamer#processDatanodeError().

For trunk: https://svn.apache.org/repos/asf/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
For 0.20: http://javasourcecode.org/html/open-source/hadoop/hadoop-

I believe the assumption here is that the NN should independently discover the failed node via its heartbeat mechanism. Also, some failures may not be worth reporting because the DN is expected to recover from them.
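To illustrate the point above, here is a minimal sketch (not actual NameNode code) of heartbeat-based failure detection: the NN marks a DN dead only after its heartbeats stop for a configured window, rather than acting on a client-side write failure report. The class name and the expiry constant are hypothetical; the real window is derived from settings such as dfs.heartbeat.interval.

```java
import java.util.HashMap;
import java.util.Map;

// Simplified illustration of heartbeat-based liveness tracking.
// The NN considers a DN dead once it has not heard from it for EXPIRY_MS.
public class HeartbeatMonitor {
    // Hypothetical expiry window; the real value is computed from
    // dfs.heartbeat.interval and related configuration.
    static final long EXPIRY_MS = 10 * 60 * 1000;

    private final Map<String, Long> lastHeartbeat = new HashMap<>();

    void recordHeartbeat(String datanodeId, long nowMs) {
        lastHeartbeat.put(datanodeId, nowMs);
    }

    boolean isDead(String datanodeId, long nowMs) {
        Long last = lastHeartbeat.get(datanodeId);
        // Unknown nodes, or nodes silent past the window, count as dead.
        return last == null || nowMs - last > EXPIRY_MS;
    }
}
```

Under this model, a DN that survives a transient write error keeps heartbeating and stays usable for future pipelines, which matches the behavior described above.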

 From: Rahul Bhattacharjee <[EMAIL PROTECTED]>
Sent: Friday, May 17, 2013 12:10 PM
Subject: HDFS write failures!
I was going through some documents about the HDFS write pattern. It looks like the write pipeline is closed when an error is encountered; the faulty node is taken out of the pipeline and the write continues. One of the intermediate steps is to move the un-acked packets from the ack queue back to the data queue.
My question is: is this faulty data node reported to the NN? Would the NN continue to use it as a valid DN when serving other write requests in the future, or would it mark it as faulty?
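The recovery steps described in the question (re-queuing un-acked packets and dropping the faulty node) can be sketched as follows. This is a simplified illustration, not the real DFSOutputStream code; the class name and fields are hypothetical stand-ins for the client's data queue, ack queue, and pipeline list.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Simplified sketch of pipeline error recovery on the HDFS client side:
// on a write error, all un-acked packets move from the ack queue back to
// the FRONT of the data queue (so they are resent before any unsent data),
// and the faulty DN is removed from the pipeline before the write resumes.
public class PipelineRecovery {
    final Deque<String> dataQueue = new ArrayDeque<>();
    final Deque<String> ackQueue = new ArrayDeque<>();
    final List<String> pipeline = new ArrayList<>();

    void processDatanodeError(String badNode) {
        // Re-queue un-acked packets ahead of unsent data, preserving order.
        while (!ackQueue.isEmpty()) {
            dataQueue.addFirst(ackQueue.removeLast());
        }
        // Drop the faulty node; the remaining DNs form the new pipeline.
        pipeline.remove(badNode);
    }
}
```

Note that in this sketch the client only repairs its own pipeline; nothing here notifies the NN, which is consistent with the answer above.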
Rahul Bhattacharjee 2013-05-18, 11:54