Re: HDFS write failures!
Rahul Bhattacharjee 2013-05-18, 11:54
Thanks Ravi for the pointer. I will look into the source you pointed out.
On Fri, May 17, 2013 at 11:44 PM, Ravi Prakash <[EMAIL PROTECTED]> wrote:
> I couldn't find any code that would relay this failure to the NN. The
> relevant code is in DFSOutputStream.DataStreamer#processDatanodeError().
> For trunk:
> For 0.20:
> I believe the assumption here is that the NN should independently discover
> the failed node (that idea is sketched at the end of this thread). Also,
> some failures might not be worth reporting, because the DN is expected to
> recover from them.
> *From:* Rahul Bhattacharjee <[EMAIL PROTECTED]>
> *To:* "[EMAIL PROTECTED]" <[EMAIL PROTECTED]>
> *Sent:* Friday, May 17, 2013 12:10 PM
> *Subject:* HDFS write failures!
> I was going through some documents about the HDFS write path. It looks like
> the write pipeline is closed when an error is encountered, the faulty node
> is taken out of the pipeline, and the write continues. One of the
> intermediate steps is to move the un-acked packets from the ack queue back
> to the data queue (a rough sketch of this recovery flow follows at the end
> of the thread).
> My question is: is this faulty datanode reported to the NN, and would the
> NN continue to use it as a valid DN when serving other write requests in
> the future, or will it mark it as faulty?
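
To make the recovery flow described above concrete, here is a minimal sketch in Java. It is not the actual DFSOutputStream code; the Packet class, the datanode names, and the queue handling are simplified stand-ins for what processDatanodeError() does: move un-acked packets from the ack queue back to the front of the data queue, drop the bad datanode, and rebuild the pipeline from the survivors.

import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Illustrative sketch only -- not the real DFSOutputStream code.
public class PipelineRecoverySketch {

    static class Packet {
        final long seqno;
        Packet(long seqno) { this.seqno = seqno; }
    }

    private final Deque<Packet> dataQueue = new ArrayDeque<>(); // not yet sent
    private final Deque<Packet> ackQueue = new ArrayDeque<>();  // sent, awaiting acks
    private final List<String> pipeline = new ArrayList<>(
            List.of("dn1:50010", "dn2:50010", "dn3:50010"));    // hypothetical DNs

    // Roughly the recovery steps: requeue un-acked packets, drop the bad
    // node, and let the caller rebuild the pipeline from the survivors.
    void processDatanodeError(int errorIndex) {
        // Move un-acked packets back to the FRONT of the data queue, in
        // their original order, so they are re-sent first through the
        // rebuilt pipeline (later nodes may never have received them).
        while (!ackQueue.isEmpty()) {
            dataQueue.addFirst(ackQueue.pollLast());
        }
        // Drop the faulty datanode from this write's pipeline only. Note
        // that nothing here reports the failure to the NN -- the NN is
        // left to discover the dead node on its own.
        String bad = pipeline.remove(errorIndex);
        System.out.println("excluding " + bad + " from this pipeline; "
                + pipeline.size() + " node(s) remain");
    }
}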
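
As for Ravi's point that the NN discovers the failed node independently: the NN times out datanodes whose heartbeats stop arriving and stops handing them out as targets for new pipelines. Below is a minimal sketch of that idea, assuming a simple map of last-heartbeat times and a hypothetical expiry window (the real NN derives its window from the heartbeat and recheck interval settings).

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch only -- not the real NameNode heartbeat manager.
public class HeartbeatMonitorSketch {

    // Hypothetical expiry window; the real NN computes it from the
    // heartbeat and recheck intervals (on the order of ten minutes).
    private static final long EXPIRY_MS = 10L * 60 * 1000;

    private final Map<String, Long> lastHeartbeat = new ConcurrentHashMap<>();

    // Called whenever a DN heartbeat arrives.
    void heartbeatReceived(String datanode) {
        lastHeartbeat.put(datanode, System.currentTimeMillis());
    }

    // Periodic sweep: any DN silent for longer than the window is declared
    // dead and no longer used as a target for new write pipelines.
    void sweep() {
        long now = System.currentTimeMillis();
        lastHeartbeat.entrySet().removeIf(e -> {
            boolean dead = now - e.getValue() > EXPIRY_MS;
            if (dead) {
                System.out.println(e.getKey()
                        + " marked dead; excluded from new pipelines");
            }
            return dead;
        });
    }
}

So from the writing client's perspective the failure is handled locally, while cluster-wide exclusion of the node happens only once the NN's own monitoring declares it dead.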