HDFS user mailing list: Will blocks of an unclosed file get lost when HDFS client (or the HDFS cluster) crashes?


Sean Bigdatafun 2011-03-13, 16:52
Re: Will blocks of an unclosed file get lost when HDFS client (or the HDFS cluster) crashes?
What do you mean by block?  An HDFS chunk?  Or a flushed write?

The answer depends a bit on which version of HDFS / Hadoop you are using.
With the append branches, things happen much more like what you would expect.
Without append support, it is difficult to say what will happen.

Also, there are very few guarantees about what happens if the namenode
crashes.  There are some provisions for recovery, but none of them really
have any sort of transactional guarantees.  This means that there may be
some unspecified time before the writes that you have done are actually
persisted in a recoverable way.
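
To make the append-branch point concrete, here is a minimal sketch (not from the original thread; it assumes a Hadoop build where FSDataOutputStream exposes hflush(), i.e. 0.21+ or an append branch, and the file path is purely illustrative) of how a client can push partially written data out to the datanodes before close():

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HFlushExample {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);

            // Illustrative path, not from the original thread.
            FSDataOutputStream out = fs.create(new Path("/tmp/example-file"));
            out.write("some record\n".getBytes("UTF-8"));

            // hflush() pushes the buffered bytes to the datanode pipeline, so a
            // reader (or later lease recovery) can see them even if this client
            // crashes before close(). On 0.20-append the equivalent call is sync().
            out.hflush();

            // close() completes the file and finalizes the last block.
            out.close();
        }
    }

Data written but neither flushed nor closed before a client crash has no such visibility guarantee on older, non-append builds.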

On Sun, Mar 13, 2011 at 9:52 AM, Sean Bigdatafun
<[EMAIL PROTECTED]> wrote:

> Let's say an HDFS client starts writing a file A (which is 10 blocks
> long) and 5 blocks have been written to datanodes.
>
> At this time, if the HDFS client crashes (apparently without a close
> op), will we see 5 valid blocks for file A?
>
> Similarly, at this time if the HDFS cluster crashes, will we see 5
> valid blocks for file A?
>
> (I guess both answers are yes, but I'd like some confirmation :-)
> --
> --Sean
>
Sean Bigdatafun 2011-03-14, 05:09
Allen Wittenauer 2011-03-14, 16:21