MapReduce, mail # user - what will happen when HDFS restarts but with some dead nodes


Nan Zhu 2013-01-30, 03:04
Chen He 2013-01-30, 03:50
Harsh J 2013-01-30, 16:27
Chen He 2013-01-30, 16:36
Re: what will happen when HDFS restarts but with some dead nodes
Harsh J 2013-01-30, 16:39
Yes, if there are missing blocks (i.e., blocks for which all replicas
are lost), and the block availability threshold is at its default of
0.999f (99.9% of blocks must be available), then the NN will not come
out of safemode automatically. You can control this behavior by
configuring dfs.namenode.safemode.threshold-pct.
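A minimal sketch of what that tuning might look like in hdfs-site.xml. The property name shown is the one used in recent Hadoop releases; 0.21-era releases used dfs.safemode.threshold.pct instead, and the 0.99f value here is just an illustrative choice:

```xml
<!-- hdfs-site.xml: lower the safemode block-availability threshold.
     (Recent releases; 0.21-era releases named this
      dfs.safemode.threshold.pct.) -->
<property>
  <name>dfs.namenode.safemode.threshold-pct</name>
  <!-- Fraction of blocks that must have their minimal replication
       reported before the NameNode leaves safemode on its own;
       the default is 0.999f. -->
  <value>0.99f</value>
</property>
```

Note the edge cases: a value less than or equal to 0 makes the NameNode not wait for any particular fraction of blocks, while a value greater than 1 makes safemode effectively permanent until an administrator leaves it manually.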

On Wed, Jan 30, 2013 at 10:06 PM, Chen He <[EMAIL PROTECTED]> wrote:
> Hi Harsh
>
> I have a question: how does the namenode get out of safemode when data
> blocks are lost? Only through the administrator? In my experience, the
> NN (0.21) stayed in safemode for several days before I manually turned
> safemode off. There were 2 blocks lost.
>
> Chen
>
>
> On Wed, Jan 30, 2013 at 10:27 AM, Harsh J <[EMAIL PROTECTED]> wrote:
>>
>> The NN does recalculate the replication work needed for unavailable
>> replicas ("under-replication") when it starts and receives all block
>> reports, but it executes that work only after leaving safemode. While
>> in safemode, no mutations are allowed anywhere across the HDFS
>> services.
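The check-then-exit sequence described above can be sketched with the stock CLI tools, assuming a running cluster. These are the 0.21-era command names; newer releases invoke the same subcommands through `hdfs` rather than `hadoop`:

```
# Is the NameNode still in safemode?
hadoop dfsadmin -safemode get

# Summarize block health: fsck reports missing, corrupt, and
# under-replicated blocks across the namespace.
hadoop fsck /

# Once the remaining loss is understood (or files with lost blocks
# have been dealt with), force the NameNode out of safemode so it
# can schedule the re-replication work:
hadoop dfsadmin -safemode leave
```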
>>
>> On Wed, Jan 30, 2013 at 8:34 AM, Nan Zhu <[EMAIL PROTECTED]> wrote:
>> > Hi, all
>> >
>> > I'm wondering: if HDFS is stopped, and some of the machines in the
>> > cluster are moved out, some block replicas are definitely lost with
>> > the moved machines.
>> >
>> > When I restart the system, will the namenode recalculate the data
>> > distribution?
>> >
>> > Best,
>> >
>> > --
>> > Nan Zhu
>> > School of Computer Science,
>> > McGill University
>> >
>> >
>>
>>
>>
>> --
>> Harsh J
>
>

--
Harsh J