

Thread:
  Andy Sautins    2009-07-23, 22:00
  Aaron Kimball   2009-07-24, 02:21
  Raghu Angadi    2009-07-24, 03:48
RE: hdfs question when replacing dead node...

  Thanks for the help.  It's quite possible it didn't happen quite as it appeared.  I will try to reproduce.

  Thanks again.
  
  Andy

-----Original Message-----
From: Raghu Angadi [mailto:[EMAIL PROTECTED]]
Sent: Thursday, July 23, 2009 9:48 PM
To: [EMAIL PROTECTED]
Subject: Re: hdfs question when replacing dead node...
Block reports are sent every hour by default. They should not cause any
false negatives for replication on the NN.
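
A minimal sketch of checking that interval on a 0.20-style layout (the
property name dfs.blockreport.intervalMsec and the one-hour default are
assumptions based on the stock hdfs-default.xml, not something verified
on your cluster):

    # conf/hdfs-site.xml only holds local overrides; if the property is
    # absent there, the shipped default (3600000 ms, i.e. one hour) applies.
    grep -A 2 'dfs.blockreport.intervalMsec' $HADOOP_HOME/conf/hdfs-site.xml

    # Ask the NameNode for its current view of live vs. dead datanodes.
    $HADOOP_HOME/bin/hadoop dfsadmin -report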

Andy's observation is not expected, AFAIK.

Andy, please check if you can repeat it. If it happens again, please
file a JIRA and attach the relevant log files there. We have not seen
such an issue while dealing with dead nodes or the rebalancer.

Raghu.

Aaron Kimball wrote:
> How fast did you re-run fsck after re-joining the node? fsck returns data
> based on the latest block reports from datanodes -- these are scheduled to
> run (I think) every 15 minutes, so the NameNode's state on block replication
> may be as much as 15 minutes out of date.
>
> - Aaron
>
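For reference, a quick sketch of pulling just the replication summary out
of fsck (a plain invocation as used in this thread; the exact summary
labels below are from memory and may differ slightly between versions):

    # Full check of the namespace; the trailing summary includes lines like
    # "Under-replicated blocks:" and "Number of data-nodes:".
    $HADOOP_HOME/bin/hadoop fsck /

    # Narrow the output to the replication-related summary lines.
    $HADOOP_HOME/bin/hadoop fsck / | grep -iE 'replicated|data-nodes'
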
> On Thu, Jul 23, 2009 at 3:00 PM, Andy Sautins
> <[EMAIL PROTECTED]> wrote:
>
>>   I recently had to replace a node on a Hadoop 0.20.0 4-node cluster and I
>> can't quite explain what happened.  If anyone has any insight I'd appreciate
>> it.
>>
>>   When the node failed ( drive failure ), running the command 'hadoop fsck
>> /' correctly showed the number of data nodes to now be 3 instead of 4 and
>> showed the under-replicated blocks being re-replicated.  I assume that once
>> the node was determined to be dead, the blocks on it were no longer counted
>> toward the replication factor, which caused HDFS to replicate to the
>> available nodes to meet the configured replication factor of 3.  All is
>> good.  What I couldn't explain is that after re-building and re-starting
>> the failed node, I started the balancer ( bin/start-balancer.sh ) and
>> re-ran 'hadoop fsck /'.  The node count showed that the 4th node was now
>> back in the cluster.  What struck me as strange is that a large number of
>> blocks ( > 2k ) were shown as under-replicated.  The under-replicated
>> blocks were eventually re-replicated and all the data seems correct.
>>
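A hedged sketch of the checks described above, using the commands named in
this message (the output field names are assumptions and may vary by
version):

    # Confirm how many datanodes the NameNode currently considers live.
    $HADOOP_HOME/bin/hadoop dfsadmin -report

    # Start the balancer once the rebuilt node has rejoined the cluster.
    $HADOOP_HOME/bin/start-balancer.sh

    # Watch the under-replicated count reported by fsck drain back to zero.
    $HADOOP_HOME/bin/hadoop fsck / | grep -i 'under-replicated'
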
>>   Can someone explain why, after re-adding a node that had died, the
>> replication factor would go from 3 to 2?  Is there something about the
>> balancer script that would make fsck show the blocks as under-replicated?
>>
>>   Note that I'm still getting the process for replacing failed nodes down,
>> so it's possible that I was looking at things wrong for a bit.
>>
>>    Any insight would be greatly appreciated.
>>
>>    Thanks
>>
>>    Andy
>>
>