
felix gao 2011-04-12, 14:46
Ayon Sinha 2011-04-12, 14:52
felix gao 2011-04-12, 15:02
felix gao 2011-04-12, 15:05
Ayon Sinha 2011-04-12, 15:11
Marcos Ortiz 2011-04-12, 16:13
Harsh J 2011-04-12, 16:17
felix gao 2011-04-12, 16:30
Matthew Foley 2011-04-12, 17:09
Re: Question regarding datanode been wiped by hadoop
One thing to consider: if the node was down for a day, all of its blocks could have been re-replicated to other datanodes. When the machine is brought back, those blocks become over-replicated and the NameNode decides to delete them. You should check the logs of both the DataNode and the NameNode to see whether this was the case.
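The over-replication check described above can be sketched roughly as follows. This is a toy illustration of the NameNode's behavior, not the real BlockManager code; the function name and inputs are made up for the example:

```python
# Toy sketch of the NameNode's over-replication decision (illustrative
# only, not actual Hadoop source): once the crashed datanode rejoins,
# its old replicas push some blocks above the target replication factor,
# and the excess replicas are scheduled for deletion.
def excess_replicas(live_replicas, replication_factor):
    """Return how many replicas the NameNode would ask datanodes to delete."""
    return max(0, len(live_replicas) - replication_factor)

# A block re-replicated to 3 other nodes while the node was down, plus the
# rejoining node's old copy, with a target factor of 3:
print(excess_replicas(["dn1", "dn2", "dn3", "dn4"], 3))  # 1 excess replica
```

Which specific replica gets dropped is a placement decision (e.g. preserving rack diversity), so the rejoining node's copies are natural candidates for deletion.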

On 4/12/11 7:46 AM, "felix gao" <[EMAIL PROTECTED]> wrote:

What reason/condition would cause a datanode's blocks to be removed? Our cluster had one of its datanodes crash because of bad RAM. After the system was upgraded and the datanode/tasktracker was brought back online the next day, we noticed that the amount of space utilized was minimal and that the cluster was rebalancing blocks onto the datanode. It would seem the prior blocks were removed. Was this because the datanode was declared dead? What are the criteria by which the namenode (assuming it's the namenode that decides) determines when a datanode should remove its prior blocks?
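On the "declared dead" question: in stock Hadoop of this era, the NameNode marks a datanode dead after it misses heartbeats for an interval derived from two configuration properties. A minimal sketch of that computation, using the default values of `dfs.namenode.heartbeat.recheck-interval` and `dfs.heartbeat.interval`:

```python
# Sketch of the NameNode's dead-node timeout, using Hadoop's defaults:
#   timeout = 2 * heartbeat.recheck-interval + 10 * heartbeat.interval
recheck_interval_ms = 5 * 60 * 1000  # dfs.namenode.heartbeat.recheck-interval, default 5 min
heartbeat_interval_s = 3             # dfs.heartbeat.interval, default 3 s

timeout_ms = 2 * recheck_interval_ms + 10 * heartbeat_interval_s * 1000
print(timeout_ms / 60000)  # 10.5 minutes
```

So a node that is down for a day is well past this window: the NameNode declares it dead and re-replicates its blocks elsewhere, which is consistent with the over-replication explanation above.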