Hadoop >> mail # user >> Decommissioning runs forever


Re: Decommissioning runs forever
Did you change the background bandwidth from 10 MB/s to something higher?
Worst case, you can kill the DN and wait 10 minutes for the cluster to realize it's down and then rebalance.
(It's ugly, but it works.)
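The bandwidth knob the reply is presumably referring to is `dfs.balance.bandwidthPerSec` in hdfs-site.xml, which caps per-DataNode balancing/replication traffic. A sketch, assuming a 10 MB/s target (the value shown is an example, not the poster's setting):

```xml
<!-- hdfs-site.xml: per-DataNode bandwidth cap for balancer traffic -->
<!-- Value is in BYTES per second; 10485760 bytes/s = 10 MB/s (example value). -->
<property>
  <name>dfs.balance.bandwidthPerSec</name>
  <value>10485760</value>
</property>
```

In Hadoop 1.x the DataNodes read this at startup, so a DataNode restart is needed for the change to take effect.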

On Aug 6, 2012, at 7:59 PM, "Chandra Mohan, Ananda Vel Murugan" <[EMAIL PROTECTED]> wrote:

> Hi,
>
> I tried decommissioning a node in my Hadoop cluster. I am running Apache Hadoop 1.0.2 and ours is a four-node cluster. I also have HBase installed in my cluster, and I have shut down the region server on this node.
>
> For decommissioning, I did the following steps:
>
> *   Added the following to hdfs-site.xml:
>
>     <property>
>       <name>dfs.hosts.exclude</name>
>       <value>/full/path/of/host/exclude/file</value>
>     </property>
>
> *   Ran "<HADOOP_HOME>/bin/hadoop dfsadmin -refreshNodes"
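The two steps above amount to roughly the following; the exclude-file path and hostname here are placeholders, not the poster's actual values, and the commands assume a running Hadoop 1.x cluster:

```shell
# Hypothetical path and hostname -- substitute your own.
EXCLUDE_FILE=/etc/hadoop/conf/excludes

# 1. List the DataNode(s) to retire, one hostname per line,
#    in the file that dfs.hosts.exclude points at.
echo "datanode4.example.com" >> "$EXCLUDE_FILE"

# 2. Tell the NameNode to re-read the include/exclude lists;
#    this starts moving the excluded node's blocks elsewhere.
"$HADOOP_HOME"/bin/hadoop dfsadmin -refreshNodes

# 3. Watch progress: the node shows "Decommission Status :
#    Decommission in progress" until its blocks are re-replicated.
"$HADOOP_HOME"/bin/hadoop dfsadmin -report
```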
>
>
>
> But the decommissioning has now been running for the last 6 hours, and I don't know when it will finish. I need this node for other activities.
>
>
>
> From the HDFS health status JSP:
>
> Cluster Summary
> 338 files and directories, 200 blocks = 538 total. Heap Size is 16.62 MB / 888.94 MB (1%)
>
> Configured Capacity               : 1.35 TB
> DFS Used                          : 759.57 MB
> Non DFS Used                      : 179.36 GB
> DFS Remaining                     : 1.17 TB
> DFS Used%                         : 0.05 %
> DFS Remaining%                    : 86.92 %
> Live Nodes                        : 4
> Dead Nodes                        : 0
> Decommissioning Nodes             : 1
> Number of Under-Replicated Blocks : 129
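One standard way to watch those 129 under-replicated blocks drain (not something the poster mentions, but a common check against a live cluster) is fsck:

```shell
# fsck walks the namespace and reports replication health;
# decommissioning finishes once the excluded node's blocks
# are fully copied to the remaining DataNodes.
"$HADOOP_HOME"/bin/hadoop fsck / | grep -i "under-replicated"

# Per-node decommission status from the admin report:
"$HADOOP_HOME"/bin/hadoop dfsadmin -report | grep "Decommission Status"
```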
>
>
> Please share if you have any idea. Thanks a lot.
>
> Regards,
> Anand.C