Re: are we able to decommission multi nodes at one time?
It's allowable to decommission multiple nodes at the same time.
Just write all the hostnames to be decommissioned to the exclude file
and run "bin/hadoop dfsadmin -refreshNodes".
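For example (a minimal sketch; the exclude file path and hostnames here
are assumptions and must match whatever dfs.hosts.exclude points to in
your hdfs-site.xml):

    # Assumption: dfs.hosts.exclude in hdfs-site.xml points to
    # /etc/hadoop/conf/dfs.exclude
    echo "datanode3.example.com" >> /etc/hadoop/conf/dfs.exclude
    echo "datanode4.example.com" >> /etc/hadoop/conf/dfs.exclude

    # Tell the NameNode to re-read its include/exclude files
    bin/hadoop dfsadmin -refreshNodes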

However, you need to ensure that the decommissioned DataNodes are a
minority of all the DataNodes in the cluster, and that the block
replication factor can still be satisfied after decommissioning.

For example, job submission files use a default replication level of
mapred.submit.replication=10. So if you have fewer than 10 DataNodes
left after decommissioning, the decommission process will hang.
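
To watch the progress, "bin/hadoop dfsadmin -report" lists each
DataNode's decommission state; nodes being drained show "Decommission
in progress" until all their blocks have been re-replicated:

    # Check per-DataNode state while the decommission is running
    bin/hadoop dfsadmin -report | grep "Decommission Status"
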
2013/4/1 varun kumar <[EMAIL PROTECTED]>

> How many nodes do you have, and what is the replication factor?
>