You can configure the undocumented variable dfs.max-repl-streams to
increase the number of replication streams a DataNode is allowed to handle
at one time. The default value is 2.
On Fri, Aug 12, 2011 at 12:09 PM, Charles Wimmer <[EMAIL PROTECTED]> wrote:
> The balancer bandwidth setting does not affect decommissioning nodes. Decommissioning nodes replicate as fast as the cluster is capable.
> The replication pace depends on many variables:
> - The number of nodes participating in the replication.
> - The amount of network bandwidth each node has.
> - The amount of other HDFS activity at the time.
> - The total number of blocks being replicated.
> - The total amount of data being replicated.
> - Many others.
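As a rough back-of-envelope check on how those variables interact: aggregate replication throughput is bounded by roughly (live nodes) × (streams per node) × (per-stream bandwidth). A minimal sketch; all numbers below are hypothetical, not measurements from this cluster:

```python
def replication_hours(total_bytes, nodes, streams_per_node, stream_mbit_s):
    """Optimistic lower bound on re-replication time, ignoring other HDFS load."""
    # Aggregate throughput in bytes/sec: Mbit/s -> bytes/s is * 1e6 / 8
    bytes_per_sec = nodes * streams_per_node * stream_mbit_s * 1e6 / 8
    return total_bytes / bytes_per_sec / 3600

# e.g. 1 TB across 10 live DataNodes, 2 streams each (the default cap),
# 100 Mbit/s usable per stream -> on the order of an hour, not a day
print(round(replication_hours(1e12, 10, 2, 100), 2))  # → 1.11
```

If the observed pace is far below an estimate like this, the bottleneck is usually one of the other factors above (competing HDFS traffic, a large block count, or the per-node stream cap).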
> On 8/12/11 8:58 AM, "[EMAIL PROTECTED]" <[EMAIL PROTECTED]> wrote:
> Hi All,
> I'm trying to decommission a data node from my cluster. I put the data node in the /usr/lib/hadoop/conf/dfs.hosts.exclude list and restarted the name nodes. The under-replicated blocks are starting to replicate, but at a very slow pace: for 1 TB of data it takes over a day to complete. We changed the settings as below to try to increase the replication rate.
> Added this to hdfs-site.xml on all the nodes in the cluster and restarted the data node and name node processes.
> <!-- 100Mbit/s -->
> Speed didn't seem to pick up. Do you know what may be happening?
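One side note on the procedure itself: a NameNode restart should not be necessary to start decommissioning. The usual sequence is to add the host to the exclude file and then tell the running NameNode to re-read it with `hadoop dfsadmin -refreshNodes`. A sketch (the hostname is a placeholder, and a temp file stands in for the real file named by dfs.hosts.exclude):

```shell
# Placeholder exclude file; in practice edit the file that
# dfs.hosts.exclude points at in hdfs-site.xml.
EXCLUDE_FILE=$(mktemp)
echo "datanode07.example.com" >> "$EXCLUDE_FILE"   # hypothetical host to decommission
cat "$EXCLUDE_FILE"

# On the live cluster you would then run, with no restart:
#   hadoop dfsadmin -refreshNodes
# and watch progress (look for "Decommission in progress"):
#   hadoop dfsadmin -report
```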