Throttle replication speed in case of datanode failure


Brennon Church 2013-01-17, 14:41
Re: Throttle replication speed in case of datanode failure
That setting doesn't seem to apply to under-replicated blocks, such as
when decommissioning (or losing) a node; it only affects the balancer.
I've got mine currently set to 10 MB/s, but I'm seeing rates of 3-4
times that after decommissioning a node while it works on bringing
things back up to the proper replication factor.

Thanks.

--Brennon

On 1/17/13 11:04 AM, Harsh J wrote:
> You can limit the bandwidth, in bytes per second, applied
> via dfs.balance.bandwidthPerSec in each DN's hdfs-site.xml. The
> default is 1 MB/s (1048576).
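>
> For example, a minimal hdfs-site.xml entry setting it to 10 MB/s
> (the value is illustrative; it is bytes per second) would be:
>
>     <property>
>       <name>dfs.balance.bandwidthPerSec</name>
>       <value>10485760</value>
>     </property>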
>
> Also, I'm unsure if your version already has it, but it can also be
> applied at runtime via the dfsadmin -setBalancerBandwidth command.
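>
> For example, assuming your version includes that command, something
> like this would raise the limit to 10 MB/s on all live datanodes:
>
>     hadoop dfsadmin -setBalancerBandwidth 10485760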
>
>
> On Thu, Jan 17, 2013 at 8:11 PM, Brennon Church <[EMAIL PROTECTED]> wrote:
>
>     Hello,
>
>     Is there a way to throttle the speed at which under-replicated
>     blocks are copied across a cluster?  Either limiting the bandwidth
>     or the number of blocks per time period would work.
>
>     I'm currently running Hadoop v1.0.1.  I think the
>     dfs.namenode.replication.work.multiplier.per.iteration option
>     would do the trick, but that is in v1.1.0 and higher.
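>
>     If it helps, a sketch of what I'd expect that to look like in the
>     NameNode's hdfs-site.xml on v1.1.0+ (the value 1 is illustrative;
>     lower values schedule fewer block transfers per iteration):
>
>         <property>
>           <name>dfs.namenode.replication.work.multiplier.per.iteration</name>
>           <value>1</value>
>         </property>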
>
>     Thanks.
>
>     --Brennon
>
>
>
>
> --
> Harsh J
