Re: Throttle replication speed in case of datanode failure
You can limit the bandwidth (in bytes/second) via the
dfs.balance.bandwidthPerSec property in each DN's hdfs-site.xml. The
default is 1 MB/s (1048576 bytes).
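
A minimal sketch of what that would look like in each DN's
hdfs-site.xml (the 10485760 value below is just an illustrative
example, i.e. 10 MB/s; DNs would need a restart to pick it up):

    <property>
      <name>dfs.balance.bandwidthPerSec</name>
      <!-- Example only: 10 MB/s. Default is 1048576 (1 MB/s). -->
      <value>10485760</value>
    </property>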

Also, I'm unsure if your version already has it, but it can be applied
at runtime too, via the dfsadmin -setBalancerBandwidth command.
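For example, again in bytes/second (10485760 here is just an
illustrative value; note that a runtime setting like this does not
persist across a DN restart):

    hadoop dfsadmin -setBalancerBandwidth 10485760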
On Thu, Jan 17, 2013 at 8:11 PM, Brennon Church <[EMAIL PROTECTED]> wrote:

> Hello,
>
> Is there a way to throttle the speed at which under-replicated blocks are
> copied across a cluster?  Either limiting the bandwidth or the number of
> blocks per time period would work.
>
> I'm currently running Hadoop v1.0.1.  I think the
> dfs.namenode.replication.work.multiplier.per.iteration option would do the
> trick, but that is in v1.1.0 and higher.
>
> Thanks.
>
> --Brennon
>

--
Harsh J