HDFS >> mail # user >> Throttle replication speed in case of datanode failure


Throttle replication speed in case of datanode failure
Hello,

Is there a way to throttle the speed at which under-replicated blocks are
copied across a cluster?  Either limiting the bandwidth or the number of
blocks per time period would work.

I'm currently running Hadoop v1.0.1.  I think the
dfs.namenode.replication.work.multiplier.per.iteration option would do the
trick, but that option is only available in v1.1.0 and higher.
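For anyone on 1.1.0 or later, a sketch of how that setting would look in hdfs-site.xml; the interpretation in the comment (multiplier times live datanodes gives the blocks scheduled per replication-monitor iteration, with a shipped default of 2) is my understanding of the option, not something confirmed in this thread:

```xml
<!-- hdfs-site.xml (Hadoop 1.1.0+ only; has no effect on 1.0.x) -->
<property>
  <name>dfs.namenode.replication.work.multiplier.per.iteration</name>
  <!-- The number of under-replicated blocks the namenode schedules per
       iteration is roughly this multiplier times the number of live
       datanodes; the default is 2, so 1 roughly halves the rate. -->
  <value>1</value>
</property>
```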

Thanks.

--Brennon