HBase >> mail # user >> Throttle replication speed in case of datanode failure


Re: Throttle replication speed in case of datanode failure
Since this is a Hadoop question, it should be sent to
[EMAIL PROTECTED] (which I'm now sending this to; I've put
user@hbase in BCC).

J-D

On Thu, Jan 17, 2013 at 9:54 AM, Brennon Church <[EMAIL PROTECTED]> wrote:
> Hello,
>
> Is there a way to throttle the speed at which under-replicated blocks are
> copied across a cluster?  Either limiting the bandwidth or the number of
> blocks per time period would work.
>
> I'm currently running Hadoop v1.0.1.  I think the
> dfs.namenode.replication.work.multiplier.per.iteration option would do the
> trick, but that is in v1.1.0 and higher.
>
> Thanks.
>
> --Brennon
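
For anyone finding this thread on Hadoop 1.1.0 or later, a minimal hdfs-site.xml sketch of the option Brennon mentions (the value shown is illustrative, not a recommendation): the multiplier caps how many block transfers the NameNode schedules per live datanode on each heartbeat, so keeping it low effectively throttles re-replication after a datanode failure, while raising it speeds re-replication up.

```xml
<!-- hdfs-site.xml (requires Hadoop 1.1.0+, per the thread above).
     Limits the number of block replications the NameNode schedules
     per live datanode per heartbeat interval; a low value slows the
     copying of under-replicated blocks after a datanode failure. -->
<property>
  <name>dfs.namenode.replication.work.multiplier.per.iteration</name>
  <value>2</value>
</property>
```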