Re: Throttle replication speed in case of datanode failure
Pretty spiky.  I'll throttle it back to 1 MB/s and see if it brings
the rate down as expected.

Thanks!

--Brennon

On 1/17/13 1:41 PM, Harsh J wrote:
> Not true per the source code: it controls all DN->DN copy/move
> rates, although the property name is misleading. Are you noticing a
> consistent rise in the rate, or is it spiky?
>
>
> On Fri, Jan 18, 2013 at 2:20 AM, Brennon Church <[EMAIL PROTECTED]> wrote:
>
>     That setting doesn't seem to apply to under-replicated blocks,
>     such as after decommissioning (or losing) a node; it only seems
>     to cover the balancer.  I've got mine currently set to 10 MB/s,
>     but am seeing rates of 3-4 times that after decommissioning a
>     node, while the cluster works on bringing things back up to the
>     proper replication factor.
>
>     Thanks.
>
>     --Brennon
>
>
>     On 1/17/13 11:04 AM, Harsh J wrote:
>>     You can limit the bandwidth via dfs.balance.bandwidthPerSec (a
>>     bytes/second value) in each DN's hdfs-site.xml.  The default is
>>     1 MB/s (1048576).
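>>
>>     A sketch of that hdfs-site.xml entry (here just pinning the
>>     1 MB/s default explicitly):
>>
>>         <property>
>>           <name>dfs.balance.bandwidthPerSec</name>
>>           <!-- bytes/second; 1048576 = 1 MB/s -->
>>           <value>1048576</value>
>>         </property>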
>>
>>     Also, unsure if your version already has it, but it can be
>>     applied at runtime too via the dfsadmin -setBalancerBandwidth
>>     command.
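>>
>>     For example (a sketch; assuming your build has the command,
>>     the argument is in bytes/second and takes effect on the live
>>     DNs without a restart):
>>
>>         # push a 1 MB/s cap to all live datanodes at runtime
>>         hadoop dfsadmin -setBalancerBandwidth 1048576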
>>
>>
>>     On Thu, Jan 17, 2013 at 8:11 PM, Brennon Church <[EMAIL PROTECTED]> wrote:
>>
>>         Hello,
>>
>>         Is there a way to throttle the speed at which
>>         under-replicated blocks are copied across a cluster?  Either
>>         limiting the bandwidth or the number of blocks per time
>>         period would work.
>>
>>         I'm currently running Hadoop v1.0.1.  I think the
>>         dfs.namenode.replication.work.multiplier.per.iteration option
>>         would do the trick, but that is in v1.1.0 and higher.
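>>
>>         For reference, on v1.1.0+ that would go in the NameNode's
>>         hdfs-site.xml, something like this sketch (the default is
>>         2; lower values schedule fewer block re-replications per
>>         iteration):
>>
>>             <property>
>>               <name>dfs.namenode.replication.work.multiplier.per.iteration</name>
>>               <!-- default 2; lower to slow re-replication -->
>>               <value>2</value>
>>             </property>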
>>
>>         Thanks.
>>
>>         --Brennon
>>
>>
>>
>>
>>     --
>>     Harsh J
>
>
>
>
> --
> Harsh J
