Re: are we able to decommission multiple nodes at one time?
:)

Currently, I have 15 datanodes.
For some tests, I am trying to decommission down to 8 nodes.

Right now, the total DFS used size is 52 TB, which includes all replicated blocks.
Going from 15 nodes down to 8, the total time spent is almost 4 days. ;(
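
To watch how far along a decommission is, a minimal sketch, assuming a Hadoop 1.x-era CLI like the one used in this thread (the exact report layout differs slightly between versions):

    # Run on the namenode; each datanode entry in the report carries a
    # "Decommission Status" line (Normal / Decommission in progress / Decommissioned).
    hadoop dfsadmin -report | grep -E "Name:|Decommission Status"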

Someone mentioned that I don't need to decommission node by node.
In that case, are there any problems if I decommission 7 nodes at the same time?
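
For reference, the usual way to decommission several nodes in one pass is to list all of them in the excludes file at once and refresh the namenode a single time. A rough sketch, assuming the classic dfs.hosts.exclude setup; the file path and hostnames below are placeholders only:

    <!-- hdfs-site.xml on the namenode: point at an excludes file
         (the path below is an example, not a required location) -->
    <property>
      <name>dfs.hosts.exclude</name>
      <value>/etc/hadoop/conf/dfs.exclude</value>
    </property>

    # /etc/hadoop/conf/dfs.exclude: one hostname per line, every node to retire
    # (placeholder names for the 7 nodes being decommissioned)
    datanode09.example.com
    datanode10.example.com
    datanode11.example.com
    datanode12.example.com
    datanode13.example.com
    datanode14.example.com
    datanode15.example.com

    # Tell the namenode to re-read its include/exclude lists; it then starts
    # decommissioning all listed nodes in parallel, re-replicating their blocks
    # onto the nodes that stay in service.
    hadoop dfsadmin -refreshNodes

As long as the number of nodes left in service is at least the replication factor (8 remaining vs. a factor of 3 here), every block can still be re-replicated, so retiring the 7 nodes together should be safe; roughly the same amount of data has to be copied either way, but doing it in one pass avoids first re-replicating blocks onto nodes that are themselves about to be retired.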
On Apr 2, 2013, at 12:14 PM, Azuryy Yu <[EMAIL PROTECTED]> wrote:

> I can translate it into native English: how many nodes do you want to decommission?
>
>
> On Tue, Apr 2, 2013 at 11:01 AM, Yanbo Liang <[EMAIL PROTECTED]> wrote:
> You want to decommission how many nodes?
>
>
> 2013/4/2 Henry JunYoung KIM <[EMAIL PROTECTED]>
> 15 datanodes, with a replication factor of 3.
>
> On Apr 1, 2013, at 3:23 PM, varun kumar <[EMAIL PROTECTED]> wrote:
>
> > How many nodes do you have, and what is the replication factor?
>
>
>