MapReduce >> mail # user >> Re: are we able to decommission multi nodes at one time?


Re: are we able to decommission multi nodes at one time?
:)

Currently I have 15 data nodes.
For some tests, I am trying to decommission down to 8 nodes.

The total DFS used size is 52 TB, which includes all replicated blocks.
Going from 15 nodes to 8 took almost 4 days. ;(

Someone mentioned that I don't need to decommission node by node.
In this case, is there any problem if I decommission 7 nodes at the same time?
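
A rough back-of-envelope sketch of why a serial decommission takes so long (the 52 TB and 15 nodes are from the message above; this assumes blocks are spread evenly across nodes, which is an approximation):

```shell
# Back-of-envelope estimate; assumes an even spread of blocks per node.
total_dfs_tb=52    # total DFS used, replicas included (from the post)
nodes=15
retiring=7

# Data each retiring node holds, and the total re-replicated if all 7 retire.
awk -v t="$total_dfs_tb" -v n="$nodes" -v r="$retiring" \
    'BEGIN { per = t / n; printf "~%.1f TB per node, ~%.1f TB in total\n", per, per * r }'
# → ~3.5 TB per node, ~24.3 TB in total
```

Moving roughly 3.5 TB of blocks per retired node, one node at a time, is consistent with a multi-day run.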
On Apr 2, 2013, at 12:14 PM, Azuryy Yu <[EMAIL PROTECTED]> wrote:

> I can translate it into native English: how many nodes do you want to decommission?
>
>
> On Tue, Apr 2, 2013 at 11:01 AM, Yanbo Liang <[EMAIL PROTECTED]> wrote:
> You want to decommission how many nodes?
>
>
> 2013/4/2 Henry JunYoung KIM <[EMAIL PROTECTED]>
> 15 for datanodes and 3 for replication factor.
>
> On Apr 1, 2013, at 3:23 PM, varun kumar <[EMAIL PROTECTED]> wrote:
>
> > How many nodes do you have, and what is the replication factor?
>
>
>
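
For reference, the multi-node decommission asked about above can be done in one pass through the HDFS excludes file. A minimal sketch, not a definitive procedure: the hostnames and the excludes path here are illustrative, and it assumes `dfs.hosts.exclude` in `hdfs-site.xml` already points at that file (as was standard in the Hadoop 1.x era of this thread):

```shell
# Illustrative excludes path; dfs.hosts.exclude must point at this file.
EXCLUDES=/tmp/dfs.exclude

# 1. List every node to retire -- all seven at once (hypothetical hostnames).
cat > "$EXCLUDES" <<'EOF'
datanode09
datanode10
datanode11
datanode12
datanode13
datanode14
datanode15
EOF

# 2. Ask the NameNode to re-read the excludes file. All listed nodes enter
#    "Decommission In Progress" together, so their blocks re-replicate in
#    parallel rather than one node at a time.
#    (Commented out here: it needs a live cluster.)
# hadoop dfsadmin -refreshNodes

grep -c '' "$EXCLUDES"   # number of nodes queued for decommission
```

The main caveat, implicit in the thread, is capacity: the 8 remaining nodes must have room for the re-replicated blocks at replication factor 3.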

Yanbo Liang 2013-04-01, 11:17
Henry JunYoung KIM 2013-04-02, 01:35
Azuryy Yu 2013-04-03, 01:53
Yanbo Liang 2013-04-03, 06:04
Azuryy Yu 2013-04-03, 08:18