Hadoop user mailing list: Re: are we able to decommission multiple nodes at one time?


Harsh J 2013-04-02, 07:54
Re: are we able to decommission multiple nodes at one time?
The rest of the nodes that stay alive have enough capacity to absorb the re-replicated blocks.

For this point that you've mentioned:
> it's easier to do so in a rolling manner without the need for a decommission.

To check my understanding: just shut down 2 of them, then 2 more, and then 2 more, without any decommissioning, along the lines of the sketch below?

Is this correct?
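
To make the idea concrete, here is a minimal Python sketch of that rolling procedure, assuming passwordless SSH to each datanode and a stock hadoop-daemon.sh on the remote side; the hostnames, batch size, soak time, and fsck parsing are illustrative assumptions, not tested against any particular Hadoop release:

    #!/usr/bin/env python
    # Rolling datanode restart, two at a time (sketch; hosts are hypothetical).
    import subprocess
    import time

    DATANODES = ["dn01", "dn02", "dn03", "dn04", "dn05", "dn06"]
    BATCH = 2            # stay below the replication factor
    SOAK_SECONDS = 600   # how long each batch stays down

    def run_remote(host, action):
        # Assumes passwordless SSH and hadoop-daemon.sh on the remote PATH.
        subprocess.check_call(
            ["ssh", host, "hadoop-daemon.sh", action, "datanode"])

    def under_replicated_blocks():
        # Parses the fsck summary; the exact label can vary by version.
        out = subprocess.check_output(["hdfs", "fsck", "/"]).decode()
        for line in out.splitlines():
            if "Under-replicated blocks" in line:
                return int(line.split(":")[1].split("(")[0].strip())
        return 0

    for i in range(0, len(DATANODES), BATCH):
        batch = DATANODES[i:i + BATCH]
        for host in batch:
            run_remote(host, "stop")
        time.sleep(SOAK_SECONDS)     # do the maintenance work here
        for host in batch:
            run_remote(host, "start")
        while under_replicated_blocks() > 0:
            time.sleep(30)           # let re-replication settle before the next batch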
On 2013-04-02, at 4:54 PM, Harsh J <[EMAIL PROTECTED]> wrote:

> Note though that decommissioning 7 nodes at the same time can only
> finish if the remaining 8 nodes have adequate free space for the
> excess replicas (a sketch of the exclude-file mechanics follows at
> the end of this thread).
>
> If you're just going to take them down for a short while (a few
> minutes each), it's easier to do so in a rolling manner without the
> need for a decommission. You can take up to two down at a time with a
> replication factor of 3 or higher, and put them back in later without
> too much data-movement impact.
>
> On Tue, Apr 2, 2013 at 1:06 PM, Yanbo Liang <[EMAIL PROTECTED]> wrote:
>> It's reasonable to decommission 7 nodes at the same time,
>> but it may still take a long time to finish, because all the
>> replicas on these 7 nodes need to be copied to the remaining 8
>> nodes. The total volume transferred is the same whether you remove
>> the nodes one by one or all at once (see the back-of-the-envelope
>> numbers at the end of this thread).
>>
>>
>> 2013/4/2 Henry Junyoung Kim <[EMAIL PROTECTED]>
>>>
>>> :)
>>>
>>> Currently, I have 15 datanodes.
>>> For some tests, I am trying to decommission down to 8 nodes.
>>>
>>> Right now, the total DFS used size is 52 TB, which includes all
>>> replicated blocks.
>>> Going from 15 to 8 took almost 4 days. ;(
>>>
>>> Someone mentioned that I don't need to decommission node by node.
>>> In that case, are there any problems if I decommission 7 nodes at
>>> the same time?
>>>
>>>
>>> On 2013-04-02, at 12:14 PM, Azuryy Yu <[EMAIL PROTECTED]> wrote:
>>>
>>> I can translate it into native English: how many nodes do you want
>>> to decommission?
>>>
>>>
>>> On Tue, Apr 2, 2013 at 11:01 AM, Yanbo Liang <[EMAIL PROTECTED]> wrote:
>>>>
>>>> You want to decommission how many nodes?
>>>>
>>>>
>>>> 2013/4/2 Henry JunYoung KIM <[EMAIL PROTECTED]>
>>>>>
>>>>> 15 datanodes, with a replication factor of 3.
>>>>>
>>>>> On 2013-04-01, at 3:23 PM, varun kumar <[EMAIL PROTECTED]> wrote:
>>>>>
>>>>>> How many nodes do you have, and what is the replication factor?
>>>>>
>>>>
>>>
>>>
>>
>
>
>
> --
> Harsh J
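
For completeness, the decommission route Harsh describes is driven by the NameNode's exclude file. Below is a minimal Python sketch, assuming hdfs-site.xml already points dfs.hosts.exclude at the path used here (the path and hostnames are hypothetical); listing all seven hosts before a single refreshNodes is what lets the NameNode drain them together:

    #!/usr/bin/env python
    # Decommission several datanodes in one pass (sketch; path/hosts hypothetical).
    import subprocess

    EXCLUDE_FILE = "/etc/hadoop/conf/dfs.exclude"   # must match dfs.hosts.exclude
    TO_DECOMMISSION = ["dn09", "dn10", "dn11", "dn12", "dn13", "dn14", "dn15"]

    # Add every node to the exclude file, then tell the NameNode to re-read
    # it; it will begin re-replicating all of their blocks at the same time.
    with open(EXCLUDE_FILE, "a") as f:
        for host in TO_DECOMMISSION:
            f.write(host + "\n")

    subprocess.check_call(["hdfs", "dfsadmin", "-refreshNodes"])

    # Nodes show "Decommission in progress" in the report until all their
    # blocks are safe elsewhere, then "Decommissioned".
    print(subprocess.check_output(["hdfs", "dfsadmin", "-report"]).decode())

Once every node in the report reads "Decommissioned", they can be shut down without losing any replicas.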
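
And to put rough numbers on Yanbo's equal-transfer point and Harsh's free-space caveat (assuming blocks are evenly spread across the nodes, which is only approximately true in practice):

    # Back-of-the-envelope sizing for the 15 -> 8 node case in this thread.
    total_used_tb = 52.0                  # DFS used, replicas included
    nodes_before, nodes_after = 15, 8

    per_node_tb = total_used_tb / nodes_before                # ~3.5 TB per node now
    to_move_tb = per_node_tb * (nodes_before - nodes_after)   # ~24.3 TB to re-replicate
    after_per_node_tb = total_used_tb / nodes_after           # ~6.5 TB per node afterwards

    print("re-replicated: %.1f TB, per-node after: %.1f TB"
          % (to_move_tb, after_per_node_tb))

So roughly 24 TB has to move no matter how the 7 nodes leave, and each of the 8 survivors needs about 3 TB of additional free DFS space for the decommission to finish.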
Other replies in this thread:
Harsh J 2013-04-02, 08:37
Henry Junyoung Kim 2013-04-02, 09:07
Henry Junyoung Kim 2013-04-03, 01:43
Azuryy Yu 2013-04-03, 01:51