

Re: Can we declare some HDFS nodes "primary"
Rack awareness, combined with a replication factor of 3 on your files, will help.

You could declare two racks: one carrying these 10 nodes, and the default rack
for the rest of them. The rack-aware default block placement policy spreads
each block's replicas across both racks, so at least one replica of every
block will always sit on the 10-node rack.
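The two-rack setup can be sketched with a topology script, wired in via the `net.topology.script.file.name` property in core-site.xml. A minimal sketch follows; the IPs and rack names are placeholders, and you would list all 10 on-demand nodes:

```shell
#!/bin/bash
# Hypothetical HDFS topology script: maps the stable (on-demand) nodes to
# their own rack, and every other node (the spot instances) to a second rack.
# Replace these placeholder IPs with your 10 on-demand instances.
STABLE_NODES="10.0.1.11 10.0.1.12 10.0.1.13"

rack_for() {
  # Emit the rack path for a single host IP/name.
  if echo "$STABLE_NODES" | grep -qwF "$1"; then
    echo "/rack-stable"
  else
    echo "/rack-spot"
  fi
}

# Hadoop invokes the script with one or more host addresses as arguments
# and expects one rack path per argument on stdout.
for host in "$@"; do
  rack_for "$host"
done
```

With this mapping in place, the default placement policy puts replicas on both `/rack-stable` and `/rack-spot` for every block, which is what keeps a copy alive when the spot instances are terminated together.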
On Dec 11, 2012 5:10 PM, "David Parks" <[EMAIL PROTECTED]> wrote:

> Assume for a moment that you have a large cluster of 500 AWS *spot
> instance* servers running. And you want to keep the bid price low, so at
> some point it’s likely that the whole cluster will get axed until the spot
> price comes down some.
>
> In order to maintain HDFS continuity I’d want, say, 10 servers running as
> normal instances, and I’d want to ensure that HDFS is replicating 100% of
> data to those 10 that don’t run the risk of group elimination.
>
> Is it possible for HDFS to ensure replication to these “primary” nodes?