Rack awareness will help, but it's a "best effort" rather than
guaranteed replication. Over time the cluster will converge to having
at least one replica on each rack, but even just normal block churn
can result in significant time periods where rack replication policy
is violated. The issue becomes worse if you lose one of those 10
servers and re-replication kicks in -- re-replication can take hours.
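To make rack awareness apply here at all, you'd point
net.topology.script.file.name at a script that maps the 10 stable
nodes to their own rack. A minimal sketch (the hostnames
stable-01..stable-10 are hypothetical; you'd match your own hosts or
IPs):

```shell
# Hypothetical Hadoop topology script (net.topology.script.file.name).
# Hadoop invokes it with one or more hostnames/IPs as arguments and
# expects one rack path per line on stdout.

rack_of() {
  case "$1" in
    # The 10 on-demand "stable" servers get their own rack.
    stable-0[1-9]|stable-10) echo /stable-rack ;;
    # Everything else (the spot instances) falls into the default rack.
    *)                       echo /default-rack ;;
  esac
}

for host in "$@"; do
  rack_of "$host"
done
```

With that in place, the default block placement policy will try (best
effort, as above) to keep a replica on /stable-rack.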
Depending on your use case, you could:
1. run the 10 servers with dfs.data.dir on one (or several) EBS volume(s).
2. replicate your data to S3. (There's no built-in plumbing in HDFS to do this.)
3. run as two separate clusters (10 nodes in one, 500 in another) and
distcp between them.
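For option 3, the copy between the two clusters would be a periodic
distcp run along these lines (the NameNode addresses and paths are
hypothetical placeholders):

```shell
# Sketch: copy /data from the large spot-instance cluster to the small
# stable cluster, skipping files that are already up to date.
hadoop distcp -update \
  hdfs://spot-nn:8020/data \
  hdfs://stable-nn:8020/data
```

You'd run this from cron or a workflow scheduler on the stable
cluster, accepting that anything written between runs is at risk.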
As you can see from those suggestions, HDFS really isn't designed with
this scenario in mind...
On Tue, Dec 11, 2012 at 5:33 AM, Harsh J <[EMAIL PROTECTED]> wrote:
> Rack awareness with replication factor of 3 on files will help.
> You could declare two racks, one carrying these 10 nodes, and default rack
> for the rest of them, and the rack-aware default block placement policy will
> take care of the rest.
> On Dec 11, 2012 5:10 PM, "David Parks" <[EMAIL PROTECTED]> wrote:
>> Assume for a moment that you have a large cluster of 500 AWS spot instance
>> servers running. And you want to keep the bid price low, so at some point
>> it’s likely that the whole cluster will get axed until the spot price comes
>> down some.
>> In order to maintain HDFS continuity I’d want say 10 servers running as
>> normal instances, and I’d want to ensure that HDFS is replicating 100% of
>> data to those 10 that don’t run the risk of group elimination.
>> Is it possible for HDFS to ensure replication to these “primary” nodes?