HDFS >> mail # user >> Rack Awareness behaviour - Loss of rack


Re: Rack Awareness behaviour - Loss of rack
Prem,

Inline.

On Wed, Feb 8, 2012 at 12:58 AM, Jain, Prem <[EMAIL PROTECTED]> wrote:
> Team,
>
> I have rack awareness configured and it seems to work fine. My default
> replication count is 2. Now I have lost one rack due to a switch failure.
> Here is what I observe:
>
> HDFS continues to write to the remaining available rack. It still keeps two
> copies of each block, but these blocks are now being stored in the same
> rack.
>
> My questions:
>
> Is this the default HDFS behavior?

Yes. This is due to the default block placement policy.
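To make the default behavior concrete, here is a toy simulation (made-up function and rack names, not the NameNode's actual replica chooser): with a replication factor of 2, the first replica goes on the writer's rack and the second on a different rack when one is reachable; when only one rack is alive, both copies necessarily land on it.

```python
# Illustrative sketch of the default block placement policy for a
# replication factor of 2 -- NOT HDFS source code. First replica on
# the writer's rack; remaining replicas on a remote rack if any is
# reachable, otherwise falling back to the writer's rack.

def place_replicas(live_racks, writer_rack, replication=2):
    """Return the rack chosen for each replica, in order."""
    placement = [writer_rack]  # replica 1: the writer's own rack
    other_racks = [r for r in live_racks if r != writer_rack]
    for _ in range(replication - 1):
        # subsequent replicas: prefer a different rack, but fall
        # back to the local rack when no other rack is alive
        placement.append(other_racks[0] if other_racks else writer_rack)
    return placement

# Both racks up: replicas span the two racks
print(place_replicas(["rackA", "rackB"], "rackA"))  # ['rackA', 'rackB']
# rackB's switch is down: both copies land on rackA
print(place_replicas(["rackA"], "rackA"))           # ['rackA', 'rackA']
```

This is the situation you observed: placement is only rack-diverse when more than one rack is reachable at write time.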

> What happens when the failed rack is back online? Will HDFS automatically
> rewrite blocks to the other rack?

No, HDFS would not do it automatically.

> Or do I have to run the balancer to make that happen?

Yes, the balancer may help. In the current stable releases you'll also
sometimes have to re-enforce the block placement policy manually, since
recovery from a policy violation is not automatic:

hadoop fs -setrep -R 3 /   # temporarily raise the replication factor
hadoop fs -setrep -R 2 /   # then drop it back to the original value

Raising the factor forces the NameNode to create the extra replicas
according to the placement policy (i.e., on the other rack); lowering it
back removes the excess copies, preferring to keep replicas spread
across racks.
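For a block written during the outage, the effect of that bump-and-drop can be sketched with a toy model (hypothetical helper names, not HDFS internals): over-replicating adds a copy on the rack that had none, and reducing back removes a copy from the crowded rack.

```python
# Toy model of the setrep bump-and-drop repair -- NOT HDFS code.

def add_replica(replica_racks, live_racks):
    """Place one extra replica, preferring a rack with no replica yet."""
    empty = [r for r in live_racks if r not in replica_racks]
    replica_racks.append(empty[0] if empty else live_racks[0])
    return replica_racks

def drop_excess(replica_racks):
    """Remove one replica from the rack holding the most copies."""
    crowded = max(set(replica_racks), key=replica_racks.count)
    replica_racks.remove(crowded)
    return replica_racks

# Block written during the outage: both replicas ended up on rackA
replicas = ["rackA", "rackA"]
replicas = add_replica(replicas, ["rackA", "rackB"])  # setrep -R 3
replicas = drop_excess(replicas)                      # setrep -R 2
print(sorted(replicas))  # ['rackA', 'rackB']
```

After the two commands, the block is back to one replica per rack, which is what the placement policy wanted in the first place.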

HTH.

--
Harsh J
Customer Ops. Engineer
Cloudera | http://tiny.cloudera.com/about