Re: Hadoop hardware failure recovery
Hello Aji,

   Hadoop's redundancy feature replicates data blocks across the
entire cluster. So even if an entire disk is gone, or even the entire
machine for that matter, your data is still available on other node(s).
One thing to keep in mind, though, is that the 'master' node is a
single point of failure in a Hadoop cluster: if the machine running
the master process(es) goes down, you are stuck. For more detail, see
the redundancy feature section on the Hadoop home page.
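
Roughly, something like the following in hdfs-site.xml covers this
(property names vary by Hadoop version: dfs.data.dir on 1.x vs
dfs.datanode.data.dir on 2.x, and the mount paths below are just
placeholders):

  <!-- hdfs-site.xml: illustrative values only -->
  <configuration>
    <!-- Keep 3 copies of every block; with 5 nodes, losing one disk
         or even one whole DataNode still leaves at least 2 copies. -->
    <property>
      <name>dfs.replication</name>
      <value>3</value>
    </property>
    <!-- Spread DataNode block storage over the 3 internal drives
         (dfs.data.dir on Hadoop 1.x). -->
    <property>
      <name>dfs.datanode.data.dir</name>
      <value>/disk1/hdfs/data,/disk2/hdfs/data,/disk3/hdfs/data</value>
    </property>
    <!-- Let a DataNode keep running if one of its volumes fails,
         instead of shutting the whole node down (default is 0). -->
    <property>
      <name>dfs.datanode.failed.volumes.tolerated</name>
      <value>1</value>
    </property>
  </configuration>

With settings like these, a failed drive just leaves some blocks
under-replicated for a while; the NameNode notices the missing
replicas and re-replicates them from the surviving copies onto other
disks/nodes.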

Regards,
    Mohammad Tariq
On Sat, Aug 11, 2012 at 12:08 AM, Aji Janis <[EMAIL PROTECTED]> wrote:
> I am very new to Hadoop. I am considering setting up a Hadoop cluster
> consisting of 5 nodes where each node has 3 internal hard drives. I
> understand HDFS has a configurable redundancy feature, but what happens if an
> entire drive crashes (physically) for whatever reason? How does Hadoop
> recover, if it can, from this situation? What else should I know before
> setting up my cluster this way? Thanks in advance.