Re: Hadoop hardware failure recovery
Hadoop's file system (HDFS) was largely modeled on the concepts of Google's
original file system (GFS).

The original paper is probably the best way to learn about that.

http://research.google.com/archive/gfs.html

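The short version: HDFS stores every block as multiple replicas on different
DataNodes (three by default), so when a disk or a whole node dies, the
NameNode notices the missing replicas and re-creates them on healthy nodes
from the surviving copies. As a minimal sketch for a node with three internal
drives (the /diskN paths are placeholders; on older 1.x releases the
data-directory key is dfs.data.dir instead of dfs.datanode.data.dir), the
relevant hdfs-site.xml entries look roughly like this:

  <!-- hdfs-site.xml: one data directory per physical drive,
       each block replicated to 3 different nodes -->
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/disk1/hdfs/data,/disk2/hdfs/data,/disk3/hdfs/data</value>
  </property>

By default a DataNode shuts down if any of its data directories fails; where
available, dfs.datanode.failed.volumes.tolerated lets it keep running with a
dead drive until you replace it.
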
On Fri, Aug 10, 2012 at 11:38 AM, Aji Janis <[EMAIL PROTECTED]> wrote:

> I am very new to Hadoop. I am considering setting up a Hadoop cluster
> consisting of 5 nodes, where each node has 3 internal hard drives. I
> understand HDFS has a configurable redundancy feature, but what happens if
> an entire drive crashes (physically) for whatever reason? How does Hadoop
> recover, if it can, from this situation? What else should I know before
> setting up my cluster this way? Thanks in advance.
>
>
>