Re: Hadoop hardware failure recovery
Yep, hadoop-2 is alpha but is progressing nicely...

However, if you have access to some 'enterprise HA' utilities (VMware or Linux HA), you can get *very decent* production-grade high availability in hadoop-1.x too (both the NameNode for HDFS and the JobTracker for MapReduce).
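
In practice the Linux HA route means putting the NameNode (and, analogously, the JobTracker) under a cluster resource manager such as Pacemaker. A rough, untested sketch of the crm configuration; the floating IP 10.0.0.50 and the init-script name hadoop-namenode are placeholders, not anything shipped with Hadoop:

  # Floating IP that clients use to reach the active NameNode
  primitive nn-vip ocf:heartbeat:IPaddr2 \
      params ip=10.0.0.50 cidr_netmask=24 \
      op monitor interval=10s
  # The NameNode daemon itself, managed via an LSB init script (hypothetical name)
  primitive nn-daemon lsb:hadoop-namenode \
      op monitor interval=30s
  # Fail the virtual IP and the daemon over together, as one unit
  group namenode-ha nn-vip nn-daemon

The other half of the setup, reliable shared storage for the NameNode metadata (an NFS filer, DRBD, or VMware's shared-disk equivalent), isn't shown here and has to be handled separately.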

Arun

On Aug 10, 2012, at 12:12 PM, anil gupta wrote:

> Hi Aji,
>
> Adding onto what Mohammad Tariq said: if you use Hadoop 2.0.0-alpha, then the Namenode is not a single point of failure. However, Hadoop 2.0.0 is not of production quality yet (it's still in alpha).
> The Namenode used to be a single point of failure in releases prior to Hadoop 2.0.0.
>
> HTH,
> Anil Gupta
>
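
For reference, the Namenode HA that Anil mentions is driven by a handful of hdfs-site.xml properties in Hadoop 2. A minimal sketch; the nameservice name "mycluster", the hostnames, and the NFS path are placeholders to adapt to your cluster:

  <property>
    <name>dfs.nameservices</name>
    <value>mycluster</value>
  </property>
  <property>
    <name>dfs.ha.namenodes.mycluster</name>
    <value>nn1,nn2</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.mycluster.nn1</name>
    <value>namenode1.example.com:8020</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.mycluster.nn2</name>
    <value>namenode2.example.com:8020</value>
  </property>
  <property>
    <!-- the 2.0.0 alpha shares the edit log over NFS; later releases add quorum journals -->
    <name>dfs.namenode.shared.edits.dir</name>
    <value>file:///mnt/filer/hdfs-ha-edits</value>
  </property>
  <property>
    <name>dfs.client.failover.proxy.provider.mycluster</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>

Note that in the 2.0.0 alpha, failover between nn1 and nn2 is triggered manually with the hdfs haadmin tool.
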
> On Fri, Aug 10, 2012 at 11:55 AM, Ted Dunning <[EMAIL PROTECTED]> wrote:
> Hadoop's file system was (mostly) copied from the concepts of Google's old file system.
>
> The original paper is probably the best way to learn about that.
>
> http://research.google.com/archive/gfs.html
>
>
>
> On Fri, Aug 10, 2012 at 11:38 AM, Aji Janis <[EMAIL PROTECTED]> wrote:
> I am very new to Hadoop. I am considering setting up a Hadoop cluster consisting of 5 nodes where each node has 3 internal hard drives. I understand HDFS has a configurable redundancy feature but what happens if an entire drive crashes (physically) for whatever reason? How does Hadoop recover, if it can, from this situation? What else should I know before setting up my cluster this way? Thanks in advance.
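
On the drive-failure part of the question: HDFS treats each configured data directory as a separate volume, and every block is replicated on dfs.replication different nodes, so when a drive (or a whole node) dies, the NameNode re-replicates the missing blocks from the surviving copies elsewhere in the cluster. A minimal hdfs-site.xml sketch for a 3-drive DataNode, using the Hadoop 1.x property names (paths and values are placeholders, and property availability varies a bit by release):

  <property>
    <!-- one data directory per physical drive -->
    <name>dfs.data.dir</name>
    <value>/disk1/hdfs/data,/disk2/hdfs/data,/disk3/hdfs/data</value>
  </property>
  <property>
    <!-- the "configurable redundancy": copies of each block on 3 different nodes -->
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <property>
    <!-- allow the DataNode to keep running after losing one volume -->
    <name>dfs.datanode.failed.volumes.tolerated</name>
    <value>1</value>
  </property>

With the default of 0 for failed.volumes.tolerated, a single dead disk takes the whole DataNode offline until the drive is replaced; the data itself stays safe as long as the replication factor is above 1.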
>
> --
> Thanks & Regards,
> Anil Gupta

--
Arun C. Murthy
Hortonworks Inc.
http://hortonworks.com/