HDFS user mailing list: HDFS drive, partition best practice


John Buchanan 2011-02-07, 20:25
Jonathan Disher 2011-02-07, 22:06
Scott Golby 2011-02-07, 22:40
John Buchanan 2011-02-08, 15:20
Allen Wittenauer 2011-02-08, 17:25
Adam Phelps 2011-02-08, 19:33

Re: HDFS drive, partition best practice

On Feb 8, 2011, at 11:33 AM, Adam Phelps wrote:

> On 2/7/11 2:06 PM, Jonathan Disher wrote:
>> Currently I have a 48-node cluster using Dell R710s with 12 disks - two
>> 250GB SATA drives in RAID1 for the OS, and ten 1TB SATA disks as a JBOD
>> (mounted on /data/0 through /data/9) and listed separately in
>> hdfs-site.xml. It works... mostly. The big issue you will encounter is
>> losing a disk - the DataNode process will crash, and if you comment out
>> the affected drive and later replace it, you will have nine disks full to
>> N% and one empty disk.
>
> If the DataNode is going down after a single disk failure, then you probably haven't set dfs.datanode.failed.volumes.tolerated in hdfs-site.xml.  You can raise that number to allow the DataNode to tolerate dead drives.
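
For concreteness, a minimal hdfs-site.xml sketch of that combination (the ten mount points follow Jonathan's layout; the tolerance of 2 is illustrative, not a recommendation):

  <configuration>
    <!-- One directory per physical disk, comma-separated. -->
    <property>
      <name>dfs.data.dir</name>
      <value>/data/0,/data/1,/data/2,/data/3,/data/4,/data/5,/data/6,/data/7,/data/8,/data/9</value>
    </property>
    <!-- How many of those volumes may die before the DataNode shuts itself
         down. The default is 0, i.e. any single disk failure kills the
         process. Only honored by releases that ship this property. -->
    <property>
      <name>dfs.datanode.failed.volumes.tolerated</name>
      <value>2</value>
    </property>
  </configuration>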

a) only if you are running a version that supports it

b) that only protects you on the DataNode (DN) side.  The TaskTracker (TT) is, AFAIK, still susceptible to drive failures.
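
To make point b) concrete: the TaskTracker keeps its own per-disk directory list in mapred-site.xml, and in the versions under discussion there is no failed-volumes setting on that side. A sketch, again with illustrative paths:

  <configuration>
    <!-- TaskTracker scratch space, typically spread over the same JBOD
         disks as the DataNode directories. If one of these disks dies,
         tasks writing to it will fail; there is no equivalent of
         dfs.datanode.failed.volumes.tolerated here. -->
    <property>
      <name>mapred.local.dir</name>
      <value>/data/0/mapred,/data/1/mapred,/data/2/mapred,/data/3/mapred,/data/4/mapred,/data/5/mapred,/data/6/mapred,/data/7/mapred,/data/8/mapred,/data/9/mapred</value>
    </property>
  </configuration>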

Patrick Angeles 2011-02-08, 20:17
Patrick Angeles 2011-02-08, 20:22
Allen Wittenauer 2011-02-08, 20:43
Mag Gam 2011-02-22, 12:34
Patrick Angeles 2011-02-08, 19:53
Bharath Mundlapudi 2011-02-08, 19:10