HDFS user mailing list: HDFS drive, partition best practice

John Buchanan        2011-02-07, 20:25
Jonathan Disher      2011-02-07, 22:06
Scott Golby          2011-02-07, 22:40
John Buchanan        2011-02-08, 15:20
Allen Wittenauer     2011-02-08, 17:25
Adam Phelps          2011-02-08, 19:33
Allen Wittenauer     2011-02-08, 20:09
Patrick Angeles      2011-02-08, 20:17
Patrick Angeles      2011-02-08, 20:22
Allen Wittenauer     2011-02-08, 20:43
Re: HDFS drive, partition best practice
Interesting conversation. What is your default filesystem? Are you using ext3?
On Tue, Feb 8, 2011 at 3:22 PM, Patrick Angeles <[EMAIL PROTECTED]> wrote:
> OT:
> Allen, did you turn down a job offer from Google or something? GMail sends
> everything from you straight to the spam folder.
>
> On Tue, Feb 8, 2011 at 12:17 PM, Patrick Angeles <[EMAIL PROTECTED]>
> wrote:
>>
>>
>> On Tue, Feb 8, 2011 at 12:09 PM, Allen Wittenauer
>> <[EMAIL PROTECTED]> wrote:
>>>
>>> On Feb 8, 2011, at 11:33 AM, Adam Phelps wrote:
>>>
>>> > On 2/7/11 2:06 PM, Jonathan Disher wrote:
>>> >> Currently I have a 48 node cluster using Dell R710's with 12 disks -
>>> >> two 250GB SATA drives in RAID1 for OS, and ten 1TB SATA disks as a
>>> >> JBOD (mounted on /data/0 through /data/9) and listed separately in
>>> >> hdfs-site.xml. It works... mostly. The big issue you will encounter
>>> >> is losing a disk - the DataNode process will crash, and if you
>>> >> comment out the affected drive, when you replace it you will have 9
>>> >> disks full to N% and one empty disk.
>>> >
>>> > If the DataNode is going down after a single disk failure, then you
>>> > probably haven't set dfs.datanode.failed.volumes.tolerated in
>>> > hdfs-site.xml. You can raise that number to allow the DataNode to
>>> > tolerate dead drives.
>>>
>>> a) only if you have a version that supports it
>>>
>>> b) that only protects you on the DN side.  The TT is, AFAIK, still
>>> susceptible to drive failures.
>>
>> c) And it only works when the drive fails on read (HDFS-457), not on write
>> (HDFS-1273).
>>
>
>
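A minimal sketch of the hdfs-site.xml setup described above, assuming a 0.20-era release: the ten JBOD data directories listed separately (following Jonathan's /data/0 through /data/9 layout), plus the failed-volumes setting Adam mentions. On newer releases the data-directory key is dfs.datanode.data.dir, and dfs.datanode.failed.volumes.tolerated is only honored on versions that include HDFS-457; the tolerated count of 2 here is illustrative, not from the thread.

  <configuration>
    <!-- Ten 1TB JBOD disks, each mounted and listed separately
         (key is dfs.datanode.data.dir on newer releases) -->
    <property>
      <name>dfs.data.dir</name>
      <value>/data/0,/data/1,/data/2,/data/3,/data/4,/data/5,/data/6,/data/7,/data/8,/data/9</value>
    </property>
    <!-- Keep the DataNode running with up to two dead data volumes;
         requires a version with HDFS-457 (value is illustrative) -->
    <property>
      <name>dfs.datanode.failed.volumes.tolerated</name>
      <value>2</value>
    </property>
  </configuration>

Per Allen's caveats, this only protects the DataNode (the TaskTracker has no equivalent setting) and, at the time of this thread, only covered read failures (HDFS-457), not write failures (HDFS-1273).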
Patrick Angeles      2011-02-08, 19:53
Bharath Mundlapudi   2011-02-08, 19:10