HDFS user mailing list: Sane max storage size for DN


Thread:
  Mohammad Tariq  2012-12-12, 15:02
  Ted Dunning     2012-12-12, 15:44
  Mohammad Tariq  2012-12-12, 15:52
  Michael Segel   2012-12-12, 18:58
Re: Sane max storage size for DN
Hello Michael,

      It's an array. The actual size of the data could be somewhere around
9 PB (exclusive of replication), and we want to keep the number of DNs as
low as possible. Computations are not too frequent, as I mentioned earlier.
With 500 TB per DN, the number of DNs would be around 49, and with a block
size of 128 MB, the number of blocks would be 201326592. So I was thinking
of 256 GB RAM for the NN. Does this make sense to you?
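[A quick back-of-the-envelope check of the figures above; a minimal sketch
assuming 3x replication and the common rule of thumb of roughly 1 GB of
NameNode heap per million blocks, neither of which is stated in this thread:]

    # Rough sanity check of the sizing above. Assumes 3x replication and
    # ~1 GB of NameNode heap per million blocks (rule-of-thumb values,
    # not figures from this thread).
    TB, PB, MB = 1024**4, 1024**5, 1024**2

    raw_data   = 9 * PB        # data size, exclusive of replication
    replicas   = 3             # assumed HDFS replication factor
    per_dn     = 500 * TB      # proposed storage per DataNode
    block_size = 128 * MB

    datanodes = raw_data * replicas / per_dn  # raw capacity / node size
    blocks    = raw_data / block_size         # unique blocks in the namespace
    nn_heap   = blocks / 1e6                  # ~1 GB heap per million blocks

    print(f"DataNodes: ~{datanodes:.0f}")     # ~55
    print(f"Blocks:    ~{blocks:,.0f}")       # ~75,497,472
    print(f"NN heap:   ~{nn_heap:.0f} GB (rule-of-thumb floor)")

[By that estimate the block map wants on the order of 75 GB of heap, so a
256 GB NN leaves comfortable headroom; note that once 3x replication is
included, the DN count comes out closer to 55 than 49.]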

Many thanks.

Regards,
    Mohammad Tariq

On Thu, Dec 13, 2012 at 12:28 AM, Michael Segel
<[EMAIL PROTECTED]> wrote:

> 500 TB?
>
> How many nodes in the cluster? Is this attached storage or is it in an
> array?
>
> I mean if you have 4 nodes for a total of 2PB, what happens when you lose
> 1 node?
>
>
> On Dec 12, 2012, at 9:02 AM, Mohammad Tariq <[EMAIL PROTECTED]> wrote:
>
> Hello list,
>
>           I don't know if this question makes any sense, but I would like
> to ask: does it make sense to store 500 TB (or more) of data in a single
> DN? If yes, what should the spec of the other parameters be, *viz*. NN &
> DN RAM, network, etc.? If no, what could be the alternative?
>
> Many thanks.
>
> Regards,
>     Mohammad Tariq
>