Re: Metadata size for 1 TB HDFS data?
Thank you so much for the valuable response, Stephen. I have a few
questions, though. Could you please elaborate a bit, if possible?

Each of the specified objects is quite different from the others. A file
will be smaller than a directory in size, and a directory might be smaller
than a block. They might have totally different attributes as well. Yet
the space required by each object is the same as for the others. How is
that possible? Is there a formula or rule of thumb to calculate this?
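
To put the arithmetic I have in mind in concrete form, here is a rough
back-of-the-envelope sketch in Python (only an illustration from my side;
the 200 bytes per object figure and the single-file, single-directory
layout are assumptions, not something I have found documented):

    import math

    # Assumption: every file, directory, and block costs roughly the same
    # amount of NameNode heap (the ~200 bytes/object figure used in this
    # thread); the real footprint varies with filename length, replication
    # factor, and Hadoop version.
    def estimate_namenode_heap_bytes(total_data_mb, block_size_mb=64,
                                     num_files=1, num_dirs=1,
                                     bytes_per_object=200):
        blocks = math.ceil(total_data_mb / block_size_mb)  # HDFS blocks needed
        objects = num_files + num_dirs + blocks            # objects in NN heap
        return objects * bytes_per_object

    # 1 TB stored as a single file in one directory, 64 MB block size:
    one_tb_mb = 1 * 1024 * 1024
    print(estimate_namenode_heap_bytes(one_tb_mb))
    # 16384 blocks + 1 file + 1 dir = 16386 objects -> about 3.3 MB of heap

If that is roughly right, the metadata for 1 TB stored this way would come
out to only a few MB.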

Many thanks.

Best Regards,
Tariq
+91-9741563634
https://mtariq.jux.com/
On Thu, Dec 20, 2012 at 8:10 PM, Stephen Fritz <[EMAIL PROTECTED]> wrote:

> Each block, file, and directory is an object in the namenode's heap, so it
> depends on how you're storing your data. You may need to account for those
> in your calculations.
>
>
>
> On Thu, Dec 20, 2012 at 7:01 AM, Mohammad Tariq <[EMAIL PROTECTED]> wrote:
>
>> Hello group,
>>
>>         What could be the approx. size of the metadata if I have 1 TB of
>> data in my HDFS? I am not doing anything additional, just a simple put.
>> Will it be ((1*1024*1024)/64)*200 bytes?
>> *Keeping 64 MB as the block size.
>>
>> Is my understanding right? Please correct me if I'm wrong.
>>
>> Many thanks.
>>
>> Best Regards,
>> Tariq
>> +91-9741563634
>> https://mtariq.jux.com/
>>
>
>