Re: Is there any method to estimate how much file metadata I can store in the name node?
Thanks, Patrick.

On Thu, Jun 10, 2010 at 12:47 AM, Patrick Angeles <[EMAIL PROTECTED]> wrote:
> Hey Jeff,
> A rough (but pretty good) estimate is 1GB of NameNode heap per 1M blocks. So if
> your files average 1.2GB each and your dfs.block.size is 128MB, each file takes
> roughly 10 blocks (1.2GB / 128MB ≈ 9.6), and you can store about 100k such
> files per 1GB of RAM (1M blocks / 10 blocks per file).
> Out of 8GB, I'd assume 7GB is usable for HDFS metadata, with the remaining 1GB
> for operational overhead. That is probably generous, but it's better to
> over-estimate when you're doing capacity planning.
> Cheers,
> - Patrick
> On Wed, Jun 9, 2010 at 5:57 AM, Jeff Zhang <[EMAIL PROTECTED]> wrote:
>>
>> Hi all,
>>
>> I'd like to estimate how many files I can store in my name node
>> with 8GB of memory. Is there an estimation method?
>>
>>
>>
>> --
>> Best Regards
>>
>> Jeff Zhang
>
>

--
Best Regards

Jeff Zhang
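
Patrick's rule of thumb is easy to turn into a quick calculation. Below is a
minimal Python sketch, not anything from Hadoop itself; the function name and
parameters are illustrative. It assumes the ~1GB of heap per 1M blocks
estimate from the reply above and plugs in the same example figures (8GB heap,
1GB reserved, 1.2GB files, 128MB blocks).

import math

GB = 1024 ** 3
MB = 1024 ** 2

# Rule of thumb from this thread: ~1 million blocks per GB of NameNode heap.
BLOCKS_PER_GB_HEAP = 1_000_000

def estimate_max_files(heap_bytes, reserved_bytes, avg_file_bytes, block_bytes):
    """Estimate how many files of a given average size fit in NameNode memory."""
    usable_heap_gb = (heap_bytes - reserved_bytes) / GB
    max_blocks = usable_heap_gb * BLOCKS_PER_GB_HEAP
    blocks_per_file = math.ceil(avg_file_bytes / block_bytes)  # 1.2GB/128MB -> 10
    return int(max_blocks // blocks_per_file)

if __name__ == "__main__":
    files = estimate_max_files(
        heap_bytes=8 * GB,       # total NameNode heap
        reserved_bytes=1 * GB,   # operational overhead, as Patrick suggests
        avg_file_bytes=int(1.2 * GB),
        block_bytes=128 * MB,
    )
    print(f"Estimated capacity: ~{files:,} files")

With the thread's numbers this prints roughly 700,000 files: 7GB of usable
heap covers ~7M blocks, and at 10 blocks per file that is ~700k files, i.e.
~100k per GB of RAM, matching Patrick's figure.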