Hadoop >> mail # user >> namenode memory test


自己 2013-04-24, 02:26
Re: namenode memory test
Can you manually go into the directory configured as hadoop.tmp.dir in
core-site.xml and run ls -l to see the disk usage details? It will contain
fsimage, edits, fstime, and VERSION.
Or use basic commands such as:
hadoop fs -du
hadoop fsck
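To turn the counts those commands report into an estimate of NameNode heap usage, a commonly cited rule of thumb is roughly 150 bytes of heap per namespace object (file, directory, or block). The figure and the example counts below are assumptions for illustration, not numbers from this thread; actual usage varies by Hadoop version and filename lengths. A minimal sketch:

```python
# Rough estimate of NameNode heap consumed by HDFS metadata.
# Assumes ~150 bytes per namespace object (file, directory, or block),
# a widely quoted rule of thumb -- not an exact figure.

BYTES_PER_OBJECT = 150  # assumed rule-of-thumb value

def estimate_namenode_heap(num_files, num_dirs, num_blocks):
    """Return estimated NameNode heap usage in bytes."""
    return (num_files + num_dirs + num_blocks) * BYTES_PER_OBJECT

# Example with hypothetical counts, as reported by `hadoop fsck /`:
heap = estimate_namenode_heap(num_files=1_000_000,
                              num_dirs=50_000,
                              num_blocks=1_200_000)
print(f"~{heap / 1024**2:.0f} MiB")  # ~322 MiB
```

Plug in the file, directory, and block counts from `hadoop fsck /` to get a ballpark for your own cluster.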

On Wed, Apr 24, 2013 at 7:56 AM, 自己 <[EMAIL PROTECTED]> wrote:

> Hi, I would like to know how much memory our data takes on the name-node
> per block, file, and directory.
> For example, the metadata size of a file.
> When I store some files in HDFS, how can I find how much memory is taken
> on the name-node?
> Are there any tools or commands to measure the memory taken on the
> name-node?
>
> I'm looking forward to your reply! Thanks!