Used Heap in Namenode & dfs.replication
Hi Hadoopers,

I am looking at the DFS cluster summary.

"14708427 files and directories, 16357951 blocks = 31066378 total"

From White's book (2nd edition), page 42: "As a rule of thumb, each file, directory, and block takes about 150 bytes."

So, 31066378 * 150 bytes => ~4.34 GB

The rest of the line reads: Heap Size is 12.17 GB / 34.72 GB

12.17 GB is roughly 3 times 4.34 GB. Is that because of a replication
factor of 3?
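
For reference, here is the same arithmetic as a quick Python sketch (the counts come straight from the summary line above; the 150-byte figure is only White's rule of thumb, not a measurement):

    # Back-of-envelope NameNode heap estimate:
    # rule of thumb is ~150 bytes of heap per file, directory, and block.
    files_and_dirs = 14708427
    blocks = 16357951
    objects = files_and_dirs + blocks              # 31066378, matches the summary
    est_gib = objects * 150 / 2.0 ** 30
    print("estimated heap: %.2f GiB" % est_gib)    # ~4.34
    print("used / estimate: %.2f" % (12.17 / est_gib))  # ~2.80, close to 3x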

Thanks
Patai

I am on 0.20.2
Replies:
- Harsh J (2012-10-12, 16:03)
- Patai Sangbutsarakum (2012-10-12, 16:26)