MapReduce, mail # user - Used Heap in Namenode & dfs.replication


Re: Used Heap in Namenode & dfs.replication
Patai Sangbutsarakum 2012-10-12, 16:26
Thanks Harsh.

This is from the web UI:
14591213 files and directories, 16191821 blocks = 30783034 total. Heap Size is 9.3 GB / 34.72 GB (26%)

This is from jmx:
"name": "java.lang:type=Memory",
"modelerType": "sun.management.MemoryImpl",
"Verbose": false,
"HeapMemoryUsage": {

    "committed": 24427036672,
    "init": 791179584,
    "max": 37282709504,
    "used": 21456071792

},

I hope I'm looking at the right spot on the jmx page.
I set -Xmx to 40 GB, but jmx reports the max as roughly 37 GB.
Thanks
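The HeapMemoryUsage values in the jmx output above are raw bytes. A quick sketch of converting them to GB (the numbers are the ones pasted above; the /jmx query string in the comment is the standard Hadoop metrics servlet and only illustrative):

```python
# Convert the HeapMemoryUsage values pasted above (raw bytes) to GB.
# These metrics come from the NameNode's /jmx servlet, e.g.
# http://NNWebUI:PORT/jmx?qry=java.lang:type=Memory (URL is illustrative).
heap = {
    "committed": 24427036672,
    "init": 791179584,
    "max": 37282709504,
    "used": 21456071792,
}
GB = 1024 ** 3
for key, value in heap.items():
    print(f"{key:>9}: {value / GB:.2f} GB")
# "max" works out to ~34.72 GB, matching the web UI figure; it is below the
# 40 GB -Xmx because the JVM's reported max heap excludes one survivor space.
```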

On Fri, Oct 12, 2012 at 9:03 AM, Harsh J <[EMAIL PROTECTED]> wrote:
> Apache Hadoop 0.20.2 may not report heap usage accurately on that
> web UI. See https://issues.apache.org/jira/browse/HDFS-94 for the fix
> we had to do. You may measure actual usage via either jmap -histo:live
> or via http://NNWebUI:PORT/jmx if that's available (it shows some JVM
> metrics you can consume).
>
> On Fri, Oct 12, 2012 at 12:03 AM, Patai Sangbutsarakum
> <[EMAIL PROTECTED]> wrote:
>> Hi Hadoopers,
>>
>> I am looking at DFS' cluster summary.
>>
>> "14708427 files and directories, 16357951 blocks = 31066378 total"
>>
>> From White's book (2nd Edition), page 42: "As a rule of thumb, each
>> file, directory, and block takes about 150 bytes".
>>
>> So, 31066378 * 150 bytes => 4.34 GB
>>
>> The rest of the line is: Heap Size is 12.17 GB / 34.72 GB
>>
>> 12.17 GB is roughly 3 times bigger than 4.34 GB. Is that because of a
>> replication factor of 3?
>>
>> Thanks
>> Patai
>>
>> I am on 0.20.2
>
>
>
> --
> Harsh J
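
For reference, the 150-bytes-per-object estimate from the original question works out as below. The gap against the 12.17 GB shown is consistent with Harsh's point that the 0.20.2 web UI heap figure isn't accurate; this is a back-of-the-envelope sketch, not an exact NameNode memory model:

```python
# Re-do the back-of-the-envelope estimate from the question above.
files_and_dirs = 14708427
blocks = 16357951
total_objects = files_and_dirs + blocks  # 31066378, as in the summary line
estimated = total_objects * 150          # ~150 bytes per object rule of thumb
print(f"{estimated / 1024**3:.2f} GB")   # ~4.34 GB of estimated live metadata
# The web UI's 12.17 GB is JVM heap in use (including garbage awaiting
# collection), not just live namespace objects, so a ~3x gap does not by
# itself indicate replication; see HDFS-94 referenced in the reply.
```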