Re: Namenode memory usage
Chris, thanks a million; that's a huge relief.
My next action is to turn on GC logging and verify whether it goes to full GC
a lot.

I do appreciate your help.
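
For reference, turning GC logging on for the namenode JVM usually just means
adding the standard HotSpot GC-logging flags to the namenode's options, for
example via HADOOP_NAMENODE_OPTS in hadoop-env.sh. The variable placement and
the log path below are assumptions about a typical setup, so treat this as a
sketch rather than exact configuration:

  # hadoop-env.sh -- sketch; adjust the variable and log path to your install
  export HADOOP_NAMENODE_OPTS="-verbose:gc -XX:+PrintGCDetails \
    -XX:+PrintGCTimeStamps -Xloggc:/var/log/hadoop/namenode-gc.log \
    $HADOOP_NAMENODE_OPTS"

With that in place, full GCs that reclaim very little memory are the signal
Chris describes below.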
On Mon, Jun 17, 2013 at 2:56 PM, Chris Nauroth <[EMAIL PROTECTED]> wrote:

> Hello Patai,
>
> The numbers you see there (23.34 GB / 38.54 GB) are the JVM total memory /
> max memory.
>
> The max memory is always going to be equivalent to your -Xmx setting
> (40000m).  This is the maximum amount of memory that the JVM will attempt
> to allocate from the OS.
>
> The total memory is the amount of memory that the JVM has allocated right
> now.  This value starts at the value you specified for -Xms (or a low
> default if -Xms is unspecified).  Then, the JVM allocates memory lazily
> throughout the lifetime of the process.  Over time, you'll see the total
> memory gradually grow as needed, eventually stopping at the value of max
> memory.  For the JVMs I've worked with, total memory never goes down (the
> JVM doesn't return memory during the process lifetime), but I believe this
> part is implementation-specific, so you might see different behavior on a
> different JVM.
>
> Relating this back to your original question, I don't think these numbers
> alone strongly indicate a need to upgrade RAM.  If total memory is 23GB,
> then it hasn't yet attempted to use the full 40GB that you've deployed.  If
> you're concerned about this though, you can gather more detailed
> information by enabling GC logging on the process.  If you see a lot of
> full GCs, and it appears that there is still very little memory remaining
> after full GC, then that's a stronger indicator that the process needs more
> RAM.
>
> Hope this helps,
>
> Chris Nauroth
> Hortonworks
> http://hortonworks.com/
>
>
>
> On Mon, Jun 17, 2013 at 1:03 PM, Patai Sangbutsarakum <
> [EMAIL PROTECTED]> wrote:
>
>> Hi Hadoopers,
>>
>> My dedicated Namenode box has 48G of memory; 40G is allocated for the NN heap.
>>
>> This is from 50070/dfshealth.jsp
>> 28540193 files and directories, 32324098 blocks = 60864291 total. Heap
>> Size is 23.34 GB / 38.54 GB (60%)
>>
>> The heap is fluctuating between less than 20G and almost 100%.
>>
>> However, from the top command the resident size is constantly at 39G, no
>> matter how low the memory usage is on the dfshealth.jsp page:
>>   PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
>>
>>  4628 apps      19   0 40.4g  39g  22m S 132.9 83.9  44821:08
>> /usr/java/jdk/jre/bin/java -Dproc_namenode -Xmx40000m
>>
>>
>> Is it time to upgrade the RAM on the namenode box?
>>
>> I remember the easy rule of thumb is 150 bytes for each block, file, and
>> directory, so 60864291 * 150 bytes is around 9G. I just don't understand
>> why 40G seems to be used up. Please educate.
>>
>> Hope this makes sense.
>> P
>>
>
>
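
For concreteness, here is a minimal standalone Java sketch (not namenode code)
of the two calculations discussed above: the JVM total/max/used heap counters
that Chris describes, which are the same values the dfshealth.jsp heap line
reports, and the ~150-bytes-per-object rule of thumb quoted in the original
question. The class name and output format are made up for illustration:

  public class HeapNumbers {
      public static void main(String[] args) {
          Runtime rt = Runtime.getRuntime();
          long total = rt.totalMemory();        // heap allocated from the OS so far ("23.34 GB")
          long max = rt.maxMemory();            // upper bound, roughly the -Xmx setting ("38.54 GB")
          long used = total - rt.freeMemory();  // portion of the allocated heap actually in use
          System.out.printf("used=%.2f GB, total=%.2f GB, max=%.2f GB%n",
                  gb(used), gb(total), gb(max));

          // Rule of thumb quoted in the thread: ~150 bytes per file/dir/block.
          long objects = 28540193L + 32324098L; // files+dirs and blocks from dfshealth.jsp
          System.out.printf("rule-of-thumb estimate: %.1f GB for %d objects%n",
                  gb(objects * 150L), objects); // ~8.5 GB, i.e. the "around 9G" in the question
      }

      private static double gb(long bytes) {
          return bytes / (1024.0 * 1024.0 * 1024.0);
      }
  }

Per Chris's explanation, total memory only grows toward max over the process
lifetime, so the used figure remaining after a full GC (visible in the GC log)
is the better signal for whether the box actually needs more RAM.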