Re: Namenode memory usage
Thanks Brahma,

I am kind of afraid to run the command; I had an issue on the jobtracker
earlier this year. I launched the command and it caused the jobtracker to
stop responding for long enough that we had to roll the jobtracker instead.
So I am hesitant to run it on the production namenode.
Any suggestion is more than welcome.
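
For what it's worth, a lower-impact way to watch heap occupancy would be
jstat, which reads the JVM performance counters instead of walking the heap,
so it should not pause the namenode the way jmap -histo:live can (the :live
flag forces a full GC first). A minimal sketch, with <namenode-pid> as a
placeholder for the real process id:

    # Sample GC activity and heap occupancy every 5 seconds; this only
    # reads perf counters, so it does not stop the JVM.
    jstat -gcutil <namenode-pid> 5000

    # If a class histogram is still wanted, dropping ":live" avoids the
    # forced full GC (dead objects are counted too, so it is an upper bound).
    jmap -histo <namenode-pid> | head -30
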
On Mon, Jun 17, 2013 at 6:45 PM, Brahma Reddy Battula <
[EMAIL PROTECTED]> wrote:

>  Can you take a heap dump and check? There you can see which objects are
> using how much memory.
>
> Command: jmap -histo:live <namenode-pid>
>  ------------------------------
> From: Personal [[EMAIL PROTECTED]]
> Sent: Tuesday, June 18, 2013 7:20 AM
> To: [EMAIL PROTECTED]
> Subject: Re: Namenode memory usage
>
>
>   E Lego
>
>
>  On Mon, Jun 17, 2013 at 1:04 PM, Patai Sangbutsarakum <
> [EMAIL PROTECTED]> wrote:
>
>> Hi Hadoopers,
>>
>>  My dedicated Namenode box has 48G of memory; 40G is allocated for the NN
>> heap.
>>
>>  This is from 50070/dfshealth.jsp:
>> 28540193 files and directories, 32324098 blocks = 60864291 total. Heap
>> Size is 23.34 GB / 38.54 GB (60%)
>>
>>  The heap is fluctuating between less than 20G and almost 100%.
>>
>> However, in top the resident size is constantly at 39G, no matter how low
>> the memory usage on the dfshealth.jsp page is:
>>   PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
>>
>>  4628 apps      19   0 40.4g  39g  22m S 132.9 83.9  44821:08
>> /usr/java/jdk/jre/bin/java -Dproc_namenode -Xmx40000m
>>
>>
>>  Is it time to add more RAM to the namenode box?
>>
>>  I remember the easy rule of thumb is 150 bytes for each file, directory,
>> and block, so 60864291 * 150 bytes is around 9G (a quick check of that
>> arithmetic follows the quoted thread). I just don't understand why 40G
>> seems to be used up. Please educate..
>>
>> Hope this makes sense,
>> P
>>
>
>
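
A quick sanity check of the rule-of-thumb arithmetic quoted above, as a
sketch (~150 bytes per namespace object is a rough estimate, not an exact
figure):

    # ~150 bytes of namenode heap per file, directory, and block object:
    # 60864291 objects * 150 bytes = 9,129,643,650 bytes, roughly 8.5G.
    echo $(( 60864291 * 150 / 1024 / 1024 / 1024 ))   # prints 8 (GB, rounded down)

So the live metadata should be in the 9G range; the 23.34 GB used that
dfshealth.jsp reports also includes garbage that has not been collected yet.
The 39G resident size in top is expected either way: once the JVM has grown
the heap toward -Xmx40000m it generally keeps those pages mapped rather than
returning them to the OS, so RES sitting near -Xmx is not by itself a sign
of memory pressure.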