
MapReduce, mail # dev - JobTracker memory usage peaks once a day and OOM sometimes

Re: JobTracker memory usage peaks once a day and OOM sometimes
Allen Wittenauer 2011-02-08, 20:16

On Feb 8, 2011, at 8:59 AM, Maxim Zizin wrote:

> Hi all,
> We monitor JT, NN and SNN memory usage and observe the following behavior in our Hadoop cluster. JT's heap size is set to 2000m. For about 18 hours a day it uses ~1GB, but every day, at roughly the time of day it was started, its memory usage climbs to ~1.5GB and then falls back to ~1GB over about 6 hours. Sometimes this takes a bit more than 6 hours, sometimes a bit less. I was wondering whether anyone here knows what the JT does once a day that makes it use 1.5 times more memory than normal.
> We're so interested in JT memory usage because during the last two weeks the JT twice ran out of heap space. Both times, right after those daily memory peaks, while usage was coming back down from 1.5GB to 1GB, it started climbing again until it got stuck at ~2.2GB. At that point the JT becomes unresponsive and we have to restart it.
> We're using Cloudera's CDH2 version 0.20.1+169.113.

Who knows what is happening in the CDH release?

But in the stock JobTracker, keep in mind that memory is consumed by every individual task listed on the main page. If you have jobs with extremely high task counts, lots of counters, really long names, or ..., then that is likely your problem. Chances are good that a handful of badly behaved jobs are scrolling off the page at the same time every day.
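One knob that directly affects how much completed-job state the JobTracker keeps in memory is the retired-jobs configuration in mapred-site.xml. The properties below exist in the Hadoop 0.20 line; the values shown are illustrative assumptions, not settings discussed in this thread:

```xml
<!-- mapred-site.xml: limit how much completed-job state the
     JobTracker retains in memory. Values are illustrative. -->
<property>
  <name>mapred.jobtracker.completeuserjobs.maximum</name>
  <!-- completed jobs kept in memory per user (default 100) -->
  <value>25</value>
</property>
<property>
  <name>mapred.jobtracker.retirejob.interval</name>
  <!-- retire completed jobs after 1 hour (milliseconds) -->
  <value>3600000</value>
</property>
```

Lowering these means big jobs with huge task and counter counts are dropped from the JobTracker's heap sooner, at the cost of less job history being visible on the web UI.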

Also, for a grid of any significant size, 2g of heap is far too small.
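In the 0.20-era layout, the daemon heap is set in conf/hadoop-env.sh. A minimal sketch of raising it for the JobTracker, assuming the stock HADOOP_HEAPSIZE and HADOOP_JOBTRACKER_OPTS hooks (the 4GB figure is an illustrative assumption, not a recommendation from this thread):

```shell
# conf/hadoop-env.sh: raise the JobTracker's heap.
# HADOOP_HEAPSIZE sets the default heap (in MB) for all Hadoop daemons;
# HADOOP_JOBTRACKER_OPTS lets you override it for the JobTracker alone.
export HADOOP_HEAPSIZE=4000   # illustrative value, in MB
export HADOOP_JOBTRACKER_OPTS="-Xmx4g ${HADOOP_JOBTRACKER_OPTS}"
```

Note that -Xmx in HADOOP_JOBTRACKER_OPTS takes precedence over HADOOP_HEAPSIZE for the JobTracker process, so the daemon-wide default can stay smaller.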