Hadoop >> mail # user >> Hadoop JobTracker Hanging

Re: Hadoop JobTracker Hanging
There are two issues, fixed in 0.21.0, that can cause the JobTracker to run
out of memory:

We've been hit by MAPREDUCE-841 (large jobConf objects combined with a large
number of tasks, especially when running Pig jobs) a number of times on
Hadoop 0.20.1.

The current workarounds are:

a) Be careful about what you store in the jobConf object.
b) Understand and control the largest number of mappers/reducers that can
be queued at any time for processing.
c) Give the JobTracker plenty of RAM.

We use (c) to save on debugging man hours most of the time :).
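As a sketch of workaround (c) — assuming a stock Hadoop 0.20.x layout, so treat the exact values as illustrative — the JobTracker daemon's heap is set via hadoop-env.sh on the node that runs it:

```shell
# hadoop-env.sh on the JobTracker node (workaround c).
# HADOOP_HEAPSIZE is in MB and applies to Hadoop daemons started on this node,
# so only raise it in the copy of hadoop-env.sh used by the JobTracker.
export HADOOP_HEAPSIZE=4096
```

For workaround (b), mapred.jobtracker.completeuserjobs.maximum in mapred-site.xml (default 100) caps how many completed jobs per user the JobTracker keeps in memory, which also limits how much retained job history can pile up.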


On Tue, Jun 22, 2010 at 8:53 AM, Allen Wittenauer wrote:
> On Jun 22, 2010, at 3:17 AM, Steve Loughran wrote:
> >
> > I'm surprised it's the JT that is OOM-ing; anecdotally it's the NN and 2ary
> > NN that use more, especially if the files are many and the blocksize small.
> > The JT should not be tracking that much data over time.
>
> Pre-0.20.2, there are definitely bugs with how the JT history is handled,
> causing some memory leakage.
>
> The other fairly common condition is if you have way too many tasks per
> job.  This is usually an indication that your data layout is way out of
> whack (too little data in too many files) or that you should be using
> CombineFileInputFormat.
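To put rough numbers on the "too little data in too many files" point (the figures below are made up, not from this thread): with the stock FileInputFormat, each small file becomes at least one split, so the JobTracker tracks one map task per file. CombineFileInputFormat packs many small files into each split, bounded by its configured max split size (e.g. via setMaxSplitSize). A back-of-the-envelope sketch:

```java
// Estimate how many map tasks the JobTracker must track for a job over
// many small files, with and without combining splits. Hypothetical
// numbers: 200,000 files of 1 MB each, 256 MB max combined split size.
public class SplitEstimate {
    // One split (hence one map task) per small file.
    static long plainSplits(long files) {
        return files;
    }

    // Small files packed into splits of at most maxSplitMb megabytes.
    static long combinedSplits(long files, long fileMb, long maxSplitMb) {
        long totalMb = files * fileMb;
        return (totalMb + maxSplitMb - 1) / maxSplitMb; // ceiling division
    }

    public static void main(String[] args) {
        long files = 200_000;
        System.out.println("plain:    " + plainSplits(files));
        System.out.println("combined: " + combinedSplits(files, 1, 256));
    }
}
```

Two hundred thousand map tasks versus a few hundred is exactly the kind of gap that turns into JobTracker heap pressure, since the JT holds per-task bookkeeping in memory for every queued and running task.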