Re: Out of memory (heap space) errors on job tracker
Arun C Murthy 2012-06-08, 18:59
This shouldn't be happening at all...
What version of Hadoop are you running? You may be missing configs that protect the JT; with those in place, a hadoop-1.x JT should be very reliable.
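The JT-protecting configs referred to above typically bound how much completed-job state the JobTracker retains in memory. A minimal sketch for mapred-site.xml, assuming Hadoop 1.x property names (the values shown are illustrative, not recommendations):

```xml
<configuration>
  <!-- Cap completed jobs kept in JT memory per user (Hadoop 1.x default: 100).
       Large numbers of retained jobs are a common source of JT heap growth. -->
  <property>
    <name>mapred.jobtracker.completeuserjobs.maximum</name>
    <value>25</value>
  </property>
  <!-- Cap the retired-jobs cache the JT keeps for the web UI
       (Hadoop 1.x default: 1000). -->
  <property>
    <name>mapred.job.tracker.retiredjobs.cache.size</name>
    <value>100</value>
  </property>
</configuration>
```

The JT heap itself is usually raised separately in hadoop-env.sh, e.g. via HADOOP_JOBTRACKER_OPTS="-Xmx4096m", so the larger heap applies only to the JobTracker daemon rather than every Hadoop process.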
On Jun 8, 2012, at 8:26 AM, David Rosenstrauch wrote:
> Our job tracker has been seizing up with Out of Memory (heap space) errors for the past two nights. After the first night's crash, I doubled the heap space (from the default of 1GB) to 2GB before restarting it. After last night's crash I doubled it again, to 4GB.
> This all seems a bit puzzling to me. I wouldn't have thought that the job tracker should require so much memory. (The NameNode, yes, but not the job tracker.)
> Just wondering if this behavior sounds reasonable, or if perhaps there might be a bigger problem at play here. Anyone have any thoughts on the matter?
Arun C. Murthy