Hadoop, mail # user - Re: How to troubleshoot OutOfMemoryError


Re: How to troubleshoot OutOfMemoryError
周梦想 2012-12-24, 10:55
It's short for OutOfMemory :)

2012/12/24 Junior Mint <[EMAIL PROTECTED]>

> What is OOM? Haha
>
>
> On Mon, Dec 24, 2012 at 11:30 AM, 周梦想 <[EMAIL PROTECTED]> wrote:
>
>> I encountered the OOM problem because I hadn't set the ulimit on open
>> files. It had nothing to do with memory; memory was sufficient.
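The usual way to check and raise the open-file limit on Linux is sketched below; the user name and limit values are examples, not taken from this thread:

    $ ulimit -n        # show the current per-process limit for this shell
    # In /etc/security/limits.conf, for the user running the Hadoop daemons:
    hadoop  soft  nofile  32768
    hadoop  hard  nofile  32768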
>>
>> Best Regards,
>> Andy
>>
>>
>> 2012/12/22 Manoj Babu <[EMAIL PROTECTED]>
>>
>>> David,
>>>
>>> I faced the same issue due to excessive logging that filled up the task
>>> tracker's log folder.
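If runaway task logging is the cause, Hadoop 1.x can cap each task's log output; a minimal sketch, assuming mapred-site.xml and an arbitrary 10 MB cap (the value is not from this thread):

    <property>
      <name>mapred.userlog.limit.kb</name>
      <value>10240</value>
      <description>Maximum size of a task's user log, in KB; 0 means no cap.</description>
    </property>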
>>>
>>> Cheers!
>>> Manoj.
>>>
>>>
>>> On Sat, Dec 22, 2012 at 9:10 PM, Stephen Fritz <[EMAIL PROTECTED]> wrote:
>>>
>>>> Troubleshooting OOMs in the map/reduce tasks can be tricky; see page 118
>>>> of Hadoop Operations
>>>> <http://books.google.com/books?id=W5VWrrCOuQ8C&pg=PA123&lpg=PA123&dq=mapred+child+address+space+size&source=bl&ots=PCdqGFbU-Z&sig=ArgpJroU7UEmMqMB_hwXoCq7whk&hl=en&sa=X&ei=TNPVUMjjHsS60AGHtoHQDA&ved=0CEUQ6AEwAw#v=onepage&q=mapred%20child%20address%20space%20size&f=false>
>>>> for a couple of settings that can affect the frequency of OOMs and
>>>> aren't necessarily intuitive.
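The book's specific settings aren't quoted in the thread. As an illustration only (an assumption, not the book's text), one Hadoop 1.x knob that affects shuffle-phase memory, which is where the stack trace below fails, is the shuffle input buffer:

    <property>
      <name>mapred.job.shuffle.input.buffer.percent</name>
      <value>0.70</value>
      <description>Fraction of the reducer heap used to buffer map outputs
      during the shuffle; lowering it can trade speed for fewer OOMs.</description>
    </property>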
>>>>
>>>> To answer your question about getting the heap dump, you should be able
>>>> to add "-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/some/path" to
>>>> your mapred.child.java.opts, then look for the heap dump in that path next
>>>> time you see the OOM.
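For example, in mapred-site.xml or the per-job configuration; the -Xmx value here is only a placeholder, not from this thread:

    <property>
      <name>mapred.child.java.opts</name>
      <value>-Xmx512m -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/some/path</value>
    </property>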
>>>>
>>>>
>>>> On Fri, Dec 21, 2012 at 11:33 PM, David Parks <[EMAIL PROTECTED]> wrote:
>>>>
>>>>> I’m pretty consistently seeing a few reduce tasks fail with
>>>>> OutOfMemoryError (below). It doesn’t kill the job, but it slows it down.
>>>>>
>>>>> In my current case the reducer is pretty darn simple; the algorithm
>>>>> basically does:
>>>>>
>>>>> 1. Do you have 2 values for this key?
>>>>> 2. If so, build a JSON string and emit a NullWritable key and a Text
>>>>> value.
>>>>>
>>>>> The string buffer I use to build the JSON is re-used, and I can’t see
>>>>> anywhere in my code that would be taking more than ~50k of memory at
>>>>> any point in time. (A sketch of this pattern appears just below.)
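A minimal sketch of the reducer pattern described above; the class, field, and JSON key names are hypothetical, and this is an illustration under those assumptions rather than David's actual code:

    import java.io.IOException;
    import java.util.Iterator;

    import org.apache.hadoop.io.NullWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Reducer;

    // Emits a JSON line only when a key has exactly two values,
    // reusing one StringBuilder across reduce() calls.
    public class PairJsonReducer extends Reducer<Text, Text, NullWritable, Text> {
        private final StringBuilder json = new StringBuilder();
        private final Text out = new Text();

        @Override
        protected void reduce(Text key, Iterable<Text> values, Context context)
                throws IOException, InterruptedException {
            Iterator<Text> it = values.iterator();
            if (!it.hasNext()) return;
            String first = it.next().toString();
            if (!it.hasNext()) return;   // fewer than two values: emit nothing
            String second = it.next().toString();
            if (it.hasNext()) return;    // more than two values: emit nothing

            json.setLength(0);           // reuse the buffer, as in the thread
            json.append("{\"key\":\"").append(key.toString())
                .append("\",\"a\":\"").append(first)
                .append("\",\"b\":\"").append(second).append("\"}");
            out.set(json.toString());
            context.write(NullWritable.get(), out);
        }
    }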
>>>>>
>>>>> But I want to verify: is there a way to get the heap dump and all
>>>>> after this error? I’m running Hadoop v1.0.3 on AWS MapReduce.
>>>>>
>>>>> Error: java.lang.OutOfMemoryError: Java heap space
>>>>>         at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$MapOutputCopier.shuffleInMemory(ReduceTask.java:1711)
>>>>>         at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$MapOutputCopier.getMapOutput(ReduceTask.java:1571)
>>>>>         at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$MapOutputCopier.copyOutput(ReduceTask.java:1412)
>>>>>         at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$MapOutputCopier.run(ReduceTask.java:1344)
>>>>>
>>>>
>>>>
>>>
>>
>