MapReduce, mail # user - Hadoop 1.0.3: There is insufficient memory for the Java Runtime Environment to continue.


Re: Hadoop 1.0.3: There is insufficient memory for the Java Runtime Environment to continue.
Arpit Gupta 2012-10-08, 15:29

I would recommend using the Oracle JDK. Also, I tried your configs on a single-node setup of 1.0.3 and the MR jobs went through, so I suspect this is something specific to your environment.

Also, from your email below you mention that mapred.child.java.opts and mapred.child.ulimit were added to try to solve this problem. Are you setting memory options for your map and reduce tasks? It might help if you share the full mapred-site.xml.
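
For reference, this is roughly the shape of the memory-related block in mapred-site.xml being asked about; a sketch only, with illustrative placeholder values (not recommendations), and note that mapred.child.ulimit is expressed in KB:

```xml
<!-- Sketch of per-task memory settings in mapred-site.xml (Hadoop 1.x).
     Values below are placeholders for illustration only. -->
<property>
  <name>mapred.child.java.opts</name>
  <!-- JVM options for each spawned map/reduce task; -Xmx is the per-task heap -->
  <value>-Xmx1024M</value>
</property>

<property>
  <name>mapred.child.ulimit</name>
  <!-- virtual memory limit for the child process, in KB;
       here ~1.5x the 1024 MB heap: 1536 MB * 1024 = 1572864 KB -->
  <value>1572864</value>
</property>
```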
--
Arpit Gupta
Hortonworks Inc.
http://hortonworks.com/

On Oct 8, 2012, at 2:45 AM, Attila Csordas <[EMAIL PROTECTED]> wrote:

> OpenJDK 64-Bit Server VM (build 19.0-b09, mixed mode)
>
> might the official Oracle Java be better?
>
> Thanks,
> Attila
>
> On Sun, Oct 7, 2012 at 8:37 PM, Arpit Gupta <[EMAIL PROTECTED]> wrote:
>> Are you using a 32-bit JDK for your TaskTrackers?
>>
>> If so, reduce the memory setting in mapred.child.java.opts.
>>
>> --
>> Arpit
>>
>> On Oct 7, 2012, at 12:29 PM, Attila Csordas <[EMAIL PROTECTED]> wrote:
>>
>>> <property>
>>> <name>mapred.tasktracker.map.tasks.maximum</name>
>>> <value>10</value>
>>> </property>
>>>
>>> <property>
>>> <name>mapred.tasktracker.reduce.tasks.maximum</name>
>>> <value>6</value>
>>> </property>
>>>
>>> Cheers,
>>> Attila
>>>
>>> On Sun, Oct 7, 2012 at 6:34 AM, Harsh J <[EMAIL PROTECTED]> wrote:
>>>> Hi,
>>>>
>>>> What is your # of slots per TaskTracker? Your ulimit seems pretty
>>>> high. I'd set it to 1.5x the heap initially, i.e., 6291456 KB (6 GB),
>>>> and try.
>>>>
>>>> On Sun, Oct 7, 2012 at 3:50 AM, Attila Csordas <[EMAIL PROTECTED]> wrote:
>>>>> some details to this problem:
>>>>>
>>>>> 12/10/05 12:13:27 INFO mapred.JobClient:  map 0% reduce 0%
>>>>> 12/10/05 12:13:40 INFO mapred.JobClient: Task Id :
>>>>> attempt_201210051158_0001_m_000002_0, Status : FAILED
>>>>> java.lang.Throwable: Child Error
>>>>>       at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:271)
>>>>> Caused by: java.io.IOException: Task process exit with nonzero status of 134.
>>>>>       at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:258)
>>>>>
>>>>> attempt_201210051158_0001_m_000002_0: #
>>>>> attempt_201210051158_0001_m_000002_0: # There is insufficient memory
>>>>> for the Java Runtime Environment to continue.
>>>>> attempt_201210051158_0001_m_000002_0: # pthread_getattr_np
>>>>>
>>>>> in mapred-site.xml the following memory settings were set after a
>>>>> couple trials to get rid of the problem this way:
>>>>>
>>>>> <property>
>>>>> <name>mapred.child.java.opts</name>
>>>>> <value>-server -Xmx4096M -Djava.net.preferIPv4Stack=true</value>
>>>>> </property>
>>>>>
>>>>> <property>
>>>>> <name>mapred.child.ulimit</name>
>>>>> <value>16777216</value>
>>>>> </property>
>>>>>
>>>>> Cheers,
>>>>> Attila
>>>>>
>>>>>
>>>>>
>>>>> On Fri, Oct 5, 2012 at 10:50 AM, Steve Lewis <[EMAIL PROTECTED]> wrote:
>>>>>> We get 'There is insufficient memory for the Java Runtime Environment to
>>>>>> continue.'
>>>>>> any time we run any job, including the most trivial word count process. It is
>>>>>> true I am generating a jar for a larger job, but I am only running a version of
>>>>>> wordcount that worked well under 0.2.
>>>>>> Any bright ideas???
>>>>>> This is a new 1.0.3 installation and nothing is known to work.
>>>>>>
>>>>>> Steven M. Lewis PhD
>>>>>> 4221 105th Ave NE
>>>>>> Kirkland, WA 98033
>>>>>> cell 206-384-1340
>>>>>> skype lordjoe_com
>>>>
>>>>
>>>>
>>>> --
>>>> Harsh J
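
Putting the thread's numbers together, a back-of-the-envelope sketch (using only the values quoted above: 10 map slots, 6 reduce slots, a 4 GB -Xmx, and Harsh's 1.5x ulimit rule) shows why these settings can exhaust a node's memory:

```python
# Worst-case heap demand per TaskTracker, using the values from the
# configs quoted in this thread. Adjust for your own cluster.
map_slots = 10      # mapred.tasktracker.map.tasks.maximum
reduce_slots = 6    # mapred.tasktracker.reduce.tasks.maximum
heap_mb = 4096      # -Xmx from mapred.child.java.opts

# If every slot fills, each child JVM can grow to heap_mb of heap alone
# (native/JVM overhead comes on top of this).
worst_case_mb = (map_slots + reduce_slots) * heap_mb
print(worst_case_mb)  # 65536 MB, i.e. 64 GB of heap if all 16 slots run

# Harsh's suggestion: set mapred.child.ulimit to ~1.5x the child heap.
# The property is expressed in KB.
ulimit_kb = int(heap_mb * 1.5) * 1024
print(ulimit_kb)  # 6291456 KB = 6 GB
```

On a node with less physical memory than that worst case, the kernel or the JVM's own allocation will fail, which matches the "insufficient memory for the Java Runtime Environment" errors in the logs above.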