Hadoop 1.0.3: There is insufficient memory for the Java Runtime Environment to continue.


Re: Hadoop 1.0.3: There is insufficient memory for the Java Runtime Environment to continue.
Hi,

What is your # of slots per TaskTracker? Your ulimit seems pretty
high. I'd set it to 1.5x the heap initially, i.e., 6291456 KB (6 GB),
and try again.
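
For reference, a minimal sketch of the suggested mapred-site.xml settings,
assuming the 4 GB child heap from the config quoted below (mapred.child.ulimit
is expressed in KB, so 1.5 x 4096 MB = 6144 MB = 6291456 KB):

<property>
  <name>mapred.child.java.opts</name>
  <value>-server -Xmx4096M -Djava.net.preferIPv4Stack=true</value>
</property>
<property>
  <name>mapred.child.ulimit</name>
  <value>6291456</value>
</property>

It is also worth checking how many map/reduce slots the TaskTracker runs
(mapred.tasktracker.map.tasks.maximum and mapred.tasktracker.reduce.tasks.maximum),
since each slot spawns its own child JVM and the heaps add up.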

On Sun, Oct 7, 2012 at 3:50 AM, Attila Csordas <[EMAIL PROTECTED]> wrote:
> some details to this problem:
>
> 12/10/05 12:13:27 INFO mapred.JobClient:  map 0% reduce 0%
> 12/10/05 12:13:40 INFO mapred.JobClient: Task Id :
> attempt_201210051158_0001_m_000002_0, Status : FAILED
> java.lang.Throwable: Child Error
>         at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:271)
> Caused by: java.io.IOException: Task process exit with nonzero status of 134.
>         at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:258)
>
> attempt_201210051158_0001_m_000002_0: #
> attempt_201210051158_0001_m_000002_0: # There is insufficient memory
> for the Java Runtime Environment to continue.
> attempt_201210051158_0001_m_000002_0: # pthread_getattr_np
>
> in mapred-site.xml, the following memory settings were set after a
> couple of trials to get rid of the problem:
>
> <property>
> <name>mapred.child.java.opts</name>
> <value>-server -Xmx4096M -Djava.net.preferIPv4Stack=true</value>
> </property>
>
> <property>
> <name>mapred.child.ulimit</name>
> <value>16777216</value>
> </property>
>
> Cheers,
> Attila
>
>
>
> On Fri, Oct 5, 2012 at 10:50 AM, Steve Lewis <[EMAIL PROTECTED]> wrote:
>> We get 'There is insufficient memory for the Java Runtime Environment to
>> continue.' any time we run any job, including the most trivial word count
>> process. It is true I am generating a jar for a larger job, but only running
>> a version of wordcount that worked well under 0.2.
>> Any bright ideas?
>> This is a new 1.0.3 installation and nothing is known to work.
>>
>> Steven M. Lewis PhD
>> 4221 105th Ave NE
>> Kirkland, WA 98033
>> cell 206-384-1340
>> skype lordjoe_com

--
Harsh J