Re: Hadoop 1.0.3: There is insufficient memory for the Java Runtime Environment to continue.
The ulimit was set as suggested, and the heap stayed at -Xmx4096M, but I am
still getting the very same error.
Any other tips?
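For what it's worth, a quick shell sanity check of the numbers involved (a sketch, assuming a Linux shell; note that exit status 134 in the trace below is 128 + SIGABRT, i.e. the JVM aborted itself):

```shell
# ulimit -v is expressed in KB, so 1.5x a 4096 MB heap works out to:
echo $((4096 * 3 / 2 * 1024))   # prints 6291456 (KB, i.e. 6 GB)

# current virtual-memory limit in the shell that launches the daemons
# (prints a number in KB, or "unlimited")
ulimit -v
```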
On Sun, Oct 7, 2012 at 6:34 AM, Harsh J <[EMAIL PROTECTED]> wrote:
> What is your # of slots per TaskTracker? Your ulimit seems pretty
> high. I'd set it to 1.5x the heap initially, i.e., 6291456 KB (6 GB),
> and try.
> On Sun, Oct 7, 2012 at 3:50 AM, Attila Csordas <[EMAIL PROTECTED]> wrote:
>> Some details on this problem:
>> 12/10/05 12:13:27 INFO mapred.JobClient: map 0% reduce 0%
>> 12/10/05 12:13:40 INFO mapred.JobClient: Task Id :
>> attempt_201210051158_0001_m_000002_0, Status : FAILED
>> java.lang.Throwable: Child Error
>> at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:271)
>> Caused by: java.io.IOException: Task process exit with nonzero status of 134.
>> at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:258)
>> attempt_201210051158_0001_m_000002_0: #
>> attempt_201210051158_0001_m_000002_0: # There is insufficient memory
>> for the Java Runtime Environment to continue.
>> attempt_201210051158_0001_m_000002_0: # pthread_getattr_np
>> In mapred-site.xml the following memory settings were set, after a
>> couple of trials, to try to get rid of the problem:
>> <value>-server -Xmx4096M -Djava.net.preferIPv4Stack=true</value>
>> On Fri, Oct 5, 2012 at 10:50 AM, Steve Lewis <[EMAIL PROTECTED]> wrote:
>>> We get 'There is insufficient memory for the Java Runtime Environment
>>> to continue' any time we run any job, including the most trivial word
>>> count process. It is true I am generating a jar for a larger job, but I
>>> am only running a version of wordcount that worked well under 0.2.
>>> Any bright ideas???
>>> This is a new 1.0.3 installation and nothing is known to work.
>>> Steven M. Lewis PhD
>>> 4221 105th Ave NE
>>> Kirkland, WA 98033
>>> cell 206-384-1340
>>> skype lordjoe_com
> Harsh J