Re: hadoop memory settings
Hi Sadak

AFAIK, HADOOP_HEAPSIZE determines the JVM heap size of the daemons like the NN, JT, TT, DN, etc.
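For reference, this is how that setting looks in conf/hadoop-env.sh (the 256 MB value here is just the one mentioned later in this thread, not a recommendation):

```shell
# conf/hadoop-env.sh
# Maximum heap (in MB) for each Hadoop daemon JVM (NameNode, JobTracker,
# TaskTracker, DataNode). Does NOT affect the child task JVMs.
export HADOOP_HEAPSIZE=256
```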

 mapred.child.java.opts and mapred.child.ulimit are used to set the JVM heap and the memory limit for the child JVMs launched for each map/reduce task.
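A sketch of what those two properties look like in mapred-site.xml. The values below are illustrative, not a recommendation: in Hadoop 1.x the default for mapred.child.java.opts is -Xmx200m, and mapred.child.ulimit (virtual memory, in KB) should be comfortably larger than the -Xmx value because it limits the whole child process, not just the Java heap:

```xml
<!-- mapred-site.xml (illustrative values) -->
<property>
  <name>mapred.child.java.opts</name>
  <value>-Xmx200m</value>   <!-- max heap per map/reduce task JVM -->
</property>
<property>
  <name>mapred.child.ulimit</name>
  <value>1048576</value>    <!-- virtual memory limit in KB (1 GB here) -->
</property>
```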

Bejoy KS

Sent from handheld, please excuse typos.

-----Original Message-----
From: Visioner Sadak <[EMAIL PROTECTED]>
Date: Fri, 5 Oct 2012 13:47:24
Subject: Re: hadoop memory settings

Because I'm getting "Error occurred during initialization of VM" and
java.lang.Throwable: Child Error at org.apache.hadoop.mapred.TaskRunner.run
when running a job.....:)

On Fri, Oct 5, 2012 at 1:39 PM, Visioner Sadak <[EMAIL PROTECTED]>wrote:

> Is there a relation between the HADOOP_HEAPSIZE, mapred.child.java.opts, and
> mapred.child.ulimit settings in hadoop-env.sh and mapred-site.xml? I have a
> single machine with 2 GB RAM running Hadoop in pseudo-distributed mode, and my
> HADOOP_HEAPSIZE is set to 256. What should I set mapred.child.java.opts and
> mapred.child.ulimit to, and how are these settings calculated if my RAM is
> increased or more machines are added to the cluster?