Re: Increasing Java Heap Space in Slave Nodes
You can set that configuration as part of your job (jobConf.set(…) or
job.getConfiguration().set(…)). Alternatively, if you implement Tool
and use the Configuration it provides, you can also pass it via a
-Dname=value argument when running the job (the option has to precede
any custom options).
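
For example, here is a minimal sketch of the Tool route. The class name
HeapDemo, the job name, and the jar/path names further below are just
placeholders, the exact Job API calls vary a bit between Hadoop versions,
and newer releases use mapreduce.map.java.opts / mapreduce.reduce.java.opts
instead of the single MR1 property quoted in your mail:

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.conf.Configured;
  import org.apache.hadoop.mapreduce.Job;
  import org.apache.hadoop.util.Tool;
  import org.apache.hadoop.util.ToolRunner;

  // Sketch: raise the child JVM heap from the client side, without
  // touching the cluster's conf files.
  public class HeapDemo extends Configured implements Tool {

    @Override
    public int run(String[] args) throws Exception {
      // getConf() already contains anything passed as -Dname=value,
      // because ToolRunner parses the generic options before run().
      Configuration conf = getConf();

      // Or set it explicitly in code instead of on the command line:
      conf.set("mapred.child.java.opts", "-Xmx2000m");

      Job job = Job.getInstance(conf, "heap-demo");
      job.setJarByClass(HeapDemo.class);
      // ... set mapper/reducer, input/output paths, etc. ...
      return job.waitForCompletion(true) ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
      System.exit(ToolRunner.run(new Configuration(), new HeapDemo(), args));
    }
  }

Submitted that way, the command-line form would look roughly like:

  hadoop jar myjob.jar HeapDemo -Dmapred.child.java.opts=-Xmx2000m <in> <out>

with the -D option placed before the job's own arguments.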

On Sat, Sep 7, 2013 at 2:06 AM, Arko Provo Mukherjee
<[EMAIL PROTECTED]> wrote:
> Hello All,
>
> I am running my job on a Hadoop cluster and it fails due to insufficient
> Java heap memory.
>
> I searched on Google and found that I need to add the following to the
> conf files:
>   <property>
>     <name>mapred.child.java.opts</name>
>     <value>-Xmx2000m</value>
>   </property>
>
> However, I don't want to ask the administrator to change the settings,
> as that is a long process.
>
> Is there a way to make Hadoop use more heap space on the slave nodes via
> some command-line parameter, without changing the conf files?
>
> Thanks & regards
> Arko

--
Harsh J