Hadoop >> mail # user >> Memory config for Hadoop cluster


Re: Memory config for Hadoop cluster
Amandeep,

Which scheduler are you using?

Thanks,
Hemanth
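[For reference, the six properties discussed below would be set in mapred-site.xml. A sketch of what that might look like, using the poster's two values plus purely illustrative max/job values; the comments reflect my understanding of the semantics, not anything stated in the thread:]

```xml
<!-- Illustrative mapred-site.xml fragment; only map-side properties shown,
     and all values except 896 are examples, not recommendations. -->
<property>
  <name>mapred.cluster.map.memory.mb</name>
  <value>896</value>   <!-- memory, in MB, of one map slot on a TaskTracker -->
</property>
<property>
  <name>mapred.cluster.max.map.memory.mb</name>
  <value>3584</value>  <!-- largest per-map-task memory any single job may request -->
</property>
<property>
  <name>mapred.job.map.memory.mb</name>
  <value>896</value>   <!-- memory this particular job requests per map task -->
</property>
<!-- ...plus the corresponding mapred.cluster.reduce.memory.mb,
     mapred.cluster.max.reduce.memory.mb and mapred.job.reduce.memory.mb -->
```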

On Tue, Nov 2, 2010 at 2:44 AM, Amandeep Khurana <[EMAIL PROTECTED]> wrote:
> How are the following configs supposed to be used?
>
> mapred.cluster.map.memory.mb
> mapred.cluster.reduce.memory.mb
> mapred.cluster.max.map.memory.mb
> mapred.cluster.max.reduce.memory.mb
> mapred.job.map.memory.mb
> mapred.job.reduce.memory.mb
>
> These were included in 0.20 in HADOOP-5881.
>
> Now, I'm setting only the following two of the above in my
> mapred-site.xml:
>
> mapred.cluster.map.memory.mb=896
> mapred.cluster.reduce.memory.mb=1024
>
> When I run a job, I get the following error:
>
>
> TaskTree [pid=1958,tipID=attempt_201011012101_0001_m_000000_0] is
> running beyond memory-limits. Current usage : 1358553088bytes. Limit :
> -1048576bytes. Killing task.
>
> I'm not sure how it got the Limit as -1048576bytes... Also, what are the
> cluster.max params supposed to be set as? Are they the max on the entire
> cluster or on a particular node?
>
> -Amandeep
>
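[On the odd negative limit: in this Hadoop version the per-task byte limit appears to be derived by multiplying the configured limit in MB by 2^20, and an unset limit is the sentinel value -1, which would yield exactly the -1048576 bytes seen in the log. A quick sketch of that arithmetic; the helper name is mine, not Hadoop's:]

```python
# Hypothetical illustration of how a memory limit configured in MB
# could become the negative byte value reported in the error above.
MB = 1024 * 1024  # 2**20 bytes


def limit_bytes(limit_mb: int) -> int:
    """Convert a memory limit in MB to bytes.

    Hadoop 0.20 uses -1 for an unset limit; multiplying that
    sentinel by 2**20 gives -1048576, matching the log line
    "Limit : -1048576bytes".
    """
    return limit_mb * MB


print(limit_bytes(-1))   # the unset sentinel, as seen in the error
print(limit_bytes(896))  # what mapred.cluster.map.memory.mb=896 would imply
```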