MapReduce >> mail # user >> Re: Child JVM memory allocation / Usage


Re: Child JVM memory allocation / Usage
io.sort.mb = 256 MB
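(For context, a sketch of where the two settings discussed in this thread live. This is a hypothetical mapred-site.xml fragment using the MR1 property names; the values are the ones mentioned below, not a recommendation.)

```xml
<!-- Hypothetical example, MR1 property names.
     mapred.child.java.opts sets the child task JVM heap (-Xmx);
     io.sort.mb sizes the map-side sort buffer, which is carved
     out of that same heap. -->
<property>
  <name>mapred.child.java.opts</name>
  <value>-Xmx2048m</value>
</property>
<property>
  <name>io.sort.mb</name>
  <value>256</value>
</property>
```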

On Monday, March 25, 2013, Harsh J wrote:

> The MapTask may consume some memory of its own as well. What is your
> io.sort.mb (MR1) or mapreduce.task.io.sort.mb (MR2) set to?
>
> On Sun, Mar 24, 2013 at 3:40 PM, nagarjuna kanamarlapudi
> <[EMAIL PROTECTED]> wrote:
> > Hi,
> >
> > I configured my child JVM heap to 2 GB, so I thought I could read
> > about 1.5 GB of data and store it in memory (mapper/reducer).
> >
> > I wanted to confirm the same and wrote the following piece of code in the
> > configure method of mapper.
> >
> > @Override
> > public void configure(JobConf job) {
> >     System.out.println("FREE MEMORY -- " + Runtime.getRuntime().freeMemory());
> >     System.out.println("MAX MEMORY ---" + Runtime.getRuntime().maxMemory());
> > }
> >
> >
> > Surprisingly, the output was:
> >
> > FREE MEMORY -- 341854864  (≈ 342 MB)
> > MAX MEMORY ---1908932608  (≈ 1.9 GB)
> >
> >
> > I am just wondering what processes are taking up that extra 1.6GB of heap
> > which I configured for the child jvm heap.
> >
> >
> > I'd appreciate help understanding this scenario.
> >
> >
> >
> > Regards
> >
> > Nagarjuna K
> >
> >
> >
>
>
>
> --
> Harsh J
>
--
Sent from iPhone
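(Editor's note: the sketch below is not code from the thread. It illustrates a property of the `Runtime` API that likely explains the numbers above: `freeMemory()` reports only the unused space inside the heap the JVM has *committed so far*, and the heap grows lazily from -Xms toward the -Xmx ceiling. So a small `freeMemory()` next to a ~2 GB `maxMemory()` does not mean 1.6 GB is already consumed. The class name `HeapProbe` is made up for illustration.)

```java
// Sketch only: shows how the three Runtime memory figures relate.
public class HeapProbe {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long max   = rt.maxMemory();    // ceiling set by -Xmx (the 2 GB child heap)
        long total = rt.totalMemory();  // heap committed so far; grows lazily from -Xms
        long free  = rt.freeMemory();   // unused space inside the *committed* heap only
        // Roughly what can still be allocated before OutOfMemoryError:
        long headroom = free + (max - total);
        System.out.println("max=" + max + " total=" + total
                + " free=" + free + " headroom=" + headroom);
    }
}
```

With these definitions, `free + (max - total)` is the more meaningful "available" figure; in the scenario above it would be close to the full 2 GB, minus whatever the task framework itself (e.g., the io.sort.mb buffer, once allocated) is holding.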