MapReduce >> mail # user >> Re: Child JVM memory allocation / Usage


Thread:
Harsh J 2013-03-25, 04:56
nagarjuna kanamarlapudi 2013-03-25, 05:02
Hemanth Yamijala 2013-03-25, 06:31
nagarjuna kanamarlapudi 2013-03-25, 08:44
Hemanth Yamijala 2013-03-25, 13:09
Ted 2013-03-25, 01:27
nagarjuna kanamarlapudi 2013-03-25, 01:39
Re: Child JVM memory allocation / Usage
I configure those in hadoop-env.sh, so I'm not sure about your configuration.

You can check with things like jconsole, or, if you're coding it
anyway, it's the third memory call on Runtime, i.e. totalMemory().
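Ted's hint about the Runtime calls can be sketched as below. The key point for this thread: freeMemory() is measured against the heap the JVM has currently committed (totalMemory()), not against the -Xmx ceiling (maxMemory()), so a small "free" number does not mean the heap is nearly full. A minimal probe (class name is illustrative, not from the thread):

```java
// Minimal sketch of the three Runtime memory calls Ted mentions.
// freeMemory() = unused space within the *committed* heap (totalMemory()),
// NOT within the maximum heap (maxMemory(), the -Xmx ceiling).
public class HeapProbe {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long free  = rt.freeMemory();   // unused part of the committed heap
        long total = rt.totalMemory();  // heap currently committed by the JVM
        long max   = rt.maxMemory();    // upper bound set by -Xmx
        long used  = total - free;      // bytes actually occupied by objects
        // Real headroom before OutOfMemoryError is max - used, not free.
        System.out.printf("free=%dMB total=%dMB max=%dMB used=%dMB headroom=%dMB%n",
                free >> 20, total >> 20, max >> 20, used >> 20, (max - used) >> 20);
    }
}
```

Run without -Xms, total will usually start well below max, which is exactly why the 320 MB figure later in this thread looks alarming but isn't.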

On 3/25/13, nagarjuna kanamarlapudi <[EMAIL PROTECTED]> wrote:
> Hi Ted,
>
> As far as I can recollect, I only configured these parameters
>
> <property>
>     <name>mapred.child.java.opts</name>
>     <value>-Xmx2048m</value>
>         <description>Java options passed to each mapper and reducer child
> JVM; -Xmx2048m caps each task's heap at 2 GB. If jobs start
> running out of heap space, this may need to be increased.</description>
> </property>
>
> <property>
>     <name>mapred.child.ulimit</name>
>     <value>3145728</value>
>         <description>the maximum virtual memory, in kilobytes, for each
> mapper and reducer process (3145728 KB = 3 GB). This bounds the whole
> process, not just the Java heap.</description>
> </property>
>
>
>
> On Mon, Mar 25, 2013 at 6:57 AM, Ted <[EMAIL PROTECTED]> wrote:
>
>> did you set the min heap size == your max heap size? if you didn't,
>> free memory only shows you the difference between used and committed, not
>> used and max.
>>
>> On 3/24/13, nagarjuna kanamarlapudi <[EMAIL PROTECTED]>
>> wrote:
>> > Hi,
>> >
>> > I configured  my child jvm heap to 2 GB. So, I thought I could really
>> read
>> > 1.5GB of data and store it in memory (mapper/reducer).
>> >
>> > I wanted to confirm the same and wrote the following piece of code in
>> > the
>> > configure method of mapper.
>> >
>> > @Override
>> > public void configure(JobConf job) {
>> >     System.out.println("FREE MEMORY -- "
>> >             + Runtime.getRuntime().freeMemory());
>> >     System.out.println("MAX MEMORY ---"
>> >             + Runtime.getRuntime().maxMemory());
>> > }
>> >
>> >
>> > Surprisingly the output was
>> >
>> >
>> > FREE MEMORY -- 341854864  (~326 MB)
>> > MAX MEMORY ---1908932608  (~1.8 GB)
>> >
>> >
>> > I am just wondering what is taking up the rest of the heap
>> > which I configured for the child jvm.
>> >
>> >
>> > I'd appreciate any help in understanding this scenario.
>> >
>> >
>> >
>> > Regards
>> >
>> > Nagarjuna K
>> >
>>
>>
>> --
>> Ted.
>>
>
--
Ted.
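The fix Ted asks about at the top of the thread (min heap == max heap) can be sketched with the same old-style mapred.* property that nagarjuna quotes. This is only a sketch under that assumption: adding -Xms equal to -Xmx makes the child JVM commit the full 2 GB at startup, so totalMemory() equals maxMemory() and freeMemory() then reflects headroom against the real limit.

```xml
<!-- Sketch only: pins the child JVM's committed heap to its maximum,
     so Runtime.freeMemory() is measured against the full 2 GB. -->
<property>
    <name>mapred.child.java.opts</name>
    <value>-Xms2048m -Xmx2048m</value>
</property>
```

The trade-off is that every task grabs 2 GB of heap up front, so the tasktracker's per-node memory budget has to accommodate that from the start.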