MapReduce user mailing list: Re: Child JVM memory allocation / Usage


Harsh J 2013-03-25, 04:56
nagarjuna kanamarlapudi 2013-03-25, 05:02
Hemanth Yamijala 2013-03-25, 06:31
nagarjuna kanamarlapudi 2013-03-25, 08:44
Re: Child JVM memory allocation / Usage
Hmm. How are you loading the file into memory? Is it some sort of memory
mapping, etc.? Are they being read as records? Some details of the app will
help.
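
(For context on why this distinction matters: lines read into String records live on the Java heap, often two to three times the on-disk size due to object headers and two-byte chars, while a memory-mapped file is paged in outside the heap and barely moves freeMemory(). A minimal sketch of the two approaches, assuming a hypothetical input.txt standing in for the 420 MB file from this thread:)

    import java.io.RandomAccessFile;
    import java.nio.MappedByteBuffer;
    import java.nio.channels.FileChannel;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.List;

    public class LoadStyles {
        public static void main(String[] args) throws Exception {
            // (a) On-heap: every line becomes a String object; heap cost is often
            // 2-3x the file size because of object headers and 2-byte chars.
            List<String> lines =
                    Files.readAllLines(Paths.get("input.txt"), StandardCharsets.UTF_8);
            System.out.println("read " + lines.size() + " lines onto the heap");

            // (b) Off-heap: a memory-mapped view; the pages live outside the Java
            // heap, so freeMemory()/maxMemory() barely change.
            try (RandomAccessFile raf = new RandomAccessFile("input.txt", "r");
                 FileChannel ch = raf.getChannel()) {
                MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_ONLY, 0, ch.size());
                System.out.println("mapped " + buf.capacity() + " bytes off-heap");
            }
        }
    }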
On Mon, Mar 25, 2013 at 2:14 PM, nagarjuna kanamarlapudi <
[EMAIL PROTECTED]> wrote:

> Hi Hemanth,
>
> I tried out your suggestion, loading a 420 MB file into memory. It threw a
> Java heap space error.
>
> I am not sure where this 1.6 GB of configured heap went.
>
>
> On Mon, Mar 25, 2013 at 12:01 PM, Hemanth Yamijala <
> [EMAIL PROTECTED]> wrote:
>
>> Hi,
>>
>> The free memory might be low, just because GC hasn't reclaimed what it
>> can. Can you just try reading in the data you want to read and see if that
>> works?
>>
>> Thanks
>> Hemanth
>>
>>
>> On Mon, Mar 25, 2013 at 10:32 AM, nagarjuna kanamarlapudi <
>> [EMAIL PROTECTED]> wrote:
>>
>>> io.sort.mb = 256 MB
>>>
>>>
>>> On Monday, March 25, 2013, Harsh J wrote:
>>>
>>>> The MapTask may consume some memory of its own as well. What is your
>>>> io.sort.mb (MR1) or mapreduce.task.io.sort.mb (MR2) set to?
>>>>
>>>> On Sun, Mar 24, 2013 at 3:40 PM, nagarjuna kanamarlapudi
>>>> <[EMAIL PROTECTED]> wrote:
>>>> > Hi,
>>>> >
>>>> > I configured my child JVM heap to 2 GB, so I thought I could read
>>>> > 1.5 GB of data and store it in memory (in the mapper/reducer).
>>>> >
>>>> > I wanted to confirm this and wrote the following piece of code in
>>>> > the configure method of my mapper.
>>>> >
>>>> > @Override
>>>> > public void configure(JobConf job) {
>>>> >     System.out.println("FREE MEMORY -- " + Runtime.getRuntime().freeMemory());
>>>> >     System.out.println("MAX MEMORY --- " + Runtime.getRuntime().maxMemory());
>>>> > }
>>>> >
>>>> >
>>>> > Surprisingly, the output was:
>>>> >
>>>> > FREE MEMORY -- 341854864  (~342 MB)
>>>> > MAX MEMORY --- 1908932608  (~1.9 GB)
>>>> >
>>>> >
>>>> > I am just wondering what is taking up the extra ~1.6 GB of the heap
>>>> > that I configured for the child JVM.
>>>> >
>>>> >
>>>> > I'd appreciate any help in understanding this scenario.
>>>> >
>>>> >
>>>> >
>>>> > Regards
>>>> >
>>>> > Nagarjuna K
>>>> >
>>>> >
>>>> >
>>>>
>>>>
>>>>
>>>> --
>>>> Harsh J
>>>>
>>>
>>>
>>> --
>>> Sent from iPhone
>>>
>>
>>
>
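
(A note on the numbers in this thread: Runtime.freeMemory() reports free space only within the heap the JVM has committed so far, i.e. totalMemory(), not against the -Xmx ceiling reported by maxMemory(), so a low reading does not mean the configured heap is gone. A minimal sketch of how the three figures relate; HeapHeadroom is a hypothetical name:)

    public class HeapHeadroom {
        public static void main(String[] args) {
            Runtime rt = Runtime.getRuntime();
            long max = rt.maxMemory();     // -Xmx ceiling, ~1.9 GB in the output above
            long total = rt.totalMemory(); // heap committed so far; grows on demand toward max
            long free = rt.freeMemory();   // unused space inside the committed heap only

            long used = total - free;      // live objects plus garbage not yet collected
            long headroom = max - used;    // closer estimate of what can still be allocated
                                           // (a full GC may free even more)
            System.out.printf("max=%d total=%d free=%d used=%d headroom=%d%n",
                    max, total, free, used, headroom);
        }
    }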
Ted 2013-03-25, 01:27
nagarjuna kanamarlapudi 2013-03-25, 01:39
Ted 2013-03-25, 03:27
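
(On Harsh's io.sort.mb point: in MR1 the map-side sort buffer is allocated inside the same child heap that mapred.child.java.opts sets, so with the settings from this thread 256 MB of the 2 GB heap is spoken for before user code allocates anything. A minimal sketch of the two settings together, using the MR1 API; ChildHeapConfig is a hypothetical name:)

    import org.apache.hadoop.mapred.JobConf;

    public class ChildHeapConfig {
        public static void main(String[] args) {
            JobConf conf = new JobConf();
            // Child task JVM heap (MR1); everything below must fit inside it.
            conf.set("mapred.child.java.opts", "-Xmx2048m");
            // Map-side sort buffer: io.sort.mb in MR1, mapreduce.task.io.sort.mb in MR2.
            // Allocated inside the child heap, so it competes with user data structures.
            conf.setInt("io.sort.mb", 256);
        }
    }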