Re: Child JVM memory allocation / Usage
I have a lookup file which I need in the mapper, so I am trying to read the whole file and load it into a list in the mapper.
For each and every input record, I do a lookup in this file, which I got from the distributed cache.

Sent from iPhone
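
A minimal sketch of that pattern, assuming the MR1 API used later in this thread (JobConf/configure); the cached file name "lookup.txt" and the class name are hypothetical:

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;
    import java.util.ArrayList;
    import java.util.List;

    import org.apache.hadoop.filecache.DistributedCache;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.MapReduceBase;

    public class LookupMapper extends MapReduceBase {
      private final List<String> lookup = new ArrayList<String>();

      @Override
      public void configure(JobConf job) {
        try {
          // Files registered with DistributedCache.addCacheFile(...) show up
          // as local paths on each task node.
          Path[] cached = DistributedCache.getLocalCacheFiles(job);
          if (cached == null) return;
          for (Path p : cached) {
            if (!p.getName().equals("lookup.txt")) continue; // hypothetical name
            BufferedReader reader = new BufferedReader(new FileReader(p.toString()));
            try {
              String line;
              while ((line = reader.readLine()) != null) {
                lookup.add(line); // loaded once per task, not once per record
              }
            } finally {
              reader.close();
            }
          }
        } catch (IOException e) {
          throw new RuntimeException("Could not load lookup file from distributed cache", e);
        }
      }
      // A real mapper would also implement Mapper<K1, V1, K2, V2> and
      // consult `lookup` for every input record in map().
    }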

On Mon, Mar 25, 2013 at 6:39 PM, Hemanth Yamijala
<[EMAIL PROTECTED]> wrote:

> Hmm. How are you loading the file into memory? Is it some sort of memory
> mapping, etc.? Are they being read as records? Some details of the app will
> help.
> On Mon, Mar 25, 2013 at 2:14 PM, nagarjuna kanamarlapudi <
> [EMAIL PROTECTED]> wrote:
>> Hi Hemanth,
>>
>> I tried out your suggestion and loaded the 420 MB file into memory. It
>> threw a Java heap space error.
>>
>> I am not sure where the 1.6 GB of configured heap went.
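
A rough way to see where the heap goes: on pre-Java-9 JVMs each String char takes 2 bytes, plus per-object and list overhead, so a 420 MB file can plausibly need 1 GB or more of heap once resident as a List<String>. A minimal standalone sketch that measures this (the path "lookup.txt" is a placeholder; the measurement is approximate, since no GC is forced):

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;
    import java.util.ArrayList;
    import java.util.List;

    public class LoadCost {
      public static void main(String[] args) throws IOException {
        Runtime rt = Runtime.getRuntime();
        long before = rt.totalMemory() - rt.freeMemory();
        List<String> lines = new ArrayList<String>();
        BufferedReader reader = new BufferedReader(new FileReader("lookup.txt"));
        try {
          String line;
          while ((line = reader.readLine()) != null) {
            lines.add(line);
          }
        } finally {
          reader.close();
        }
        long after = rt.totalMemory() - rt.freeMemory();
        System.out.println("Lines loaded: " + lines.size());
        // Rough figure: includes the char data plus String/ArrayList overhead.
        System.out.println("Approx heap consumed: " + (after - before) / (1024 * 1024) + " MB");
      }
    }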
>>
>>
>> On Mon, Mar 25, 2013 at 12:01 PM, Hemanth Yamijala <
>> [EMAIL PROTECTED]> wrote:
>>
>>> Hi,
>>>
>>> The free memory might be low just because GC hasn't reclaimed what it
>>> can. Can you just try reading in the data you want and see if that
>>> works?
>>>
>>> Thanks
>>> Hemanth
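
One note on the measurement itself: Runtime.freeMemory() only reports free space inside the heap the JVM has committed so far (totalMemory()), not the full -Xmx ceiling. A minimal sketch of a more meaningful headroom figure, using only standard java.lang.Runtime calls:

    public class HeapHeadroom {
      public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long free = rt.freeMemory();          // free space within the committed heap
        long total = rt.totalMemory();        // heap committed so far (can still grow)
        long max = rt.maxMemory();            // the -Xmx ceiling
        long available = max - total + free;  // approximate allocatable headroom
        System.out.println("Approx available heap: " + available / (1024 * 1024) + " MB");
      }
    }

By this measure, a low freeMemory() next to a large maxMemory() (as in the numbers below) may simply mean the heap is not yet fully committed, or contains garbage not yet collected.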
>>>
>>>
>>> On Mon, Mar 25, 2013 at 10:32 AM, nagarjuna kanamarlapudi <
>>> [EMAIL PROTECTED]> wrote:
>>>
>>>> io.sort.mb = 256 MB
>>>>
>>>>
>>>> On Monday, March 25, 2013, Harsh J wrote:
>>>>
>>>>> The MapTask may consume some memory of its own as well. What is your
>>>>> io.sort.mb (MR1) or mapreduce.task.io.sort.mb (MR2) set to?
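
Since the map-side sort buffer is carved out of the same task heap, a 256 MB io.sort.mb leaves 256 MB less for user data. A minimal sketch of lowering it, assuming the MR1 JobConf API from this thread (the 100 MB value is purely illustrative):

    import org.apache.hadoop.mapred.JobConf;

    public class SortBufferConfig {
      public static void main(String[] args) {
        JobConf conf = new JobConf();
        // Shrink the sort buffer to free heap for the in-memory lookup list.
        conf.setInt("io.sort.mb", 100);              // MR1 property name
        // MR2 equivalent:
        // conf.setInt("mapreduce.task.io.sort.mb", 100);
      }
    }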
>>>>>
>>>>> On Sun, Mar 24, 2013 at 3:40 PM, nagarjuna kanamarlapudi
>>>>> <[EMAIL PROTECTED]> wrote:
>>>>> > Hi,
>>>>> >
>>>>> > I configured my child JVM heap to 2 GB, so I thought I could read
>>>>> > 1.5 GB of data and store it in memory (mapper/reducer).
>>>>> >
>>>>> > I wanted to confirm this and wrote the following piece of code in the
>>>>> > configure method of the mapper.
>>>>> >
>>>>> > @Override
>>>>> > public void configure(JobConf job) {
>>>>> >     System.out.println("FREE MEMORY -- "
>>>>> >             + Runtime.getRuntime().freeMemory());
>>>>> >     System.out.println("MAX MEMORY ---"
>>>>> >             + Runtime.getRuntime().maxMemory());
>>>>> > }
>>>>> >
>>>>> >
>>>>> > Surprisingly, the output was:
>>>>> >
>>>>> > FREE MEMORY -- 341854864   (≈ 342 MB)
>>>>> > MAX MEMORY --- 1908932608  (≈ 1.9 GB)
>>>>> >
>>>>> >
>>>>> > I am just wondering what is taking up the extra 1.6 GB of the heap
>>>>> > I configured for the child JVM.
>>>>> >
>>>>> >
>>>>> > I would appreciate help in understanding this scenario.
>>>>> >
>>>>> >
>>>>> >
>>>>> > Regards
>>>>> >
>>>>> > Nagarjuna K
>>>>> >
>>>>> >
>>>>> >
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Harsh J
>>>>>
>>>>
>>>>
>>>> --
>>>> Sent from iPhone
>>>>
>>>
>>>
>>