MapReduce >> mail # user >> rack awareness in hadoop


Re: rack awareness in hadoop
The problem is probably not related to the JVM memory so much as the Linux
memory manager.  The exception is in
java.lang.UNIXProcess.<init>(UNIXProcess.java:148)
which implies this is happening while trying to create a new process.  When
a large JVM forks a child, the kernel must be willing to commit a copy of
the parent's entire address space (even though exec() immediately replaces
it), and it is that initial allocation for the new process that the memory
manager is denying.  There could be many reasons why this happens, though
the most likely are your overcommit settings and swap space.  I'd suggest
reading through these details:

https://www.kernel.org/doc/Documentation/vm/overcommit-accounting
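On a box exhibiting this, the relevant kernel settings can be inspected directly; a quick sketch using the standard Linux /proc entries (values shown will of course vary per host):

```shell
# Overcommit policy: 0 = heuristic (default), 1 = always allow, 2 = strict.
# Under strict accounting, a fork of a large JVM can be refused outright.
cat /proc/sys/vm/overcommit_memory

# Under mode 2, committable memory is bounded by
# swap + (overcommit_ratio% of physical RAM).
cat /proc/sys/vm/overcommit_ratio

# How close the box is to its commit limit, and how much swap remains.
grep -E 'CommitLimit|Committed_AS|SwapTotal|SwapFree' /proc/meminfo
```

If Committed_AS is near CommitLimit (or swap is exhausted) when the JT tries to fork the topology script, error=12 is exactly what you would see.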

On Sat, Apr 20, 2013 at 4:00 PM, Kishore Yellamraju <
[EMAIL PROTECTED]> wrote:

> All,
>
> I have posted this question to the CDH mailing list, but I am posting it
> here as well because it is a general Hadoop question.
>
> When the NN or JT gets the rack info, I guess it stores that info in
> memory. Can you tell me where in the JVM memory it stores the results
> (perm gen?)?  I am getting "cannot allocate memory" on the NN and JT, and
> the hosts have more than enough memory. When I looked at the JVM usage
> stats I can see there is not enough free perm space, so if the rack info
> is stored in perm gen, that could explain these memory issues.
>
>
> Thanks in advance !!!
>
>
> exception that i see in logs :
>
> java.io.IOException: Cannot run program "/etc/hadoop/conf/topo.sh" (in
> directory "/usr/lib/hadoop-0.20-mapreduce"): java.io.IOException: error=12,
> Cannot allocate memory
>         at java.lang.ProcessBuilder.start(ProcessBuilder.java:459)
>         at org.apache.hadoop.util.Shell.runCommand(Shell.java:206)
>         at org.apache.hadoop.util.Shell.run(Shell.java:188)
>         at
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:381)
>         at
> org.apache.hadoop.net.ScriptBasedMapping$RawScriptBasedMapping.runResolveCommand(ScriptBasedMapping.java:242)
>         at
> org.apache.hadoop.net.ScriptBasedMapping$RawScriptBasedMapping.resolve(ScriptBasedMapping.java:180)
>         at
> org.apache.hadoop.net.CachedDNSToSwitchMapping.resolve(CachedDNSToSwitchMapping.java:119)
>         at
> org.apache.hadoop.mapred.JobTracker.resolveAndAddToTopology(JobTracker.java:2750)
>         at
> org.apache.hadoop.mapred.JobInProgress.createCache(JobInProgress.java:593)
>         at
> org.apache.hadoop.mapred.JobInProgress.initTasks(JobInProgress.java:765)
>         at
> org.apache.hadoop.mapred.JobTracker.initJob(JobTracker.java:3775)
>         at
> org.apache.hadoop.mapred.EagerTaskInitializationListener$InitJob.run(EagerTaskInitializationListener.java:90)
>         at
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>         at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>         at java.lang.Thread.run(Thread.java:619)
> Caused by: java.io.IOException: java.io.IOException: error=12, Cannot
> allocate memory
>         at java.lang.UNIXProcess.<init>(UNIXProcess.java:148)
>         at java.lang.ProcessImpl.start(ProcessImpl.java:65)
>         at java.lang.ProcessBuilder.start(ProcessBuilder.java:452)
>         ... 14 more
> 2013-04-20 02:07:28,298 ERROR org.apache.hadoop.mapred.JobTracker: Job
> initialization failed:
> java.lang.NullPointerException
>
>
> -Thanks
>  kishore kumar yellamraju |Ground control operations|
> [EMAIL PROTECTED] | 408.203.0424
>
>
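For reference, the script the JT is failing to fork (/etc/hadoop/conf/topo.sh in the trace above) is an ordinary rack-topology script: Hadoop invokes it with one or more hostnames/IPs as arguments and expects one rack path per argument on stdout. A minimal sketch; the rack names and subnet mapping here are hypothetical, and real deployments usually look hosts up in a data file:

```shell
#!/bin/sh
# Sketch of a Hadoop rack-topology script. Prints one rack path per
# argument; hosts that match no rule fall back to /default-rack.
resolve_rack() {
  for host in "$@"; do
    case "$host" in
      10.1.1.*) echo "/dc1/rack1" ;;   # hypothetical subnet -> rack mapping
      10.1.2.*) echo "/dc1/rack2" ;;
      *)        echo "/default-rack" ;;
    esac
  done
}
resolve_rack "$@"
```

The script itself is trivially cheap; the failure in the trace occurs before it ever runs, when the JVM cannot fork a shell to execute it.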