I know there is a lot of discussion about JVM reuse in Hadoop, but that usually refers to mappers running on the cluster nodes. My question is quite different. I am running a Java program which at one point execs hadoop, and that call sometimes fails in the fashion shown below. The issue therefore occurs entirely on the client machine (of course, I am currently running in pseudo-distributed mode, which muddies that point somewhat). In other words, I successfully ran a Java program, but it failed to subsequently run *another* Java program (hadoop). My reading of the Hadoop startup scripts (the hadoop command itself, for example) is that in my scenario they launch a second JVM, and that this second JVM fails to allocate enough memory.
Is there any way to run hadoop from within a JVM such that it reuses the local JVM?
EXCEPTION: java.io.IOException: Cannot run program "hadoop": java.io.IOException: error=12, Cannot allocate memory
Exception in thread "main" java.io.IOException: Cannot run program "hadoop": java.io.IOException: error=12, Cannot allocate memory
Caused by: java.io.IOException: java.io.IOException: error=12, Cannot allocate memory
... 6 more
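For what it's worth, the error=12 typically comes from fork() duplicating the parent JVM's (large) address space before the exec, which the kernel may refuse when memory overcommit is restricted. One way to sidestep the fork entirely is to invoke the other tool's main() in the current JVM rather than exec'ing a new one. The sketch below shows the general pattern with a self-contained stand-in class; with the Hadoop jars on the classpath, the same reflective call could target something like org.apache.hadoop.util.RunJar (the class the hadoop jar script ultimately runs), though whether a given Hadoop entry point tolerates in-process invocation is an assumption to verify.

```java
import java.lang.reflect.Method;

public class InProcessLaunch {

    // Hypothetical stand-in for the tool we would otherwise exec,
    // so this sketch runs without Hadoop on the classpath.
    public static class FakeTool {
        static int lastArgCount = -1;

        public static void main(String[] args) {
            lastArgCount = args.length;
        }
    }

    // Invoke className.main(args) reflectively in THIS JVM:
    // no fork(), so no ENOMEM from duplicating a large heap.
    public static void runMain(String className, String... args) throws Exception {
        Class<?> cls = Class.forName(className);
        Method main = cls.getMethod("main", String[].class);
        main.invoke(null, (Object) args); // static method, so null receiver
    }

    public static void main(String[] args) throws Exception {
        // With Hadoop available this might instead be:
        // runMain("org.apache.hadoop.util.RunJar", "myjob.jar", "MyDriver");
        runMain(FakeTool.class.getName(), "a", "b");
    }
}
```

The caveat is that the launched tool now shares the parent's heap, system properties, and classpath, and a System.exit() inside it will take down the whole process, so this only works for entry points written to be embedded.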
Keith Wiley [EMAIL PROTECTED] keithwiley.com music.keithwiley.com
"You can scratch an itch, but you can't itch a scratch. Furthermore, an itch can
itch but a scratch can't scratch. Finally, a scratch can itch, but an itch can't
scratch. All together this implies: He scratched the itch from the scratch that
itched but would never itch the scratch from the itch that scratched."
-- Keith Wiley