My Java mappers use JNI to call into native .so libraries compiled from C++. In some cases a task exits with status 139, which generally indicates a segfault (139 = 128 + SIGSEGV). I would like to see the core dump, but I can't seem to get it to work.
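For context, the native entry points look roughly like this (class and method names changed for this post):

#include <jni.h>
#include <string>

extern "C" JNIEXPORT void JNICALL
Java_com_example_NativeMapper_processRecord(JNIEnv *env, jobject, jstring record)
{
    const char *chars = env->GetStringUTFChars(record, NULL);
    if (chars == NULL) return;   // allocation failed; Java exception pending
    std::string input(chars);
    env->ReleaseStringUTFChars(record, chars);
    // ... per-record work on 'input' here; any bad pointer in this library
    // segfaults the entire task JVM, hence the exit 139.
}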
I have this line in my driver setup (yes, it's Hadoop 0.19):
and I have this in my .bashrc (which I believe should be propagated to the slave nodes):
ulimit -c unlimited
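For what it's worth, I figure I could also raise the limit from inside the library itself, independent of whatever shell spawned the task. An untested sketch of what I have in mind (raising the soft limit up to the inherited hard ceiling, which needs no privileges):

#include <sys/resource.h>
#include <cstdio>

// Raise the soft core-file limit to the hard ceiling this process
// inherited; could be called once from JNI_OnLoad before any real work.
static void raiseCoreLimit()
{
    struct rlimit rl;
    if (getrlimit(RLIMIT_CORE, &rl) != 0) {
        perror("getrlimit(RLIMIT_CORE)");
        return;
    }
    rl.rlim_cur = rl.rlim_max;            // soft limit -> hard limit
    if (setrlimit(RLIMIT_CORE, &rl) != 0)
        perror("setrlimit(RLIMIT_CORE)");
}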
and in my native code I call getrlimit() and log the results, where I see:
RLIMIT_CORE: 18446744073709551615 18446744073709551615
which indicates the "unlimited" setting (both values are RLIM_INFINITY), but I can't find any core dump files in the node's hadoop directories after the job runs.
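For reference, the check that printed those numbers is essentially:

#include <sys/resource.h>
#include <cstdio>

// Report the soft and hard RLIMIT_CORE values for the current process.
// 18446744073709551615 is 2^64 - 1, i.e. RLIM_INFINITY on 64-bit Linux,
// so both limits really are "unlimited" inside the task JVM.
static void reportCoreLimit()
{
    struct rlimit rl;
    if (getrlimit(RLIMIT_CORE, &rl) == 0)
        fprintf(stderr, "RLIMIT_CORE: %llu %llu\n",
                (unsigned long long)rl.rlim_cur,
                (unsigned long long)rl.rlim_max);
}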
Any ideas what I'm doing wrong?