Java->native .so->seg fault->core dump file?
My Java mappers use JNI to call native .so files compiled from C++.  In some cases, a task exits with status 139, which generally indicates a seg-fault.  I would like to see the core dump, but I can't seem to get one.
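To make the setup concrete, the mapper is shaped roughly like this (just a sketch; the class, library, and method names are made up for illustration):

import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;

// Rough shape of the mapper: each map() call crosses into native C++ code.
// A segfault on the native side kills the whole task JVM, which the
// TaskTracker then reports as exit code 139 (128 + SIGSEGV).
public class NativeMapper extends MapReduceBase
        implements Mapper<LongWritable, Text, Text, Text> {

    static {
        // The .so compiled from C++; loaded once per task JVM.
        System.loadLibrary("mynative");
    }

    // Implemented in the C++ library via JNI.
    private native String process(String record);

    public void map(LongWritable key, Text value,
                    OutputCollector<Text, Text> output, Reporter reporter)
            throws IOException {
        output.collect(new Text(key.toString()),
                       new Text(process(value.toString())));
    }
}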

I have this line in my driver setup (yes, it's still Hadoop 0.19):
conf.setBoolean("keep.failed.task.files", true);
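
For context, the relevant part of the driver looks roughly like this (a sketch; keep.task.files.pattern is an alternative I haven't actually tried, noted only as a possibility):

import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;

public class Driver {
    public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(Driver.class);
        conf.setJobName("jni-mapper-job");

        // Keep the working directories of failed tasks on the slave nodes,
        // so anything written there (including a core file) survives the job.
        conf.setBoolean("keep.failed.task.files", true);

        // Untried alternative: keep files only for tasks whose ID matches
        // a pattern, e.g. all map tasks.
        // conf.set("keep.task.files.pattern", ".*_m_.*");

        // ... input/output paths, mapper class, native library distribution ...

        JobClient.runJob(conf);
    }
}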

and I have this in my .bashrc (which I believe should be propagated to the slave nodes):
ulimit -c unlimited

and in my native code I call getrlimit() and log the soft and hard limits, where I see:
RLIMIT_CORE:  18446744073709551615     18446744073709551615

which indicates the "unlimited" setting, but I can't find any core dump files in the node's hadoop directories after the job runs.
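
For what it's worth, here is the kind of check I can run from inside the task JVM itself (Linux-only, purely a diagnostic sketch): it prints the task's working directory, the "Max core file size" line from /proc/self/limits (the limit the child JVM actually inherited, regardless of what .bashrc says), and /proc/sys/kernel/core_pattern, which determines where, or whether, the kernel writes a core file into the task's cwd at all.

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

// Diagnostic sketch (Linux-only): run from inside the mapper (or as a
// standalone check on a slave node) to see the environment the task
// JVM really gets.
public class CoreDumpCheck {

    // The core-size limit inherited by this process and its JNI code.
    static void printCoreLimit() throws IOException {
        BufferedReader r = new BufferedReader(new FileReader("/proc/self/limits"));
        try {
            String line;
            while ((line = r.readLine()) != null) {
                if (line.startsWith("Max core file size")) {
                    System.err.println(line);
                }
            }
        } finally {
            r.close();
        }
    }

    // If core_pattern is an absolute path or a pipe ("|/usr/..."),
    // the core will not land in the task's working directory.
    static void printCorePattern() throws IOException {
        BufferedReader r = new BufferedReader(new FileReader("/proc/sys/kernel/core_pattern"));
        try {
            System.err.println("core_pattern: " + r.readLine());
        } finally {
            r.close();
        }
    }

    public static void main(String[] args) throws IOException {
        System.err.println("cwd: " + System.getProperty("user.dir"));
        printCoreLimit();
        printCorePattern();
    }
}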

Any ideas what I'm doing wrong?

________________________________________________________________________________
Keith Wiley               [EMAIL PROTECTED]               www.keithwiley.com

"It's a fine line between meticulous and obsessive-compulsive and a slippery
rope between obsessive-compulsive and debilitatingly slow."
  -- Keith Wiley
________________________________________________________________________________