Re: Java->native .so->seg fault->core dump file?
Keith Wiley 2011-01-24, 18:05

On Jan 21, 2011, at 16:47, Greg Roelofs wrote:

>> My Java mappers use JNI to call native .so files compiled
>> from C++.  In some cases, the task status ends with exit 139,
>> which generally indicates a seg-fault.  I would like to see the
>> core-dump, but I can't seem to get it to work.
>
> No clue about 0.19, but does the owner of the process(es) in question
> have permission to write to the directory in question?  We've seen a
> similar issue in which root or ops or somebody owns the HADOOP_HOME
> dir (which, IIRC, is where many of the processes get started), so
> neither mapred nor hdfs has permission to write anything there.

Hmmm, I wouldn't have expected task core dumps to have any dependence on HADOOP_HOME.  I believe HADOOP_HOME is primarily accessed by the driver, while the tasks primarily use hadoop.tmp.dir, dfs.name.dir, and dfs.data.dir.

Anyone have any thoughts on this?
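
For reference, here's the sort of diagnostic I've been toying with to rule out the two obvious suspects from inside the task itself: a zero core-file size limit, or a working directory the task's user can't write to.  It's only a rough sketch (it assumes bash is present on the task nodes, and the class/method names below are just placeholders), called from the mapper's configure() with the output showing up in the task attempt's stderr log:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;

    // Rough diagnostic sketch (hypothetical names): print the task JVM's
    // core-file size limit and working directory to stderr.  If "ulimit -c"
    // reports 0, the kernel will never write a core file for this process,
    // regardless of directory permissions.
    public class CoreDumpDiag {
        public static void logCoreDumpSettings() throws Exception {
            ProcessBuilder pb = new ProcessBuilder(
                "bash", "-c", "echo cwd=$(pwd); echo core_limit=$(ulimit -c)");
            pb.redirectErrorStream(true);
            Process p = pb.start();
            BufferedReader r = new BufferedReader(
                new InputStreamReader(p.getInputStream()));
            String line;
            while ((line = r.readLine()) != null) {
                System.err.println(line);  // lands in the task attempt's stderr log
            }
            p.waitFor();
        }
    }

If the limit comes back as 0, something in the TaskTracker's environment is clamping it before the child JVM is forked; if it's unlimited but the reported cwd isn't writable by the task's user, that would line up with Greg's theory.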

________________________________________________________________________________
Keith Wiley               [EMAIL PROTECTED]               www.keithwiley.com

"I do not feel obliged to believe that the same God who has endowed us with
sense, reason, and intellect has intended us to forgo their use."
  -- Galileo Galilei
________________________________________________________________________________