Re: Java->native .so->seg fault->core dump file?
On Jan 28, 2011, at 09:39, Allen Wittenauer wrote:

>
> On Jan 21, 2011, at 12:57 PM, Keith Wiley wrote:
>> and I have this in my .bashrc (which I believe should be propagated to the slave nodes):
>> ulimit -c unlimited
>
> .bashrc likely isn't executed at task startup, btw.  Also, you would need to have this in whatever account is used to run the tasktracker...

True...good point.
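
One way to sidestep the shell-init problem entirely is to raise the limit
from inside the native library itself, before the crash-prone code runs.
A minimal sketch (enable_core_dumps is a hypothetical helper, not anything
from this thread; you would call it early in the library's initialization):

    #include <stdio.h>
    #include <sys/resource.h>

    /* Raise the soft core-file limit to the hard limit, so core dumps
     * are enabled even if .bashrc is never sourced when the tasktracker
     * spawns the task JVM. */
    static void enable_core_dumps(void)
    {
        struct rlimit rl;
        if (getrlimit(RLIMIT_CORE, &rl) != 0) {
            perror("getrlimit(RLIMIT_CORE)");
            return;
        }
        rl.rlim_cur = rl.rlim_max;  /* soft limit := hard limit */
        if (setrlimit(RLIMIT_CORE, &rl) != 0)
            perror("setrlimit(RLIMIT_CORE)");
    }

An unprivileged process can raise its soft limit only up to the hard limit,
which is why the sketch copies rlim_max into rlim_cur instead of writing
RLIM_INFINITY directly.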

>> and in my native code I call getrlimit() and write the results, where I see:
>> RLIMIT_CORE:  18446744073709551615     18446744073709551615
>>
>> which indicates the "unlimited" setting, but I can't find any core dump files in the node's hadoop directories after the job runs.
>>
>> Any ideas what I'm doing wrong?
>
> Which operating system?  On Linux, what is the value of /proc/sys/kernel/core_pattern?  On Solaris, what is in /etc/coreadm.conf?

Linux.  Are you asking about the value on the cluster or on my local machine?  On the namenode, at least, the value of /proc/sys/kernel/core_pattern is "core".

Thanks.
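
For reference, the limit check described above isn't shown in the thread,
but with getrlimit(2) it might look roughly like this (print_core_limit is
a hypothetical name; the two printed columns are the soft and hard limits):

    #include <stdio.h>
    #include <sys/resource.h>

    /* Print the soft and hard RLIMIT_CORE values for this process.
     * RLIM_INFINITY prints as 18446744073709551615 on 64-bit Linux. */
    static void print_core_limit(void)
    {
        struct rlimit rl;
        if (getrlimit(RLIMIT_CORE, &rl) == 0)
            printf("RLIMIT_CORE:  %llu\t%llu\n",
                   (unsigned long long)rl.rlim_cur,
                   (unsigned long long)rl.rlim_max);
    }

Also worth noting: a core_pattern of plain "core" is a relative path, so the
kernel writes the dump into the crashing process's current working directory.
For a Hadoop task that is the per-attempt working directory, which is
normally cleaned up when the job finishes, so a dump may well be created and
then deleted before anyone goes looking for it.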

________________________________________________________________________________
Keith Wiley               [EMAIL PROTECTED]               www.keithwiley.com

"And what if we picked the wrong religion?  Every week, we're just making God
madder and madder!"
  -- Homer Simpson
________________________________________________________________________________