MapReduce, mail # user - Hadoop 1.0.3 (nutch-1.5.1) throwing errors on AIX 6.1


Re: Hadoop 1.0.3 (nutch-1.5.1) throwing errors on AIX 6.1
Steve Loughran 2012-08-27, 01:49
On 24 August 2012 06:49, James F Walton <[EMAIL PROTECTED]> wrote:

> Good point.  I did some reading over there and it looks like even the IBM
> packaging of Hadoop (BigInsights) is geared strictly towards Linux.  Both
> the Enterprise and Basic editions only list support for Red Hat or SuSE
> Enterprise Linux.
>
> So, thanks for those that chimed in.  Off to platform migration planning I
> go.
>
I tried running Hadoop on JRockit a few years back; after being the only
person filing JIRAs related to that JVM, I reverted to the Sun JDK, as that's
the only JVM Hadoop is tested against at scale before Apache releases.

One thing you could do is try to persuade the IBM Power JVM team to start
using Hadoop as part of their JVM qualification process, and then get linked
in with the Jenkins-based build & test process, so that regressions get
picked up sooner rather than later. There's no fundamental reason why Hadoop
won't work on other platforms.

>
>
>
>
> From:        Mike Spreitzer/Watson/IBM@IBMUS
> To:        [EMAIL PROTECTED]
> Date:        08/24/2012 09:28 AM
> Subject:        Re: Hadoop 1.0.3 (nutch-1.5.1) throwing errors on AIX 6.1
> ------------------------------
>
>
>
> While I am not involved with it, I am aware that IBM has a Hadoop
> distribution of its own; I suspect you can expect better coverage from it
> than from the base distribution.  Here is a pointer:
> http://www-01.ibm.com/software/data/infosphere/biginsights/
>
> Regards,
> Mike
>
>
>
> From:        James F Walton/Southbury/IBM@IBMUS
> To:        [EMAIL PROTECTED]
> Date:        08/24/2012 09:18 AM
> Subject:        Re: Hadoop 1.0.3 (nutch-1.5.1) throwing errors on AIX 6.1
>  ------------------------------
>
>
>
> I'm not entirely sure it's fair to say it's a bug in the IBM JVM.  It's a
> current implementation difference.  They are still using platform-specific
> authentication modules for Windows, AIX, and Linux.  Even Sun/Oracle Java
> has a specific SolarisLoginModule, which is deprecated but still available.
>
> Essentially, depending on what OS/architecture you are on, one of the
> following will exist:
> com.ibm.security.auth.module.NTLoginModule
> com.ibm.security.auth.module.LinuxLoginModule
> com.ibm.security.auth.module.AIXLoginModule
> com.ibm.security.auth.module.AIX64LoginModule
>
> Basically, instead of two possible outcomes, there are four.
>
>
>
> From:        Steve Loughran <[EMAIL PROTECTED]>
> To:        [EMAIL PROTECTED]
> Date:        08/22/2012 03:09 PM
> Subject:        Re: Hadoop 1.0.3 (nutch-1.5.1) throwing errors on AIX 6.1
>  ------------------------------
>
>
>
> This is something you ought to raise with the IBM JVM team, as it does
> appear to be a bug in their JVM.
>
> On 21 August 2012 10:11, James F Walton <[EMAIL PROTECTED]> wrote:
> Found part of the reason while digging through UserGroupInformation.java
>
> /* Return the OS login module class name */
> private static String getOSLoginModuleName() {
>   if (System.getProperty("java.vendor").contains("IBM")) {
>     return windows ? "com.ibm.security.auth.module.NTLoginModule"
>      : "com.ibm.security.auth.module.LinuxLoginModule";
>   } else {
>     return windows ? "com.sun.security.auth.module.NTLoginModule"
>       : "com.sun.security.auth.module.UnixLoginModule";
>   }
> }
>
>
> So basically, if you use IBM Java, Hadoop assumes you must be on either
> Windows or Linux.  IBM's Java has platform-specific LoginModules: there's
> AIXLoginModule for 32-bit Java on AIX and AIX64LoginModule for 64-bit Java
> on AIX; the IBM Linux module, however, appears to have no 32-bit vs 64-bit
> differentiation.
>
> So, unless anyone has a means to disable this whole security setup (I'm
> not using a hadoop cluster or anything), or wants to dive headlong into
> making the necessary code changes (which I presume from my cursory scanning
> would include a little more than just the above snippet, like the
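The fix James is describing could be sketched roughly as below. This is a hypothetical illustration only, not an actual Hadoop patch: the class name `OSLoginModuleSelector` and the `select` method are invented here so the branching logic can be tested in isolation, and the bitness property name differs by vendor (`sun.arch.data.model` on Sun/Oracle JVMs; IBM JVMs use `com.ibm.vm.bitmode`), so a real change would need to handle that too.

```java
// Hypothetical sketch of extending the getOSLoginModuleName() logic quoted
// above to cover IBM Java on AIX. Not the actual Hadoop code; the selection
// is pulled into a pure method so it can be exercised without a real AIX box.
public class OSLoginModuleSelector {

  /**
   * @param vendor    value of the java.vendor system property
   * @param osName    value of the os.name system property
   * @param dataModel JVM bitness, "32" or "64" (property name is
   *                  vendor-specific, e.g. com.ibm.vm.bitmode on IBM JVMs)
   */
  static String select(String vendor, String osName, String dataModel) {
    boolean ibm = vendor.contains("IBM");
    boolean windows = osName.startsWith("Windows");
    if (ibm) {
      if (windows) {
        return "com.ibm.security.auth.module.NTLoginModule";
      }
      if ("AIX".equals(osName)) {
        // IBM ships separate 32-bit and 64-bit login modules on AIX,
        // unlike its Linux module, which has no bitness split.
        return "64".equals(dataModel)
            ? "com.ibm.security.auth.module.AIX64LoginModule"
            : "com.ibm.security.auth.module.AIXLoginModule";
      }
      return "com.ibm.security.auth.module.LinuxLoginModule";
    }
    return windows ? "com.sun.security.auth.module.NTLoginModule"
                   : "com.sun.security.auth.module.UnixLoginModule";
  }

  public static void main(String[] args) {
    // On a live JVM the inputs would come from system properties:
    System.out.println(select(System.getProperty("java.vendor"),
                              System.getProperty("os.name"),
                              System.getProperty("sun.arch.data.model")));
  }
}
```

With something like this in place, an IBM JVM on AIX would resolve to the AIX modules instead of falling through to the nonexistent-on-AIX LinuxLoginModule.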