Hadoop, mail # user - Issues during setting up hadoop security cluster


Re: Issues during setting up hadoop security cluster
Vinod Kumar Vavilapalli 2012-01-20, 19:28
You are on the right path for sure.

Where are you updating the JCE policy jar? (I know the RM-NM case is
working after this, so just checking.)

Maybe the datanodes are not using the same JRE that you updated with
the new policy jar? Can you check that? jsvc shouldn't cause any more
issues; it is likely related to the JAVA_HOME used for the datanode.
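A quick way to check this on the datanode host is sketched below. The paths follow the usual Oracle/Sun JDK layout and are illustrative, not taken from this thread; adjust them to your install:

```shell
# Surface the jsvc process so you can see which java home it was launched with
# (the [j] trick keeps grep from matching itself).
ps -ef | grep '[j]svc' || echo "no jsvc process found on this host"

# The JRE you *think* you updated with the policy jars:
echo "JAVA_HOME=${JAVA_HOME:-not set}"

# Check whether the unlimited-strength policy jars exist in that JRE.
# Without them, AES-256 Kerberos tickets cannot be decrypted.
for jar in local_policy.jar US_export_policy.jar; do
  ls "${JAVA_HOME}/jre/lib/security/${jar}" 2>/dev/null \
    || echo "${jar} not found under \$JAVA_HOME/jre/lib/security"
done
```

If the jars are missing from the JRE that jsvc actually uses, that would explain why the NM-RM path works while the datanode still fails.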

Thanks,
+Vinod
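To confirm from the exact JRE in question whether the unlimited-strength policy is active, the standard `javax.crypto.Cipher.getMaxAllowedKeyLength` call can be used; this small check is a sketch not taken from the thread, and the class name is hypothetical:

```java
import javax.crypto.Cipher;

// Compile and run this with the same java binary the datanode uses, e.g.
//   $JAVA_HOME/bin/java JcePolicyCheck
// With the unlimited-strength JCE policy installed the reported limit is
// Integer.MAX_VALUE; with the default export policy it is 128.
public class JcePolicyCheck {
    public static void main(String[] args) throws Exception {
        int max = Cipher.getMaxAllowedKeyLength("AES");
        System.out.println("Max allowed AES key length: " + max);
        System.out.println(max >= 256
            ? "AES-256 Kerberos tickets can be decrypted by this JRE"
            : "AES-256 NOT enabled; install the unlimited JCE policy jars");
    }
}
```

Running this under the jsvc-launched JRE removes any guesswork about which policy files that process actually sees.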

On Fri, Jan 20, 2012 at 2:33 AM, Emma Lin <[EMAIL PROTECTED]> wrote:
> After removing the upper-case characters, the problem disappeared. Now the node manager connects to the resource manager successfully.
> Thank you Vinod.
>
> But now I have another issue connecting to the Name Node from the Data Node. The log on the Name Node is as follows:
> 2012-01-20 18:17:02,127 WARN  ipc.Server (Server.java:saslReadAndProcess(1070)) - Auth failed for 10.112.127.14:60456:null
> 2012-01-20 18:17:02,128 INFO  ipc.Server (Server.java:doRead(572)) - IPC Server listener on 9000: readAndProcess threw exception javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: Failure unspecified at GSS-API level (Mechanism level: Encryption type AES256 CTS mode with HMAC SHA1-96 is not supported/enabled)] from client 10.112.127.14. Count of bytes read: 0
> javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: Failure unspecified at GSS-API level (Mechanism level: Encryption type AES256 CTS mode with HMAC SHA1-96 is not supported/enabled)]
>        at com.sun.security.sasl.gsskerb.GssKrb5Server.evaluateResponse(GssKrb5Server.java:159)
>        at org.apache.hadoop.ipc.Server$Connection.saslReadAndProcess(Server.java:1054)
>        at org.apache.hadoop.ipc.Server$Connection.readAndProcess(Server.java:1232)
>        at org.apache.hadoop.ipc.Server$Listener.doRead(Server.java:567)
>        at org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(Server.java:366)
>        at org.apache.hadoop.ipc.Server$Listener$Reader.run(Server.java:341)
> Caused by: GSSException: Failure unspecified at GSS-API level (Mechanism level: Encryption type AES256 CTS mode with HMAC SHA1-96 is not supported/enabled)
>        at sun.security.jgss.krb5.Krb5Context.acceptSecContext(Krb5Context.java:741)
>        at sun.security.jgss.GSSContextImpl.acceptSecContext(GSSContextImpl.java:323)
>        at sun.security.jgss.GSSContextImpl.acceptSecContext(GSSContextImpl.java:267)
>        at com.sun.security.sasl.gsskerb.GssKrb5Server.evaluateResponse(GssKrb5Server.java:137)
>        ... 5 more
> Caused by: KrbException: Encryption type AES256 CTS mode with HMAC SHA1-96 is not supported/enabled
>        at sun.security.krb5.EncryptionKey.findKey(EncryptionKey.java:481)
>        at sun.security.krb5.KrbApReq.authenticate(KrbApReq.java:260)
>        at sun.security.krb5.KrbApReq.<init>(KrbApReq.java:134)
>        at sun.security.jgss.krb5.InitSecContextToken.<init>(InitSecContextToken.java:79)
>        at sun.security.jgss.krb5.Krb5Context.acceptSecContext(Krb5Context.java:724)
>        ... 8 more
>
> From the internet, someone said this is because Java supports only AES 128 by default, and that to support AES 256 we need to install the unlimited-strength JCE policy files. But after installing the JCE policy, the node manager can connect to the resource manager, while the data node still cannot connect to the name node.
> Since the datanode is started through jsvc, I don't know whether the Java setting takes effect when it is launched through jsvc. In any case, it still complains that AES 256 is not supported.
>
> Any ideas?
> Thanks
> Emma
>
>
> -----Original Message-----
> From: Vinod Kumar Vavilapalli [mailto:[EMAIL PROTECTED]]
> Sent: 20 January 2012 13:23
> To: [EMAIL PROTECTED]
> Subject: Re: Issues during setting up hadoop security cluster
>
> Hi,
>
> Just this evening, I happened to run into someone who had the same
> issue. After some debugging, I traced it to the hostnames having
> upper-case characters. Somehow, when the DataNode or NodeManager tries to
> get a service ticket for its corresponding service (NameNode or
> ResourceManager, respectively), the hostnames were getting converted