Hadoop >> mail # user >> hadoop security API (repost)


Re: hadoop security API (repost)
> You could do that, but that means your app will have to have keytabs
> for all the users you want to act as. Proxyuser will be much easier to
> manage. Maybe worth getting proxyuser support into HBase if it is not
> there yet.

I don't think proxy auth is what the OP is after. Do I have that
right? It implies the presence of a node somewhere acting as the proxy.
For HBase, there is https://issues.apache.org/jira/browse/HBASE-5050
which would enable proxyuser support via the REST gateway as simple
follow-on work.
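For context, the keytab-per-user approach mentioned in the quoted text (as opposed to proxyuser) would look roughly like this with Hadoop's Java API. This is a sketch, not tested against a live cluster; the principal and keytab path passed by callers would be placeholders:

```java
import java.io.IOException;

import org.apache.hadoop.security.UserGroupInformation;

public class PerUserLogin {
    // Log in directly as a given user from that user's own keytab.
    // Workable, but it means distributing and securing one keytab per
    // user -- the management burden proxyuser avoids.
    public static UserGroupInformation loginAs(String principal, String keytab)
            throws IOException {
        // Obtains a TGT for 'principal' without replacing the
        // process-wide login user.
        return UserGroupInformation.loginUserFromKeytabAndReturnUGI(principal, keytab);
    }
}
```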

On Mon, Jul 2, 2012 at 9:21 AM, Alejandro Abdelnur <[EMAIL PROTECTED]> wrote:
> On Mon, Jul 2, 2012 at 9:15 AM, Tony Dean <[EMAIL PROTECTED]> wrote:
>> Alejandro,
>>
>> Thanks for the reply.  My intent is to also be able to scan/get/put HBase tables under a specified identity.  What options do I have to perform the same multi-tenant authorization for these operations?  I have posted this to the hbase-user distribution list as well, but thought you might have insight.  Since HBase security authentication is so dependent upon Hadoop, it would be nice if your suggestion worked for HBase as well.
>>
>> Getting back to your suggestion... when configuring "hadoop.proxyuser.myserveruser.hosts", host1 would be where I'm making the ugi.doAs() privileged call and host2 is the hadoop namenode?
>>
>
> host1 in that case.
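Concretely, the cluster-side core-site.xml entries for a proxyuser named myserveruser might look like the following. This is an illustrative sketch; the host and group values are placeholders and should be as restrictive as possible:

```xml
<!-- core-site.xml: allow myserveruser to impersonate other users -->
<property>
  <name>hadoop.proxyuser.myserveruser.hosts</name>
  <!-- host(s) FROM WHICH the doAs() calls are made, i.e. host1 -->
  <value>host1.example.com</value>
</property>
<property>
  <name>hadoop.proxyuser.myserveruser.groups</name>
  <!-- groups whose members myserveruser may impersonate -->
  <value>users</value>
</property>
```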
>
>> Also, as another option, is there a way for an application to pass Hadoop/HBase authentication the name of a Kerberos principal to use?  In this case, no proxy, just execute as the designated user.
>
> You could do that, but that means your app will have to have keytabs
> for all the users you want to act as. Proxyuser will be much easier to
> manage. Maybe worth getting proxyuser support into HBase if it is not
> there yet.
>
>
>> Thanks.
>>
>> -Tony
>>
>> -----Original Message-----
>> From: Alejandro Abdelnur [mailto:[EMAIL PROTECTED]]
>> Sent: Monday, July 02, 2012 11:40 AM
>> To: [EMAIL PROTECTED]
>> Subject: Re: hadoop security API (repost)
>>
>> Tony,
>>
>> If you are doing a server app that interacts with the cluster on behalf of different users (like Oozie, as you mentioned in your email), then you should use the proxyuser capabilities of Hadoop.
>>
>> * Configure user MYSERVERUSER as a proxyuser in Hadoop's core-site.xml (this requires two property settings, HOSTS and GROUPS).
>> * Run your server app as MYSERVERUSER and have a Kerberos principal MYSERVERUSER/MYSERVERHOST.
>> * Initialize your server app by loading the MYSERVERUSER/MYSERVERHOST keytab.
>> * Use UGI.doAs() to create JobClient/FileSystem instances as the user on whose behalf you want to act.
>> * Keep in mind that all the users on whose behalf you act must be valid Unix users in the cluster.
>> * If those users need direct access to the cluster, they'll also have to be defined in the KDC user database.
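The steps above can be sketched in Java roughly as follows. This is a sketch under stated assumptions, not tested against a live cluster; the principal name, keytab path, and "enduser" are placeholders:

```java
import java.security.PrivilegedExceptionAction;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.security.UserGroupInformation;

public class ProxyUserSketch {
    public static void main(String[] args) throws Exception {
        final Configuration conf = new Configuration();

        // Log the server app in from its own keytab
        // (principal and path are illustrative placeholders).
        UserGroupInformation.loginUserFromKeytab(
                "myserveruser/myserverhost@EXAMPLE.COM",
                "/etc/security/keytabs/myserveruser.keytab");

        // Create a proxy UGI for the end user we want to act on behalf of.
        UserGroupInformation proxyUgi = UserGroupInformation.createProxyUser(
                "enduser", UserGroupInformation.getLoginUser());

        // Any FileSystem/JobClient created inside doAs() acts as 'enduser';
        // the NameNode checks the hadoop.proxyuser.* rules at this point.
        FileSystem fs = proxyUgi.doAs(new PrivilegedExceptionAction<FileSystem>() {
            public FileSystem run() throws Exception {
                return FileSystem.get(conf);
            }
        });
        System.out.println("Acting as: " + proxyUgi.getShortUserName());
    }
}
```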
>>
>> Hope this helps.
>>
>> Thx
>>
>> On Mon, Jul 2, 2012 at 6:22 AM, Tony Dean <[EMAIL PROTECTED]> wrote:
>>> Yes, but this will not work in a multi-tenant environment.  I need to be able to create a Kerberos TGT per execution thread.
>>>
>>> I was hoping through JAAS that I could inject the name of the current principal and authenticate against it.  I'm sure there is a best practice for hadoop/hbase client API authentication, just not sure what it is.
>>>
>>> Thank you for your comment.  The solution may well be associated with the UserGroupInformation class.  Hopefully, other ideas will come from this thread.
>>>
>>> Thanks.
>>>
>>> -Tony
>>>
>>> -----Original Message-----
>>> From: Ivan Frain [mailto:[EMAIL PROTECTED]]
>>> Sent: Monday, July 02, 2012 8:14 AM
>>> To: [EMAIL PROTECTED]
>>> Subject: Re: hadoop security API (repost)
>>>
>>> Hi Tony,
>>>
>>> I am currently working on this to access HDFS securely and programmatically.
>>> What I have found so far may help even if I am not 100% sure this is the right way to proceed.
>>>
>>> If you have already obtained a TGT from the kinit command, the Hadoop library will locate it "automatically" as long as the ticket cache is in the default location. On Linux this is /tmp/krb5cc_<uid>.
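On a typical Linux system with MIT Kerberos defaults, that default cache path can be computed like this (assuming KRB5CCNAME has not been set to override it):

```shell
# Default ticket cache used by kinit when KRB5CCNAME is unset (MIT Kerberos).
CACHE="${KRB5CCNAME:-/tmp/krb5cc_$(id -u)}"
echo "$CACHE"
# After kinit, inspect it with: klist -c "$CACHE"
```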

Best regards,

   - Andy

Problems worthy of attack prove their worth by hitting back. - Piet
Hein (via Tom White)