Re: [DISCUSS] Hadoop SSO/Token Server Components
It seems to me that we can have the best of both worlds here…it's all about the scoping.

If we were to reframe the immediate scope to the lowest common denominator of what is needed for accepting tokens in authentication plugins, then we gain (a sketch of such a plugin follows the list below):

1. a very manageable scope to define and agree upon
2. a deliverable that should be useful in and of itself
3. a foundation for community collaboration that we can build on for higher-level solutions, grounded in this lowest common denominator and in our experience as a working community
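
As a concrete strawman of that lowest common denominator, the plugin contract could be as small as the sketch below. All names here are illustrative only, not an existing Hadoop API:

// Illustrative sketch only - not an existing Hadoop interface.
// A server-side authentication plugin that accepts an opaque,
// serialized token and resolves it to a principal, or rejects it.
public interface TokenAuthenticator {

  // Short name used to select the plugin from configuration,
  // e.g. "hadoop-token" or "jwt".
  String mechanismName();

  // Validate the raw token bytes (signature, expiry, audience)
  // and return the authenticated principal name; throws
  // SecurityException if the token is invalid.
  String authenticate(byte[] rawToken) throws SecurityException;
}

Anything that can mint a token such a plugin accepts - Kerberos-backed, SSO-backed, or otherwise - could then be layered on later without touching the RPC layer again.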

So, to Alejandro's point, perhaps we need to define what would make #2 above true - this could serve as the "what" we are building instead of the "how" to build it.
Including:
a. project structure within hadoop-common-project/common-security or the like (one possible layout is sketched after this list)
b. the use cases that would need to be enabled to make it a self-contained and useful contribution - without higher level solutions
c. the JIRA/s for contributing patches
d. the specific patches that will be needed to accomplish the use cases in #b
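
For (a), one possible shape of such a module - purely illustrative; the actual module name and layout would be settled on the JIRA:

hadoop-common-project/
  common-security/        <- hypothetical module name
    pom.xml
    src/main/java/        <- token format + plugin interfaces
    src/test/java/        <- tests exercising the use cases in #b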

In other words, an end-state for the lowest common denominator that enables code patches in the near-term is the best of both worlds.

I think this may be a good way to bootstrap the collaboration process for our emerging security community rather than trying to tackle a huge vision all at once.

@Alejandro - if you have something else in mind that would bootstrap this process, that would be great - please advise.

Thoughts?

On Jul 10, 2013, at 1:06 PM, Brian Swan <[EMAIL PROTECTED]> wrote:

> Hi Alejandro, all-
>
> There seems to be agreement on the broad stroke description of the components needed to achieve pluggable token authentication (I'm sure I'll be corrected if that isn't the case). However, discussion of the details of those components doesn't seem to be moving forward. I think this is because the details are really best understood through code. I also see *a* (i.e. one of many possible) token format and pluggable authentication mechanisms within the RPC layer as components that can have immediate benefit to Hadoop users AND still allow flexibility in the larger design. So, I think the best way to move the conversation of "what we are aiming for" forward is to start looking at code for these components. I am especially interested in moving forward with pluggable authentication mechanisms within the RPC layer and would love to see what others have done in this area (if anything).
>
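> As a strawman of what such a token format might carry (the class and field names below are purely illustrative, not a proposal of record):
>
> // Illustrative only: one of many possible token layouts.
> // A token here is just a signed set of identity claims.
> public final class ExampleToken {
>   private final String issuer;      // service that minted the token
>   private final String subject;     // the authenticated principal
>   private final long expiryMillis;  // when the token ceases to be valid
>   private final byte[] signature;   // covers the fields above
>
>   public ExampleToken(String issuer, String subject,
>                       long expiryMillis, byte[] signature) {
>     this.issuer = issuer;
>     this.subject = subject;
>     this.expiryMillis = expiryMillis;
>     this.signature = signature;
>   }
> }
>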
> Thanks.
>
> -Brian
>
> -----Original Message-----
> From: Alejandro Abdelnur [mailto:[EMAIL PROTECTED]]
> Sent: Wednesday, July 10, 2013 8:15 AM
> To: Larry McCay
> Cc: [EMAIL PROTECTED]; [EMAIL PROTECTED]; Kai Zheng
> Subject: Re: [DISCUSS] Hadoop SSO/Token Server Components
>
> Larry, all,
>
> It is still not clear to me what end state we are aiming for, or whether we even agree on one.
>
> IMO, instead of trying to agree on what to do, we should first agree on the final state, then see what should be changed to get there, and then see how we change things to get there.
>
> The different documents out there focus more on how.
>
> We should not try to say how before we know what.
>
> Thx.
>
> On Wed, Jul 10, 2013 at 6:42 AM, Larry McCay <[EMAIL PROTECTED]> wrote:
>
>> All -
>>
>> After combing through this thread - as well as the summit session
>> summary thread, I think that we have the following two items that we
>> can probably move forward with:
>>
>> 1. TokenAuth method - assuming this means the pluggable authentication
>>    mechanisms within the RPC layer (2 votes: Kai and Kyle)
>> 2. An actual Hadoop Token format (2 votes: Brian and myself)
>>
>> I propose that we attack both of these aspects as one. Let's provide
>> the structure and interfaces of the pluggable framework for use in the
>> RPC layer through leveraging Daryn's pluggability work and POC it with
>> a particular token format (not necessarily the only format ever
>> supported - we just need one to start). If there has already been work