Hadoop, mail # dev - [DISCUSS] Hadoop SSO/Token Server Components

Re: [DISCUSS] Hadoop SSO/Token Server Components
Larry McCay 2013-07-27, 00:59
Hello All -

In an effort to scope an initial iteration that provides value to the
community while focusing on the pluggable authentication aspects, I've
written a description for "Iteration 1". It identifies the goal of the
iteration, the endstate and a set of initial usecases. It also enumerates
the components that are required for each usecase. There is a scope section
that details specific things that should be kept out of the first
iteration. This is certainly up for discussion. There may be some of these
things that can be contributed in short order. If we can add some of them
without unnecessary complexity for the identified usecases, then we should.

@Alejandro - please review this and see whether it satisfies your point for
a definition of what we are building.

In addition to the document, which I will paste here as text and attach as a
PDF, we have a couple of patches for components identified within it.
Specifically, COMP-7 and COMP-8.

I will be posting the COMP-8 patch to HADOOP-9534, the JIRA that was filed
specifically for that functionality.
COMP-7 is a small set of classes that introduce JsonWebToken as the token
format and a basic JsonWebTokenAuthority that can issue and verify these
tokens.

Since there is no JIRA for this yet, I will likely file a new JIRA for an
SSO token implementation.
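As a rough illustration of what COMP-7 describes, here is a minimal sketch of a token authority that issues and verifies HMAC-signed, JWT-shaped tokens. The class and method names here are hypothetical, not the proposed COMP-7 API; a real implementation would carry full JSON claims, expiry, and proper key management:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

/**
 * Hypothetical sketch of a token authority that issues and verifies
 * HMAC-SHA256 signed tokens of the JWT shape:
 * base64url(header).base64url(claims).base64url(signature)
 */
class SketchTokenAuthority {
    private final byte[] key;

    SketchTokenAuthority(byte[] key) { this.key = key; }

    /** Issue a signed token carrying the given subject claim. */
    String issue(String subject) throws Exception {
        String header = b64("{\"alg\":\"HS256\",\"typ\":\"JWT\"}");
        String claims = b64("{\"sub\":\"" + subject + "\"}");
        String signingInput = header + "." + claims;
        return signingInput + "." + sign(signingInput);
    }

    /** Verify the signature over header.claims; true only when intact. */
    boolean verify(String token) throws Exception {
        String[] parts = token.split("\\.");
        if (parts.length != 3) return false;
        return sign(parts[0] + "." + parts[1]).equals(parts[2]);
    }

    private String sign(String input) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(key, "HmacSHA256"));
        return b64raw(mac.doFinal(input.getBytes(StandardCharsets.UTF_8)));
    }

    private static String b64(String s) {
        return b64raw(s.getBytes(StandardCharsets.UTF_8));
    }

    private static String b64raw(byte[] b) {
        return Base64.getUrlEncoder().withoutPadding().encodeToString(b);
    }
}
```

A tampered token fails verification because the recomputed signature no longer matches the appended one.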

Both of these patches are assumed to be standalone modules. While they are
relatively small, I think that they will be pulled in by other modules such
as hadoop-auth, which would likely not want a dependency on something larger
like hadoop-common/hadoop-common-project/hadoop-common.

This is certainly something that we should discuss within the community for
this effort, though: exactly how to add these libraries so that they are
most easily consumed by existing projects.

Anyway, the following is the Iteration-1 document - it is also attached as
a pdf:

Iteration 1: Pluggable User Authentication and Federation

The intent of this effort is to bootstrap the development of pluggable
token-based authentication mechanisms to support certain goals of
enterprise authentication integrations. By restricting the scope of this
effort, we hope to provide immediate benefit to the community while keeping
the initial contribution to a manageable size that can be easily reviewed,
understood and extended with further development through follow up JIRAs
and related iterations.

Iteration Endstate
Once complete, this effort will have extended the authentication mechanisms
- for all client types - from the existing Simple, Kerberos and Plain (for
RPC) to include LDAP authentication and SAML-based federation. In addition,
users will be able to plug in additional or custom authentication mechanisms
of their own choosing.
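To make the pluggability concrete, here is an illustrative sketch of what a provider interface and registry for custom mechanisms could look like. All names here are hypothetical, not the actual Hadoop interfaces:

```java
import java.util.HashMap;
import java.util.Map;

/** Hypothetical pluggability point: one provider per mechanism name. */
interface AuthenticationProvider {
    /** Returns true when the given credentials authenticate successfully. */
    boolean authenticate(String principal, char[] credentials);
}

/** Hypothetical registry mapping mechanism names to providers. */
class AuthenticationRegistry {
    private final Map<String, AuthenticationProvider> providers = new HashMap<>();

    /** Register a provider under a mechanism name, e.g. "LDAP" or "SAML". */
    void register(String mechanism, AuthenticationProvider provider) {
        providers.put(mechanism, provider);
    }

    /** Dispatch authentication to the provider for the named mechanism. */
    boolean authenticate(String mechanism, String principal, char[] credentials) {
        AuthenticationProvider p = providers.get(mechanism);
        if (p == null) {
            throw new IllegalArgumentException("Unknown mechanism: " + mechanism);
        }
        return p.authenticate(principal, credentials);
    }
}
```

A deployment would register, say, an LDAP-backed provider at startup, and the SASL/RPC layer or REST filter would dispatch through the registry without knowing the mechanism's internals.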

Project Scope
The scope of this effort is a subset of the features covered by the
overviews of HADOOP-9392 and HADOOP-9533. This effort concentrates on
enabling Hadoop to issue and accept/validate SSO tokens of its own. The
pluggable authentication mechanism within the SASL/RPC layer and the
authentication-filter pluggability for REST and UI components will be
leveraged and extended to support the results of this effort.

Out of Scope
In order to scope the initial deliverable as the minimally viable product,
a handful of things have been simplified or left out of scope for this
effort. This is not meant to say that these aspects are not useful or not
needed, but that they are not necessary for this iteration. We do, however,
need to ensure that we don’t do anything to preclude adding them in future
iterations.
1. Additional Attributes - the result of authentication will continue to
use the existing hadoop tokens and identity representations. Additional
attributes used for finer grained authorization decisions will be added
through follow-up efforts.
2. Token revocation - the ability to revoke issued identity tokens will be
added later.
3. Multi-factor authentication - this will likely require additional
attributes and is not necessary for this iteration.
4. Authorization changes - we will require additional attributes for the
fine-grained access control plans. This is not needed for this iteration.
5. Domains - we assume a single flat domain for all users.
6. Kinit alternative - we can leverage existing REST clients such as cURL
to retrieve tokens through authentication and federation for the time being.
7. A specific authentication framework isn’t really necessary within the
REST endpoints for this iteration. If one is available, then we can use it;
otherwise, we can leverage existing things like Apache Shiro within a
servlet filter.
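As a sketch of the servlet-filter approach mentioned in item 7, the following shows the filter pattern with a simple bearer-token check in front of a REST endpoint. The interfaces below stand in for the servlet API, and all names are hypothetical:

```java
import java.util.Map;
import java.util.Set;

/** Stand-in for the servlet request/chain contract; purely illustrative. */
interface Handler {
    String handle(Map<String, String> headers);
}

/**
 * Hypothetical filter that rejects requests lacking a known bearer token
 * before delegating to the wrapped endpoint handler.
 */
class TokenAuthFilter implements Handler {
    private final Handler next;
    private final Set<String> validTokens;

    TokenAuthFilter(Handler next, Set<String> validTokens) {
        this.next = next;
        this.validTokens = validTokens;
    }

    @Override
    public String handle(Map<String, String> headers) {
        String auth = headers.get("Authorization");
        // Reject requests that do not present a known bearer token.
        if (auth == null || !validTokens.contains(auth.replace("Bearer ", ""))) {
            return "401 Unauthorized";
        }
        return next.handle(headers);
    }
}
```

The same shape maps onto a real `javax.servlet.Filter`, where the check would call out to whatever framework (e.g. Apache Shiro) or token authority the deployment provides.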

In Scope
What is in scope for this effort is defined by the usecases described
below. Components required for supporting the usecases are summarized for
each client type. Each component is a candidate for a JIRA subtask - though
multiple components are likely to be included in a JIRA to represent a set
of functionality rather than individual JIRAs per component.

Terminology and Naming
The terms and names of components within this document are merely
descriptive of the functionality that they represent. Any similarity to or
difference from names or terms found in other documents is not intended to
make any statement about those other documents or the descriptions within
them. This document represents the pluggable authentication mechanisms and
server functionality required to replace Kerberos.

Ultimately, the naming of the implementation classes will be a product of
the patches accepted by the community.

client types: REST, CLI, UI
authentication types: Simple, Kerberos, authentication/LDAP, federation/SAML

Simple and Kerberos
Simple and Kerberos usecases continue to work as they do today. The
addition of Authentication/LDAP and Federation/SAML are added through the
existing pluggability points either as they are or with required extension.
Either way,