MapReduce >> mail # dev >> Re: hadoop.job.ugi backwards compatibility


Re: hadoop.job.ugi backwards compatibility
On Mon, Sep 13, 2010 at 10:05 AM, Todd Lipcon <[EMAIL PROTECTED]> wrote:

> This is not MR-specific, since the strangely named hadoop.job.ugi determines
> HDFS permissions as well.

Yeah, after I hit send, I realized that I should have used common-dev.
This is really a dev issue.

> "or the user must write a custom group mapper" above refers to this plugin
> capability. But I think most users do not want to spend the time to write
> (or even setup) such a plugin beyond the default shell-based mapping
> service.

Sure, which is why it is easiest to just have the (hopefully disabled)
user accounts on the jt/nn. Any install with > 100 nodes should be using
HADOOP-6864 to avoid forking a shell in the JT/NN for every group lookup.
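For reference, the group-mapping choice discussed here is a core-site.xml setting. A minimal sketch, assuming the default shell-based mapper and a JNI-based mapper class from that line of work (class name assumed; verify it against your Hadoop version):

```xml
<!-- core-site.xml: controls how the JT/NN resolves a user's groups. -->
<property>
  <name>hadoop.security.group.mapping</name>
  <!-- Default: forks a shell (e.g. `id -Gn <user>`) per lookup. -->
  <value>org.apache.hadoop.security.ShellBasedUnixGroupsMapping</value>
  <!-- Larger installs can switch to a JNI-based mapper to avoid the
       fork, e.g. (name assumed, check your distribution):
       org.apache.hadoop.security.JniBasedUnixGroupsMapping -->
</property>
```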

> As someone who spends an awful lot of time doing downstream support of lots
> of different clusters, I actually disagree.

Normal applications never need to do doAs; they run as the default
user. It only comes up in servers that deal with multiple users, and in
*that* context, a server that only works in non-secure mode sucks.
Doing doAs isn't harder, it is just different. Having two different
semantics models *will* cause lots of grief.

-- Owen
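The doAs pattern discussed above can be sketched with the JDK's own JAAS primitives, which Hadoop's UserGroupInformation wraps. This is a minimal stand-alone illustration, not Hadoop's actual API; the empty Subject is a stand-in for an authenticated user:

```java
import java.security.PrivilegedAction;
import javax.security.auth.Subject;

public class DoAsSketch {
    // Run some work in the security context of the given subject,
    // the way a multi-user server would per request.
    static String runAs(Subject user) {
        return Subject.doAs(user,
                (PrivilegedAction<String>) () -> "ran as proxied user");
    }

    public static void main(String[] args) {
        Subject alice = new Subject(); // stand-in for an authenticated user
        System.out.println(runAs(alice));
    }
}
```

A server handling multiple users wraps each request in such a doAs block, so the same code path runs whether security is on or off.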