Unfortunately I do not know of a way to do that without writing wrapper
code. I do not think it is possible with the secure implementation of
MR/HDFS, regardless of whether security is turned on or off.
Could your client machine have a local user account matching the one that is
allowed to act on HDFS, if that's how you're architecting your usage? Users
could then run "sudo -u <user>", given sudo grants for that, and create files
via "sudo -u <user> hadoop fs -foo bar" commands.
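To illustrate that pattern (hypothetical account name "hdfsuser"; this assumes a matching sudoers grant and a configured Hadoop client on the box, so it is a sketch rather than something to copy verbatim):

```shell
# Run HDFS commands as the dedicated account instead of the login user;
# files and directories created this way are owned by "hdfsuser" on HDFS.
sudo -u hdfsuser hadoop fs -mkdir /user/someOtherUserName/test
sudo -u hdfsuser hadoop fs -put localfile.txt /user/someOtherUserName/test/
```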
On Wed, Jul 18, 2012 at 11:05 PM, Corbett Martin <[EMAIL PROTECTED]> wrote:
> Thanks for the quick response.
> I came across Secure Impersonation earlier today but it didn't seem to do
> what I'm looking for.
> Correct me if I'm wrong, but Secure Impersonation would require writing
> code to operate on HDFS (mkdir, rm, etc.), and that code would then need to
> be executed from a client? I suppose this would do the trick, but I was
> hoping we could just issue hadoop fs commands against our cluster directly
> from a remote client yet override the username that's being sent to the
> cluster.
> On Jul 18, 2012, at 11:54 AM, Harsh J wrote:
> > Hey Corbett,
> > We prevent overriding user.name. We instead provide secure
> > impersonation (it does not require Kerberos, don't be fooled by its
> > name), which is documented at
> > http://hadoop.apache.org/common/docs/stable/Secure_Impersonation.html.
> > This should let you do what you're attempting to, in a more controlled
> > fashion.
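For reference, secure impersonation is enabled on the cluster side via proxy-user properties in core-site.xml. A minimal sketch, assuming a hypothetical superuser account "appuser" allowed to impersonate members of a group "hadoopusers" from a client host "gateway-host" (the account, group, and host names here are placeholders):

```xml
<!-- core-site.xml on the NameNode/JobTracker: allow "appuser" to impersonate -->
<property>
  <name>hadoop.proxyuser.appuser.hosts</name>
  <value>gateway-host</value>
</property>
<property>
  <name>hadoop.proxyuser.appuser.groups</name>
  <value>hadoopusers</value>
</property>
```

"appuser" then authenticates as itself and performs HDFS actions on behalf of the impersonated user.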
> > On Wed, Jul 18, 2012 at 10:22 PM, Corbett Martin <[EMAIL PROTECTED]> wrote:
> >> Hello
> >> I'm new to Hadoop and I'm trying to do something I *think* should be
> easy, but I'm having some trouble. Here are the details.
> >> 1. I'm running Hadoop version 1.0.2
> >> 2. I have a 2-node Hadoop cluster up and running, with no security enabled
> >> I'm having trouble overriding the username from the client so that the
> files/directories created are owned by the user I specify.
> >> For example I'm trying to run:
> >> hadoop fs -Duser.name=someUserName -conf hadoop-cluster.xml
> -mkdir /user/someOtherUserName/test
> >> And have the directory "test" created in HDFS and owned by
> "someUserName". Instead it creates the directory and sets its owner to the
> local client user (whoami). I'd like to override or
> control that…can someone tell me how?
> >> My hadoop-cluster.xml file on the client looks like this:
> >> <?xml version="1.0"?>
> >> <configuration>
> >> <property>
> >> <name>fs.default.name</name>
> >> <value>hdfs://server1:54310</value>
> >> </property>
> >> <property>
> >> <name>mapred.job.tracker</name>
> >> <value>server1:54311</value>
> >> </property>
> >> </configuration>
> >> Thanks for the help
> >> This message and its contents (to include attachments) are the property
> of National Health Systems, Inc. and may contain confidential and
> proprietary information. This email and any files transmitted with it are
> intended solely for the use of the individual or entity to whom they are
> addressed. You are hereby notified that any unauthorized disclosure,
> copying, or distribution of this message, or the taking of any unauthorized
> action based on information contained herein is strictly prohibited.
> Unauthorized use of information contained herein may subject you to civil
> and criminal prosecution and penalties. If you are not the intended
> recipient, you should delete this message immediately and notify the sender
> immediately by telephone or by replying to this transmission.
> > --
> > Harsh J