Re: Prevent users from killing each other's jobs
Vinod Kumar Vavilapalli 2013-07-30, 18:51
That is correct. Seems like something else is happening.
One thing to check is whether all your users, or more importantly their group, are added to the cluster-admin ACL (mapreduce.cluster.administrators).
You should also look at the MapReduce audit logs (which by default go into the JobTracker logs; search for "Audit"). They clearly record which user killed a job.
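For reference, the cluster-admin ACL mentioned above is set in mapred-site.xml. A minimal sketch follows; the group name "mradmins" is a placeholder, and the ACL value format is comma-separated users, then a space, then comma-separated groups:

```xml
<!-- mapred-site.xml: members of this ACL can view and modify any job.
     "mradmins" is a hypothetical group name. A leading space in the value
     means "no individual users, only the listed groups". -->
<property>
  <name>mapreduce.cluster.administrators</name>
  <value> mradmins</value>
</property>
```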
On Jul 30, 2013, at 11:31 AM, Murat Odabasi wrote:
> I'm not sure how I should do that.
> The documentation says "A job submitter can specify access control
> lists for viewing or modifying a job via the configuration properties
> mapreduce.job.acl-view-job and mapreduce.job.acl-modify-job
> respectively. By default, nobody is given access in these properties."
> My understanding is no other user should be able to modify a job
> unless explicitly authorized. Is that not the case? Should I set these
> two properties before running the job?
> On 30 July 2013 19:25, Vinod Kumar Vavilapalli <[EMAIL PROTECTED]> wrote:
>> You need to set up Job ACLs. See
>> It is a per-job configuration; you can provide cluster-wide defaults. If the job
>> owner wishes to give others access, he/she can do so.
>> +Vinod Kumar Vavilapalli
>> Hortonworks Inc.
>> On Jul 30, 2013, at 11:21 AM, Murat Odabasi wrote:
>> Hi there,
>> I am trying to introduce some sort of security to prevent different
>> people using the cluster from interfering with each other's jobs.
>> Following the instructions at
>> http://hadoop.apache.org/docs/stable/cluster_setup.html and
>> , this is what I put in my mapred-site.xml:
>> I can see the configuration parameters in the job configuration when I
>> run a hive query, but the users are still able to kill each other's jobs.
>> Any ideas about what I may be missing?
>> Any alternative approaches I can adopt?
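One possible cause, assuming the symptom above: job ACLs are only enforced when ACL checking is enabled cluster-wide, so setting the per-job ACL properties alone has no effect. A sketch of the relevant mapred-site.xml fragment (the property was named mapred.acls.enabled in older Hadoop releases, and the JobTracker must be restarted for it to take effect):

```xml
<!-- mapred-site.xml: per-job ACLs (mapreduce.job.acl-view-job /
     mapreduce.job.acl-modify-job) are ignored unless ACL checking
     is switched on cluster-wide. -->
<property>
  <name>mapreduce.cluster.acls.enabled</name>
  <value>true</value>
</property>
```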