Mohit Vadhera 2013-04-19, 06:12
Re: jobtracker is stopping because of permissions
/mnt/san1 is owned by user aye and group hadmin, and user mapred is trying
to write to this directory. Can you look at your core-site.xml,
hdfs-site.xml and mapred-site.xml to see where /mnt/san1 is configured as a
value - that might make it clearer what needs to be changed.
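
For example, something along these lines should show where the path is set
(just a sketch - the config directory /etc/hadoop/conf is an assumption,
adjust it to your installation):

    # /etc/hadoop/conf is assumed; point this at your actual config directory
    grep -n "/mnt/san1" /etc/hadoop/conf/core-site.xml \
                        /etc/hadoop/conf/hdfs-site.xml \
                        /etc/hadoop/conf/mapred-site.xml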

I suspect this could be one of the system directories that the JobTracker
has to manage on HDFS to run jobs.
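
If it does turn out to be such a directory, one way out - only a sketch,
assuming you can run commands as the HDFS superuser, and using the
owner/group names from the log below - is to give mapred access to the path:

    # run as the HDFS superuser (often 'hdfs'); adjust user/group to your cluster
    sudo -u hdfs hadoop fs -chown mapred /mnt/san1
    # alternatively, keep the current owner and open group write access
    # (this only helps if mapred is a member of the hadmin group):
    # sudo -u hdfs hadoop fs -chmod 775 /mnt/san1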

Thanks
Hemanth
On Fri, Apr 19, 2013 at 11:42 AM, Mohit Vadhera <[EMAIL PROTECTED]> wrote:

> Can anybody help me start the jobtracker service? It is urgent. It looks
> like a permission issue.
> What permissions should I give, and on which directory? I am pasting the
> log for the same. The service starts and then stops.
>
> 2013-04-19 02:21:06,388 FATAL org.apache.hadoop.mapred.JobTracker:
> org.apache.hadoop.security.AccessControlException: Permission denied:
> user=mapred, access=WRITE, inode="/mnt/san1":aye:hadmin:drwxr-xr-x
>         at
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>         at
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:186)
>         at
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:135)
>         at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4547)
>         at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAccess(FSNamesystem.java:4518)
>         at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FSNamesystem.java:2880)
>         at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:2844)
>         at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:2823)
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:639)
>         at
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:417)
>         at
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44096)
>         at
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:898)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1693)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1689)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:396)
>         at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1332)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1687)
>
>         at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
> Method)
>         at
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
>
>
> Thanks,
>