MapReduce >> mail # user >> Re: Permission related errors when running with a different user
Re: Permission related errors when running with a different user
Hi Harsh,

 Thanks for the quick reply. I tried that, but it didn't quite work.
Following your suggestion, I looked at the error again and thought the
following might work:

hadoop fs -chmod 777 /

I tried it and it worked.

  That got me past that error, but now I am seeing a different one, this
time in the ResourceManager's logs. Could you please give me a clue on this
one too?

2012-12-07 05:34:22,421 INFO  fifo.FifoScheduler (FifoScheduler.java:containerCompleted(721)) - Application appattempt_1353856203101_0369_000001 released container container_1353856203101_0369_01_000003 on node: host: isredeng:51271 #containers=1 available=8064 used=128 with event: FINISHED
2012-12-07 05:34:24,401 WARN  attempt.RMAppAttemptImpl (RMAppAttemptImpl.java:generateProxyUriWithoutScheme(379)) - Could not proxify
java.net.URISyntaxException: Expected authority at index 7: http://
        at java.net.URI$Parser.fail(URI.java:2820)
        at java.net.URI$Parser.failExpecting(URI.java:2826)
        at java.net.URI$Parser.parseHierarchical(URI.java:3065)
        at java.net.URI$Parser.parse(URI.java:3025)
        at java.net.URI.<init>(URI.java:589)
        at org.apache.hadoop.yarn.server.webproxy.ProxyUriUtils.getUriFromAMUrl(ProxyUriUtils.java:143)
        at org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl.generateProxyUriWithoutScheme(RMAppAttemptImpl.java:371)
        at org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl.access$2500(RMAppAttemptImpl.java:81)
        at org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl$AMUnregisteredTransition.transition(RMAppAttemptImpl.java:849)
        at org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl$AMUnregisteredTransition.transition(RMAppAttemptImpl.java:835)
        at org.apache.hadoop.yarn.state.StateMachineFactory$SingleInternalArc.doTransition(StateMachineFactory.java:357)
        at org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:298)
        at org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:43)
        at org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:443)
        at org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl.handle(RMAppAttemptImpl.java:476)
        at org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl.handle(RMAppAttemptImpl.java:80)
        at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationAttemptEventDispatcher.handle(ResourceManager.java:414)
        at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationAttemptEventDispatcher.handle(ResourceManager.java:395)
        at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:125)
        at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:74)
        at java.lang.Thread.run(Thread.java:736)
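
[Editor's note: the parse failure in the WARN above can be reproduced with the plain JDK, no Hadoop classes needed. The URL "http://" names a scheme but no authority (host:port), which java.net.URI rejects; per the stack trace, the ResourceManager hits this in getUriFromAMUrl() during the AM-unregistered transition. A minimal sketch, with a hypothetical class name:]

```java
import java.net.URI;
import java.net.URISyntaxException;

// Standalone reproduction of the exception in the stack trace above:
// "http://" has a scheme but an empty authority, so URI parsing fails
// exactly as it does in ProxyUriUtils.getUriFromAMUrl().
public class TrackingUrlRepro {
    public static void main(String[] args) {
        try {
            new URI("http://");
            System.out.println("parsed unexpectedly");
        } catch (URISyntaxException e) {
            // Same message as in the ResourceManager log
            System.out.println(e.getMessage());  // Expected authority at index 7: http://
        }
    }
}
```

[That it is "http://" with nothing after it suggests the ApplicationMaster unregistered with a blank tracking URL rather than a real host:port, which is what the proxy URI is built from.]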
Thanks,
Kishore

On Thu, Dec 6, 2012 at 8:58 PM, Harsh J <[EMAIL PROTECTED]> wrote:

> You are attempting a job submit operation as user "root" over HDFS and
> MR. Running a job involves placing the requisite files on HDFS, so MR
> can leverage its distributed presence to run task work.
>
> The files are usually placed under the user's HDFS home directory, which
> is of the form /user/[NAME]. By default, HDFS has no notion of a user
> existing in it (imagine a Linux user account with no home directory yet),
> so you'll first have to provision the user with a home directory they can
> own, acting as the HDFS administrator.
>
> Create a home directory for the user root and grant them ownership of it:
>
> sudo -u hdfs hadoop fs -mkdir -p /user/root/
> sudo -u hdfs hadoop fs -chown root:root /user/root
>
> Once this is done, you can try to resubmit the job and the
> AccessControlException should be resolved.
>
> On Thu, Dec 6, 2012 at 8:28 PM, Krishna Kishore Bonagiri
> <[EMAIL PROTECTED]> wrote: