MapReduce, mail # user - Running from a client machine does not work under 1.03


Running from a client machine does not work under 1.03
Steve Lewis 2012-12-07, 18:19
I have been running Hadoop jobs from my local box - on the net but outside
the cluster.

    Configuration conf = new Configuration();
    String jarFile = "somelocalfile.jar";
    conf.set("mapred.jar", jarFile);

hdfs-site.xml has
<property>
   <name>dfs.permissions</name>
   <value>false</value>
   <final>true</final>
</property>

and all policies in hadoop-policy.xml are set to *

When I run the job from my local machine it executes properly on a Hadoop 0.2
cluster. All directories in HDFS are owned by the local user - something
like Asterix\Steve - but HDFS does not seem to care and jobs run well.

I have a colleague with a Hadoop 1.03 cluster, and setting the config to
point at that cluster's file system and jobtracker and passing in a local
jar gives permission errors.

I read that security has changed in 1.03. My question: was this EVER
supposed to work? If it used to work, why does it not work now
(security?)? Is there a way to change the Hadoop cluster so this works
under 1.03, or (preferably) to supply a username and password and ask the
cluster to execute under that user from a client system, rather than
opening an ssh channel to the cluster?
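As a possible workaround: with simple (non-Kerberos) authentication, 1.x does not check a password at all, but the client can at least choose which username the cluster sees via UserGroupInformation. A minimal sketch, assuming a hypothetical account "hdfsuser" that owns the HDFS directories (the hostname is the one from the config fragments in this message):

```java
import java.security.PrivilegedExceptionAction;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.UserGroupInformation;

public class RemoteUserSketch {
    public static void main(String[] args) throws Exception {
        // "hdfsuser" is a hypothetical account name; substitute whichever
        // user owns the target directories on the cluster.
        UserGroupInformation ugi = UserGroupInformation.createRemoteUser("hdfsuser");
        ugi.doAs(new PrivilegedExceptionAction<Void>() {
            public Void run() throws Exception {
                Configuration conf = new Configuration();
                conf.set("fs.default.name", "hdfs://MyCluster:9000");
                // HDFS calls made inside doAs() are attributed to "hdfsuser"
                // rather than to the local OS user (e.g. Asterix\Steve).
                FileSystem fs = FileSystem.get(conf);
                fs.mkdirs(new Path("/tmp/remote-user-test")); // hypothetical path
                return null;
            }
        });
    }
}
```

This only changes the identity presented to the cluster; it is not real authentication, so it depends on the cluster still running in simple-auth mode.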
        String hdfshost = "hdfs://MyCluster:9000";
        conf.set("fs.default.name", hdfshost);
        String jobTracker = "MyCluster:9001";
        conf.set("mapred.job.tracker", jobTracker);
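For reference, the fragments above combine into a minimal client-side submission along these lines (the job name and the input/output paths are hypothetical placeholders; the hosts and jar name are the ones used in this message):

```java
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;

public class RemoteSubmitSketch {
    public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf();
        conf.set("fs.default.name", "hdfs://MyCluster:9000"); // NameNode
        conf.set("mapred.job.tracker", "MyCluster:9001");     // JobTracker
        conf.set("mapred.jar", "somelocalfile.jar");          // local jar, shipped to the cluster
        conf.setJobName("remote-submit-test");                // hypothetical name
        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(IntWritable.class);
        // Hypothetical HDFS paths for illustration only:
        FileInputFormat.setInputPaths(conf, new Path("/user/steve/input"));
        FileOutputFormat.setOutputPath(conf, new Path("/user/steve/output"));
        // Submits from the client box and blocks until the job finishes.
        JobClient.runJob(conf);
    }
}
```

Under 1.03 this is exactly the point where the permission errors appear, since the submission runs as the local client user.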

On the cluster in hdfs

--
Steven M. Lewis PhD
4221 105th Ave NE
Kirkland, WA 98033
206-384-1340 (cell)
Skype lordjoe_com
Replies in this thread:
  Harsh J 2012-12-07, 22:52
  Steve Lewis 2012-12-08, 02:17
  Harsh J 2012-12-08, 02:35