I have been running Hadoop jobs from my local box - on the net but outside
the cluster. The client code points at the cluster roughly like this:

    Configuration conf = new Configuration();
    String hdfshost = "hdfs://MyCluster:9000";
    String jobTracker = "MyCluster:9001";
    String jarfile = "somelocalfile.jar";

On the cluster, all directories in HDFS are owned by the local user -
something like Asterix\Steve - but HDFS does not seem to care, and jobs run
well. All policies in hadoop-policy.xml are *. When I run the job from my
local machine it executes properly against a Hadoop 0.20 cluster.

I have a colleague with a Hadoop 1.0.3 cluster, and setting the config to
point at that cluster's file system and jobtracker and passing in a local
jar gives permission errors.

I read that security has changed in 1.0.3. My question is: was this EVER
supposed to work? If it used to work, why does it not work now? (Security?)
Is there a way to change the Hadoop cluster so this works under 1.0.3, or
(preferably) to supply a username and password and ask the cluster to
execute under that user from a client system, rather than opening an ssh
channel to the cluster?
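For reference, the kind of client-side submission described above might look
roughly like the sketch below. The "MyCluster" hostnames and the jar name come
from the snippet in the post; the class name, the mapper/reducer setup, and the
use of the old "mapred" API (the usual one on 0.20/1.x clusters) are
illustrative assumptions, not a definitive implementation:

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;

public class RemoteSubmit {
    public static void main(String[] args) throws IOException {
        // Point a client-side Configuration at the remote cluster
        // (1.x-era property names; "MyCluster" is a placeholder host).
        Configuration conf = new Configuration();
        conf.set("fs.default.name", "hdfs://MyCluster:9000");  // NameNode
        conf.set("mapred.job.tracker", "MyCluster:9001");      // JobTracker

        JobConf job = new JobConf(conf);
        job.setJar("somelocalfile.jar");  // local jar, shipped to the cluster
        // ... setMapperClass / setReducerClass, input and output paths ...
        JobClient.runJob(job);            // blocks until the job completes
    }
}
```

Submitting this way, the client identifies itself to the cluster with the
local login name, which is why the HDFS directory ownership on the cluster
matters.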
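On the "execute under that user" question: with simple (non-Kerberos)
authentication there is no password check at all, but a client can at least
claim a given username via UserGroupInformation.doAs, so remote accesses run
as a user the cluster's HDFS permissions recognize. A minimal sketch, assuming
a hypothetical user "hdfsuser" and the placeholder hostname from the post
(real authentication would require Kerberos):

```java
import java.security.PrivilegedExceptionAction;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.UserGroupInformation;

public class SubmitAsUser {
    public static void main(String[] args) throws Exception {
        // "hdfsuser" is a hypothetical account that owns the HDFS dirs.
        UserGroupInformation ugi =
                UserGroupInformation.createRemoteUser("hdfsuser");
        ugi.doAs(new PrivilegedExceptionAction<Void>() {
            public Void run() throws Exception {
                Configuration conf = new Configuration();
                conf.set("fs.default.name", "hdfs://MyCluster:9000");
                FileSystem fs = FileSystem.get(conf);
                // Any HDFS access (or job submission) inside run() is
                // performed as "hdfsuser", not the local login name.
                System.out.println(fs.exists(new Path("/user/hdfsuser")));
                return null;
            }
        });
    }
}
```

Note this is identification, not authentication: the cluster trusts the name
the client supplies unless Kerberos security is enabled.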
Steven M. Lewis PhD
4221 105th Ave NE
Kirkland, WA 98033
Harsh J 2012-12-07, 22:52
Steve Lewis 2012-12-08, 02:17
Harsh J 2012-12-08, 02:35