MapReduce, mail # user - Running from a client machine does not work under 1.03


Steve Lewis 2012-12-07, 18:19
Harsh J 2012-12-07, 22:52
Steve Lewis 2012-12-08, 02:17
Re: Running from a client machine does not work under 1.03
Harsh J 2012-12-08, 02:35
It would help if I had the whole stack trace.

Note that submitting a job involves writes to two filesystems: the
client's local FS and the JobTracker's distributed FS (HDFS). A client
needs access to the path pointed to by hadoop.tmp.dir on its own end to
assemble a submittable jar before it is uploaded to HDFS for the JT.
Your permission issue may well be here, which a stack trace would help
confirm.
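
A minimal sketch of the client-side half (the class name is mine, and
the path shown is only the stock default, not something from your setup):

    import org.apache.hadoop.conf.Configuration;

    public class ShowStagingDir {
        public static void main(String[] args) {
            Configuration conf = new Configuration();
            // The client assembles the job jar and split files under a
            // local directory derived from hadoop.tmp.dir before copying
            // them into HDFS for the JobTracker, so it needs write
            // access there. The stock default is /tmp/hadoop-${user.name}.
            System.out.println("hadoop.tmp.dir = " + conf.get("hadoop.tmp.dir"));
        }
    }

If the client user cannot write under that path, submission fails before
anything reaches the cluster.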

On Sat, Dec 8, 2012 at 7:47 AM, Steve Lewis <[EMAIL PROTECTED]> wrote:
> The immediate problem is an access exception: users like Asterix\Steve,
> completely unknown to the file system, cannot write files or directories.
>
> There is another error, "Insufficient memory to start the Java runtime",
> but we do not get that far, and a chunk of this effort has been to
> create a small sample to post.
>
> The real issue is that, from an external machine, Hadoop runs as the
> local user on the client machine. In 0.2 with security turned down that
> worked, although the listed file ownership in HDFS was a little strange.
> In 1.03 it does not work because Hadoop/HDFS will not let these foreign
> users own files.
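>
> A minimal sketch of what identity actually gets sent (the class name is
> mine; nothing below comes from the cluster config):
>
>     import org.apache.hadoop.security.UserGroupInformation;
>
>     public class WhoAmI {
>         public static void main(String[] args) throws Exception {
>             // On an unsecured 1.x cluster the submitting identity is
>             // whatever the client OS login resolves to (for example
>             // Asterix\Steve on a Windows box), and HDFS records that
>             // name as the file owner whether or not the cluster knows it.
>             System.out.println(
>                 UserGroupInformation.getCurrentUser().getUserName());
>         }
>     }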
>
> Steven M. Lewis PhD
> 4221 105th Ave NE
> Kirkland, WA 98033
> cell 206-384-1340
> skype lordjoe_com
>
> Can you share your specific error/exception please? [Harsh J, 2012-12-07, 22:52]
>
> On Fri, Dec 7, 2012 at 11:49 PM, Steve Lewis <[EMAIL PROTECTED]> wrote:
>> I have been running Hadoop jobs from my local box, which is on the
>> network but outside the cluster.
>>
>>         Configuration conf = new Configuration();
>>         String jarFile = "somelocalfile.jar";
>>         conf.set("mapred.jar", jarFile);
>>
>> hdfs-site.xml has
>> <property>
>>    <name>dfs.permissions</name>
>>    <value>false</value>
>>    <final>true</final>
>> </property>
>>
>> and all policies in hadoop-policy.xml are *
>>
>> when I run the job on my local machine it executes properly on a Hadoop
>> 0.2 cluster. All directories in HDFS are owned by the local user,
>> something like Asterix\Steve, but HDFS does not seem to care and jobs
>> run well.
>>
>> I have a colleague with a Hadoop 1.03 cluster, and setting the config
>> to point at that cluster's file system and jobtracker and passing in a
>> local jar gives permission errors.
>>
>> I read that security has changed in 1.03. My question is: was this EVER
>> supposed to work? If it used to work, why does it not work now?
>> (Security?) Is there a way to change the Hadoop cluster so it works
>> under 1.03, or (preferably) to supply a username and password and ask
>> the cluster to execute under that user from a client system, rather
>> than opening an ssh channel to the cluster? (A sketch of that idea
>> follows the config below.)
>>
>>
>>         String hdfsHost = "hdfs://MyCluster:9000";
>>         conf.set("fs.default.name", hdfsHost);
>>         String jobTracker = "MyCluster:9001";
>>         conf.set("mapred.job.tracker", jobTracker);
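>>
>> A minimal sketch of the "execute as a named user" idea under 1.x (the
>> user name is illustrative; there is no password on an unsecured
>> cluster, the name is simply trusted, and real authentication would
>> need Kerberos). Mapper, input and output settings are omitted:
>>
>>     import java.security.PrivilegedExceptionAction;
>>     import org.apache.hadoop.mapred.JobClient;
>>     import org.apache.hadoop.mapred.JobConf;
>>     import org.apache.hadoop.security.UserGroupInformation;
>>
>>     // Run the submission as a user the cluster already knows.
>>     UserGroupInformation ugi =
>>         UserGroupInformation.createRemoteUser("hadoopuser");
>>     ugi.doAs(new PrivilegedExceptionAction<Void>() {
>>         public Void run() throws Exception {
>>             JobConf job = new JobConf();
>>             job.set("fs.default.name", "hdfs://MyCluster:9000");
>>             job.set("mapred.job.tracker", "MyCluster:9001");
>>             job.setJar("somelocalfile.jar");
>>             JobClient.runJob(job); // stages the jar, then submits to the JT
>>             return null;
>>         }
>>     });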
>>
>> On the cluster in hdfs
>>
>> --
>> Steven M. Lewis PhD
>> 4221 105th Ave NE
>> Kirkland, WA 98033
>> 206-384-1340 (cell)
>> Skype lordjoe_com
>>
>>
>
>
>
> --
> Harsh J

--
Harsh J