Re: Permission problem
Sorry Kevin, I was away for a while. Are you good now?

Warm Regards,
Tariq
https://mtariq.jux.com/
cloudfront.blogspot.com
On Tue, Apr 30, 2013 at 9:50 PM, Arpit Gupta <[EMAIL PROTECTED]> wrote:

> Kevin
>
> You will have to create a new account if you did not have one before.
>
> --
> Arpit
>
> On Apr 30, 2013, at 9:11 AM, Kevin Burton <[EMAIL PROTECTED]>
> wrote:
>
> I don’t see a “create issue” button or tab. If I need to log in, I am not
> sure what credentials I should use, because everything I tried failed.
>
>
>
> <image001.png>
>
>
>
> *From:* Arpit Gupta [mailto:[EMAIL PROTECTED]]
>
> *Sent:* Tuesday, April 30, 2013 11:02 AM
> *To:* [EMAIL PROTECTED]
> *Subject:* Re: Permission problem
>
>
>
> Go to https://issues.apache.org/jira/browse/HADOOP and select “Create Issue”.
>
>
>
> Set the affects version to the release you are testing and add a basic
> description.
>
>
>
> Here are the commands you should run.
>
>
>
> sudo -u hdfs hadoop fs -mkdir /data/hadoop/tmp
>
>
>
> and
>
>
>
> sudo -u hdfs hadoop fs -chmod -R 777 /data
>
>
>
> The chmod is also for the directory on HDFS.
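To make the intent explicit, here is the same sequence with comments and a verification step (a sketch; it assumes the HDFS superuser account is named hdfs, as in the commands quoted above):

  # create the directory in HDFS (this does not touch the local /data mount)
  sudo -u hdfs hadoop fs -mkdir /data/hadoop/tmp

  # recursively open up permissions on the HDFS path /data
  sudo -u hdfs hadoop fs -chmod -R 777 /data

  # verify: this listing comes from the HDFS namespace, not the local filesystem
  hadoop fs -ls /data/hadoop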
>
>
> --
> Arpit Gupta
>
> Hortonworks Inc.
> http://hortonworks.com/
>
>
>
> On Apr 30, 2013, at 8:57 AM, "Kevin Burton" <[EMAIL PROTECTED]>
> wrote:
>
>
>
> I am not sure how to create a jira.
>
>
>
> Again I am not sure I understand your workaround. You are suggesting that
> I create /data/hadoop/tmp on HDFS like:
>
>
>
> sudo -u hdfs hadoop fs -mkdir /data/hadoop/tmp
>
>
>
> I don’t think I can chmod -R 777 on /data since it is a disk and, as I
> indicated, it is being used to store data other than that used by hadoop.
> Even chmod -R 777 on /data/hadoop seems extreme, as there are dfs, mapred,
> and tmp folders there. Which of these local folders needs to be opened up? I
> would rather not open up all folders to the world if at all possible.
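The crux of the exchange is that the local filesystem path /data and the HDFS path /data are separate namespaces that happen to share a name. A quick way to compare the two views (a sketch, assuming the commands are run on a node with a configured Hadoop client):

  # local filesystem: the physical mount that holds dfs/, mapred/ and tmp/
  ls -ld /data/hadoop

  # HDFS: a logical path inside the distributed filesystem
  hadoop fs -ls -d /data/hadoop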
>
>
>
> *From:* Arpit Gupta [mailto:[EMAIL PROTECTED]]
> *Sent:* Tuesday, April 30, 2013 10:48 AM
> *To:* Kevin Burton
> *Cc:* [EMAIL PROTECTED]
> *Subject:* Re: Permission problem
>
>
>
> It looks like hadoop.tmp.dir is being used for both local and HDFS
> directories. Can you create a jira for this?
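For context, hadoop.tmp.dir is set in core-site.xml; a minimal entry matching the path used on this cluster might look like the following (illustrative sketch only):

  <property>
    <name>hadoop.tmp.dir</name>
    <value>/data/hadoop/tmp</value>
    <description>Base for other temporary directories; parts of Hadoop resolve
    it against the local filesystem and parts against HDFS, which is the
    mismatch noted above.</description>
  </property>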
>
>
>
> What I recommended is that you create /data/hadoop/tmp on HDFS and
> chmod -R 777 /data
>
>
>
> --
> Arpit Gupta
>
> Hortonworks Inc.
> http://hortonworks.com/
>
>
>
> On Apr 30, 2013, at 8:22 AM, "Kevin Burton" <[EMAIL PROTECTED]>
> wrote:
>
>
>
>
> I am not clear on whether you are suggesting I create this on HDFS or on
> the local file system. As I understand it, hadoop.tmp.dir is on the local
> file system. I changed it so that the temporary files would be on a disk
> that has more capacity than /tmp. So you are suggesting that I create
> /data/hadoop/tmp on HDFS. I already have this created.
>
>
>
> Found 1 items
>
> drwxr-xr-x   - mapred supergroup          0 2013-04-29 15:45 /tmp/mapred
>
> kevin@devUbuntu05:/etc/hadoop/conf$ hadoop fs -ls -d /tmp
>
> Found 1 items
>
> drwxrwxrwt   - hdfs supergroup          0 2013-04-29 15:45 /tmp
>
>
>
> When you suggest that I ‘chmod -R 777 /data’, you are suggesting that I
> open up all the data to everyone? Isn’t that a bit extreme? First, /data is
> the mount point for this drive, and there are other uses for this drive
> than hadoop, so there are other folders; that is why there is /data/hadoop.
> As far as hadoop is concerned:
>
>
>
> kevin@devUbuntu05:/etc/hadoop/conf$ ls -l /data/hadoop/
>
> total 12
>
> drwxrwxr-x 4 hdfs   hadoop 4096 Apr 29 16:38 dfs
>
> drwxrwxr-x 3 mapred hadoop 4096 Apr 29 11:33 mapred
>
> drwxrwxrwx 3 hdfs   hadoop 4096 Apr 19 15:14 tmp
>
>
>
> dfs would be where the data blocks for the hdfs file system would go,
> mapred would be the folder for M/R jobs, and tmp would be temporary
> storage. These are all on the local file system. Do I have to make all of
> this read-write for everyone in order to get it to work?
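For reference, the dfs and mapred folders in that listing are the ones normally pointed to by the local-storage properties sketched below (Hadoop 1.x/CDH4-era property names; newer releases use dfs.namenode.name.dir / dfs.datanode.data.dir, and the exact paths here are illustrative guesses based on the listing above; the tmp folder corresponds to hadoop.tmp.dir, shown earlier):

  <!-- hdfs-site.xml: local directories for DataNode blocks
       (NameNode metadata lives under dfs.name.dir) -->
  <property>
    <name>dfs.data.dir</name>
    <value>/data/hadoop/dfs/data</value>
  </property>

  <!-- mapred-site.xml: local scratch space for MapReduce tasks -->
  <property>
    <name>mapred.local.dir</name>
    <value>/data/hadoop/mapred</value>
  </property>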
>
>
>
> *From:* Arpit Gupta [mailto:[EMAIL PROTECTED]]
> *Sent:* Tuesday, April 30, 2013 10:01 AM