startMiniDFSCluster and file permissions (HBase dev mailing list)


Earlier messages in this thread:
lars hofhansl 2011-10-27, 21:53
Ted Yu 2011-10-27, 22:03
lars hofhansl 2011-10-27, 22:20
Stack 2011-10-27, 22:41
lars hofhansl 2011-10-27, 23:16

Stack 2011-10-27, 23:18 (shown below)
Re: startMiniDFSCluster and file permissions
This is fixed by HDFS-1560, though unfortunately it's not in
0.20.205.0.  I've just been running with umask set to 022.
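
For reference, the configuration-based workaround Stack suggests in the quoted message below might look roughly like the following sketch. This is not code from the thread; the config key "dfs.datanode.data.dir.perm" (DataNode.DATA_DIR_PERMISSION_KEY in 0.20.205) and the HBaseTestingUtility calls are assumptions based on the APIs discussed here.

    // Rough sketch (assumed APIs, not from the thread): tell the datanode to
    // expect the permission that the local umask will actually produce.
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HBaseTestingUtility;

    public class MiniDfsPermissionWorkaround {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // With a umask of 0002 the data dirs come out as 775 (rwxrwxr-x),
        // so make the check expect exactly that instead of the default 755.
        conf.set("dfs.datanode.data.dir.perm", "775");
        HBaseTestingUtility util = new HBaseTestingUtility(conf);
        util.startMiniDFSCluster(2);   // spin up a small mini DFS cluster for the test
        // ... run the test against util ...
        util.shutdownMiniDFSCluster();
      }
    }

The catch is that the value has to match whatever the local umask actually produces, which is the per-machine coupling Lars grumbles about below.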
On Thu, Oct 27, 2011 at 4:16 PM, lars hofhansl <[EMAIL PROTECTED]> wrote:
> Hmm... checkDir eventually calls checkPermission and that does an equals check on the expected and actual permissions.
>
> So we'd need to set DATA_DIR_PERMISSION_KEY to (777 XOR umask). Ugh.
>
>
> -- Lars
>
>
> ----- Original Message -----
> From: Stack <[EMAIL PROTECTED]>
> To: [EMAIL PROTECTED]
> Cc: lars hofhansl <[EMAIL PROTECTED]>
> Sent: Thursday, October 27, 2011 3:41 PM
> Subject: Re: startMiniDFSCluster and file permissions
>
> Why don't we set DATA_DIR_PERMISSION_KEY to be permissive just before
> we spin up the MiniDFSCluster?
> St.Ack
>
>
>
> On Thu, Oct 27, 2011 at 3:03 PM, Ted Yu <[EMAIL PROTECTED]> wrote:
>> I think Apache Jenkins doesn't have this problem - otherwise we should have
>> seen it by now.
>>
>> FYI:
>> http://www.avajava.com/tutorials/lessons/how-do-i-set-the-default-file-and-directory-permissions.html
>>
>> On Thu, Oct 27, 2011 at 2:53 PM, lars hofhansl <[EMAIL PROTECTED]> wrote:
>>
>>> I just noticed today that I could not run any test that starts a
>>> MiniDFSCluster.
>>>
>>> The exception I got was this:
>>> java.lang.NullPointerException
>>>         at
>>> org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:422)
>>>         at
>>> org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:280)
>>>         at
>>> org.apache.hadoop.hbase.HBaseTestingUtility.startMiniDFSCluster(HBaseTestingUtility.java:350)
>>>         at
>>> org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:519)
>>>         at
>>> org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:475)
>>>         at
>>> org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:462)
>>>
>>> In the logs I had:
>>> 2011-10-27 14:17:48,238 WARN  [main] datanode.DataNode(1540): Invalid
>>> directory in dfs.data.dir: Incorrect permission for
>>> /home/lars/dev/hbase-trunk/target/test-data/8f8d2437-1d9a-42fa-b7c3-c154d8e559f3/dfscluster_557b48bc-9c8e-4a47-b74e-4c0167710237/dfs/data/data1,
>>> expected: rwxr-xr-x, while actual: rwxrwxr-x
>>> 2011-10-27 14:17:48,260 WARN  [main] datanode.DataNode(1540): Invalid
>>> directory in dfs.data.dir: Incorrect permission for
>>> /home/lars/dev/hbase-trunk/target/test-data/8f8d2437-1d9a-42fa-b7c3-c154d8e559f3/dfscluster_557b48bc-9c8e-4a47-b74e-4c0167710237/dfs/data/data2,
>>> expected: rwxr-xr-x, while actual: rwxrwxr-x
>>> 2011-10-27 14:17:48,261 ERROR [main] datanode.DataNode(1546): All
>>> directories in dfs.data.dir are invalid.
>>>
>>>
>>> And indeed I see this in
>>> org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(...):
>>>
>>>     FsPermission dataDirPermission =
>>>       new FsPermission(conf.get(DATA_DIR_PERMISSION_KEY,
>>>                                 DEFAULT_DATA_DIR_PERMISSION));
>>>     for (String dir : dataDirs) {
>>>       try {
>>>         DiskChecker.checkDir(localFS, new Path(dir), dataDirPermission);
>>>         dirs.add(new File(dir));
>>>       } catch(IOException e) {
>>>         LOG.warn("Invalid directory in " + DATA_DIR_KEY +  ": " +
>>>                  e.getMessage());
>>>       }
>>>     }
>>>
>>>
>>> (where DEFAULT_DATA_DIR_PERMISSION is 755)
>>>
>>>
>>> The default umask on my machine is 0002, so that would seem to explain the
>>> discrepancy.
>>>
>>> Changing my umask to 0022 fixed the problem!
>>> I cannot be the only one seeing this. This is just a heads-up for anyone who
>>> runs into this, as I wasted over an hour on it.
>>>
>>> I assume this is due to the switch to hadoop 0.20.205.
>>>
>>> As I am fairly ignorant about Maven... Is there a way to set the default
>>> umask automatically for the test processes?
>>>
>>> -- Lars
>>>
>>>
>>
>
>
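
To make the equals check Lars describes above concrete: the datanode compares the exact mode of each dfs.data.dir entry against the configured value, so any umask other than 022 produces directories that fail the comparison. A small, self-contained sketch of that arithmetic (hypothetical demo class, not from the thread; it only uses the FsPermission constructors already visible in the quoted DataNode code):

    import org.apache.hadoop.fs.permission.FsPermission;

    public class UmaskMismatchDemo {
      public static void main(String[] args) {
        int umask = 0002;                       // the default on Lars's machine
        int actualMode = 0777 & ~umask;         // what mkdir really produces: 0775
        FsPermission expected = new FsPermission("755");  // DEFAULT_DATA_DIR_PERMISSION
        FsPermission actual = new FsPermission((short) actualMode);
        // DiskChecker.checkDir boils down to an equality check like this one,
        // so rwxrwxr-x (775) != rwxr-xr-x (755) and the data dir is rejected.
        System.out.println("expected=" + expected + " actual=" + actual
            + " equal=" + expected.equals(actual));
      }
    }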
Later replies in this thread:
lars hofhansl 2011-10-27, 23:34
Gary Helmling 2011-10-27, 23:40
Andrew Purtell 2011-10-27, 23:38
lars hofhansl 2011-10-28, 01:09