> When the user calling FileSystem.copyFromLocalFile() doesn't have permission
> to write to a certain HDFS path:
> Thread [main] (Suspended (exception AccessControlException))
> DFSClient.mkdirs(String, FsPermission) line: 905
> DistributedFileSystem.mkdirs(Path, FsPermission) line: 262
> DistributedFileSystem(FileSystem).mkdirs(Path) line: 1162
> FileUtil.copy(FileSystem, Path, FileSystem, Path, boolean, boolean, Configuration) line: 194
> DistributedFileSystem(FileSystem).copyFromLocalFile(boolean, boolean, Path, Path) line: 1231
> DistributedFileSystem(FileSystem).copyFromLocalFile(boolean, Path, Path) line: 1207
> DistributedFileSystem(FileSystem).copyFromLocalFile(Path, Path) line:
> GridM2mInstallation.copyInputFiles(FlowConfigurations$FlowConf) line:
> Passwordless ssh has been set up for the current user, tyu, on localhost, and
> for user hadoop on the Namenode.
> I'd like opinions on how I can programmatically get past the above
> exception - by specifying the user as hadoop, maybe?
Is there a reason why access cannot be granted to the user on DFS during
setup, using dfs -chmod or dfs -chown? That seems the more correct
solution. Please note that while some versions of Hadoop allowed the user
name to be set as a configuration property (I think it was called
hadoop.job.ugi or some such), this will stop working with later, secure
versions of Hadoop.
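For example, the setup-time fix suggested above might look like the following, run once as the HDFS superuser (hadoop here). The path /user/tyu/input is a hypothetical stand-in for whatever directory copyFromLocalFile() is targeting; this is a sketch of the approach, not the poster's actual setup:

```shell
# Run during cluster setup, as the hadoop superuser.
# /user/tyu/input is a hypothetical target directory -- substitute the real one.
hadoop dfs -mkdir /user/tyu/input

# Make tyu the owner so writes from tyu succeed...
hadoop dfs -chown -R tyu /user/tyu/input

# ...or, if ownership should stay with hadoop, widen the permissions instead.
hadoop dfs -chmod -R 775 /user/tyu/input
```

After either command, FileSystem.copyFromLocalFile() invoked as tyu should write to that path without needing to impersonate the hadoop user.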