HDFS >> mail # dev >> relative symbolic links in HDFS

Re: relative symbolic links in HDFS
Universal support for FileContext & symlinks in all commands should be coming "soon".  A few jiras that removed complications were recently committed or are in the process of being committed.  Copy commands will require extra parameters to control whether symlinks are dereferenced.
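The dereference-or-not distinction for copies can be illustrated against a local filesystem with java.nio.file (a sketch only: the eventual FsShell flags and HDFS-side semantics may differ, and all file names below are made up for the example):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.LinkOption;
import java.nio.file.Path;
import java.nio.file.Paths;

public class CopyLinkDemo {
    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("linkdemo");
        Files.createFile(dir.resolve("data.txt"));
        Path link = dir.resolve("link");
        // Create a symlink with a *relative* target.
        Files.createSymbolicLink(link, Paths.get("data.txt"));

        // Dereferencing copy: follows the link and copies the file contents.
        Path derefCopy = dir.resolve("deref-copy.txt");
        Files.copy(link, derefCopy);
        System.out.println(Files.isSymbolicLink(derefCopy)); // false

        // Non-dereferencing copy: NOFOLLOW_LINKS copies the link itself,
        // preserving its relative target verbatim.
        Path linkCopy = dir.resolve("link-copy");
        Files.copy(link, linkCopy, LinkOption.NOFOLLOW_LINKS);
        System.out.println(Files.isSymbolicLink(linkCopy)); // true
        System.out.println(Files.readSymbolicLink(linkCopy)); // data.txt
    }
}
```

The single `LinkOption.NOFOLLOW_LINKS` argument is what flips the behavior, which is roughly the kind of per-invocation control a copy command's extra parameter would expose.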

On Oct 31, 2011, at 11:27 AM, Charles Baker wrote:

> Hey guys. Thanks for the replies. Fully qualified symbolic links are
> problematic in that when we wish to restore a directory structure containing
> symlinks from HDFS to the local filesystem, the relativity is lost. For instance:
> /user/cbaker/foo/
>                link1 -> ../../cbaker
> The current behavior of getFileLinkStatus() results in the path for link1
> being:
> /user/cbaker
> Not:
> ../../cbaker
> Also, some symlinks may point to non-existent locations within HDFS which
> have relevance only to the local filesystem. It appears as though this could
> (though I haven't tested it yet) result in an exception when the attempt is
> made to qualify such a link. If I get a chance, I'll try it out later today.
> FileContext.getLinkTarget() doesn't work for this case since it returns only
> the final component of the target, not the complete relative path. But even
> if it did return the relative path, it seems counter-intuitive to me. I agree
> with Daryn and expect the behavior of getFileLinkStatus() to return the
> symlink as is and not presume that I wanted it qualified. If I wanted a
> qualified path for a symlink, I would expect to call Path.makeQualified() to
> do so.
> As for porting FsShell to FileContext, I've only modified it to support
> our use case. I haven't gone to the extent of fully porting it to
> FileContext. Though I'd love to, unfortunately I'm too busy right now to
> contribute :(
> Thanks!
> -Chuck
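Both observations above (a relative target being returned qualified, and a dangling link failing on resolution) can be reproduced on a local filesystem with java.nio.file, where readSymbolicLink preserves the raw target and only explicit resolution qualifies it. This is a sketch of the analogous local behavior, not of HDFS's FileContext; the directory names just mirror Chuck's example:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.NoSuchFileException;
import java.nio.file.Path;
import java.nio.file.Paths;

public class RelativeLinkDemo {
    public static void main(String[] args) throws IOException {
        Path base = Files.createTempDirectory("user");
        Path foo = Files.createDirectories(base.resolve("cbaker/foo"));

        // link1 -> ../../cbaker, exactly as in the example above.
        Path link1 = foo.resolve("link1");
        Files.createSymbolicLink(link1, Paths.get("../../cbaker"));

        // Reading the link preserves the target exactly as created...
        System.out.println(Files.readSymbolicLink(link1)); // ../../cbaker
        // ...while resolving it yields a fully qualified path
        // (e.g. /tmp/user1234/cbaker).
        System.out.println(link1.toRealPath());

        // A dangling link: the raw target is still readable,
        // but any attempt to resolve (qualify) it throws.
        Path dangling = foo.resolve("dangling");
        Files.createSymbolicLink(dangling, Paths.get("../../no-such-dir"));
        System.out.println(Files.readSymbolicLink(dangling)); // ../../no-such-dir
        try {
            dangling.toRealPath();
        } catch (NoSuchFileException e) {
            System.out.println("resolution failed as expected");
        }
    }
}
```

Keeping "read the link" and "qualify the link" as two separate, explicit steps is the behavior being argued for here: qualification happens only when the caller asks for it.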
> -----Original Message-----
> From: Daryn Sharp [mailto:[EMAIL PROTECTED]]
> Sent: Monday, October 31, 2011 7:46 AM
> Subject: Re: relative symbolic links in HDFS
> It's generally been a problem that filesystem operations mangle paths to be
> something other than what the user provided.  FsShell has to go to some
> (unnecessary, imho) lengths to independently track the user's given path so
> the output paths will match what the user provided.  Not displaying the
> user-given path makes it difficult/impossible for scripts to accurately parse
> the output for the results of an operation on the given paths.
> I like getLinkTarget returning the exact target, but I'd also like a
> FileStatus to return the given path, both in the case of a normal path and
> of a symlink.  If the user needs a fully qualified path for an operation, my
> opinion is that they should request it explicitly.
> Daryn
> On Oct 29, 2011, at 9:02 PM, Eli Collins wrote:
>> Hey Chuck,
>> Why is it problematic for your use that the symlink is stored in
>> FileStatus fully qualified - you'd like FileContext#getSymlink to
>> return the same Path that you used as the target in createSymlink?
>> The current behavior is so that getFileLinkStatus is consistent with
>> getFileStatus(new Path("/some/file")), which returns a fully qualified
>> path (e.g. hdfs://myhost:123/some/file). Note that you can use
>> FileContext#getLinkTarget to return the path used when creating the
>> link. Some more background is in the design doc:
>> https://issues.apache.org/jira/secure/attachment/12434745/design-doc-v4.txt
>> There's a jira for porting FsShell to FileContext (HADOOP-6424), if
>> you have a patch (even partial) feel free to post it to the jira.
>> Note that since symlinks are not implemented in FileSystem, clients
>> that use FileSystem to access paths with symlinks will fail.
>> Btw when looking at the code you pointed out I noticed a bug in link
>> resolution (HADOOP-7783), thanks!
>> Thanks,
>> Eli
>> On Fri, Oct 28, 2011 at 9:46 AM, Charles Baker <[EMAIL PROTECTED]> wrote:
>>> Hey guys. We are in the early stages of planning and evaluating a hadoop