HDFS, mail # user - Re: Filesystem closed exception
Re: Filesystem closed exception
Hemanth Yamijala 2013-01-25, 08:34
Thanks, Harsh. Particularly for pointing out HADOOP-7973.
On Fri, Jan 25, 2013 at 11:51 AM, Harsh J <[EMAIL PROTECTED]> wrote:

> It is pretty much the same in 0.20.x as well, IIRC. Your two points
> are also correct (for a fix to this). Also see:
> https://issues.apache.org/jira/browse/HADOOP-7973.
>
> On Fri, Jan 25, 2013 at 6:56 AM, Hemanth Yamijala
> <[EMAIL PROTECTED]> wrote:
> > Hi,
> >
> > We are noticing a problem where we get a filesystem closed exception
> > when a map task is done and is finishing execution. By map task, I
> > literally mean the MapTask class of the map reduce code. Debugging
> > this, we found that the mapper is getting a handle to the filesystem
> > object and itself calling close on it. Because filesystem objects are
> > cached, I believe the behaviour is as expected in terms of the
> > exception.
> >
> > I just wanted to confirm that:
> >
> > - if we do have a requirement to use a filesystem object in a mapper
> >   or reducer, we should either not close it ourselves,
> > - or (seems better to me) ask for a new instance of the filesystem by
> >   setting the fs.hdfs.impl.disable.cache property to true in the job
> >   configuration.
> >
> > Also, does anyone know if this behaviour was any different in Hadoop
> > 0.20?
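[For reference, the second option described above amounts to a job-configuration entry like the following. The property name is taken from the thread itself; the standard Hadoop property XML syntax is assumed here.]

```xml
<!-- Disable the shared FileSystem cache for hdfs:// URIs, so each
     FileSystem.get() call returns a fresh instance that the caller
     owns and may safely close. -->
<property>
  <name>fs.hdfs.impl.disable.cache</name>
  <value>true</value>
</property>
```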
> >
> > For some context, this behaviour is actually seen in Oozie, which
> > runs a launcher mapper for a simple java action. Hence, the java
> > action could very well interact with a file system. I know this is
> > probably better addressed in an Oozie context, but wanted to get the
> > map reduce view of things.
> >
> >
> > Thanks,
> > Hemanth
>
>
>
> --
> Harsh J
>
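[The failure mode discussed in this thread can be illustrated without Hadoop at all. Below is a minimal, self-contained sketch in plain Java -- not Hadoop's actual FileSystem code, and all names are hypothetical -- of why closing a cached, shared handle breaks every other user of that handle, and why an uncached, private instance avoids the problem.]

```java
import java.util.HashMap;
import java.util.Map;

// Sketch only: stand-in for Hadoop's FileSystem cache, where
// FileSystem.get() returns one shared instance per cache key, so a
// close() by any caller closes it for everyone sharing the key.
public class CachedFsSketch {

    // Stand-in for a FileSystem handle.
    static class Handle {
        private boolean closed = false;
        void close() { closed = true; }
        void read() {
            if (closed) {
                throw new IllegalStateException("Filesystem closed");
            }
        }
    }

    // Same key -> same shared instance, mimicking the cache.
    private static final Map<String, Handle> CACHE = new HashMap<>();

    static Handle get(String key, boolean disableCache) {
        if (disableCache) {
            return new Handle();  // fresh, private instance
        }
        return CACHE.computeIfAbsent(key, k -> new Handle());
    }

    public static void main(String[] args) {
        // User code in the mapper grabs the cached handle and closes it...
        Handle userFs = get("hdfs://nn:8020", false);
        userFs.close();

        // ...so the framework's handle (the same cached object) is dead.
        boolean broken = false;
        try {
            get("hdfs://nn:8020", false).read();
        } catch (IllegalStateException e) {
            broken = true;  // "Filesystem closed"
        }
        System.out.println("cached handle broken: " + broken);

        // With caching disabled, each caller owns its own instance.
        Handle privateFs = get("hdfs://nn:8020", true);
        privateFs.close();
        get("hdfs://nn:8020", true).read();  // unaffected by the close above
        System.out.println("private handles independent: true");
    }
}
```

[This mirrors the two options from the thread: either never close a handle you obtained from the shared cache, or disable caching so the handle you close is yours alone.]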