Re: fs cache giving me headaches
I am a little confused...
I do create new UGIs, and I do not hand them off to threads. However, I
assumed that FileSystem.get(conf) would fetch from the FileSystem cache
based on the UGI (based on equality, that is, not identity). So my
assumption was that if different threads create UGIs that are equal, they
would fetch the same FileSystem from the cache. Is that wrong?
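The confusion here hinges on how the cache key compares UGIs. In at least some Hadoop versions, UserGroupInformation.equals() compares the underlying Subject by reference identity, not by user name, so two separately created UGIs for the same user do not hit the same cache entry. A minimal self-contained sketch of that behavior (no Hadoop dependency; FakeUgi is a hypothetical stand-in for a UGI):

```java
import java.util.HashMap;
import java.util.Map;

public class IdentityKeyedCacheDemo {
    static final class FakeUgi {
        final String user;
        FakeUgi(String user) { this.user = user; }
        // No equals()/hashCode() override: Object's reference-identity
        // semantics apply, mimicking a UGI whose equals() compares the
        // underlying Subject by identity rather than by user name.
    }

    // Returns the number of cache entries after two lookups for the "same" user.
    static int demo() {
        Map<FakeUgi, String> cache = new HashMap<>();
        FakeUgi a = new FakeUgi("koert");
        FakeUgi b = new FakeUgi("koert"); // same user name, different instance

        cache.put(a, "fs-instance-1");
        if (!cache.containsKey(b)) {
            // The identity-keyed lookup misses, so a second FileSystem
            // would be created and cached under key b.
            cache.put(b, "fs-instance-2");
        }
        return cache.size();
    }

    public static void main(String[] args) {
        System.out.println(demo()); // prints 2: one entry per UGI instance
    }
}
```

If that identity semantics holds in the version in question, the answer to the question above is yes: equal-looking UGIs created by different threads get separate FileSystem instances.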

On Tue, Aug 7, 2012 at 11:25 AM, Daryn Sharp <[EMAIL PROTECTED]> wrote:

> There is no UGI caching, so each request will receive a unique UGI even
> for the same user.  Thus you can safely call FileSystem.closeAllForUGI(ugi)
> when the request is complete.  If however you spin off threads that
> continue to use the UGI even after the request is completed, then you'll
> have to determine for yourself when it's safe to close the filesystems.
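The per-request lifecycle Daryn describes could be sketched as follows. This is a self-contained model with a toy cache standing in for Hadoop's FileSystem cache; ToyFs, get, and closeAllFor are illustrative names, not Hadoop API (the real calls would be FileSystem.get under ugi.doAs and FileSystem.closeAllForUGI):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class PerRequestCloseDemo {
    static final class ToyFs implements AutoCloseable {
        boolean closed = false;
        @Override public void close() { closed = true; }
    }

    // One list of cached filesystems per UGI instance (identity-keyed).
    static final Map<Object, List<ToyFs>> CACHE = new HashMap<>();

    static ToyFs get(Object ugi) {
        ToyFs fs = new ToyFs();
        CACHE.computeIfAbsent(ugi, k -> new ArrayList<>()).add(fs);
        return fs;
    }

    // Analogue of FileSystem.closeAllForUGI(ugi): close and evict every
    // filesystem cached under this particular UGI instance only.
    static void closeAllFor(Object ugi) {
        List<ToyFs> list = CACHE.remove(ugi);
        if (list != null) list.forEach(ToyFs::close);
    }

    public static void main(String[] args) {
        Object requestUgi = new Object(); // fresh UGI per request, never shared
        ToyFs fs = get(requestUgi);
        try {
            // ... serve the request with fs ...
        } finally {
            closeAllFor(requestUgi); // safe: no other request holds this UGI
        }
        System.out.println(fs.closed); // prints true
    }
}
```

Because each request gets a unique UGI instance and the cache is keyed on that instance, closing "all for this UGI" cannot touch filesystems belonging to concurrent requests by the same user.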
>
> I've been kicking around a few ways to transparently close cached
> filesystems for a ugi when that ugi goes out of scope.  I should probably
> file a jira (once the JIRA site stops going down) for discussion.
>
> Daryn
>
>
> On Aug 7, 2012, at 10:15 AM, Koert Kuipers wrote:
>
> Daryn,
> The problem with FileSystem.closeAllForUGI(ugi) for me is that a server
> can be multi-threaded, and a user could be making multiple requests at the
> same time. If I used closeAllForUGI, isn't there a risk of shutting down
> the other requests for the same user?
>
> On Mon, Aug 6, 2012 at 2:52 PM, Daryn Sharp <[EMAIL PROTECTED]> wrote:
>
>> Yes, the implementation of fs.close() leaves something to be desired.
>>  There's actually been debate in the past about close being a no-op for a
>> cached fs, but the idea was rejected by the majority of people.
>>
>> In the server case, you can use FileSystem.closeAllForUGI(ugi) at the end
>> of a request to flush all the fs cache entries for the ugi.  You'll get the
>> benefit of the cache during execution of the request, and be able to close
>> the cached fs instances to prevent memory leaks. I hope this helps!
>>
>> Daryn
>>
>>
>> On Aug 6, 2012, at 12:32 PM, Koert Kuipers wrote:
>>
>> ---------- Forwarded message ----------
>> From: "Koert Kuipers" <[EMAIL PROTECTED]>
>> Date: Aug 4, 2012 1:54 PM
>> Subject: fs cache giving me headaches
>> To: <[EMAIL PROTECTED]>
>>
>> Nothing has confused me as much in Hadoop as FileSystem.close().
>> Any decent Java programmer who sees that an object implements Closeable
>> writes code like this:
>> final FileSystem fs = FileSystem.get(conf);
>> try {
>>     // do something with fs
>> } finally {
>>     fs.close();
>> }
>>
>> So I started out using the Hadoop FileSystem like this, and I ran into all
>> sorts of weird errors where FileSystems in unrelated code (sometimes not
>> even my code) started misbehaving and streams were unexpectedly closed. Then
>> I realized that FileSystem uses a cache, and close() closes it for everyone!
>> Not pretty in my opinion, but I can live with it. So I checked other code
>> and found that basically nobody closes FileSystems. Apparently the expected
>> way of using FileSystems is to simply never close them. So I adopted this
>> approach (which I think is really contrary to Java conventions for a
>> Closeable).
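The Java convention referred to above is exactly what Java 7's try-with-resources encodes: anything Closeable gets closed deterministically when its scope ends. A minimal self-contained sketch with a stand-in resource (a cached Hadoop FileSystem would, per the discussion above, behave differently, since its close() affects other holders of the cached instance):

```java
public class TryWithResourcesDemo {
    static final class Resource implements AutoCloseable {
        boolean closed = false;
        @Override public void close() { closed = true; }
    }

    // Runs a body inside try-with-resources and reports whether close() ran.
    static boolean useAndClose() {
        Resource r = new Resource();
        try (Resource inScope = r) {
            // use the resource here
        } // close() is invoked automatically, even if the body throws
        return r.closed;
    }

    public static void main(String[] args) {
        System.out.println(useAndClose()); // prints true
    }
}
```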
>>
>> Lately I started working on some code for a daemon/server where many
>> FileSystem objects are created for different users (UGIs) that use the
>> service. As it turns out, other projects have run into trouble with the
>> FileSystem cache in situations like this (for example, Scribe and Hoop). I
>> imagine the cache can get very large and cause problems (I have not tested
>> this myself).
>>
>> Looking at the code for Hoop, I noticed they simply turned off the
>> FileSystem cache and made sure to close every FileSystem. So here the
>> suggested approach to dealing with FileSystems seems to be:
>> final FileSystem fs = FileSystem.newInstance(conf); // or
>> FileSystem.get(conf) but with caching turned off in the conf
>> try {
>>     // do something with fs
>> } finally {
>>     fs.close();
>> }
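For the "caching turned off in the conf" variant: FileSystem.get consults a per-scheme property of the form fs.&lt;scheme&gt;.impl.disable.cache, so for HDFS the relevant setting would be something like:

```xml
<!-- core-site.xml fragment: disable the FileSystem cache for the hdfs
     scheme, so FileSystem.get(conf) returns a fresh instance that can be
     closed safely. The property name follows the pattern
     fs.<scheme>.impl.disable.cache. -->
<property>
  <name>fs.hdfs.impl.disable.cache</name>
  <value>true</value>
</property>
```

The same flag can be set programmatically via conf.setBoolean before calling FileSystem.get.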
>>
>> This code bypasses the cache if I understand it correctly, avoiding any