Thanks for the additional info, Bhaskar. So is this a known issue in vanilla
Hadoop 1.0.3? If so, do you have a JIRA number?
On Fri, Aug 24, 2012 at 12:13 PM, Bhaskar V. Karambelkar <
[EMAIL PROTECTED]> wrote:
> Oops, this is just the same Hadoop FileSystem.close() shutdown hook
> issue. I was getting the exception no matter whether I had 1 HDFS sink or
> many. I was using vanilla Hadoop 1.0.3, and it looks like that version
> doesn't respect the fs.automatic.close option.
> I switched to CDH3u5, and no more problems: all the HDFS sinks correctly
> rename their files on shutdown.
> In conclusion, the vanilla Hadoop 1.x series is not an option for Flume.
> Go with Hadoop 2.x, CDH3u5, or CDH4.
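> For reference, fs.automatic.close is the knob I mean. It can be set in
> core-site.xml or directly on the Configuration before the first
> FileSystem.get() call; here is a minimal sketch to check whether a given
> Hadoop build honors it (the key name is Hadoop's, the surrounding class
> is just illustrative):
>
>   import org.apache.hadoop.conf.Configuration;
>   import org.apache.hadoop.fs.FileSystem;
>
>   public class AutoCloseCheck {
>     public static void main(String[] args) throws Exception {
>       Configuration conf = new Configuration();
>       // When honored, "false" keeps Hadoop from installing the JVM
>       // shutdown hook that calls FileSystem.close() on every cached
>       // instance, leaving the close (and the .tmp rename) to Flume.
>       conf.setBoolean("fs.automatic.close", false);
>       FileSystem fs = FileSystem.get(conf);
>       // ... use fs; with the hook disabled, the application decides
>       // when to close it.
>     }
>   }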
> On Thu, Aug 23, 2012 at 9:35 PM, Mike Percy <[EMAIL PROTECTED]> wrote:
>> Hmm... this likely happens because Hadoop statically caches FileSystem
>> objects, so, as it turns out, the multiple Sinks are all sharing the same
>> FileSystem instance.
>> I think the only reason we need to explicitly close the FileSystem
>> objects is to support the deleteOnExit feature. We are explicitly closing
>> them because we removed the automatic shutdown hook typically installed by
>> Hadoop to invoke FileSystem.close(), since it was interfering with the .tmp
>> rolling. I wonder if we can get away with never closing them in our
>> case... I'm not sure if we need the deleteOnExit() functionality implicitly
>> for any reason, or if there are other more important reasons behind why the
>> FileSystem objects should be closed.
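>> To make the caching concrete: FileSystem.get() keys its cache on the
>> URI's scheme and authority (plus the user), not on the path, so every
>> sink pointed at the same namenode gets the same object back, and the
>> first close() invalidates it for all of them. A minimal sketch (the
>> namenode URI and paths are made up):
>>
>>   import java.net.URI;
>>   import org.apache.hadoop.conf.Configuration;
>>   import org.apache.hadoop.fs.FileSystem;
>>   import org.apache.hadoop.fs.Path;
>>
>>   public class SharedFsDemo {
>>     public static void main(String[] args) throws Exception {
>>       Configuration conf = new Configuration();
>>       // Same scheme + authority => same cached instance.
>>       FileSystem fs1 = FileSystem.get(URI.create("hdfs://namenode/path1"), conf);
>>       FileSystem fs2 = FileSystem.get(URI.create("hdfs://namenode/path2"), conf);
>>       System.out.println(fs1 == fs2); // prints: true
>>       fs1.close(); // shuts down the shared client
>>       // Any use through the other reference now fails with
>>       // java.io.IOException: Filesystem closed
>>       fs2.getFileStatus(new Path("/path2"));
>>     }
>>   }
>>
>> On builds that have it, FileSystem.newInstance(uri, conf) bypasses the
>> cache and would give each sink a private instance, at the cost of one
>> extra connection per sink.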
>> On Thu, Aug 23, 2012 at 3:24 PM, Bhaskar V. Karambelkar <
>> [EMAIL PROTECTED]> wrote:
>>> I have 3 HDFS sinks, all writing to the same namenode but to different
>>> paths, e.g.
>>> sink1 = hdfs://namenode/path1
>>> sink2 = hdfs://namenode/path2
>>> (the full sink definitions are sketched below, after the stack trace)
>>> When Flume is shut down (kill <flume-pid>), the file for the first sink
>>> is closed correctly and renamed to remove the .tmp extension,
>>> but closing the second file throws the following exception, and that
>>> file's .tmp extension is not removed either.
>>> I see this happening very consistently: with more than one HDFS sink,
>>> only the first one is closed properly and renamed; the rest all throw an
>>> exception when being closed and are not renamed to remove the .tmp
>>> extension.
>>> 2012-08-23 19:51:39,837 WARN hdfs.BucketWriter: failed to close()
>>> HDFSWriter for file
>>> Exception follows.
>>> java.io.IOException: Filesystem closed
>>> at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:264)
>>> at org.apache.hadoop.hdfs.DFSClient.access$1100(DFSClient.java:74)
>>> at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
>>> at java.util.concurrent.FutureTask.run(FutureTask.java:138)
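>>> For completeness, the sink setup boils down to something like this in
>>> the agent's properties file (the agent and channel names are made up;
>>> only the hdfs.path values matter here):
>>>
>>>   agent.sinks = sink1 sink2 sink3
>>>   agent.sinks.sink1.type = hdfs
>>>   agent.sinks.sink1.channel = ch1
>>>   agent.sinks.sink1.hdfs.path = hdfs://namenode/path1
>>>   agent.sinks.sink2.type = hdfs
>>>   agent.sinks.sink2.channel = ch2
>>>   agent.sinks.sink2.hdfs.path = hdfs://namenode/path2
>>>   agent.sinks.sink3.type = hdfs
>>>   agent.sinks.sink3.channel = ch3
>>>   agent.sinks.sink3.hdfs.path = hdfs://namenode/path3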