Does anybody know about the issue mentioned in the following mail?
Update: I have now seen the following behaviour even with time based rolling.
With time based rolling I would expect a single file to be created after
every x seconds. But in my case some n files are created after every x
seconds. Could this be related to the HDFS sink's batch size?
-------- Original Message --------
Subject: HDFS file rolling behaviour
Date: Thu, 13 Sep 2012 14:26:56 +0530
From: Jagadish Bihani <[EMAIL PROTECTED]>
To: [EMAIL PROTECTED]
I use two flume agents:
1. flume_agent 1, on the source machine (exec source - file channel - avro sink)
2. flume_agent 2, on the destination machine (avro source - file channel - HDFS sink)
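For reference, the setup above corresponds roughly to a Flume properties file like the following. This is only my sketch of the described topology; the agent/component names, command, host, port, and HDFS path are illustrative, not taken from the original mail:

```properties
# flume_agent 1 (source machine): exec source -> file channel -> avro sink
agent1.sources = exec-src
agent1.channels = fc1
agent1.sinks = avro-snk

agent1.sources.exec-src.type = exec
agent1.sources.exec-src.command = tail -F /var/log/app.log
agent1.sources.exec-src.channels = fc1

agent1.channels.fc1.type = file

agent1.sinks.avro-snk.type = avro
agent1.sinks.avro-snk.hostname = dest-host
agent1.sinks.avro-snk.port = 4141
agent1.sinks.avro-snk.channel = fc1

# flume_agent 2 (destination machine): avro source -> file channel -> HDFS sink
agent2.sources = avro-src
agent2.channels = fc2
agent2.sinks = hdfs-snk

agent2.sources.avro-src.type = avro
agent2.sources.avro-src.bind = 0.0.0.0
agent2.sources.avro-src.port = 4141
agent2.sources.avro-src.channels = fc2

agent2.channels.fc2.type = file

agent2.sinks.hdfs-snk.type = hdfs
agent2.sinks.hdfs-snk.hdfs.path = hdfs://namenode/flume/events
agent2.sinks.hdfs-snk.channel = fc2
```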
I have observed that the HDFS sink, when rolling by *file size/number of
events*, creates a lot of simultaneous connections to the source's avro
sink. But when rolling by *time interval* it works *one by one*, i.e. it
opens 1 HDFS file, writes to it, and then closes it. I would expect the
same behaviour for the other rolling strategies too, i.e. first open a
file, and once x events have been written to it, roll it and open
another, and so on.
In my case my data ingestion works fine with "time" based rolling, but in
the other cases, due to the above behaviour, I get exceptions like:
-- too many open files
-- timeout related exceptions for the file channel, and a few more.
I can increase the values of the parameters involved in these exceptions,
but I don't know what adverse effects that may have.
Can somebody throw some light on rolling based on file size / number of
events?
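For context, the HDFS sink's rolling behaviour is governed by the `hdfs.rollInterval`, `hdfs.rollSize`, and `hdfs.rollCount` properties, and `hdfs.batchSize` controls how many events are written before a flush. A minimal sketch, with illustrative values and sink names (setting a roll property to 0 disables that criterion):

```properties
# Roll by time: start a new file every 300 seconds (0 = disabled)
agent2.sinks.hdfs-snk.hdfs.rollInterval = 300
# Roll by size: start a new file once ~128 MB have been written (0 = disabled)
agent2.sinks.hdfs-snk.hdfs.rollSize = 134217728
# Roll by event count: start a new file after 10000 events (0 = disabled)
agent2.sinks.hdfs-snk.hdfs.rollCount = 10000
# Number of events written to HDFS before a flush/sync
agent2.sinks.hdfs-snk.hdfs.batchSize = 100
```

Note that all three roll criteria are active by default, so whichever threshold is hit first triggers a roll; to roll purely by one criterion, set the other two to 0.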