On Sun, Jul 8, 2012 at 11:14 PM, Juhani Connolly <
[EMAIL PROTECTED]> wrote:
> Another matter that I'm curious of is whether or not we actually need
> separate files for the data and checkpoints...
The data file and checkpoint files serve different purposes. The checkpoint
resides in memory and simulates the channel. The only difference is that it
does not store the event data in the queue itself, but pointers to data that
resides in the log files. As a result, the memory footprint of the
checkpoint is very small regardless of how big each event payload is; its
size depends only on the capacity of the channel.
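As a rough illustration (the names here are mine, not Flume's actual classes), the checkpoint can be thought of as a bounded queue of fixed-size pointers into the log files:

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Illustrative sketch only -- these are not Flume's actual classes.
// The checkpoint is modeled as a bounded in-memory queue of fixed-size
// pointers (log file id + offset), so its footprint is bounded by the
// channel capacity, not by event payload size.
public class CheckpointSketch {
    // A pointer to an event payload stored in a log file.
    record EventPointer(int fileId, long offset) {}

    private final Queue<EventPointer> queue = new ArrayDeque<>();
    private final int capacity; // channel capacity bounds memory use

    public CheckpointSketch(int capacity) {
        this.capacity = capacity;
    }

    // Enqueue a pointer; the payload itself lives in the log file.
    public boolean put(int fileId, long offset) {
        if (queue.size() >= capacity) {
            return false; // channel full
        }
        queue.add(new EventPointer(fileId, offset));
        return true;
    }

    // Dequeue the oldest pointer; the caller reads the payload
    // from the referenced log file.
    public EventPointer take() {
        return queue.poll();
    }

    public static void main(String[] args) {
        CheckpointSketch cp = new CheckpointSketch(2);
        System.out.println(cp.put(1, 0L));      // true
        System.out.println(cp.put(1, 512L));    // true
        System.out.println(cp.put(2, 0L));      // false: at capacity
        System.out.println(cp.take().fileId()); // 1
    }
}
```

Note how each queued entry costs a constant few bytes no matter how large the referenced event is, which is the point being made above.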
> Can we not add a magic header before each type of entry to differentiate,
> and thus guarantee significantly more sequential access?
In the general case, access will be sequential. In the best case, the
channel will have moved writes to new log files and will continue reading
from old (rolled) files, which reduces seek contention. From what I
know, it will not be trivial to effect your suggested change
without significantly impacting the entire logic of the channel.
> What is killing performance on a single disk right now is the constant
> seeks. The problem with this though would be putting together a file format
> that allows quick seeking through to the correct position, and rolling
> would be a lot harder. I think this is a lot more difficult and might be
> more of a long term target.
Perhaps what you are describing is a different type of persistent channel
that is optimized for high latency IO systems. I would encourage you to
take your idea one step further and see if that can be drafted as yet
another channel that serves this particular use-case.
Arvind Prabhakar
> On Wed, Jul 4, 2012 at 3:33 AM, Juhani Connolly <
> [EMAIL PROTECTED]> wrote:
>> It looks good to me as it provides a nice balance between reliability and
>> performance. It's certainly one possible solution to the issue, though I do
>> believe that the current one could be made more friendly towards single-disk
>> access (e.g., batching writes to the disk may well be doable; I would be
>> curious what someone with more familiarity with the implementation thinks).
>> On 07/04/2012 06:36 PM, Jarek Jarcec Cecho wrote:
>>> We had a related discussion about this "SpillableChannel" (working name)
>>> on FLUME-1045, and I believe the consensus is that we will create something
>>> like that. In fact, I'm planning to do it myself in the near future; I just
>>> need to prioritize my todo list first.
>>> On Wed, Jul 04, 2012 at 06:13:43PM +0900, Juhani Connolly wrote:
>>>> Yes... I was actually poking around for that issue as I remembered
>>>> seeing it before. I had also previously suggested a compound channel
>>>> that would have worked like the buffer store in Scribe, but the general
>>>> opinion was that it provided too many mixed configurations, which
>>>> could make testing and verifying correctness difficult.
>>>> On 07/04/2012 04:33 PM, Jarek Jarcec Cecho wrote:
>>>>> Hi Juhani,
>>>>> A while ago I filed JIRA FLUME-1227, where I suggested creating
>>>>> some sort of SpillableChannel that would behave similarly to Scribe. It
>>>>> would normally act as a memory channel and would start spilling data
>>>>> to disk if it got full (my primary goal here was to solve the
>>>>> issue when the remote goes down, for example during HDFS maintenance).
>>>>> Would it be helpful for your case?
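The spill-over behavior Jarek describes could be sketched roughly like this (illustrative names only, not the eventual Flume API; the "disk" store is modeled by a second deque):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Illustrative sketch only -- names and structure are mine, not the
// eventual Flume SpillableChannel API. The channel behaves as a memory
// channel until the in-memory queue fills, then spills overflow events
// to a disk-backed store (modeled here by a second deque).
public class SpillableSketch {
    private final Deque<String> memory = new ArrayDeque<>();
    private final Deque<String> disk = new ArrayDeque<>(); // stand-in for file-backed storage
    private final int memoryCapacity;

    public SpillableSketch(int memoryCapacity) {
        this.memoryCapacity = memoryCapacity;
    }

    public void put(String event) {
        if (memory.size() < memoryCapacity) {
            memory.addLast(event); // fast path: keep in memory
        } else {
            disk.addLast(event);   // memory full: spill to disk
        }
    }

    // Spilled events are strictly newer than in-memory events,
    // so draining memory first preserves FIFO order.
    public String take() {
        String event = memory.pollFirst();
        return (event != null) ? event : disk.pollFirst();
    }

    public static void main(String[] args) {
        SpillableSketch ch = new SpillableSketch(2);
        ch.put("a");
        ch.put("b");
        ch.put("c"); // memory full, "c" is spilled
        System.out.println(ch.take()); // a
        System.out.println(ch.take()); // b
        System.out.println(ch.take()); // c
    }
}
```

The appeal of this design for the HDFS-maintenance case is that a slow or down sink only costs disk space, not dropped events, while the common case stays at memory-channel speed.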
>>>>> On Wed, Jul 04, 2012 at 04:07:48PM +0900, Juhani Connolly wrote:
>>>>>> Evaluating Flume on some of our servers, the file channel seems very
>>>>>> slow, likely because, like most typical web servers, ours have a
>>>>>> single RAIDed disk available for writing.
>>>>>> Quoted below is a suggestion from a previous issue where our poor
>>>>>> throughput came up, where it turns out that on multiple disks, file