Flume, mail # dev - Review Request: First patch for Spillable Channel.. capturing feedback.


Roshan Naik 2013-02-21, 20:40
Re: Review Request: First patch for Spillable Channel.. capturing feedback.
Mike Percy 2013-03-06, 06:14

-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/9544/#review17457
-----------------------------------------------------------
Hi Roshan,
First off, thanks for the patch! I have some high-level feedback and some requests:
1. I'd like to suggest a different channel composition mechanism. Today, all channels are given a chance to access all other active channels in the postConfigure() hook. See my comments below about that. It would be better to specify the underlying channel types via configuration of the spillable channel and instantiate those underlying implementations in the configure() method. start() and stop() can start and stop the underlying implementation as they do today.
2. What happens when the Flume agent is stopped and then restarted when events are in the channel? To what extent does this channel recover events?
3. Could you provide a summary of the design you went with here? Just some kind of small design doc or explanation, since on the JIRA we discussed multiple directions. Specifically, I'm interested in the algorithm being used (i.e. with the drain order queue) and what the expected behavior is under boundary conditions (full channels, permanently failed channels (bad disk), concurrency under mixed success/failure).
4. What assumptions are made / invariants are required for the underlying channels?
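A minimal sketch of the configuration style suggested in point 1, in Flume's standard properties format. The type names and the `overflow.*` keys below are hypothetical illustrations, not actual Flume configuration keys; the idea is only that the spillable channel names its underlying channel type in its own config rather than referencing another configured channel through a postConfigure() hook:

```properties
# Hypothetical sketch: the spillable channel declares its overflow
# implementation via its own configuration, so configure() can
# instantiate it directly and start()/stop() can manage its lifecycle.
agent1.channels = spillable1
agent1.channels.spillable1.type = SPILLABLEMEMORY          # hypothetical type name
agent1.channels.spillable1.memoryCapacity = 10000
agent1.channels.spillable1.overflow.type = FILE            # hypothetical key: underlying impl
agent1.channels.spillable1.overflow.checkpointDir = /var/flume/checkpoint
agent1.channels.spillable1.overflow.dataDirs = /var/flume/data
```

This keeps channels isolated from each other: the spillable channel owns its overflow instance outright instead of reaching into the set of peer channels managed by the configuration provider.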
flume-ng-node/src/main/java/org/apache/flume/node/AbstractConfigurationProvider.java
<https://reviews.apache.org/r/9544/#comment37030>

    This hook violates Flume's isolation of managed components (of the same role) from each other. This could be a big maintenance headache due to interdependencies, so let's find a way to maintain that isolation.
- Mike Percy
On Feb. 21, 2013, 8:40 p.m., Roshan Naik wrote:
>
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/9544/
> -----------------------------------------------------------
>
> (Updated Feb. 21, 2013, 8:40 p.m.)
>
>
> Review request for Flume.
>
>
> Description
> -------
>
> Initial patch for capturing feedback on SpillableChannel.
>
> Open Issues:  
> a) Config & Overflow Specification :
> It currently allows configuration (as shown below) for selecting any other channel as an overflow. The current config subsystem doesn't support this mechanism well. To make this happen, I have modified the config subsystem: a postConfigure step is introduced to enable a channel to obtain a reference to another configured channel by name.
>  One suggested alternative has been to hard-code the file channel as the overflow channel at compile time. The downside is that if a new SSD-optimized channel is introduced in the future, users of the Spillable channel would not be able to take advantage of it.
>
> b) Max Transaction Size: This implementation requires a config param maxTransactionBatchSize, which is the size of the largest batchSize setting used by the sinks and sources attached to this channel. This helps a good deal in optimizing the internal implementation of put() and take(). Consequently, unlike the transactionSize used in other channels, this value should be much smaller and closer to the actual batchSize settings in the sources/sinks. Hari had mentioned some concerns in this regard, which I think had something to do with not all sources being able to know their batchSize upfront. I'm not totally clear on why that is the case and would appreciate more insight.
>
> Notes:
>
> a) Basic algorithm: This implementation avoids copying events between the in-memory queue and the overflow for better performance. At the time of put(), a decision is made to place the entire transaction either in memory or in overflow, based on maxTransactionSize and the slots available in the memory queue. Another queue (called the 'drain order queue') memorizes the order in which events were inserted. For example, the values ... +1000,-200,+400,-100 ... in the drain order queue indicate that the first 1000 events are in memory, the next 200 (the -ve number) are in overflow, the next 400 are in memory, and so on. At the time of take(), this queue is consulted to drain events in the right order directly from memory/overflow, avoiding copying back and forth between memory & overflow. Any put/take transaction is restricted to either memory or overflow; allowing it to operate on both would violate the correctness of rollback/commit by requiring nested transactions.
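
The drain order queue described above can be sketched as follows. This is a hypothetical illustration of the bookkeeping only (class and method names are invented, not from the patch): positive entries count events in memory, negative entries count events in overflow, and adjacent entries of the same sign are merged. take() consults the head to learn where the next events must be drained from.

```java
import java.util.ArrayDeque;

// Hypothetical sketch of the drain order queue: records, per put,
// how many events went to memory (+n) or to overflow (-n).
class DrainOrderQueue {
    private final ArrayDeque<Integer> order = new ArrayDeque<>();

    // Record that n events were placed in memory (toMemory) or overflow.
    void putted(int n, boolean toMemory) {
        int signed = toMemory ? n : -n;
        Integer tail = order.peekLast();
        if (tail != null && Integer.signum(tail) == Integer.signum(signed)) {
            order.pollLast();
            order.offerLast(tail + signed);   // merge with same-sign tail
        } else {
            order.offerLast(signed);
        }
    }

    // Returns +k if the next k events must be drained from memory,
    // -k if from overflow, 0 if the channel is empty.
    int nextSource() {
        Integer head = order.peekFirst();
        return head == null ? 0 : head;
    }

    // Consume n events from the head; n must not exceed |nextSource()|.
    void taken(int n) {
        int head = order.pollFirst();
        int remaining = head > 0 ? head - n : head + n;
        if (remaining != 0) {
            order.offerFirst(remaining);
        }
    }
}
```

Using the example from the mail, puts of 1000 (memory), 200 (overflow), and 400 (memory) leave the queue as +1000, -200, +400; takes then drain 1000 events from memory before touching the overflow.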