Hi Matt,
If you can guarantee a certain # of events in a single "wrapper"
event, or bound the limit, then you could potentially get away with this.
However, if you're not careful you could get stuck in an infinite
fail-backoff-retry loop due to exceeding the (configurable) channel
transaction limit. The first limit you will want to tune is the
channel.transactionCapacity parameter, which is simply a sanity-check /
arbitrary cap on the # of events that can be placed into a channel in a
single transaction (this guards against weird bugs like a source opening a
transaction that never gets committed). The other thing to watch out
for is what your (Flume) batch size looks like, since Flume is
designed to do batching at the RPC layer, not at the event layer like you
are describing.
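
For reference, transactionCapacity is set per channel in the agent's
properties file. A minimal sketch for a memory channel (the agent and
channel names here are placeholders, and the numbers are illustrative):

```properties
# flume.conf -- "agent1" and "ch1" are placeholder names
agent1.channels = ch1
agent1.channels.ch1.type = memory
# total events the channel can hold
agent1.channels.ch1.capacity = 100000
# max events per put/take transaction (the limit discussed above)
agent1.channels.ch1.transactionCapacity = 10000
```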

So basically just make sure that your channel.transactionCapacity > max
batch size * max # sub-events per "wrapper" event.
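
To make the rule of thumb concrete, here's a quick back-of-the-envelope
check (all numbers are made up for illustration; plug in your own
deployment's values):

```python
# Hypothetical sizing numbers -- substitute your own.
max_batch_size = 100             # events per RPC batch from the source
max_subevents_per_wrapper = 50   # worst-case sub-events in one "wrapper" event

# Worst case the channel must absorb in a single transaction:
worst_case = max_batch_size * max_subevents_per_wrapper
print(worst_case)  # 5000

# The safety condition from the rule of thumb above:
transaction_capacity = 10000     # channel.transactionCapacity setting
assert transaction_capacity > worst_case
```

If that assertion fails for your numbers, you're in the
fail-backoff-retry territory described above.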

Hope this makes sense. The above explanation is somewhat subtle, and since
it has sharp edges when misconfigured, we just recommend not to do it if
you can avoid it.

On Tue, Jun 24, 2014 at 10:17 AM, Matt Tenenbaum <[EMAIL PROTECTED]> wrote: