On Mon, Apr 14, 2014 at 6:32 PM, Vladimir Rodionov wrote:
I'd actually disagree: 100 is probably significantly faster than 1, given
that most machines have 12 spindles. So, yes, you'd be multiplexing 8 or so
logs per spindle, but even 100 logs only require a few hundred MB worth of
buffer cache in order to get good coalescing of writes into large physical
I/Os.
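As a back-of-envelope check of those numbers (the 4 MB per-log dirty-page budget below is an assumed figure for illustration, not something measured):

```python
# Rough arithmetic behind "8 or so logs per spindle" and
# "a few hundred MB worth of buffer cache".
num_logs = 100
num_spindles = 12
buffer_per_log_mb = 4  # assumed dirty-page budget per log; not from the thread

logs_per_spindle = num_logs / num_spindles       # roughly 8.3 logs per spindle
total_buffer_mb = num_logs * buffer_per_log_mb   # 400 MB of page cache total

print(f"{logs_per_spindle:.1f} logs/spindle, {total_buffer_mb} MB buffer")
```

Even at a generous per-log budget, the total stays in the hundreds of MB, which is small next to the "few GB unallocated" headroom mentioned below.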

If memory is really constrained on your machine, you'll probably get some
throughput collapse as you enter some really inefficient dirty throttling,
but so long as you leave a few GB unallocated, I bet the reality is much
closer to what I said than you might think.
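For reference, the throttling in question is governed by the Linux vm.dirty_* knobs. A quick way to peek at the current thresholds (a sketch, assuming a Linux box where /proc is mounted):

```python
# Read the kernel's dirty-page throttling thresholds from /proc (Linux only).
# dirty_background_ratio: % of RAM dirty before background writeback kicks in.
# dirty_ratio: % of RAM dirty before writers are throttled synchronously.
for knob in ("dirty_background_ratio", "dirty_ratio"):
    with open(f"/proc/sys/vm/{knob}") as f:
        print(knob, "=", f.read().strip())
```

When dirty pages climb past vm.dirty_ratio, writers block in the kernel's writeback path, which is the "throughput collapse" scenario above.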


Todd Lipcon
Software Engineer, Cloudera
