You can do this and it should work fine. You would have to keep adding
machines to get disk capacity, of course, since your data set would
only grow.

We will keep an open file descriptor per file, but I think that is
okay. If you set the segment size to 1GB, then with 10TB of storage
that is only 10k files, which should be fine. Adjust the OS open-FD
limit up a bit if needed; file descriptors don't use much memory, so
this should not hurt anything.
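As a quick sanity check on that estimate (a sketch, using binary units; the 10TB and 1GB figures are the round numbers from above):

```python
# Back-of-the-envelope: one open file descriptor per segment file.
total_storage = 10 * 1024**4   # 10 TiB of log data on the broker
segment_size = 1 * 1024**3     # 1 GiB per segment file

open_fds = total_storage // segment_size
print(open_fds)  # 10240 -- roughly the "10k files" figure
```

So a per-process FD limit of, say, 32k or 64k leaves plenty of headroom; on Linux that is the `ulimit -n` / RLIMIT_NOFILE setting.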


On Thu, Feb 21, 2013 at 4:00 PM, Anthony Grimes <[EMAIL PROTECTED]> wrote: