Are there any filesystem stress tests? E.g.:
- running 100 small MapReduce jobs
- recursively deleting and recreating a directory multiple times from different clients, concurrently
- attempting streaming I/O into the same file concurrently from different tasks
I don't believe so - most of the tests under smokes/ are MapReduce-oriented, which is very structured.
Proposal: a stress/ submodule under test-artifacts that performs these kinds of iterative, metadata-intensive operations in a filesystem-agnostic way.
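A minimal sketch of the second case above (concurrent delete/recreate churn from several "clients"). This uses `java.nio.file` against the local filesystem purely as a stand-in for whatever filesystem-agnostic API the stress/ submodule would actually target; the class and method names here are hypothetical, not part of any existing test artifact:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Comparator;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.stream.Stream;

// Hypothetical stress case: N client threads concurrently delete and
// recreate the same directory tree, exercising metadata operations.
public class DirChurnStress {

    // Recursively delete dir (bottom-up), tolerating races where
    // another client removes entries first.
    static void deleteRecursively(Path dir) throws IOException {
        if (!Files.exists(dir)) return;
        try (Stream<Path> walk = Files.walk(dir)) {
            walk.sorted(Comparator.reverseOrder()).forEach(p -> {
                try { Files.deleteIfExists(p); } catch (IOException ignored) {}
            });
        } catch (UncheckedIOException ignored) {
            // another client deleted part of the tree mid-walk
        }
    }

    // One client: repeatedly create a subdirectory plus a file,
    // then tear the whole tree down again.
    static Runnable client(Path root, int iterations) {
        return () -> {
            for (int i = 0; i < iterations; i++) {
                try {
                    Files.createDirectories(root.resolve("sub"));
                    Files.write(root.resolve("sub/f-" + Thread.currentThread().getId()),
                                "payload".getBytes());
                    deleteRecursively(root);
                } catch (IOException ignored) {
                    // transient failures are expected under concurrent churn
                }
            }
        };
    }

    // Returns true if all clients finished and cleanup left nothing behind.
    static boolean run(int clients, int iterations) throws Exception {
        Path root = Files.createTempDirectory("stress-");
        ExecutorService pool = Executors.newFixedThreadPool(clients);
        for (int c = 0; c < clients; c++) pool.submit(client(root, iterations));
        pool.shutdown();
        boolean finished = pool.awaitTermination(60, TimeUnit.SECONDS);
        deleteRecursively(root); // final cleanup (no concurrency at this point)
        return finished && !Files.exists(root);
    }

    public static void main(String[] args) throws Exception {
        System.out.println("survived: " + run(4, 50));
    }
}
```

The point of the sketch is that nothing in the operations themselves is filesystem-specific: against HDFS or another distributed filesystem the same loop would go through whatever common filesystem interface the submodule standardizes on, and the interesting signal is whether the metadata layer stays consistent under the races the `ignored` catches deliberately provoke.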