We noticed, after running several thousand MapReduce jobs, that our
file system was filling up. The culprit is the libjars that get
uploaded to the distributed cache for each job - it doesn't look like
they're ever being deleted.
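As a stopgap we've considered scripting the cleanup ourselves. A rough sketch of the idea, assuming the cache lives at a locally mounted path (the actual location is deployment-specific, and anything on HDFS would need `hadoop fs -rm` instead of a local delete):

```python
import os
import time

def prune_old_jars(cache_dir, max_age_seconds):
    """Delete cached .jar files older than max_age_seconds.

    cache_dir is a hypothetical local path standing in for the
    distributed cache directory; adjust for your deployment.
    Returns the paths that were removed.
    """
    cutoff = time.time() - max_age_seconds
    removed = []
    for root, _dirs, files in os.walk(cache_dir):
        for name in files:
            if not name.endswith(".jar"):
                continue
            path = os.path.join(root, name)
            # Only prune jars that haven't been touched since the cutoff.
            if os.path.getmtime(path) < cutoff:
                os.remove(path)
                removed.append(path)
    return removed
```

Obviously we'd rather rely on a built-in retention mechanism than an external cron job like this, hence the question below.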
Is there a mechanism to clear the distributed cache (or should this happen
automatically)?
This is probably a straight-up Hadoop question, but I'm asking here first
in case you've seen this sort of thing with Accumulo before.