How is Hadoop going to handle next-generation disks?
Edward Capriolo 2011-04-08, 04:15
I have a 0.20.2 cluster. I notice that our nodes with 2 TB disks waste
tons of disk I/O doing a 'du -sk' of each data directory. Instead of
forking 'du -sk', why not just do this with java.io.File? How is this
going to work with 4 TB, 8 TB disks and up? It seems like calculating
used and free disk space could be done in a better way.
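As a rough sketch of what the pure-Java alternative being suggested might look like (class and method names here are my own, not Hadoop's): a recursive walk summing File.length(), plus File.getUsableSpace() for filesystem-level free space. One caveat: File.length() reports logical file size, while du counts blocks actually allocated, so the two can disagree on sparse files and block rounding.

```java
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;

public class DiskUsage {
    // Recursively sum file lengths under dir, roughly what `du -s` reports
    // but without forking a child process per data directory.
    static long usedBytes(File dir) {
        long total = 0;
        File[] entries = dir.listFiles();
        if (entries == null) {
            return 0; // not a directory, or an I/O error while listing
        }
        for (File f : entries) {
            total += f.isDirectory() ? usedBytes(f) : f.length();
        }
        return total;
    }

    public static void main(String[] args) throws IOException {
        // Demo: build a small tree in the system temp directory and measure it.
        File dir = new File(System.getProperty("java.io.tmpdir"), "du-demo");
        File sub = new File(dir, "sub");
        sub.mkdirs();
        try (FileOutputStream out = new FileOutputStream(new File(sub, "blk"))) {
            out.write(new byte[4096]); // one 4 KB file
        }
        System.out.println("used bytes: " + usedBytes(dir));
        // Filesystem-level free space needs no tree walk at all:
        System.out.println("free bytes: " + dir.getUsableSpace());
    }
}
```

Note that the tree walk still costs one stat per file, so on a data directory with millions of blocks it is not free either; it only avoids the overhead of exec'ing an external process.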
sridhar basam 2011-04-08, 15:37
sridhar basam 2011-04-08, 16:24
Edward Capriolo 2011-04-08, 17:59
sridhar basam 2011-04-08, 18:51