Does Hadoop, and HDFS in particular, perform any sanity checks on files
before and after balancing, copying, or reading them? We have 20TB of data,
and I want to make sure the data is still in good shape after these
operations complete. Where can I read about this?
--- Get your facts first, then you can distort them as you please.--