I am archiving a large amount of data out of my HDFS file system to a
separate shared storage solution (there is not much HDFS space left in my
cluster, and expanding it is not an option right now).

I understand that HDFS manages checksums internally and that a read will
fail if the data doesn't match its CRC, so I'm not worried about corruption
while reading from HDFS.

However, I want to store the HDFS CRC calculations alongside the data files
after exporting them. I thought "hadoop dfs -copyToLocal -crc
<hdfs-source> <local-dest>" would do this, but it always fails with the
error:

    -crc option is not valid when source file system does not have crc files

Can someone explain what exactly that option does, and when (if ever) it
should be used?
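
In case it helps frame the question, my current fallback is to do a plain
copy and compute checksums myself on the local side, along these lines
(sha256sum is just my choice here, not anything HDFS-specific, and the
paths are hypothetical):

    # plain copy without -crc, then record a checksum next to each file
    hadoop dfs -copyToLocal /data/archive/part-00000 /mnt/shared/archive/
    sha256sum /mnt/shared/archive/part-00000 > /mnt/shared/archive/part-00000.sha256

But I'd much rather keep the CRCs that HDFS has already computed, which is
why I'm asking about the -crc option.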

Thanks in advance!

--Tom