While the HDFS machinery for computing, storing, and validating
checksums over block files does not strictly _require_ ECC memory, you
do _want_ ECC to avoid frequent checksum failures caused by in-memory
bit flips.
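For concreteness, here is a rough sketch of the kind of per-chunk
checksumming HDFS does when a DataNode writes a block and a client later
reads it back. HDFS defaults to a CRC every 512 bytes (governed by
dfs.bytes-per-checksum) using CRC32C; the sketch below substitutes plain
CRC32 from the Python standard library, and the function names are mine,
not Hadoop's:

```python
import zlib

BYTES_PER_CHECKSUM = 512  # HDFS default (dfs.bytes-per-checksum)

def chunk_checksums(data: bytes) -> list:
    """Compute one CRC per 512-byte chunk, roughly as the DataNode does
    when it writes a block's checksum (.meta) file. HDFS defaults to
    CRC32C (dfs.checksum.type); plain CRC32 stands in here because it
    ships with the Python stdlib."""
    return [zlib.crc32(data[i:i + BYTES_PER_CHECKSUM])
            for i in range(0, len(data), BYTES_PER_CHECKSUM)]

def verify(data: bytes, checksums: list) -> bool:
    """Recompute and compare against the stored checksums, as a client
    does on read. A bit flipped in faulty RAM between computing and
    storing the checksum makes this comparison fail on every read."""
    return chunk_checksums(data) == checksums

block = bytes(range(256)) * 8                 # 2 KiB of sample "block" data
stored = chunk_checksums(block)               # 4 chunk checksums
assert verify(block, stored)                  # clean data verifies

corrupted = bytearray(block)
corrupted[700] ^= 0x01                        # a single flipped bit...
assert not verify(bytes(corrupted), stored)   # ...fails verification
```

The point of the sketch: the checksums catch corruption wherever it
happens, but if non-ECC memory flips bits while the data (or the
checksum itself) sits in RAM, you see exactly the steady stream of
checksum errors described below.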

This is noted in Tom White's book (Hadoop: The Definitive Guide) as
well, in the chapter on setting up your own cluster:
"ECC memory is strongly recommended, as several Hadoop users have
reported seeing many checksum errors when using non-ECC memory on
Hadoop clusters."

On Fri, Mar 28, 2014 at 3:15 PM, reena upadhyay <[EMAIL PROTECTED]> wrote:

Harsh J
