Ok,
I’m still a bit slow this morning … coffee is not helping…. ;-)

Are we talking HFile or just a single block in the HFile?

While it may be too late for Mike Dillon, here’s the question that the HBase devs are going to have to think about…

How and when do you check the correctness of the HDFS blocks?
How do you correct?

I’m working under the impression that HBase only deals with one copy of the replicated data, and my question is: what happens when the block in the file copy that HBase uses is the corrupted one?
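For what it’s worth, my understanding of the current HDFS read path is that the client verifies a checksum on every read; on a mismatch it reports the bad replica to the NameNode (which then schedules re-replication from a good copy) and transparently retries another replica. A toy sketch of that behavior — not actual HDFS code, just an illustration with made-up names and CRC32 standing in for HDFS’s chunk checksums:

```python
import zlib

def read_with_failover(replicas, expected_crc):
    """Toy model of the HDFS client read path: verify each replica's
    checksum, fall back to the next replica on corruption, and collect
    the names of corrupt replicas (which the real client would report
    to the NameNode)."""
    corrupt = []
    for name, data in replicas:
        if zlib.crc32(data) == expected_crc:
            return data, corrupt  # good copy found
        corrupt.append(name)      # would be reported to the NameNode
    raise IOError("all replicas corrupt")

good = b"hello hbase"
bad = b"hellO hbase"            # single-byte corruption
crc = zlib.crc32(good)

data, reported = read_with_failover([("dn1", bad), ("dn2", good)], crc)
# dn1 gets flagged as corrupt; dn2's intact copy is returned to HBase
```

So in this model HBase never sees the corrupt bytes as long as at least one replica checks out — which is exactly why I’m asking whether that matches what actually happens today.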

What’s happening today?

Thx

-Mike
The opinions expressed here are mine, while they may reflect a cognitive thought, that is purely accidental.
Use at your own risk.
Michael Segel
michael_segel (AT) hotmail.com