Brahma Reddy Battula 2012-07-26, 14:51
I think this is a real issue. Created HDFS-3731.
On Jul 26, 2012, at 7:51 AM, Brahma Reddy Battula <[EMAIL PROTECTED]> wrote:
> Hi All,
> I had a cluster running version 20.205 with 1 GB of data. I then upgraded to Hadoop 2 (the upgrade itself was successful). After the upgrade, I was not able to read 100 blocks. Going through the code, I found that during an upgrade we do not consider blocks that are in BBW (blocksBeingWritten); only the current folder is hardlinked into the new current directory. I think we need to handle the BBW blocks as well, since HBase syncs files (below block size) without closing them, and those blocks are stored in BBW. This amounts to data loss.
> Can anyone please look into this?
> Let me know if I am wrong.
> Thanks and regards,
> Brahma Reddy
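The scenario described above can be sketched roughly as follows. This is a hypothetical illustration, not Hadoop's actual upgrade code: the directory names `current/` and `blocksBeingWritten/` match the pre-upgrade datanode layout being discussed, but the function name, the `blk_` prefix filter, and the overall structure are assumptions for the sake of the example.

```python
import os

def hardlink_blocks(old_storage, new_current, include_bbw=True):
    """Hardlink block files from the old storage layout into the new current dir.

    Illustrative sketch only: assumes finalized blocks live in old_storage/current
    and in-progress blocks in old_storage/blocksBeingWritten.
    """
    src_dirs = ["current"]
    if include_bbw:
        # Without this, blocks still being written at upgrade time (e.g. HBase
        # files synced but not yet closed) are silently dropped, which is the
        # data loss reported above.
        src_dirs.append("blocksBeingWritten")
    os.makedirs(new_current, exist_ok=True)
    linked = []
    for d in src_dirs:
        src = os.path.join(old_storage, d)
        if not os.path.isdir(src):
            continue
        for name in os.listdir(src):
            if name.startswith("blk_"):
                # Hardlinks make the upgrade cheap: no data is copied, and the
                # old layout can be kept for rollback.
                os.link(os.path.join(src, name),
                        os.path.join(new_current, name))
                linked.append(name)
    return linked
```

With `include_bbw=False` (the behavior being reported), any block present only under `blocksBeingWritten/` would have no link in the new layout and would be unreadable after the upgrade.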