HDFS >> mail # user >> Reg:upgrade from 20.205 to hadoop2


Reg:upgrade from 20.205 to hadoop2
Hi All,

I had a cluster running version 20.205 with 1 GB of data. I upgraded it to hadoop2, and the upgrade itself was successful, but afterwards I was not able to read about 100 blocks. Going through the code, I found that during the upgrade we do not consider blocks in the BBW (blocksBeingWritten) directory; only the current folder is hard-linked into the new current directory. I think we need to handle the BBW blocks as well, since HBase syncs files (below the block size) without closing them, and those blocks are stored in BBW. This amounts to data loss.
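To illustrate the point, here is a minimal sketch of the hard-link step as I understand it (class and method names are hypothetical, not the actual DataNode upgrade code): linking files from both current and blocksBeingWritten into the new layout would preserve the in-progress blocks that are currently lost.

```java
import java.io.IOException;
import java.nio.file.*;

public class UpgradeLinkSketch {
    // Hard-link every block file from srcDir into dstDir (hypothetical helper).
    static void linkBlocks(Path srcDir, Path dstDir) throws IOException {
        if (!Files.isDirectory(srcDir)) {
            return; // e.g. no BBW directory on a node without open files
        }
        Files.createDirectories(dstDir);
        try (DirectoryStream<Path> blocks = Files.newDirectoryStream(srcDir)) {
            for (Path block : blocks) {
                Files.createLink(dstDir.resolve(block.getFileName()), block);
            }
        }
    }

    public static void main(String[] args) throws IOException {
        // Simulated DataNode storage directory with one finalized block
        // and one block still being written (as HBase sync would leave it).
        Path storage = Files.createTempDirectory("dn");
        Path current = Files.createDirectories(storage.resolve("current"));
        Path bbw = Files.createDirectories(storage.resolve("blocksBeingWritten"));
        Files.writeString(current.resolve("blk_1"), "finalized");
        Files.writeString(bbw.resolve("blk_2"), "in-progress");

        Path newCurrent = storage.resolve("upgraded").resolve("current");
        // Today only `current` is linked; linking BBW too keeps blk_2 readable.
        linkBlocks(current, newCurrent);
        linkBlocks(bbw, newCurrent);

        System.out.println(Files.exists(newCurrent.resolve("blk_2")));
    }
}
```

With the BBW directory included in the link step, the sketch prints true, i.e. the unclosed block survives the upgrade.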

Can anyone please look into this?

Let me know if I am wrong.

Thanks And Regards
Brahma Reddy
Suresh Srinivas 2012-07-26, 15:27