Hi, we are going to production soon and have some questions:
We are using the 0.20-append version (as I understand, that is the branch HBase 0.90 requires).
1) Currently we have to process 50 GB of text files per day, and the volume may grow.
-- What is the best Hadoop file size for our load, and is there a
suggested HDFS block size for files of that size?
-- We have been working with gz files, and I saw that every file gets
exactly one map task. What is the best practice: work with gz files
and save disk space, or work without compression?
Let's say we want the performance benefits and disk
space is less critical. (Our current experiments are sketched below.)
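For reference, these are the knobs we have been experimenting with; the values are illustrative, not what we settled on. As I understand it, gz text is not splittable, which would explain the one map task per file, while block-compressed SequenceFile output splits normally:

  <!-- hdfs-site.xml: bigger HDFS block for large daily files;
       only affects files created after the change -->
  <property>
    <name>dfs.block.size</name>
    <value>134217728</value> <!-- 128 MB -->
  </property>

  <!-- mapred-site.xml: block-compressed SequenceFile output
       (the job must use SequenceFileOutputFormat for the type to apply) -->
  <property>
    <name>mapred.output.compress</name>
    <value>true</value>
  </property>
  <property>
    <name>mapred.output.compression.type</name>
    <value>BLOCK</value>
  </property>
  <property>
    <name>mapred.output.compression.codec</name>
    <value>org.apache.hadoop.io.compress.DefaultCodec</value>
  </property>

Is this the right direction, or should we just keep the input uncompressed?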
2) Currently, when adding an additional machine to the grid, we have to
manually maintain all its files and configuration.
Is it possible to auto-deploy Hadoop servers without having to
manually define each one on all nodes?
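Right now every node carries an identical copy of the configs, where only the master addresses matter (the hostnames below are placeholders):

  <!-- core-site.xml -->
  <property>
    <name>fs.default.name</name>
    <value>hdfs://namenode.example.com:8020</value>
  </property>
  <!-- mapred-site.xml -->
  <property>
    <name>mapred.job.tracker</name>
    <value>jobtracker.example.com:8021</value>
  </property>

If I understand correctly, conf/slaves is only read by the start/stop scripts on the master, and a datanode/tasktracker started with these configs registers itself (unless dfs.hosts include files are in use) -- is that right?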
3) Can we change masters without reinstalling the entire grid?
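My understanding (please correct me) is that moving the NameNode would mean copying the dfs.name.dir metadata to the new host, repointing every node, and doing a full restart -- no reinstall. Roughly (the hostname is a placeholder):

  <!-- on the new master: location of the copied namenode metadata -->
  <property>
    <name>dfs.name.dir</name>
    <value>/hadoop/name</value>
  </property>
  <!-- on every node: repoint to the new master -->
  <property>
    <name>fs.default.name</name>
    <value>hdfs://new-namenode.example.com:8020</value>
  </property>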
Thanks in advance.