Shahab Yunus 2013-09-03, 21:01
Just to add:
Although it is named *masters*, the */conf/masters* file actually specifies the
machine where the *SecondaryNameNode* will run. The master daemons themselves run
on the machine where you execute the start scripts. If you need to change the
master machine, you must make the appropriate changes in the *core-site.xml* and
*mapred-site.xml* files. Also, update the new master's IP and hostname in the
*/etc/hosts* file of your slaves.
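To make that concrete, here is a minimal sketch of the two properties involved, for the pre-HA Hadoop 1.x layout discussed in this thread. The hostname *h2* and the ports 9000/9001 are assumptions for illustration (h2 being the replacement master from the question below); use whatever your cluster actually uses:

```xml
<!-- core-site.xml on every node: point HDFS at the new NameNode host.
     "h2" and port 9000 are illustrative values, not requirements. -->
<property>
  <name>fs.default.name</name>
  <value>hdfs://h2:9000</value>
</property>

<!-- mapred-site.xml on every node: point TaskTrackers at the new JobTracker.
     Again, "h2:9001" is an assumed address. -->
<property>
  <name>mapred.job.tracker</name>
  <value>h2:9001</value>
</property>
```

After updating these on all nodes (and */etc/hosts* on the slaves), you would start the NameNode and JobTracker from the new master with the usual start scripts.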
On Wed, Sep 4, 2013 at 2:31 AM, Shahab Yunus <[EMAIL PROTECTED]> wrote:
> Keep in mind that there are 2 flavors of Hadoop: the older one without HA
> and the new one with it. Anyway, have you seen the following?
> On Tue, Sep 3, 2013 at 4:54 PM, Tomasz Chmielewski <[EMAIL PROTECTED]> wrote:
>> I'm just starting with Hadoop and HBase, and couldn't find specific answers
>> in the official documentation (unless I've missed the obvious).
>> Assuming I have three Hadoop servers: h1, h2 and h3, with h1 being a
>> master+slave - what is the recovery scenario if the master server, h1,
>> dies and is beyond repair (say, it burned with all its disks and got flooded)?
>> Do I just edit the conf/masters file on one of the remaining slaves (say,
>> h2), make it the master, and start the NameNode and JobTracker there?
>> Can anyone point me to relevant documentation?
>> Tomasz Chmielewski