MapReduce, mail # user - changing ha failover auto conf value


Quentin Ambard 2012-11-22, 19:11
Re: changing ha failover auto conf value
Harsh J 2012-11-22, 19:21
Hi,

Losing a complete node (ZKFC plus NN) with a journal node (QJM)
configuration shouldn't be causing automatic failover to fail. Could
you post up both your NameNode and ZKFC logs somewhere we can take a
look?

On Fri, Nov 23, 2012 at 12:41 AM, Quentin Ambard
<[EMAIL PROTECTED]> wrote:
> Hello,
> I have 2 NameNodes in HA mode, running with 3 journal nodes, 3 ZooKeeper
> servers and 2 ZKFCs (one alongside each NameNode).
>
> If the server hosting both the active NameNode and a ZKFC goes down, the
> single remaining ZKFC instance can't activate the standby NameNode.
>
> So I end up with a single NameNode stuck in standby mode.
> I can try to activate it with the following:
> hdfs haadmin -transitionToActive nn1 --forcemanual
>
> But it's recommended to disable automatic failover first to avoid split-brain.
> To do so, I stop all my NameNodes and set the
> dfs.ha.automatic-failover.enabled property to false.
>
> However, restarting the NameNode doesn't seem to apply this configuration: I'm
> still getting the same warning when trying to activate the NameNode.
>
> How can I change this configuration value?
>
> Do I really need 3 NameNodes to avoid this situation (manual NameNode
> activation), or can I achieve a fully automatic setup with only 2 NameNodes?
>
>
> Thanks for your help
>
>
> --
> Quentin Ambard

--
Harsh J
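For anyone following along: dfs.ha.automatic-failover.enabled is a Hadoop hdfs-site.xml property. A minimal sketch of disabling it, assuming the same value is pushed to every NameNode host before the daemons are restarted:

```xml
<!-- hdfs-site.xml (sketch): disable automatic HA failover.
     Assumptions: this file is updated identically on every NameNode host,
     and the ZKFC daemons are stopped before the NameNodes are restarted,
     since a running ZKFC will still attempt to manage failover. -->
<property>
  <name>dfs.ha.automatic-failover.enabled</name>
  <value>false</value>
</property>
```

With the property set to false cluster-wide and the daemons restarted, a manual `hdfs haadmin -transitionToActive` should no longer warn that automatic failover is enabled.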
Quentin Ambard 2012-11-22, 21:43