Answers inline below.
On Sun, Oct 14, 2012 at 8:01 PM, 谢良 <[EMAIL PROTECTED]> wrote:
> Hi Todd and other HA experts,
> I have two questions:
> 1) Why is the zkfc a separate process? I mean, what was the primary design
> consideration for not integrating the zkfc features into the namenode itself?
There are a few reasons for this design choice:
1) Like Steve said, it's easier to monitor a process from another process
than to have it monitor itself. Consider, for example, what happens if the NN
somehow gets into a deadlock. The process may still be alive, and a
ZooKeeper thread would keep running, even though it is not successfully
handling any operations. The ZKFC running in a separate process
periodically pings the local NN via RPC to ensure that the RPC server is
still working properly, not deadlocked, etc.
2) If the NN process itself crashes (e.g. a segfault due to bad RAM), the ZKFC
will notice quite quickly and delete its own ZooKeeper node. If the NN
were holding its own ZK session, you would have to wait for the full ZK
session timeout to expire. So the external ZKFC results in a faster
failover time for certain classes of failure.
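To make point 1 concrete, here is a minimal sketch of that pattern — an external watchdog that periodically probes its target over an RPC-like interface. This is not the actual ZKFC code; `HealthCheckable` and `HealthMonitor` are hypothetical names standing in for the NN's RPC health interface and the ZKFC's health-monitor loop:

```java
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

/** Stand-in for the NN's RPC health-check interface (hypothetical name). */
interface HealthCheckable {
    void monitorHealth() throws Exception; // throws (or hangs) when the target is sick
}

/** Simplified external watchdog, in the spirit of the ZKFC's health monitor. */
class HealthMonitor {
    private final HealthCheckable target;

    HealthMonitor(HealthCheckable target) {
        this.target = target;
    }

    /** One probe: returns true iff the target answered the health RPC. */
    boolean probe() {
        try {
            target.monitorHealth();
            return true;
        } catch (Exception e) {
            return false; // in the real ZKFC this would trigger quitting the election
        }
    }

    /** Periodic probing, as the ZKFC does every few seconds. */
    void start(ScheduledExecutorService pool, long periodMs) {
        pool.scheduleAtFixedRate(() -> {
            if (!probe()) {
                System.out.println("target unhealthy; would relinquish the ZK lock here");
            }
        }, 0, periodMs, TimeUnit.MILLISECONDS);
    }
}
```

Because the probe crosses a process boundary via an interface call, a deadlocked or crashed target simply fails (or times out) the probe — the watchdog itself stays healthy and can react.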
> 2) If I deploy CDH4.1 (which includes the QJM feature), since QJM can fence
> writers, can I just configure it like this safely?
Yes, this is safe. The design of the QuorumJournalManager ensures that
multiple conflicting writers cannot corrupt your namespace in any way. You
might still consider configuring sshfence first in your fencing list, with a
short timeout -- this provides "read fencing". Otherwise the old NN could
theoretically serve stale reads for a few seconds before it noticed that it
lost its ZK lease. But it's definitely not critical -- the old NN will
eventually do some kind of write and abort itself. So, I'd recommend
/bin/true as the last configured method in your fencing list with QJM.
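For reference, such a fencing configuration might look like the following in hdfs-site.xml (a sketch -- the timeout value is just an example; adjust to your environment):

```xml
<!-- Try sshfence first for fast read fencing; fall back to /bin/true,
     since QJM already guarantees write fencing. -->
<property>
  <name>dfs.ha.fencing.methods</name>
  <value>sshfence
shell(/bin/true)</value>
</property>

<!-- Keep the ssh connect timeout short so a dead host doesn't stall failover. -->
<property>
  <name>dfs.ha.fencing.ssh.connect-timeout</name>
  <value>5000</value>
</property>
```

With this ordering, failover proceeds even when the old NN's host is unreachable: sshfence fails fast, and shell(/bin/true) always "succeeds", letting QJM's writer fencing do the real protection.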
Software Engineer, Cloudera