This article introduces the election process in distributed frameworks, covering Redis and ZooKeeper.

1. Election mechanisms in distributed systems

In terms of CAP theory, a CP implementation guarantees strong data consistency, so there is only one leader node in the whole system. When that node fails, a new leader must be elected before services can resume; note that the entire system is unavailable during the election.

An AP implementation guarantees availability instead: the system has multiple master nodes, and each master has two or three slave nodes, with each master and its slaves forming a group. If the master of one group goes down, client requests routed to that group fail and data may be lost, but the cluster as a whole can still provide services externally.
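To make the AP trade-off concrete, here is a minimal sketch of Redis-Cluster-style routing. The `Shard` record and the slot-to-shard mapping are illustrative assumptions, not Redis source code; real Redis Cluster hashes keys with CRC16 over 16384 slots.

```java
import java.util.List;

// Sketch of AP-style routing: keys hash to slots, slots map to shard groups.
// Only requests for a failed group's slots fail; other groups keep serving.
public class SlotRouter {
    static final int SLOTS = 16384; // Redis Cluster's slot count

    record Shard(String name, boolean masterAlive) {}

    private final List<Shard> shards;

    SlotRouter(List<Shard> shards) { this.shards = shards; }

    String get(String key) {
        int slot = Math.floorMod(key.hashCode(), SLOTS); // real Redis uses CRC16
        Shard shard = shards.get(slot % shards.size());  // simplistic slot->shard mapping
        if (!shard.masterAlive()) {
            throw new IllegalStateException("group " + shard.name() + " is down");
        }
        return "value from " + shard.name();
    }

    public static void main(String[] args) {
        SlotRouter router = new SlotRouter(List.of(
                new Shard("groupA", true),
                new Shard("groupB", false))); // groupB's master has failed
        // Keys routed to groupA succeed; keys routed to groupB throw.
        System.out.println(router.get("user:1"));
    }
}
```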

2. Redis election mechanism

When a slave discovers that its master has failed, it attempts a failover to become the new master. Since the failed master may have multiple slaves, several slaves may compete to become the new master node.

The process is as follows (a simplified code sketch follows the list):

  1. The slave detects that its master has entered the FAIL state
  2. It increments its currentEpoch and broadcasts a FAILOVER_AUTH_REQUEST message
  3. The other nodes receive the message, but only masters respond: each checks the legitimacy of the requester and sends a FAILOVER_AUTH_ACK, granting at most one ACK per epoch
  4. The candidate slave collects the FAILOVER_AUTH_ACKs returned by the masters
  5. After receiving ACKs from more than half of the masters, the slave becomes the new master
  6. The new master broadcasts a PONG message to inform the other cluster nodes
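Here is a minimal, self-contained sketch of this epoch-based voting, modeling masters as in-memory objects rather than real cluster messages; the class and method names are illustrative, not Redis internals.

```java
import java.util.List;

// Simplified model of Redis Cluster failover voting.
// Each master grants at most one vote (ACK) per epoch.
public class FailoverVote {

    static class Master {
        long lastVoteEpoch = 0; // epoch of the last FAILOVER_AUTH_ACK granted

        // Returns true iff this master grants its vote for the given epoch.
        synchronized boolean requestAuth(long epoch) {
            if (epoch <= lastVoteEpoch) return false; // already voted this epoch
            lastVoteEpoch = epoch;
            return true; // FAILOVER_AUTH_ACK
        }
    }

    public static void main(String[] args) {
        List<Master> masters = List.of(new Master(), new Master(), new Master());
        long currentEpoch = 1; // the slave bumps the epoch before requesting votes

        // The candidate slave broadcasts FAILOVER_AUTH_REQUEST and counts ACKs.
        long acks = masters.stream().filter(m -> m.requestAuth(currentEpoch)).count();

        // Promotion requires ACKs from a majority of masters.
        if (acks > masters.size() / 2) {
            System.out.println("slave promoted to master, epoch " + currentEpoch);
        } else {
            System.out.println("failover failed; retry with a higher epoch");
        }
    }
}
```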

3. ZooKeeper election mechanism

Let’s start with the ZAB protocol.

3.1 The ZAB protocol

The core of ZooKeeper's data consistency is the ==ZAB protocol== (ZooKeeper Atomic Broadcast protocol). The protocol must guarantee the following:

  • (1) The cluster can keep providing services as long as fewer than half of its nodes are down (see the quorum sketch after this list);
  • (2) All client write requests are forwarded to the leader for processing, and the leader must ensure that write changes are synchronized to all followers and observers in real time;
  • (3) When the leader crashes or the whole cluster restarts, transactions already committed on the leader must eventually be committed by all servers, transactions merely proposed on the leader must be discarded, and the cluster must quickly recover to its pre-failure state.
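Point (1) is the standard majority-quorum rule: a cluster of n nodes stays available while at least floor(n/2) + 1 nodes are alive. A minimal sketch (illustrative, not ZooKeeper source code):

```java
// Majority-quorum rule behind point (1): a cluster of n nodes stays
// available while at least floor(n/2) + 1 nodes are alive.
public class QuorumCheck {
    static boolean hasQuorum(int clusterSize, int aliveNodes) {
        return aliveNodes >= clusterSize / 2 + 1;
    }

    public static void main(String[] args) {
        System.out.println(hasQuorum(5, 3)); // true: 3 of 5 is a majority
        System.out.println(hasQuorum(5, 2)); // false: service stops
        System.out.println(hasQuorum(6, 3)); // false: exactly half is not enough
    }
}
```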

3.2 Election process

During the election phase, the messages exchanged between cluster nodes are called votes. A vote mainly carries information along two dimensions, the ID and the ZXID, which are compared as sketched after the list:

  • ID: the ID of the server proposed as leader. This globally unique ID is configured for each ZK node in the cluster before it starts.
  • ZXID: the transaction ID of the proposed leader. This value is taken from the machine's in-memory DataTree, i.e. it reflects transactions already committed on that machine.
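A vote comparison sketch, assuming a hypothetical `Vote` record rather than ZooKeeper's actual class: the vote with the larger ZXID wins, and ties are broken by the larger server ID (the rule the walkthrough below relies on).

```java
// Illustrative vote comparison, not ZooKeeper's actual Vote class.
// A vote proposes a leader: (proposed server id, that server's latest zxid).
public record Vote(long id, long zxid) {

    // Prefer the larger zxid (more committed data);
    // on a tie, prefer the larger server id.
    static Vote stronger(Vote a, Vote b) {
        if (a.zxid() != b.zxid()) {
            return a.zxid() > b.zxid() ? a : b;
        }
        return a.id() >= b.id() ? a : b;
    }
}
```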

Suppose there are two nodes, myid = 1 and myid = 2. The election proceeds as follows (a small simulation follows the list):

  • The machine with myid = 1 casts the vote (1, 0); by default, each node first votes for itself
  • The machine with myid = 2 casts the vote (2, 0)
  • The machine with myid = 1 receives (2, 0). It compares the received vote with its own: the vote with the larger zxid is preferred as leader, since the machine with the larger zxid holds the most up-to-date data; when the zxids are equal, the larger myid wins
  • The machine with myid = 2 sends out (2, 0), receives (1, 0), and still recommends (2, 0) as leader
  • The machine with myid = 1 switches its vote to (2, 0)
  • The machine with myid = 2 keeps voting (2, 0)
  • Both machines now see (2, 0) backed by more than half of the cluster, so the election ends and the machine with myid = 2 is determined to be the leader
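Putting the steps together, here is a small self-contained simulation of this two-node election, reusing the comparison rule from the `Vote` sketch above; all names are illustrative.

```java
import java.util.HashMap;
import java.util.Map;

// Self-contained simulation of the two-node election above.
// A vote is (proposed leader id, that server's zxid).
public class TwoNodeElection {
    record Vote(long id, long zxid) {}

    // Prefer the larger zxid; break ties with the larger server id.
    static Vote stronger(Vote a, Vote b) {
        if (a.zxid() != b.zxid()) return a.zxid() > b.zxid() ? a : b;
        return a.id() >= b.id() ? a : b;
    }

    public static void main(String[] args) {
        int clusterSize = 2;
        // Round 1: each node votes for itself.
        Map<Long, Vote> votes = new HashMap<>();
        votes.put(1L, new Vote(1, 0));
        votes.put(2L, new Vote(2, 0));

        // Each node compares its own vote with the one it received and
        // switches to the stronger vote: node 1 switches to (2, 0).
        Vote winner = stronger(votes.get(1L), votes.get(2L));
        votes.replaceAll((node, v) -> winner);

        // Count supporters of the winning vote; a majority ends the election.
        long supporters = votes.values().stream().filter(winner::equals).count();
        if (supporters > clusterSize / 2) {
            System.out.println("leader elected: myid = " + winner.id());
        }
    }
}
```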