Why build a cluster

Most distributed applications need a master, coordinator, or controller to manage their physically distributed child processes. Today, most of these applications implement their own private coordinators: the same coordination logic gets written over and over, there is no general mechanism, and the result is rarely a coordinator that is general and scalable. ZooKeeper fills this gap by providing a general distributed lock service that coordinates distributed applications, which is why ZooKeeper is described as a coordination service for distributed applications. Because ZooKeeper acts as a registry that both servers and clients must access, a single node becomes a bottleneck under heavy concurrency and requests end up waiting. Running ZooKeeper as a cluster solves this problem.

Leader election

The leader election is the most important and most complicated step in ZooKeeper's startup process. So what is a leader election, why does ZooKeeper need one, and how does it work? First, what a leader election is. It is actually quite easy to understand: a leader election works like a presidential election, where everyone casts one vote and whoever wins a majority becomes president. In a ZooKeeper cluster, every node votes, and a node that obtains more than half of the votes becomes the leader.

A simple example illustrates the whole process. Suppose a ZooKeeper cluster consists of five servers with ids 1 to 5, all newly started, so none of them has any historical data and they are all identical in terms of data. Here is what happens when they start up in order:

1) Server 1 starts. It is the only server running, so the election messages it sends receive no response and its state remains LOOKING.

2) Server 2 starts. It communicates with server 1 and they exchange their votes. Since neither has historical data, server 2 wins because it has the larger id, but it has not yet gathered more than half of the votes (more than half in this example is 3), so servers 1 and 2 both stay in the LOOKING state.

3) Server 3 starts. By the same reasoning, server 3 wins among servers 1, 2 and 3, and this time it holds three votes, which is more than half, so it becomes the leader of this election.

4) Server 4 starts. In theory it should win among servers 1, 2, 3 and 4 because it has the largest id, but more than half of the servers have already elected server 3, so server 4 can only become a follower and accept the leader's commands.

5) Server 5 starts and, like server 4, can only become a follower.
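Once a cluster is running, you can see which node won the election by asking each node for its status: the zkServer.sh script that ships with ZooKeeper prints Mode: leader or Mode: follower. The paths below assume the three-node pseudo cluster that is built in the rest of this section:

[root@localhost ~]# /usr/local/zookeeper-cluster/zookeeper-1/bin/zkServer.sh status
[root@localhost ~]# /usr/local/zookeeper-cluster/zookeeper-2/bin/zkServer.sh status
[root@localhost ~]# /usr/local/zookeeper-cluster/zookeeper-3/bin/zkServer.sh status

Note that in a three-node cluster started in id order, server 2 usually ends up as the leader, because two votes are already more than half of three.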

Building the cluster

1. Setup requirements
A real cluster needs to be deployed on different servers, but when testing, starting a dozen virtual machines at the same time would exhaust the available memory, so we usually set up a pseudo cluster instead: all services run on one virtual machine and are distinguished by port. Here we build a three-node ZooKeeper cluster (pseudo cluster).

2. Preparations
Prepare a separate virtual machine as the test server for the cluster.
(1) Install the JDK.
(2) Upload the Zookeeper package to the server.
(3) Decompress the Zookeeper package, create a data directory, and rename the zoo_sample.cfg file in the conf directory to zoo.cfg (example commands follow below).
(4) Create the /usr/local/zookeeper-cluster directory and copy the decompressed Zookeeper files into /usr/local/zookeeper-cluster/zookeeper-1, /usr/local/zookeeper-cluster/zookeeper-2 and /usr/local/zookeeper-cluster/zookeeper-3:

[root@localhost ~]# mkdir /usr/local/zookeeper-cluster
[root@localhost ~]# cp -r zookeeper-3.4.6 /usr/local/zookeeper-cluster/zookeeper-1
[root@localhost ~]# cp -r zookeeper-3.4.6 /usr/local/zookeeper-cluster/zookeeper-2
[root@localhost ~]# cp -r zookeeper-3.4.6 /usr/local/zookeeper-cluster/zookeeper-3
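For reference, step (3) can be carried out on each of the three copies once the cp commands above have run; zoo_sample.cfg is the sample configuration shipped in the conf directory of the Zookeeper distribution, and the paths follow the layout used here:

[root@localhost ~]# mkdir /usr/local/zookeeper-cluster/zookeeper-1/data
[root@localhost ~]# mkdir /usr/local/zookeeper-cluster/zookeeper-2/data
[root@localhost ~]# mkdir /usr/local/zookeeper-cluster/zookeeper-3/data
[root@localhost ~]# mv /usr/local/zookeeper-cluster/zookeeper-1/conf/zoo_sample.cfg /usr/local/zookeeper-cluster/zookeeper-1/conf/zoo.cfg
[root@localhost ~]# mv /usr/local/zookeeper-cluster/zookeeper-2/conf/zoo_sample.cfg /usr/local/zookeeper-cluster/zookeeper-2/conf/zoo.cfg
[root@localhost ~]# mv /usr/local/zookeeper-cluster/zookeeper-3/conf/zoo_sample.cfg /usr/local/zookeeper-cluster/zookeeper-3/conf/zoo.cfg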

(5) Set the clientPort in each Zookeeper's zoo.cfg to 2181, 2182 and 2183 respectively, and point dataDir at the corresponding data directory.

Modify /usr/local/zookeeper-cluster/zookeeper-1/conf/zoo.cfg:
clientPort=2181
dataDir=/usr/local/zookeeper-cluster/zookeeper-1/data

Modify /usr/local/zookeeper-cluster/zookeeper-2/conf/zoo.cfg:
clientPort=2182
dataDir=/usr/local/zookeeper-cluster/zookeeper-2/data

Modify /usr/local/zookeeper-cluster/zookeeper-3/conf/zoo.cfg:
clientPort=2183
dataDir=/usr/local/zookeeper-cluster/zookeeper-3/data

3. Configure the cluster
(1) Create a myid file in the data directory of each Zookeeper, with contents 1, 2 and 3 respectively. This file records the id of each server.
(2) In each Zookeeper's zoo.cfg, configure the client access port (clientPort) and the list of cluster servers. The server list is:
server.1=192.168.25.140:2881:3881
server.2=192.168.25.140:2882:3882
server.3=192.168.25.140:2883:3883
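One possible way to create the myid files and append the server list from the command line, following the paths used above (editing the files by hand works equally well):

[root@localhost ~]# echo 1 > /usr/local/zookeeper-cluster/zookeeper-1/data/myid
[root@localhost ~]# echo 2 > /usr/local/zookeeper-cluster/zookeeper-2/data/myid
[root@localhost ~]# echo 3 > /usr/local/zookeeper-cluster/zookeeper-3/data/myid
[root@localhost ~]# cat >> /usr/local/zookeeper-cluster/zookeeper-1/conf/zoo.cfg <<EOF
server.1=192.168.25.140:2881:3881
server.2=192.168.25.140:2882:3882
server.3=192.168.25.140:2883:3883
EOF

The same three server.N lines are appended to the zoo.cfg of zookeeper-2 and zookeeper-3 as well.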