Source code for this article: GitHub | GitEE (links at the end of the article)

I. Introduction to the framework

1. Basic Introduction

Zookeeper is designed around the observer pattern. It is widely used in distributed system architectures for scenarios such as unified naming services, unified configuration management, unified cluster management, dynamic online/offline of server nodes, and soft load balancing.

  • Installing Zookeeper on a Linux vm
  • SpringBoot integrates the Zookeeper middleware

2. Cluster elections

The Zookeeper cluster is based on a majority (quorum) mechanism: the cluster is available as long as more than half of its servers are alive. For this reason, it is recommended to deploy an odd number of servers in a Zookeeper cluster. Master and Slave roles are not specified in the cluster configuration files; at runtime one node acts as the Leader and the other nodes act as Followers, and the Leader is chosen dynamically through an internal election mechanism.

A basic description

Assume a Zookeeper cluster consisting of three servers, whose myid values are 1 to 3 in sequence, and start the servers one by one.

Server 1 starts and initiates an election. It votes for itself, so it holds one vote, which is not more than half (2 votes), so the election cannot be completed and server 1 stays in the LOOKING state.

Server 2 starts and another election is held. Servers 1 and 2 each vote for themselves and exchange their votes. Because server 2's myid is larger than server 1's, server 1 changes its vote to server 2. Server 1 now has zero votes and server 2 has two votes, which is more than half, so the election completes: server 1 becomes a follower and server 2 becomes the leader. When server 3 starts later, a leader already exists, so it simply joins the cluster as a follower.

II. Cluster configuration

1. Create a configuration directory

# mkdir -p /data/zookeeper/data
# mkdir -p /data/zookeeper/logs

2. Basic configuration

# vim /opt/zookeeper-3.4.14/conf/zoo.cfg

# Heartbeat interval in milliseconds (basic time unit)
tickTime=2000
# Initial leader-follower connection/sync timeout, in ticks
initLimit=10
# Leader-follower sync timeout during normal operation, in ticks
syncLimit=5
# Snapshot and transaction log directories
dataDir=/data/zookeeper/data
dataLogDir=/data/zookeeper/logs
# Port for client connections
clientPort=2181

3. Single-node configuration

# vim /data/zookeeper/data/myid 

For the three node services, write 1, 2, and 3 respectively into each node's myid file.
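A minimal sketch, assuming the myid-to-IP mapping used in the cluster configuration below:

On 192.168.72.133:
# echo 1 > /data/zookeeper/data/myid
On 192.168.72.136:
# echo 2 > /data/zookeeper/data/myid
On 192.168.72.137:
# echo 3 > /data/zookeeper/data/myid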

4. Cluster service

In the zoo.cfg configuration file of each service, write the following cluster configuration. The format is server.<myid>=<host>:<peer-port>:<election-port>, where port 2888 is used for communication between followers and the leader and port 3888 is used for leader election:

server.1=192.168.72.133:2888:3888
server.2=192.168.72.136:2888:3888
server.3=192.168.72.137:2888:3888

5. Start the cluster

Start the three ZooKeeper services one by one:

[zookeeper-3.4.14]# bin/zkServer.sh start
Starting zookeeper ... STARTED

6. Check the cluster status

Mode: leader indicates the Master node; Mode: follower indicates a Slave node.

[zookeeper-3.4.14]# bin/zkServer.sh status
Mode: leader
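On the other two nodes the same command should report the follower role, for example:

[zookeeper-3.4.14]# bin/zkServer.sh status
Mode: follower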

7. Cluster status test

Log in with the client on any one of the services, create a test node, and then check it from the other services.

[zookeeper-3.4.14 bin]# ./zkCli.sh
[zk: 0] create /node-test01 node-test01  
Created /node-test01
[zk: 1] get /node-test01
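To confirm that the node has been replicated, connect a client to one of the other servers (the -server address below is an illustrative choice based on the cluster above) and read it back; get prints the node data followed by its stat metadata:

[zookeeper-3.4.14 bin]# ./zkCli.sh -server 192.168.72.136:2181
[zk: 1] get /node-test01
node-test01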

Alternatively, shut down the leader node:

[zookeeper-3.4.14 bin]# ./zkServer.sh stop

The remaining nodes then re-elect a leader.
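Checking the status on the surviving nodes again should show that one of them has taken over the leader role, for example:

[zookeeper-3.4.14]# bin/zkServer.sh status
Mode: leader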

8. Nginx unified management

[nginx-1.15.2/conf]# vim nginx.conf

stream {
    upstream zkcluster {
        server 192.168.72.133:2181;
        server 192.168.72.136:2181;
        server 192.168.72.137:2181;
    }
    server {
        listen 2181;
        proxy_pass zkcluster;
    }
}
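With the proxy in place, clients can connect to the Nginx address instead of a specific ZooKeeper node; a quick check could look like the following, where <nginx-host> is a placeholder for the host running Nginx:

[zookeeper-3.4.14 bin]# ./zkCli.sh -server <nginx-host>:2181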

III. Service node listening

1. Fundamentals

In a distributed system there may be multiple master node servers, which can go online and offline dynamically, and any client should be able to sense these changes to the master server list in real time.

Process Description:

  • Start the Zookeeper cluster service.
  • RegisterServer simulates server registration;
  • ClientServer simulates client listening;
  • Start RegisterServer three times, registering a different zk-node service each time;
  • Shut down the registered server one by one to simulate the service offline process;
  • You can view client logs to monitor changes of service nodes.

Start by creating a node, /serverList, to hold the list of servers.

[zk: 0] create /serverList "serverList" 

2. Register the server

package com.zkper.cluster.monitor;
import java.io.IOException;
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.ZooDefs.Ids;

public class RegisterServer {

    private ZooKeeper zk;
    private static final String connectString = "127.0.0.133:2181,127.0.0.136:2181,127.0.0.137:2181";
    private static final int sessionTimeout = 3000;
    private static final String parentNode = "/serverList";

    // Connect to the ZooKeeper cluster
    private void getConnect() throws IOException {
        zk = new ZooKeeper(connectString, sessionTimeout, new Watcher() {
            @Override
            public void process(WatchedEvent event) {
            }
        });
    }

    // Register the current server as an ephemeral sequential node under /serverList
    private void registerServer(String nodeName) throws Exception {
        String create = zk.create(parentNode + "/server", nodeName.getBytes(),
                                  Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL_SEQUENTIAL);
        System.out.println(nodeName + " is online: " + create);
    }

    // Keep the process alive so the ephemeral node stays registered
    private void working() throws Exception {
        Thread.sleep(Long.MAX_VALUE);
    }

    public static void main(String[] args) throws Exception {
        RegisterServer server = new RegisterServer();
        server.getConnect();
        // Start the service three times, registering a different node each time,
        // then shut the servers down one by one to observe the effect on the client
        // server.registerServer("zk-node-133");
        // server.registerServer("zk-node-136");
        server.registerServer("zk-node-137");
        server.working();
    }
}
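After running the register server a few times, the ephemeral sequential nodes can be inspected from any zkCli session; the sequence numbers below are illustrative, since ZooKeeper assigns its own 10-digit suffixes:

[zk: 2] ls /serverList
[server0000000000, server0000000001, server0000000002]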

3. Client monitoring

package com.zkper.cluster.monitor;
import org.apache.zookeeper.*;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class ClientServer {

    private ZooKeeper zk;
    private static final String connectString = "127.0.0.133:2181,127.0.0.136:2181,127.0.0.137:2181";
    private static final int sessionTimeout = 3000;
    private static final String parentNode = "/serverList";

    // Connect to the cluster; every watch event triggers a refresh of the server list
    private void getConnect() throws IOException {
        zk = new ZooKeeper(connectString, sessionTimeout, new Watcher() {
            @Override
            public void process(WatchedEvent event) {
                try {
                    // Listen for changes to the list of online services
                    getServerList();
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        });
    }

    // Read the children of /serverList and re-register the watch (watch = true)
    private void getServerList() throws Exception {
        List<String> children = zk.getChildren(parentNode, true);
        List<String> servers = new ArrayList<>();
        for (String child : children) {
            byte[] data = zk.getData(parentNode + "/" + child, false, null);
            servers.add(new String(data));
        }
        System.out.println("Current service list: " + servers);
    }

    // Keep the process alive to keep receiving watch notifications
    private void working() throws Exception {
        Thread.sleep(Long.MAX_VALUE);
    }

    public static void main(String[] args) throws Exception {
        ClientServer client = new ClientServer();
        client.getConnect();
        client.getServerList();
        client.working();
    }
}
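Based on the println in getServerList, the client log would show something like the following as servers register and then go offline (the element order is not guaranteed):

Current service list: [zk-node-133, zk-node-136, zk-node-137]
Current service list: [zk-node-133, zk-node-136]
Current service list: [zk-node-133]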

Source code address

GitHub: https://github.com/cicadasmile/data-manage-parent
GitEE: https://gitee.com/cicadasmile/data-manage-parent

Recommended reading: Data and Architecture management

Serial number    Title
A01    Data source management: dynamic master/slave routing and AOP-based read/write separation
A02    Data source management: adapting and managing dynamic data sources based on the JDBC pattern
A03    Data source management: dynamic permission verification, table structure and data migration process
A04    Data source management: relational database/table sharding, columnar databases and distributed computing
A05    Data source management: PostgreSQL environment integration and JSON applications
A06    Data source management: data synchronization and source code analysis based on the DataX component
C01    Architecture basics: differences and connections among single-service, cluster, and distributed deployments
C02    Architecture design: global ID generation strategies in distributed business systems