Background

When I was originally learning ZK, I set up a pseudo-cluster locally. It worked fine, but deploying it always felt like a hassle. Then I happened to discover that ZK already has an official Docker image, tried it out, and found it really convenient: just a few commands are enough to bring up a whole ZK cluster. Below is a brief record of the steps to build a ZK cluster with Docker.

There are quite a few ZK images on hub.docker.com, but for the sake of stability, let’s use the official ZK image. Run the following command:

docker pull zookeeper

If the following information is displayed, the image download is complete:

>>> docker pull zookeeper


Using default tag: latest
latest: Pulling from library/zookeeper
 
e110a4a17941: Pull complete
a696cba1f6e8: Pull complete
bc427bd93e95: Pull complete
c72391ae24f6: Pull complete
40ab409b6b34: Pull complete
d4bb8183b85d: Pull complete
0600755f1470: Pull complete
Digest: sha256:12458234bb9f01336df718b7470cabaf5c357052cbcb91f8e80be07635994464
Status: Downloaded newer image for zookeeper:latest
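If you want to confirm that the image is now available locally, you can also list it:

docker images zookeeper

The output should include a zookeeper repository entry with the latest tag.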

  

Basic use of the ZK image

Start the ZK image

>>> docker run --name my_zookeeper -d zookeeper:latest

  

This command runs a ZooKeeper container named my_zookeeper in the background. The image exposes port 2181 by default, although it is not published to the host here. Next we run:

docker logs -f my_zookeeper

If the command prints information similar to the following, ZK has started successfully:

>>> docker logs -f my_zookeeper
ZooKeeper JMX enabled by default
Using config: /conf/zoo.cfg
...
2016-09-14 06:40:03,445 [myid:] - INFO [main:NIOServerCnxnFactory@89] - binding to port 0.0.0.0/0.0.0.0:2181

Use the ZK command line client to connect to ZK

Because the ZK container we just started is not bound to any host port, we cannot access it directly from the host. We can, however, reach it through Docker's link mechanism. Run the following command:

docker run -it --rm --link my_zookeeper:zookeeper zookeeper zkCli.sh -server zookeeper

If you are familiar with Docker, this command should look familiar. It starts a new ZooKeeper container and runs zkCli.sh inside it with "-server zookeeper": the --link option makes the container we started earlier, my_zookeeper, visible to the new container under the alias zookeeper. After executing this command, we can operate the ZK service with the command line client as usual.

Starting the members of a ZK cluster one by one is too cumbersome, so for convenience I use docker-compose to start the whole cluster at once. First create a file named docker-compose.yml with the following contents:

version: '2'
services:
    zoo1:
        image: zookeeper
        restart: always
        container_name: zoo1
        ports:
            - "2181:2181"
        environment:
            ZOO_MY_ID: 1
            ZOO_SERVERS: server.1=zoo1:2888:3888 server.2=zoo2:2888:3888 server.3=zoo3:2888:3888
 
    zoo2:
        image: zookeeper
        restart: always
        container_name: zoo2
        ports:
            - "2182:2181"
        environment:
            ZOO_MY_ID: 2
            ZOO_SERVERS: server.1=zoo1:2888:3888 server.2=zoo2:2888:3888 server.3=zoo3:2888:3888
 
    zoo3:
        image: zookeeper
        restart: always
        container_name: zoo3
        ports:
            - "2183:2181"environment: ZOO_MY_ID: 3 ZOO_SERVERS: server.1=zoo1:2888:3888 server.2=zoo2:2888:3888 server.3=zoo3:2888:3888    Copy the code

This configuration file tells docker-compose to run three ZooKeeper containers and bind local ports 2181, 2182, and 2183 to port 2181 of the corresponding container. ZOO_MY_ID and ZOO_SERVERS are the two environment variables needed to set up a ZK cluster. ZOO_MY_ID is the ID of the ZK server; it is an integer between 1 and 255 and must be unique within the cluster. ZOO_SERVERS is the list of servers in the ZK cluster; in each server entry, 2888 is the port followers use to connect to the leader and 3888 is the port used for leader election.
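For reference, the image's entrypoint script translates these environment variables into ZooKeeper's native configuration when the container starts. On zoo1 the generated files would look roughly like the sketch below (paths and default values here are based on the official image of that era and may differ in other image versions):

# /data/myid on zoo1, written from ZOO_MY_ID
1

# /conf/zoo.cfg, generated by the entrypoint (abridged)
clientPort=2181
dataDir=/data
dataLogDir=/datalog
tickTime=2000
initLimit=5
syncLimit=2
server.1=zoo1:2888:3888
server.2=zoo2:2888:3888
server.3=zoo3:2888:3888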

Then, in the directory where docker-compose.yml is located, run:

COMPOSE_PROJECT_NAME=zk_test docker-compose up

The ZK cluster then starts up. Once it is running, we can check the state of the three ZK containers with docker-compose:

COMPOSE_PROJECT_NAME=zk_test docker-compose ps
Name   Command                          State   Ports
----------------------------------------------------------------------
zoo1   /docker-entrypoint.sh zkSe ...   Up      0.0.0.0:2181->2181/tcp
zoo2   /docker-entrypoint.sh zkSe ...   Up      0.0.0.0:2182->2181/tcp
zoo3   /docker-entrypoint.sh zkSe ...   Up      0.0.0.0:2183->2181/tcp

Note that we prefixed both the "docker-compose up" and "docker-compose ps" commands with the COMPOSE_PROJECT_NAME=zk_test environment variable. It gives our compose project a name so that it does not get mixed up with other compose projects.
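Alternatively, docker-compose can take the project name through its -p option instead of the environment variable, for example:

docker-compose -p zk_test up -d
docker-compose -p zk_test ps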

We can also connect to the ZK cluster from inside another container. The containers started by docker-compose are all attached to the zktest_default network it created, so we can run:

docker run -it --rm --link zoo1:zk1 --link zoo2:zk2 --link zoo3:zk3 --net zktest_default zookeeper zkCli.sh -server zk1:2181,zk2:2181,zk3:2181

Because port 2181 of zoo1, zoo2, and zoo3 is mapped to local ports 2181, 2182, and 2183 respectively, we can also connect to the ZK cluster from the host with:

zkCli.sh -server localhost:2181,localhost:2182,localhost:2183

We can also connect to a specific ZK server with the nc command and send the stat four-letter command to check its status, for example:

>>> echo stat | nc 127.0.0.1 2181
Zookeeper version: 3.4.9-1757313, built on 08/23/2016 06:50 GMT
Clients:
 /172.18.0.1:49810[0](queued=0,recved=1,sent=0)

Latency min/avg/max: 5/39/74
Received: 4
Sent: 3
Connections: 1
Outstanding: 0
Zxid: 0x200000002
Mode: follower
Node count: 4

>>> echo stat | nc 127.0.0.1 2182
Zookeeper version: 3.4.9-1757313, built on 08/23/2016 06:50 GMT
Clients:
 /172.18.0.1:50870[0](queued=0,recved=1,sent=0)

Latency min/avg/max: 0/0/0
Received: 2
Sent: 1
Connections: 1
Outstanding: 0
Zxid: 0x200000002
Mode: follower
Node count: 4

>>> echo stat | nc 127.0.0.1 2183
Zookeeper version: 3.4.9-1757313, built on 08/23/2016 06:50 GMT
Clients:
 /172.18.0.1:51820[0](queued=0,recved=1,sent=0)

Latency min/avg/max: 0/0/0
Received: 2
Sent: 1
Connections: 1
Outstanding: 0
Zxid: 0x200000002
Mode: leader
Node count: 4

From the above output we can see that zoo1 and zoo2 are followers, while zoo3 is the leader, which confirms that our ZK cluster is indeed up and running.
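If you want one more check that the cluster really replicates data, you can create a znode through one server and read it back through another. The /test path below is just an example, and I assume the same zktest_default network as above:

>>> docker run -it --rm --net zktest_default zookeeper zkCli.sh -server zoo1:2181 create /test "hello"
>>> docker run -it --rm --net zktest_default zookeeper zkCli.sh -server zoo3:2181 get /test

If the second command prints "hello", the write made through zoo1 has been replicated to zoo3.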