Environment

  • Based on apache-zookeeper-3.5.7
  • CentOS 7
  • Official website download address
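
For reference, the 3.5.7 binary release can be fetched from the Apache archive roughly like this (a minimal sketch; the exact URL is an assumption based on the usual archive layout):

# Hypothetical download of the apache-zookeeper-3.5.7 binary tarball
wget https://archive.apache.org/dist/zookeeper/zookeeper-3.5.7/apache-zookeeper-3.5.7-bin.tar.gz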

Cluster planning

  • The configuration on each server is almost identical; only the contents of the myid file differ
            hadoop300    hadoop301    hadoop302
zookeeper       √            √            √

Installation

1. Prepare the installation package

  • Once downloaded, extract it to a location on each server.

Take hadoop300 as an example; the other servers in the cluster look the same:

[hadoop@hadoop300 app]$ pwd
/home/hadoop/app
[hadoop@hadoop300 app]$ ll
total 0
lrwxrwxrwx. 1 hadoop hadoop 44 Jan 13 23:24 jdk -> /home/hadoop/app/manager/jdk_mg/jdk1.8.0_212
drwxrwxr-x. 6 hadoop hadoop 72 Jan 14 19:16 manager
lrwxrwxrwx. 1 hadoop hadoop 64 Jan 14 17:24 zookeeper -> /home/hadoop/app/manager/zookeeper_mg/apache-zookeeper-3.5.7-bin
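
The layout above can be produced roughly as follows (just a sketch: it assumes the tarball sits in the current directory and that you want the same manager/zookeeper_mg layout plus a version-agnostic symlink; adjust paths to your own setup):

# Hypothetical extract + symlink, run on each server
tar -zxvf apache-zookeeper-3.5.7-bin.tar.gz -C /home/hadoop/app/manager/zookeeper_mg
ln -s /home/hadoop/app/manager/zookeeper_mg/apache-zookeeper-3.5.7-bin /home/hadoop/app/zookeeper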

2. Configure ZK environment variables

Edit ~/.bash_profile (vim ~/.bash_profile) and add the following; this must be configured on every server.

# ================= Zookeeper ===============
export ZOOKEEPER_HOME=/home/hadoop/app/zookeeper
export PATH=$PATH:$ZOOKEEPER_HOME/bin
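
After editing, reload the profile in the current shell and do a quick sanity check (not part of the original steps, just a convenient verification):

source ~/.bash_profile
echo $ZOOKEEPER_HOME     # should print /home/hadoop/app/zookeeper
which zkServer.sh        # should resolve to $ZOOKEEPER_HOME/bin/zkServer.sh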

3. Create the myid file

  • This needs to be done on all three servers
  • First create a data directory (any name will do) under ${ZOOKEEPER_HOME} to store the myid file (which uniquely identifies a ZK node) and other ZK data
  • Then write each node's unique ID into its myid file: 1 for hadoop300, 2 for hadoop301, and 3 for hadoop302

The following shows the steps on the hadoop300 node; the other servers are the same, except that the content of the myid file differs.

[hadoop@hadoop300 zookeeper]$ pwd
/home/hadoop/app/zookeeper
[hadoop@hadoop300 zookeeper]$ mkdir data
[hadoop@hadoop300 zookeeper]$ cd data
[hadoop@hadoop300 data]$ echo 1 >> myid
[hadoop@hadoop300 data]$ ll
total 8
-rw-rw-r--. 1 hadoop hadoop 2 Jan 14 18:17 myid
[hadoop@hadoop300 data]$ cat myid 
1
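
If passwordless SSH between the nodes is already in place (the start/stop script later assumes it anyway), the three myid files can also be written in one loop from a single node; this is just an equivalent sketch of the per-node commands above:

# Hypothetical one-shot creation of data/myid on all three nodes (IDs 1..3 in host order)
id=1
for host in hadoop300 hadoop301 hadoop302; do
    ssh $host "mkdir -p /home/hadoop/app/zookeeper/data && echo $id > /home/hadoop/app/zookeeper/data/myid"
    id=$((id + 1))
done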

4. Modify the zoo.cfg configuration file

  • All three servers need the same zoo.cfg configuration
[hadoop@hadoop300 conf]$ pwd
/home/hadoop/app/zookeeper/conf
[hadoop@hadoop300 conf]$ cp zoo_sample.cfg zoo.cfg
[hadoop@hadoop300 conf]$ ll
total 16
-rw-r--r--. 1 hadoop hadoop  535 May  4  2018 configuration.xsl
-rw-r--r--. 1 hadoop hadoop 2712 Feb  7  2020 log4j.properties
-rw-r--r--. 1 hadoop hadoop  922 Jan 14 17:27 zoo.cfg
-rw-r--r--. 1 hadoop hadoop  922 Feb  7  2020 zoo_sample.cfg
[hadoop@hadoop300 conf]$ vim zoo.cfg

The modifications are as follows

# Path to the data directory created in the previous step
dataDir=/home/hadoop/app/zookeeper/data
# ZK cluster server list
# 1. The .1, .2, .3 suffixes correspond to the contents of each server's myid file
# 2. Port 2888 is used for TCP communication between the leader and the followers
# 3. Port 3888 is used for leader election when the leader goes down
server.1=hadoop300:2888:3888
server.2=hadoop301:2888:3888
server.3=hadoop302:2888:3888
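
Since all three servers need the same zoo.cfg, the edited file can simply be pushed to the other two nodes, for example with scp (assuming passwordless SSH; a cluster sync script would work just as well):

# Copy the edited zoo.cfg from hadoop300 to the other nodes
scp /home/hadoop/app/zookeeper/conf/zoo.cfg hadoop301:/home/hadoop/app/zookeeper/conf/
scp /home/hadoop/app/zookeeper/conf/zoo.cfg hadoop302:/home/hadoop/app/zookeeper/conf/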

5. ZK cluster unified start/stop script

[hadoop@hadoop300 shell]$ pwd
/home/hadoop/shell
[hadoop@hadoop300 shell]$ vim zk.sh
[hadoop@hadoop300 shell]$ chmod ug+x zk.sh 
[hadoop@hadoop300 shell]$ ll
-rwxrwxr--. 1 hadoop hadoop 464 Jan 14 17:50 zk.sh

The contents of zk.sh are as follows. Tips:

  • Because the ZK environment variables are configured in ~/.bash_profile, zkServer.sh can be called directly; you could also use the script's absolute path instead.
  • Commands executed remotely over SSH do not load ~/.bash_profile by default, so the script sources it manually. For details, see the note on environment variable issues when executing scripts on a remote host over SSH.
#!/bin/bash

# cluster host list
list=(hadoop300 hadoop301 hadoop302)

case $1 in
"start") {
    for i in ${list[@]}
    do
        echo "---------- zookeeper [$i] Start ------------"
        ssh $i "source ~/.bash_profile; zkServer.sh start"
    done
};;
"stop") {
    for i in ${list[@]}
    do
        echo "---------- zookeeper [$i] Stop ------------"
        ssh $i "source ~/.bash_profile; zkServer.sh stop"
    done
};;
"status") {
    for i in ${list[@]}
    do
        echo "---------- zookeeper [$i] status ------------"
        ssh $i "source ~/.bash_profile; zkServer.sh status"
    done
};;
esac
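
The second tip above can be verified quickly: a command run over SSH normally does not see the variables exported in ~/.bash_profile, which is exactly why the script sources it first (illustrative only; the exact behavior depends on your shell setup):

# Typically prints an empty line, because ~/.bash_profile is not loaded
ssh hadoop301 'echo $ZOOKEEPER_HOME'
# Prints /home/hadoop/app/zookeeper after sourcing the profile
ssh hadoop301 'source ~/.bash_profile; echo $ZOOKEEPER_HOME'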

Then make the script callable from anywhere, either by adding its directory to PATH via an environment variable or by symlinking it into a system bin directory, as shown below.
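
Either option might look like this (a sketch; /usr/local/bin as the symlink target is just an example of a directory already on PATH):

# Option 1: add the script directory to PATH in ~/.bash_profile
export PATH=$PATH:/home/hadoop/shell

# Option 2: symlink the script into a directory that is already on PATH
sudo ln -s /home/hadoop/shell/zk.sh /usr/local/bin/zk.sh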

6. Start and test

  • You can see that hadoop301 is elected leader
# start
[hadoop@hadoop300 data]$ zk.sh start
---------- zookeeper [hadoop300] Start ------------
ZooKeeper JMX enabled by default
Using config: /home/hadoop/app/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
---------- zookeeper [hadoop301] Start ------------
ZooKeeper JMX enabled by default
Using config: /home/hadoop/app/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
---------- zookeeper [hadoop302] Start ------------
ZooKeeper JMX enabled by default
Using config: /home/hadoop/app/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED

# check status
[hadoop@hadoop300 data]$ zk.sh status
---------- zookeeper [hadoop300] status ------------
ZooKeeper JMX enabled by default
Using config: /home/hadoop/app/zookeeper/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost.
Mode: follower
---------- zookeeper [hadoop301] status ------------
ZooKeeper JMX enabled by default
Using config: /home/hadoop/app/zookeeper/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost.
Mode: leader
---------- zookeeper [hadoop302] status ------------
ZooKeeper JMX enabled by default
Using config: /home/hadoop/app/zookeeper/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost.
Mode: follower

The ZK process is named QuorumPeerMain:
[hadoop@hadoop300 ~]$ xcall jps
--------- hadoop300 ----------
1749 QuorumPeerMain
1813 Jps
--------- hadoop301 ----------
1623 QuorumPeerMain
1690 Jps
--------- hadoop302 ----------
1687 Jps
1627 QuorumPeerMain
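
As an optional extra check (not shown in the original output), the bundled client can connect to any node and list the root znode:

# Connect with the ZooKeeper CLI and browse the tree
zkCli.sh -server hadoop300:2181
# inside the CLI prompt:
#   ls /
#   quit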