Zookeeper is an open-source distributed framework from Apache that provides coordination services for distributed applications.

Zookeeper stores and manages the data that applications care about. Whenever the state of that data changes, Zookeeper notifies the clients that have registered with it. In short: Zookeeper = file system + notification mechanism.

1 Zookeeper Data Structure

The data structure of Zookeeper is similar to that of a Unix file system: as a whole it can be viewed as a tree. Unlike a Unix file system, however, every node can store data. Each node is called a ZNode.
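As a minimal sketch of this tree using the official Java client (the connection string, paths, and payloads here are illustrative assumptions, not part of the original article):

```java
import org.apache.zookeeper.*;
import java.util.concurrent.CountDownLatch;

// A minimal sketch of the ZNode tree using the official Java client.
// Connection string, paths, and payloads are illustrative.
public class ZNodeTreeDemo {
    public static void main(String[] args) throws Exception {
        CountDownLatch connected = new CountDownLatch(1);
        ZooKeeper zk = new ZooKeeper("localhost:2181", 15000, event -> {
            if (event.getState() == Watcher.Event.KeeperState.SyncConnected) {
                connected.countDown();
            }
        });
        connected.await(); // wait until the session is established

        // ZNodes form a tree of slash-separated paths; each node can hold data.
        zk.create("/app", "root data".getBytes(),
                ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
        zk.create("/app/db", "jdbc:mysql://db:3306".getBytes(),
                ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);

        // Read the data back from a node in the tree.
        System.out.println(new String(zk.getData("/app/db", false, null)));
        zk.close();
    }
}
```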

1.1 Four Types of ZNodes

  • Persistent node: the node still exists after the client disconnects from Zookeeper.
  • Persistent sequential node: the node still exists after the client disconnects, and Zookeeper appends a sequence number to the node name.
  • Ephemeral node: the node is deleted after the client disconnects from Zookeeper.
  • Ephemeral sequential node: the node is deleted after the client disconnects; while it exists, its name carries a sequence number appended by Zookeeper.

Note: if the sequential flag is set when a ZNode is created, Zookeeper appends a sequence number to the ZNode's name. The sequence number is a monotonically increasing counter maintained by the parent node.
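In the Java client these four types correspond to CreateMode values. A minimal sketch (the paths are illustrative, and zk is assumed to be an already-connected handle):

```java
import org.apache.zookeeper.*;

// Creates one znode of each of the four types via CreateMode.
// Assumes `zk` is an already-connected ZooKeeper handle; paths are illustrative.
public class ZNodeTypesDemo {
    static void createAllTypes(ZooKeeper zk) throws Exception {
        byte[] data = "demo".getBytes();

        // Survives client disconnect.
        zk.create("/persistent", data, ZooDefs.Ids.OPEN_ACL_UNSAFE,
                CreateMode.PERSISTENT);

        // Survives disconnect; the name gets a sequence suffix, e.g. /pseq0000000003.
        String seqPath = zk.create("/pseq", data, ZooDefs.Ids.OPEN_ACL_UNSAFE,
                CreateMode.PERSISTENT_SEQUENTIAL);
        System.out.println("created " + seqPath);

        // Deleted automatically when the creating session ends.
        zk.create("/ephemeral", data, ZooDefs.Ids.OPEN_ACL_UNSAFE,
                CreateMode.EPHEMERAL);

        // Deleted on session end, with a sequence suffix.
        zk.create("/eseq", data, ZooDefs.Ids.OPEN_ACL_UNSAFE,
                CreateMode.EPHEMERAL_SEQUENTIAL);
    }
}
```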

1.2 Stat Structure

A ZNode's Stat contains the following information:

  • Czxid: the zxid of the transaction that created the node.

Every change to ZooKeeper's state is stamped with a zxid (ZooKeeper transaction ID). Zxids impose a total order on all changes: each modification has a unique zxid, and if zxid1 is less than zxid2, then the change with zxid1 happened before the change with zxid2.

  • Ctime: the time the znode was created, in milliseconds since the epoch (1970).
  • Mzxid: the zxid of the transaction that last modified the znode.
  • Mtime: the time the znode was last modified, in milliseconds since the epoch (1970).
  • Pzxid: the zxid of the last change to the znode's children.
  • Cversion: the number of changes to the znode's children.
  • Dataversion: the number of changes to the znode's data.
  • AclVersion: the number of changes to the znode's ACL.
  • EphemeralOwner: for an ephemeral node, the session ID of the node's owner; zero if the node is not ephemeral.
  • DataLength: the length of the znode's data.
  • NumChildren: the number of children of the znode.
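These fields can be read through the Java client's Stat class. A minimal sketch (the handle and path are illustrative assumptions):

```java
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.Stat;

// Prints a few Stat fields of a znode. `zk` is assumed to be a connected
// handle and "/app" an existing path; both are illustrative.
public class StatDemo {
    static void printStat(ZooKeeper zk) throws Exception {
        Stat stat = zk.exists("/app", false); // null if the path does not exist
        if (stat == null) return;
        System.out.println("czxid:          " + stat.getCzxid());
        System.out.println("mzxid:          " + stat.getMzxid());
        System.out.println("ctime:          " + stat.getCtime());
        System.out.println("dataVersion:    " + stat.getVersion());
        System.out.println("cversion:       " + stat.getCversion());
        System.out.println("ephemeralOwner: " + stat.getEphemeralOwner());
        System.out.println("dataLength:     " + stat.getDataLength());
        System.out.println("numChildren:    " + stat.getNumChildren());
    }
}
```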

2 Application Scenarios of Zookeeper

Zookeeper can be used for unified naming services, unified configuration management, unified cluster management, and the dynamic joining and leaving of server nodes.

2.1 Unified Naming Service

In a distributed environment, a unified naming service is often required. If a service is deployed as several replicas, calling one specific replica directly is not ideal, because we do not know which replica can handle our request fastest. Instead, we can register all replicas under one unified name and let the naming service balance the load internally, so that the optimal replica is invoked.
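A minimal sketch of this pattern (the path, address format, and random pick are illustrative assumptions, not a prescribed design):

```java
import org.apache.zookeeper.*;
import java.util.List;

// Naming-service sketch: each replica registers itself under a shared path,
// and callers pick one of the registered addresses. Assumes the parent path
// "/services/demo" already exists and at least one replica is registered.
public class NamingDemo {
    static void register(ZooKeeper zk, String address) throws Exception {
        // Ephemeral: the entry disappears automatically if the replica dies.
        zk.create("/services/demo/replica-", address.getBytes(),
                ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL_SEQUENTIAL);
    }

    static String lookup(ZooKeeper zk) throws Exception {
        List<String> replicas = zk.getChildren("/services/demo", false);
        // Naive load balancing: pick a random live replica.
        String chosen = replicas.get((int) (Math.random() * replicas.size()));
        return new String(zk.getData("/services/demo/" + chosen, false, null));
    }
}
```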

2.2 Unified Configuration Management

In a distributed environment, configuration file synchronization can be implemented with Zookeeper:

  1. Write the configuration to a ZNode in Zookeeper.
  2. Each client service sets a watch on that ZNode.
  3. When the ZNode's data changes, Zookeeper notifies every client service, as sketched below.
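A minimal sketch of this pattern with the Java client (the path and payload are illustrative). Note that a standard watch fires only once, so it must be re-registered after each notification:

```java
import org.apache.zookeeper.*;

// Watches a config znode and reloads it whenever the data changes.
// Assumes `zk` is connected and "/config/app" exists; both are illustrative.
public class ConfigWatcherDemo {
    static void watchConfig(ZooKeeper zk) throws Exception {
        byte[] config = zk.getData("/config/app", event -> {
            if (event.getType() == Watcher.Event.EventType.NodeDataChanged) {
                try {
                    watchConfig(zk); // re-read and re-register the one-shot watch
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        }, null);
        System.out.println("loaded config: " + new String(config));
    }
}
```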

2.3 Unified Cluster Management

Zookeeper can monitor node status changes in real time. Suppose a service runs on three nodes: if one of them goes down, the other two should learn about it immediately. To achieve this, each of the three nodes registers itself under a shared ZNode in Zookeeper and watches that ZNode. When the ZNode's children change, every node receives the change in real time.
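A minimal sketch of this membership pattern (the "/cluster" path and node names are illustrative assumptions):

```java
import org.apache.zookeeper.*;
import java.util.List;

// Cluster-membership sketch: each node registers an ephemeral child under
// "/cluster" and watches the children. Assumes "/cluster" already exists.
public class ClusterMonitorDemo {
    static void join(ZooKeeper zk, String nodeName) throws Exception {
        // Ephemeral: if this node's session dies, its entry vanishes.
        zk.create("/cluster/" + nodeName, new byte[0],
                ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
    }

    static void watchMembers(ZooKeeper zk) throws Exception {
        List<String> members = zk.getChildren("/cluster", event -> {
            if (event.getType() == Watcher.Event.EventType.NodeChildrenChanged) {
                try {
                    watchMembers(zk); // one-shot watch: re-list and re-register
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        });
        System.out.println("live members: " + members);
    }
}
```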

How the Listener Works

  1. A main() thread is created.
  2. The main() thread creates two threads: one for the network connection (connect) and one for listening (listener).
  3. Watch registration requests are sent to Zookeeper through the connect thread.
  4. Zookeeper adds the registered watch to its list of registered listeners.
  5. When Zookeeper detects a data or path change, it sends a notification to the listener thread.
  6. The listener thread then calls the process() method, as the sketch below shows.
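A minimal sketch of this callback path (the connection string and watched path are illustrative): the watcher's process() method runs on the client's event (listener) thread, not on main().

```java
import org.apache.zookeeper.*;
import java.util.concurrent.CountDownLatch;

// Shows notifications being delivered to Watcher.process() on the client's
// event thread. Connection string and watched path are illustrative.
public class ListenerDemo {
    public static void main(String[] args) throws Exception {
        CountDownLatch connected = new CountDownLatch(1);

        ZooKeeper zk = new ZooKeeper("localhost:2181", 15000, event -> {
            // process() runs on the client's event (listener) thread.
            System.out.println(Thread.currentThread().getName()
                    + " got event: " + event.getType() + " " + event.getPath());
            if (event.getState() == Watcher.Event.KeeperState.SyncConnected) {
                connected.countDown();
            }
        });

        connected.await();           // wait for the session to be established
        zk.exists("/watched", true); // register a watch via the default watcher
        Thread.sleep(60_000);        // keep the demo alive to receive events
        zk.close();
    }
}
```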

3 Zookeeper Cluster

A Zookeeper cluster does not designate Master and Slave roles up front. Instead, while Zookeeper is running, a Leader node is chosen through an internal election mechanism, and the other nodes become Followers or Observers.

A node declared as an Observer participates in neither the election process nor the half-write success policy.

Half-write success policy: after the Leader node receives a write request, it broadcasts the request to every server. Each server adds the request to its queue of pending writes and sends an acknowledgment back to the Leader. Once the Leader has received acknowledgments from more than half of the servers, the write can be executed: the Leader sends a commit message to every server, and each server performs the write when it receives the commit.
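A toy sketch of the "more than half" commit rule (a deliberately simplified illustration of the counting, not the real ZAB protocol; the ensemble size and ack count are made up):

```java
// Toy illustration of the majority-commit rule: the leader counts acks and
// commits once strictly more than half of the ensemble has acknowledged.
public class QuorumDemo {
    public static void main(String[] args) {
        int ensembleSize = 5; // illustrative ensemble of 5 servers
        int acks = 0;

        // Pretend three servers acknowledge the proposed write.
        for (int server = 0; server < 3; server++) {
            acks++;
        }

        // Majority means strictly more than half of the ensemble.
        if (acks > ensembleSize / 2) {
            System.out.println("quorum reached (" + acks + "/" + ensembleSize
                    + "): leader broadcasts COMMIT");
        }
    }
}
```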

Followers and Observers serve only read requests; when they receive a write request, they forward it to the Leader node.

A Zookeeper cluster can run normally as long as more than half of its nodes are alive, so it is best to install an odd number of machines. For example, a 5-node cluster tolerates 2 failures, but a 6-node cluster also tolerates only 2, because 3 out of 6 is not more than half; the extra machine adds nothing to availability.

3.1 Election Mechanism

(1) Server 1 starts and initiates an election. Server 1 votes for itself. It now has one vote, which is not more than half (3 votes out of 5), so the election cannot be completed and server 1 remains in the LOOKING state.

(2) Server 2 starts and another election is initiated. Servers 1 and 2 each vote for themselves and exchange vote information. Server 1 finds that server 2's ID is larger than that of its current candidate (itself), so it changes its vote to server 2. Now server 1 has 0 votes and server 2 has 2 votes. Neither has more than half, so the election cannot be completed, and both servers remain LOOKING.

(3) Server 3 starts and initiates an election. Servers 1 and 2 both change their votes to server 3. The result: server 1 has 0 votes, server 2 has 0 votes, and server 3 has 3 votes. Server 3 now has more than half of the votes and is elected Leader. Servers 1 and 2 change their state to FOLLOWING, and server 3 changes its state to LEADING.

(4) Server 4 starts and initiates an election. At this point, servers 1, 2, and 3 are no longer LOOKING and will not change their votes. The result: 3 votes for server 3, 1 vote for server 4. Server 4 therefore follows the majority, changes its vote to server 3, and changes its state to FOLLOWING.

(5) Server 5 starts and behaves just like server 4: it votes for server 3 and changes its state to FOLLOWING.
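The walkthrough above can be reproduced with a toy simulation (a deliberately simplified sketch: it compares only server IDs and ignores the zxid and epoch comparisons that the real election also performs):

```java
import java.util.*;

// Toy simulation of the startup election above: 5 servers start one by one;
// each LOOKING server votes for the largest ID it has seen, and a candidate
// is elected once it holds more than half of all votes.
public class ElectionDemo {
    public static void main(String[] args) {
        int total = 5;
        Map<Integer, Integer> voteOf = new LinkedHashMap<>(); // server -> vote
        Integer leader = null;

        for (int id = 1; id <= total; id++) {
            voteOf.put(id, id); // a new server first votes for itself

            if (leader == null) {
                // All LOOKING servers switch their vote to the largest ID started.
                for (int s : voteOf.keySet()) voteOf.put(s, id);
            } else {
                voteOf.put(id, leader); // a leader exists; just follow it
            }

            // Count votes for the newest server's current candidate.
            int candidate = voteOf.get(id);
            int count = 0;
            for (int v : voteOf.values()) if (v == candidate) count++;

            if (leader == null && count > total / 2) {
                leader = candidate;
                System.out.println("server " + candidate
                        + " elected LEADING with " + count + " votes");
            } else {
                System.out.println("after server " + id + " starts, votes: "
                        + voteOf + (leader == null ? " (still LOOKING)" : ""));
            }
        }
    }
}
```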

