Introduction to Kafka

Kafka was originally developed at LinkedIn in Scala as a multi-partition, multi-replica, ZooKeeper-coordinated distributed messaging system and was later donated to the Apache Software Foundation. Today Kafka is positioned as a distributed stream processing platform and is widely used for its high throughput, persistence, horizontal scalability, and support for stream processing.

Prior to release 0.10, Kafka was primarily a distributed, high-throughput, low-latency messaging engine.

Since version 0.10, Kafka has provided connectors (Kafka Connect) and stream processing (Kafka Streams), and its positioning has shifted from a messaging engine to a streaming platform. Pulsar is another popular streaming platform, and comparisons between Pulsar and Kafka are common, mostly in terms of performance, architecture, and features.

Some important concepts in Kafka

  • Producer: the message producer; a client that sends messages to Kafka brokers (see the code sketch after this list).
  • Consumer: the message consumer; a client that fetches messages from Kafka brokers.
  • Consumer Group (CG): a group of consumers. Each consumer in the group is responsible for consuming a different subset of partitions, which increases consumption throughput. A partition can be consumed by only one consumer within a group, and consumer groups do not affect each other. Every consumer belongs to a consumer group, so a consumer group is logically a single subscriber.
  • Broker: a single Kafka server. A cluster consists of multiple brokers, and a single broker can host partitions of multiple topics.
  • Topic: can be understood as a queue. Topics categorize messages, and producers and consumers work against the same topic.
  • Partition: to achieve scalability and concurrency, a very large topic can be distributed across multiple brokers (servers). A topic can be divided into multiple partitions, each of which is an ordered queue.
  • Replica: to back up data, Kafka provides a replication mechanism. Each partition of a topic has several replicas: one leader and several followers.
  • Leader: the "master" replica among a partition's replicas. Producers send data to the leader, and consumers consume data from the leader.
  • Follower: a "slave" replica that synchronizes data from the leader in real time. When the leader fails, a follower becomes the new leader.
  • Offset: the position of a consumer within a partition. It records how far the consumer has read, so that when a consumer crashes and recovers it can resume consumption from that position.
  • Zookeeper: a Kafka cluster can work properly only when ZooKeeper helps Kafka store and manage cluster metadata.
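
To make the producer, consumer, consumer group, and offset concepts concrete, here is a minimal sketch using the standard Kafka Java client. The broker address localhost:9092, the topic name quickstart, and the group id demo-group are placeholders chosen for illustration, not values from the original article.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class QuickstartExample {

    public static void main(String[] args) {
        // Producer: sends messages to a topic on the broker (placeholder address).
        Properties producerProps = new Properties();
        producerProps.put("bootstrap.servers", "localhost:9092");
        producerProps.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        producerProps.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(producerProps)) {
            // Records with the same key go to the same partition, preserving order within it.
            producer.send(new ProducerRecord<>("quickstart", "key-1", "hello kafka"));
        }

        // Consumer: joins a consumer group; each partition is assigned to exactly one
        // consumer in the group, and offsets track the consumption position.
        Properties consumerProps = new Properties();
        consumerProps.put("bootstrap.servers", "localhost:9092");
        consumerProps.put("group.id", "demo-group");
        consumerProps.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        consumerProps.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consumerProps)) {
            consumer.subscribe(List.of("quickstart"));
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("partition=%d offset=%d value=%s%n",
                        record.partition(), record.offset(), record.value());
            }
        }
    }
}
```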

Kafka principle

Controller election and recovery

The controller is one of the core components of Kafka. Its main role is to coordinate and manage the entire Kafka cluster with the help of ZooKeeper. Kafka uses ZooKeeper's leader election mechanism: every broker campaigns to become the controller, but only one of them succeeds.

The controller has the following responsibilities:

  • Listen for partition-related changes, such as fine-grained reassignment of an existing topic's partitions performed with the kafka-reassign-partitions script
  • Listen for topic-related changes
  • Listen for changes related to the broker

Controller election: each broker acts as a ZooKeeper client and attempts to create the ephemeral node /controller on the ZooKeeper server, but only one broker can create it successfully. Because /controller is an ephemeral node, it is deleted when the current controller fails or its session expires. At that point all brokers re-elect the leader by attempting to create the /controller ephemeral node again.
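
As a conceptual illustration of this election (not Kafka's actual controller code), the sketch below shows how a ZooKeeper client might try to create the ephemeral /controller node: only one client can succeed, and the others learn they lost the election. The class name, the JSON payload, and the brokerId parameter are assumptions made for the sketch.

```java
import java.nio.charset.StandardCharsets;

import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class ControllerElectionSketch {

    // Try to become the controller by creating the ephemeral /controller node.
    // Returns true if this broker won the election.
    static boolean tryElect(ZooKeeper zk, int brokerId) throws Exception {
        byte[] data = ("{\"brokerid\":" + brokerId + "}").getBytes(StandardCharsets.UTF_8);
        try {
            // EPHEMERAL: the node disappears automatically when the session ends,
            // which is what allows a new election after the controller fails.
            zk.create("/controller", data, ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
            return true;
        } catch (KeeperException.NodeExistsException e) {
            // Another broker created the node first and is the active controller.
            return false;
        }
    }
}
```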

Kafka stores the information of the broker acting as controller in the ZooKeeper /controller node. Each broker also keeps the broker id of the current controller in memory, which can be referred to as activeControllerId, and registers a listener on the /controller node to watch for changes to the node's data.

When the data in the /controller node changes, each broker updates the activeControllerId stored in its own memory. If a broker was the controller before the change and its own broker id no longer matches the new activeControllerId, it needs to "retire" and release the relevant resources. The controller may go offline because of an exception, which causes the ephemeral /controller node to be deleted automatically, but the node may also be deleted for other reasons.

Whenever the /controller node is deleted, all brokers take part in a new election. If necessary, a new election can be triggered manually by deleting the /controller node, although shutting down the broker that is currently the controller, or manually writing a new broker id into the /controller node's data, will also trigger a new election.
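
The watch-and-re-elect behaviour described above can be sketched as follows, again as a conceptual illustration rather than Kafka's real implementation; it reuses the hypothetical tryElect method from the previous sketch.

```java
import java.nio.charset.StandardCharsets;

import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class ControllerWatchSketch {

    // Watch /controller: refresh the cached controller information when its data
    // changes, and re-run the election when the node is deleted.
    static void watchController(ZooKeeper zk, int myBrokerId) throws Exception {
        Watcher watcher = new Watcher() {
            @Override
            public void process(WatchedEvent event) {
                try {
                    if (event.getType() == Watcher.Event.EventType.NodeDeleted) {
                        // The controller went away; every broker tries to become the new one.
                        ControllerElectionSketch.tryElect(zk, myBrokerId);
                    }
                    // Re-register the watch and refresh the cached controller data
                    // (the activeControllerId kept in memory).
                    byte[] data = zk.getData("/controller", this, null);
                    System.out.println("controller data: " + new String(data, StandardCharsets.UTF_8));
                } catch (Exception e) {
                    // The node may not exist yet or the session may have expired;
                    // ignored in this sketch.
                }
            }
        };
        zk.getData("/controller", watcher, null);
    }
}
```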

Election of the partition leader

The election of a partition's leader replica is implemented by the Kafka controller. A leader election needs to be performed when a partition is created (creating a topic or adding partitions both involve creating partitions) or when a partition comes back online (for example, when the partition's original leader replica goes offline and a new leader must be elected so the partition can continue to serve clients).

The basic idea is to walk through the replicas in the order of the partition's AR (assigned replicas) set and pick the first surviving replica that is also in the ISR (in-sync replicas) set. A partition's AR set is specified at assignment time, and the order of the replicas in it stays the same as long as no reassignment occurs, whereas the order of the replicas in the ISR set may change. Note that the election follows the order of the AR, not the order of the ISR. For example, suppose the cluster has three nodes, Broker0, Broker1, and Broker2, and at some point there is a topic with three partitions and a replication factor of 3.

Details for QuickStart are as follows:

Suppose Broker0 is shut down at this point. For partition 2, the live AR becomes [1, 2] and the ISR becomes [2, 1]. Looking at the details of the topic QuickStart again, the leader of partition 2 becomes replica 1 rather than replica 2, because the election follows AR order.

If no replica in the ISR set is available, the unclean.leader.election.enable parameter (which defaults to false) also needs to be checked. If this parameter is set to true, electing a leader from outside the ISR is allowed: the first surviving replica found in AR order becomes the leader.
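
The selection rule described above can be sketched as plain code, assuming the AR is given as an ordered list of broker ids and the ISR and live brokers as sets; this is an illustration of the rule, not Kafka's internal code, and the class and method names are invented for the sketch.

```java
import java.util.List;
import java.util.Optional;
import java.util.Set;

public class PartitionLeaderElectionSketch {

    // Pick the leader: the first live replica, in AR order, that is also in the ISR.
    // If none qualifies and unclean leader election is enabled, fall back to the
    // first live replica in AR order even if it is out of sync.
    static Optional<Integer> electLeader(List<Integer> ar,
                                         Set<Integer> isr,
                                         Set<Integer> liveBrokers,
                                         boolean uncleanLeaderElectionEnable) {
        for (int replica : ar) {
            if (liveBrokers.contains(replica) && isr.contains(replica)) {
                return Optional.of(replica);
            }
        }
        if (uncleanLeaderElectionEnable) {
            for (int replica : ar) {
                if (liveBrokers.contains(replica)) {
                    return Optional.of(replica); // may lose data not yet replicated
                }
            }
        }
        return Optional.empty(); // partition stays offline
    }
}
```

With the numbers from the example above, electLeader(List.of(1, 2), Set.of(2, 1), Set.of(1, 2), false) returns 1, matching the leader change described for partition 2 of the topic QuickStart.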

A leader election is also performed when a partition is reassigned. The election strategy is similar: find the first surviving replica in the reassigned AR list that is in the current ISR list. When a preferred-replica election occurs, the leader can be set directly to the preferred replica, which is the first replica in the AR set.

In another case, when a node is gracefully shut down (i.e., a ControlledShutdown), all leader replicas on that node go offline, so the affected partitions need to elect new leaders. The idea here is to find the first surviving replica in the AR list that is in the current ISR list, while also making sure that the replica is not on the node being shut down.
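
The controlled-shutdown case only adds one more condition to the same rule. Here is a sketch of that variant, under the same assumptions (invented names, illustrative only) as the previous sketch.

```java
import java.util.List;
import java.util.Optional;
import java.util.Set;

public class ControlledShutdownElectionSketch {

    // Same rule as before, but replicas on the broker that is being shut down
    // are not eligible to become the leader.
    static Optional<Integer> electLeader(List<Integer> ar,
                                         Set<Integer> isr,
                                         Set<Integer> liveBrokers,
                                         int shuttingDownBroker) {
        return ar.stream()
                 .filter(replica -> replica != shuttingDownBroker)
                 .filter(liveBrokers::contains)
                 .filter(isr::contains)
                 .findFirst();
    }
}
```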

These are the core concepts of Kafka. In the next article, we will introduce Kafka's partition allocation strategies.

To learn more

Cloud Wisdom's open-source OMP (Operation Management Platform) is a lightweight, converged, and intelligent operation and maintenance platform. It provides functions such as management, deployment, monitoring, inspection, self-healing, backup, and recovery, offering convenient operation and maintenance capabilities and service management for users. Besides improving the efficiency of operation and maintenance personnel, it greatly improves business continuity and security. Click the links below to star OMP and learn more.

GitHub address: github.com/CloudWise-O…

Gitee address: gitee.com/CloudWise/O…