

This series of notes is based on the Geek Time course Kafka Core Technology and Practice.

Contents

  • Kafka terminology
  • Kafka version evolution
  • Online deployment
  • Key Cluster Parameters

Kafka terminology

  • Message: Record. Kafka is a message engine, and messages are the main objects that Kafka processes.
  • Topic: Topic. A logical container that carries messages; in practice, different topics are used to separate different business domains.
  • Partition: Partition. An ordered, immutable sequence of messages. Each topic can have multiple partitions.
  • Message offset: Offset. The position of each message within its partition, a monotonically increasing and immutable value.
  • Replica: Replica. In Kafka the same message can be copied to multiple places to provide data redundancy; these copies are called replicas. Replicas come in two roles, leader replicas and follower replicas, each with its own division of labor. Replicas exist at the partition level: each partition can be configured with multiple replicas for high availability.
  • Producer: Producer. An application that publishes new messages to a topic.
  • Consumer: Consumer. An application that subscribes to new messages from topics.
  • Consumer offset: Consumer Offset. Tracks a consumer's consumption progress; each consumer maintains its own consumer offset.
  • Consumer group: Consumer Group. A group of consumer instances that together consume multiple partitions to achieve high throughput.
  • Broker: The server side of Kafka consists of service processes called brokers. A Kafka cluster consists of multiple brokers that receive and process requests from clients and persist messages.
  • Rebalance: Rebalance. The process by which, when a consumer instance in a consumer group fails, the surviving instances automatically redistribute the subscribed topic partitions among themselves. Rebalance is an important means of achieving high availability on the Kafka consumer side. (A minimal producer/consumer sketch follows this list.)
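To make the terms concrete, here is a minimal sketch using the console tools that ship with Kafka; the topic name test, the group name my-group, and the broker address localhost:9092 are assumptions for illustration:

    # publish messages to a topic (acts as a Producer)
    bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test

    # read the topic as part of a consumer group (acts as a Consumer)
    bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --group my-group --from-beginning

Starting a second consumer with the same --group value triggers a rebalance, and the topic's partitions are redistributed between the two instances.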

Kafka version evolution

  • Kafka has gone through seven major versions: 0.7, 0.8, 0.9, 0.10, 0.11, 1.0, and 2.0.
  • When Kafka evolved from 0.7 to 0.8, the replica mechanism was introduced.
  • 0.8.2.0 introduced a new version of the Producer API, the producer that is configured with broker addresses directly. My advice is to use newer versions whenever possible. If you cannot move to a newer major version, at least upgrade to 0.8.2.2, where the old consumer API is relatively stable. Even on 0.8.2.2, though, do not use the new Producer API, which is still buggy in that release.
  • In 0.9.0.0, the new Producer API became relatively stable.
  • 0.10.0.0 was a major milestone release because it introduced Kafka Streams. If you are still on version 0.10, I strongly recommend upgrading to at least 0.10.2.2 and switching to the new Consumer API. Note also that 0.10.2.2 fixes a bug that could degrade producer performance, which is another reason to upgrade to it.
  • In June 2017 the community released version 0.11.0.0, which brought two major feature changes: one is the idempotent Producer API together with the Transaction API; the other is a rework of the Kafka message format. If you are unsure whether version 1.0 is ready for your production environment, at least upgrade to 0.11.0.3, because the message engine features in this release are already very solid.
  • The 1.0 and 2.0 versions, in my opinion, are mostly improvements to Kafka Streams and don’t introduce many major features in the messaging engine.

Online deployment

Key Cluster Parameters

Broker parameters

log.dirs: This is an important parameter that specifies the directory paths the Broker uses for its log files. How should log.dirs and log.dir be set? Always set log.dirs and do not set log.dir (note the missing "s": log.dir takes only a single path and exists merely to supplement log.dirs). More importantly, in a production environment you must give log.dirs multiple paths, in CSV format, i.e. comma-separated, such as /home/kafka1,/home/kafka2,/home/kafka3. If possible, make sure these directories are mounted on different physical disks. This has two advantages:

1. Improved read/write performance: compared with a single disk, multiple physical disks provide higher read/write throughput.

2. Failover: a powerful feature introduced in Kafka 1.1. In the past, the Kafka Broker shut down as soon as any disk it used failed; since 1.1, data on a broken disk is automatically transferred to a healthy disk and the Broker keeps working. Remember the discussion of whether Kafka needs RAID? This improvement is the basis for doing without RAID: without this failover, we could only rely on RAID for protection.
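As a sketch, the corresponding lines in server.properties might look like this (the /home/kafka* paths are the example paths from above, each ideally on its own physical disk):

    # multiple log directories, comma-separated; one per physical disk
    log.dirs=/home/kafka1,/home/kafka2,/home/kafka3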

zookeeper.connect: You can specify a value such as zk1:2181,zk2:2181,zk3:2181. What if multiple Kafka clusters share the same ZooKeeper ensemble? That is where chroot comes in. The chroot is a ZooKeeper concept, similar to an alias. If you have two Kafka clusters, say kafka1 and kafka2, the zookeeper.connect parameter of the two clusters can be specified as zk1:2181,zk2:2181,zk3:2181/kafka1 and zk1:2181,zk2:2181,zk3:2181/kafka2 respectively.
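In server.properties form, using the two hypothetical clusters above:

    # cluster "kafka1" (note the chroot suffix after the host list)
    zookeeper.connect=zk1:2181,zk2:2181,zk3:2181/kafka1
    # cluster "kafka2" shares the ensemble under a different chroot
    # zookeeper.connect=zk1:2181,zk2:2181,zk3:2181/kafka2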

listeners: Listeners tell external connectors what protocol to use to access Kafka on a given host name and port. More specifically, the value is a comma-separated list of triples, each in the format <protocol name>://<host name>:<port number>. The protocol name may be a standard one, such as PLAINTEXT for plaintext transmission or SSL for encrypted transmission over SSL/TLS, or it may be a protocol name you define yourself, such as CONTROLLER://localhost:9092.
advertised.listeners: Advertised means declared or published: this is the set of listeners the Broker advertises to clients, and it exists mainly for external access; clients that reach Kafka from the internal network do not need this parameter. As for host names, I am often asked whether to use an IP address or a host name in these settings. My general advice: use host names everywhere, in both the Broker and the client application configurations. The Broker source code also works with host names, so if you connect by IP address somewhere, the connection may fail.

host.name/port: Do not specify values for these at all; they are deprecated.
listener.security.protocol.map: If you define a custom protocol name, you must also set the listener.security.protocol.map parameter to tell Kafka which security protocol that name uses underneath. For example, listener.security.protocol.map=CONTROLLER:PLAINTEXT says that the custom CONTROLLER protocol transmits data in plaintext, without encryption.
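Putting the three listener parameters together, a server.properties sketch (the host name kafka1.example.com, the port numbers, and the CONTROLLER listener are illustrative assumptions, not required values):

    # listeners the Broker binds to; CONTROLLER is a custom protocol name
    listeners=PLAINTEXT://0.0.0.0:9092,CONTROLLER://localhost:9093
    # map every listener name to a real security protocol (PLAINTEXT = unencrypted)
    listener.security.protocol.map=PLAINTEXT:PLAINTEXT,CONTROLLER:PLAINTEXT
    # what the Broker advertises to clients; prefer host names over IP addresses
    advertised.listeners=PLAINTEXT://kafka1.example.com:9092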
auto.create.topics.enable: Whether to allow automatic creation of topics. I recommend setting it to false, that is, not allowing topics to be created automatically. Our production environment has many topics with strange names, probably because this parameter was set to true.
unclean.leader.election.enable: Whether to allow unclean leader election. Only replicas that have kept reasonably complete data are eligible to run for leader; replicas that lag too far behind are not. But what if all the replicas holding the most data are dead? If this parameter is false, Kafka sticks to the rule and never lets a badly lagging replica run for leader, at the cost of the partition becoming unavailable because it has no leader left. If true, Kafka is allowed to pick one of these "slow" replicas as the leader. Since I don't know which version of Kafka you are running, I recommend explicitly setting it to false.

auto.leader.rebalance.enable: Whether to allow periodic leader elections. Setting this to true lets Kafka periodically re-elect the leaders of some topic partitions; the re-election is not done blindly and happens only when certain conditions are met. Strictly speaking, the biggest difference from the election governed by the previous parameter is that this one does not elect a missing leader but replaces a working one! I recommend setting this parameter to false in production.
log.retention.{hours|minutes|ms}: Three sibling parameters that control how long messages are kept. In terms of priority, ms is highest, minutes second, and hours lowest. Although the ms setting has the highest priority, it is more common to set the hours level, for example log.retention.hours=168 means data is kept for 7 days by default.
log.retention.bytes: The total disk capacity the Broker reserves for messages. The default is -1, meaning you can store as much data as you like on the Broker.
message.max.bytes: Controls the maximum message size the Broker can accept. The default, 1000012 bytes, is less than 1MB and too small: messages larger than 1MB are common in real-world scenarios, so setting a larger value in a production environment is the safe bet. It is, after all, only a yardstick for the largest message the Broker will handle, and setting it larger takes no extra disk space.
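A sketch of these broker-side recommendations as they might appear in server.properties (the 10MB message.max.bytes value is just one example of "a larger value"):

    # do not create topics automatically
    auto.create.topics.enable=false
    # never let badly lagging replicas become leader
    unclean.leader.election.enable=false
    # do not periodically replace healthy leaders
    auto.leader.rebalance.enable=false
    # keep messages for 7 days
    log.retention.hours=168
    # no cap on total message storage
    log.retention.bytes=-1
    # accept messages up to 10MB
    message.max.bytes=10485760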
Topic-level parameters

retention.ms: Specifies how long messages for this Topic are kept. The default is 7 days, that is, the Topic stores only the last 7 days of messages. Once set, it overrides the global broker-side value.

retention.bytes: Specifies how much disk space to reserve for this Topic. Like its global counterpart, this value is mostly useful in multi-tenant Kafka clusters. The current default is -1, meaning unlimited disk usage.

max.message.bytes: Determines the maximum message size the Kafka Broker will accept for this Topic.
Setting the parameters when creating the Topic:

    bin/kafka-topics.sh --bootstrap-server localhost:9092 --create --topic transaction --partitions 1 --replication-factor 1 --config retention.ms=15552000000 --config max.message.bytes=5242880

Setting the parameters when modifying the Topic:

    bin/kafka-configs.sh --zookeeper localhost:2181 --entity-type topics --entity-name transaction --alter --add-config max.message.bytes=10485760
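To confirm an override took effect, kafka-configs.sh can also describe the Topic's current settings (same topic name as above):

    bin/kafka-configs.sh --zookeeper localhost:2181 --entity-type topics --entity-name transaction --describe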
JVM parameters

KAFKA_HEAP_OPTS: Specifies the heap size. Set the JVM heap to 6GB, which is currently considered a reasonable value in the industry.

KAFKA_JVM_PERFORMANCE_OPTS: Specifies GC parameters. If you are already on Java 8, use the G1 collector; even without any tuning, G1 performs better than CMS. G1 is the default in JDK 9, but JDK 8 still needs it specified explicitly.
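These two are environment variables read by Kafka's startup scripts rather than server.properties entries; a sketch following the advice above:

    # 6GB heap (fixed size), per the recommendation above
    export KAFKA_HEAP_OPTS="-Xms6g -Xmx6g"
    # explicitly select G1 on JDK 8 (the default from JDK 9 onward)
    export KAFKA_JVM_PERFORMANCE_OPTS="-server -XX:+UseG1GC"
    bin/kafka-server-start.sh config/server.properties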
Operating system parameters

ulimit -n: The file descriptor limit. It is usually reasonable to set it to a large value, such as ulimit -n 1000000.

File system type: By file system I mean journaling file systems such as ext3, ext4, or XFS. XFS performs better than ext4, so it is best to use XFS in production.

Swap tuning: Many articles online suggest setting this to 0, disabling swap completely so the Kafka process never uses swap space. I think it is better not to set it to 0 but to a small value instead. Why? Because once it is 0, when physical memory runs out the OS triggers the OOM Killer, which picks a process and kills it without giving any warning at all. With a small value, you will at least see Broker performance start to deteriorate dramatically when swap comes into use, which gives you time to adjust and diagnose the problem. With this in mind, I recommend configuring swappiness to a value close to but not equal to 0, such as 1.
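A sketch of the corresponding Linux commands (apply them to /etc/security/limits.conf and /etc/sysctl.conf to make them persistent):

    # raise the file descriptor limit for the current shell
    ulimit -n 1000000
    # keep swap available but nearly unused
    sysctl -w vm.swappiness=1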
Flush interval: A write to Kafka is considered successful not when the data is physically written to disk, but as soon as it lands in the operating system's page cache; the OS then periodically flushes the "dirty" data from the page cache to the physical disk (based on the LRU algorithm). How often this happens is determined by the commit time, which is 5 seconds by default. This is often considered too frequent, and you can increase the commit interval appropriately to reduce physical disk writes.
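If that 5-second default corresponds to your file system's journal commit interval (on ext4 it is the commit mount option, default 5 seconds), lengthening it could look like this; the device and mount point are purely hypothetical:

    # remount an ext4 data volume with a 15-second commit interval
    mount -o remount,commit=15 /dev/sdb1 /data/kafka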