Kafka is a very popular messaging middleware; according to its website, thousands of companies use it. I've been practicing with Kafka recently, and it is genuinely good and powerful. Today we'll look at Kafka from three angles: installing Kafka on Linux, visual tools for Kafka, and using Kafka with SpringBoot. I hope this helps you get up to speed on Kafka and master this popular messaging middleware!

Mall, a hands-on SpringBoot e-commerce project (40k+ stars), address: https://github.com/macrozheng/mall

Kafka introduction

Kafka is an open-source distributed messaging platform originally developed by LinkedIn, written in Scala and Java. Its main purpose is to provide a unified, high-throughput, low-latency platform for real-time data processing. At its core, it is a message engine based on the publish-subscribe model.

Kafka has the following characteristics:

  • High throughput, low latency: Kafka sends and receives messages very quickly; with a cluster, message latency can be as low as 2 ms.
  • Highly scalable: Kafka clusters can flexibly grow and shrink to thousands of Brokers and hundreds of thousands of Partitions, handling trillions of messages per day.
  • Persistent storage: Kafka securely stores data in a distributed, persistent, fault-tolerant cluster.
  • High availability: Kafka can stretch a cluster across availability zones, and the cluster keeps working even if a node goes down.
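The Partitions mentioned above are Kafka's unit of parallelism: a keyed message is hashed to one partition of a topic, so messages with the same key always land on the same partition. Here is a minimal sketch of that idea; note that Kafka's real default partitioner uses a murmur2 hash of the key bytes, while this uses `String.hashCode()` purely for illustration, and all names here are made up:

```java
// Toy sketch of key-to-partition mapping (NOT Kafka's real partitioner,
// which uses a murmur2 hash of the serialized key bytes).
class PartitionSketch {
    // Mask off the sign bit so the result is non-negative, then take the modulo
    static int partitionFor(String key, int partitionCount) {
        return (key.hashCode() & 0x7fffffff) % partitionCount;
    }

    public static void main(String[] args) {
        // The same key always maps to the same partition, which is what
        // gives Kafka per-key ordering within a partition
        System.out.println(partitionFor("order-42", 3) == partitionFor("order-42", 3)); // true
        // With only 1 partition, every key maps to partition 0
        System.out.println(partitionFor("order-42", 1)); // 0
    }
}
```

This is why adding partitions to an existing topic can change which partition a key maps to: the modulo changes with the partition count.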

Kafka installation

We will install Kafka on Linux; the environment is CentOS 7.6. We won't use Docker here, since installing directly feels easier (mainly because no official Docker image is provided)!

  • First we need to download the installation package of Kafka, download address: https://mirrors.bfsu.edu.cn/a…

  • Unzip Kafka to the specified directory after downloading:
cd /mydata/kafka/
tar -xzf kafka_2.13-2.8.0.tgz
  • After the decompression is completed, enter the decompression directory:
cd kafka_2.13-2.8.0
  • Although Kafka is reportedly removing ZooKeeper, it has not yet been removed in the latest version, so you still need to start ZooKeeper before starting Kafka.

  • Start the ZooKeeper service, which will run on port 2181;
# Run the service in the background and write logs to zookeeper-out.file in the current directory
nohup bin/zookeeper-server-start.sh config/zookeeper.properties > zookeeper-out.file 2>&1 &
  • Since Kafka is deployed on a Linux server, external clients need to be able to reach it. Modify Kafka's configuration file config/server.properties and change the listener address, otherwise clients will be unable to connect;
############################# Socket Server Settings #############################

# The address the socket server listens on. It will get the value returned from
# java.net.InetAddress.getCanonicalHostName() if not configured.
#   FORMAT:
#     listeners = listener_name://host_name:port
#   EXAMPLE:
#     listeners = PLAINTEXT://your.host.name:9092
listeners=PLAINTEXT://192.168.5.78:9092
  • Finally, start the Kafka service, which will run on port 9092.
# Run the service in the background and write logs to kafka-out.file in the current directory
nohup bin/kafka-server-start.sh config/server.properties > kafka-out.file 2>&1 &

Kafka command-line operations

Next, let's operate Kafka from the command line to get familiar with it.

  • First, create a Topic named consoleTopic;
bin/kafka-topics.sh --create --topic consoleTopic --bootstrap-server 192.168.5.78:9092
  • Next, view the Topic's details;
bin/kafka-topics.sh --describe --topic consoleTopic --bootstrap-server 192.168.5.78:9092
  • The following Topic information is displayed;
Topic: consoleTopic    TopicId: tJmxUQ8QRJGlhCSf2ojuGw    PartitionCount: 1    ReplicationFactor: 1    Configs: segment.bytes=1073741824
    Topic: consoleTopic    Partition: 0    Leader: 0    Replicas: 0    Isr: 0
  • Sending a message to a Topic:
bin/kafka-console-producer.sh --topic consoleTopic --bootstrap-server 192.168.5.78:9092
  • Type a message directly at the command line and it will be sent;

  • Open a new window to retrieve messages from the Topic with the following command:
bin/kafka-console-consumer.sh --topic consoleTopic --from-beginning --bootstrap-server 192.168.5.78:9092
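The --from-beginning flag above tells the consumer to start reading at offset 0 of the topic's log instead of only seeing new messages. The idea can be sketched as a toy in-memory log (this is illustrative only, not Kafka client code; all names here are made up):

```java
import java.util.ArrayList;
import java.util.List;

// Minimal in-memory sketch of what --from-beginning means: a topic log keeps
// all messages at increasing offsets, and a consumer chooses where to start.
class OffsetSketch {
    final List<String> log = new ArrayList<>();   // the topic's message log

    void send(String msg) { log.add(msg); }       // append at the next offset

    // Read everything from the given starting offset to the end of the log
    List<String> readFrom(int offset) {
        return new ArrayList<>(log.subList(offset, log.size()));
    }

    public static void main(String[] args) {
        OffsetSketch topic = new OffsetSketch();
        topic.send("hello");
        topic.send("world");
        // --from-beginning is like starting at offset 0: all history is replayed
        System.out.println(topic.readFrom(0));        // [hello, world]
        // a consumer starting at the "latest" offset only sees newer messages
        int latest = topic.log.size();
        topic.send("new message");
        System.out.println(topic.readFrom(latest));   // [new message]
    }
}
```

This is also why Kafka consumers can re-read old data at any time: messages are retained in the log regardless of whether they have been consumed.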

Kafka visualization

Using the command line to operate Kafka is a bit of a hassle, so let's try the visualization tool kafka-eagle.

Install the JDK

CentOS does not ship with a full JDK by default, so you need to install one yourself!

  • Download the JDK 8, download address: https://mirrors.tuna.tsinghua…

  • After downloading, unzip the JDK to the specified directory.
cd /mydata/java
tar -zxvf openjdk8u-jdk_x64_linux_xx.tar.gz
mv openjdk8u-jdk_x64_linux_xx jdk1.8
  • Add the JAVA_HOME environment variable in the /etc/profile file;
export JAVA_HOME=/mydata/java/jdk1.8
export PATH=$PATH:$JAVA_HOME/bin
# Make the modified profile take effect
. /etc/profile

Install kafka-eagle

  • Download the kafka-eagle installation package, download address: https://github.com/smartloli/…

  • After downloading, unzip kafka-eagle to the specified directory;
cd /mydata/kafka/
tar -zxvf kafka-eagle-web-2.0.5-bin.tar.gz
  • Add the KE_HOME environment variable in the /etc/profile file;
export KE_HOME=/mydata/kafka/kafka-eagle-web-2.0.5
export PATH=$PATH:$KE_HOME/bin
# Make the modified profile take effect
. /etc/profile
  • Install MySQL and create a database named ke, which kafka-eagle will use later;
  • Modify the configuration file $KE_HOME/conf/system-config.properties, mainly the ZooKeeper and database settings: comment out the SQLite configuration and use MySQL instead;
######################################
# multi zookeeper & kafka cluster list
######################################
kafka.eagle.zk.cluster.alias=cluster1
cluster1.zk.list=localhost:2181

######################################
# kafka eagle webui port
######################################
kafka.eagle.webui.port=8048

######################################
# kafka sqlite jdbc driver address
######################################
# kafka.eagle.driver=org.sqlite.JDBC
# kafka.eagle.url=jdbc:sqlite:/hadoop/kafka-eagle/db/ke.db
# kafka.eagle.username=root
# kafka.eagle.password=www.kafka-eagle.org

######################################
# kafka mysql jdbc driver address
######################################
kafka.eagle.driver=com.mysql.cj.jdbc.Driver
kafka.eagle.url=jdbc:mysql://localhost:3306/ke?useUnicode=true&characterEncoding=UTF-8&zeroDateTimeBehavior=convertToNull
kafka.eagle.username=root
kafka.eagle.password=root
  • Start kafka-eagle with the following command;
$KE_HOME/bin/ke.sh start
  • After the command executes, some output is printed, but that does not mean the service has started successfully; it needs a while to finish starting.

  • Here are a few more useful kafka-eagle commands:
# Stop the service
$KE_HOME/bin/ke.sh stop
# Restart the service
$KE_HOME/bin/ke.sh restart
# Check the service status
$KE_HOME/bin/ke.sh status
# View service statistics
$KE_HOME/bin/ke.sh stats
# View the service logs
tail -f $KE_HOME/logs/ke_console.out
  • Once started successfully, you can access it directly with the account and password admin:123456, access address: http://192.168.5.78:8048/

  • After logging in successfully, you can access the Dashboard; the interface looks great!

Use of Visual Tools

  • We used the command line to create the Topic before. Here we can directly create the Topic through the interface.

  • We can also send messages directly through kafka-eagle;

  • We can consume the messages in the Topic from the command line;
bin/kafka-console-consumer.sh --topic testTopic --from-beginning --bootstrap-server 192.168.5.78:9092
  • The information obtained by the console is shown as follows;

  • There's also a very interesting feature called KSQL, which lets you query the messages in a Topic with SQL statements;

  • A visualization tool naturally needs monitoring. To enable kafka-eagle's monitoring of Kafka, modify Kafka's startup script to expose the JMX port;
vi kafka-server-start.sh
# Expose the JMX port
if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then
    export KAFKA_HEAP_OPTS="-server -Xms2G -Xmx2G -XX:PermSize=128m -XX:+UseG1GC -XX:MaxGCPauseMillis=200 -XX:ParallelGCThreads=8 -XX:ConcGCThreads=5 -XX:InitiatingHeapOccupancyPercent=70"
    export JMX_PORT="9999"
fi
  • Take a look at the monitor chart interface;

  • There is also a very slick big-screen monitoring view;

  • There is also a ZooKeeper command-line feature; in short, the functionality is comprehensive and powerful!
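A note on the JMX port exposed earlier: JMX publishes metrics as MBean attributes, which is what a monitoring tool polls on port 9999. As a minimal sketch of that mechanism, the following reads a standard MBean from the local JVM's platform MBean server (not a remote Kafka broker), using only the standard javax.management API; the class and method names are made up:

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

// Sketch of how a monitoring tool reads metrics over JMX. A remote client
// would connect to the broker's JMX port instead of using the local server.
class JmxSketch {
    // Read a standard JVM MBean attribute the same way kafka-eagle reads
    // broker metrics; returns true if the attribute is available
    static boolean heapUsageAvailable() {
        try {
            MBeanServer server = ManagementFactory.getPlatformMBeanServer();
            // java.lang:type=Memory is a standard JVM MBean
            ObjectName memory = new ObjectName("java.lang:type=Memory");
            return server.getAttribute(memory, "HeapMemoryUsage") != null;
        } catch (Exception e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(heapUsageAvailable()); // true
    }
}
```

Kafka's broker exposes its own MBeans (request rates, log sizes, and so on) the same way once JMX_PORT is set.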

Kafka SpringBoot integration

It is also very simple to operate Kafka in SpringBoot, partly because Kafka's messaging model itself is very simple: there are no queues, only Topics.
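That pub/sub model can be sketched as a toy in-memory topic: every subscribed consumer group receives its own copy of each message, and within a real group each copy would then be shared among the group's members. This is illustrative only, not Kafka client code, and all names are made up:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Toy model of Kafka's pub/sub semantics: no queues, only topics.
// Every consumer *group* subscribed to a topic receives every message;
// inside one group, each message would be processed by only one member.
class TopicModel {
    final Map<String, List<String>> groups = new LinkedHashMap<>();

    void subscribe(String groupId) {
        groups.putIfAbsent(groupId, new ArrayList<>());
    }

    void publish(String message) {
        // broadcast: every group gets its own copy of the message
        for (List<String> inbox : groups.values()) inbox.add(message);
    }

    public static void main(String[] args) {
        TopicModel topic = new TopicModel();
        topic.subscribe("bootGroup");
        topic.subscribe("otherGroup");
        topic.publish("hello");
        System.out.println(topic.groups.get("bootGroup"));   // [hello]
        System.out.println(topic.groups.get("otherGroup"));  // [hello]
    }
}
```

This is why the group-id configured below matters: two applications with different group-ids each get all messages, while two instances with the same group-id split the work.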

  • First, add the Spring Kafka dependency to the application's pom.xml;
<!--Spring integration with Kafka-->
<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
    <version>2.7.1</version>
</dependency>
  • Modify the application configuration file application.yml to configure the Kafka service address and the Consumer's group-id;
server:
  port: 8088
spring:
  kafka:
    bootstrap-servers: '192.168.5.78:9092'
    consumer:
      group-id: "bootGroup"
  • Create a producer to send messages to Kafka’s Topic.
/**
 * Kafka message producer
 * Created by Macro on 2021/5/19.
 */
@Component
public class KafkaProducer {
    @Autowired
    private KafkaTemplate kafkaTemplate;

    public void send(String message) {
        kafkaTemplate.send("bootTopic", message);
    }
}
  • Create a consumer to get messages from Kafka and consume them.
/**
 * Kafka message consumer
 * Created by Macro on 2021/5/19.
 */
@Slf4j
@Component
public class KafkaConsumer {
    @KafkaListener(topics = "bootTopic")
    public void processMessage(String content) {
        log.info("consumer processMessage : {}", content);
    }
}
  • Create an interface to send messages and call the producer to send messages.
/**
 * Kafka test Controller
 * Created by Macro on 2021/5/19.
 */
@Api(tags = "KafkaController")
@Controller
@RequestMapping("/kafka")
public class KafkaController {
    @Autowired
    private KafkaProducer kafkaProducer;

    @ApiOperation("Send message")
    @RequestMapping(value = "/sendMessage", method = RequestMethod.GET)
    @ResponseBody
    public CommonResult sendMessage(@RequestParam String message) {
        kafkaProducer.send(message);
        return CommonResult.success(null);
    }
}
  • Invoke the interface directly in Swagger for testing;

  • The project console outputs the following message indicating that the message has been received and consumed.
2021-05-19 16:59:21.016  INFO 2344 --- [ntainer#0-0-C-1] c.m.mall.tiny.component.KafkaConsumer    : consumer processMessage : Spring Boot message!
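The controller earlier returns a CommonResult, a generic response wrapper that comes from the mall project and is not defined in this article. As a minimal sketch of what such a wrapper typically looks like (the field names and the 200 success code here are assumptions, not the mall project's actual definition):

```java
// Minimal sketch of a generic API response wrapper like the CommonResult
// used above; the real class lives in the mall project, this is illustrative.
class CommonResult<T> {
    private final long code;
    private final String message;
    private final T data;

    private CommonResult(long code, String message, T data) {
        this.code = code;
        this.message = message;
        this.data = data;
    }

    // 200 is assumed here as the conventional success code
    static <T> CommonResult<T> success(T data) {
        return new CommonResult<>(200, "success", data);
    }

    long getCode() { return code; }
    String getMessage() { return message; }
    T getData() { return data; }
}
```

Wrapping every response in a uniform envelope like this lets front-end code check a single code field instead of relying on HTTP status alone.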

Conclusion

After working through this article, you should be able to get started with Kafka. Installation, visualization tools, and SpringBoot integration are all developer-facing essentials, and a must when learning Kafka.

References

  • Kafka official documentation: https://kafka.apache.org/quic…
  • kafka-eagle official documentation: http://www.kafka-eagle.org/ar…
  • Kafka-related concepts: https://juejin.cn/post/684490…

Project source address

https://github.com/macrozheng…

The complete code for this article has been included in https://github.com/macrozheng/mall-learning, everyone is welcome to give it a STAR!