1. Scenario description

I had not used Kafka much before, only to write data to and read data from a topic. This time, while using it again, I am recording how to build a Kafka cluster and how to use it together with Spring Boot.

2. Solutions

2.1 Introduction

(1) Kafka is often used in big data projects. In our project, Kafka is used to receive log stream data.

(2) There are four main kinds of message middleware:

(1) ActiveMQ: it has a long history and was used frequently in earlier projects, but it is now updated slowly and its performance is relatively low.

(3) Kafka: distributed, cross-language, extremely high performance (on the order of millions of messages per second), with a simple model.

(4) RocketMQ: message middleware open-sourced by Alibaba, implemented in pure Java; the commercial version is paid, so its general adoption is limited.

(3) Kafka's advantages over the other three are:

(1) High performance: on the order of millions of messages per second;

(2) Distributed, highly available, and horizontally scalable.

(4) Kafka official website

There is a Chinese-language official site where you can read more details.

Address: kafka.apachecn.org/intro.html

2.2 Software Download

2.2.1 Kafka download

Address: kafka.apache.org/downloads; download the latest version, 2.4.1.

2.2.2 zookeeper download

(1) Kafka relies on ZooKeeper for coordination. Kafka actually ships with a built-in ZooKeeper, but it is generally recommended to use an independent ZooKeeper, which makes later upgrades and sharing with other services easier.

(2) Download address:

zookeeper.apache.org. The latest version is 3.6.0, but it was released only recently; it is recommended to use version 3.5.7, which matches the ZooKeeper bundled with Kafka.

2.2.3 Download Instructions

The files are small: the ZooKeeper package is a little over 9 MB and the Kafka package a little over 50 MB.

2.3 Kafka Single-node deployment and Cluster Deployment

Software Lao Wang set up three virtual machines locally, with the following IP addresses:

```
192.168.85.158
192.168.85.168
192.168.85.178
```
2.3.1 Single-machine Deployment

(1) Upload the downloaded packages to the /root/tools directory. There is no need to create a new user; everything here is run under the root account.

(2) Decompress

```
[root@ruanjianlaowang158 tools]# tar -zxvf kafka_2.12-2.4.1.tgz
[root@ruanjianlaowang158 tools]# tar -zxvf apache-zookeeper-3.5.7-bin.tar.gz
```

(3) Configure zooKeeper and start it

```
# Go to the ZooKeeper directory
[root@ruanjianlaowang158 ~]# cd /root/tools/apache-zookeeper-3.5.7-bin
# Create a data directory
[root@ruanjianlaowang158 apache-zookeeper-3.5.7-bin]# mkdir data
# Edit the configuration file
[root@ruanjianlaowang158 apache-zookeeper-3.5.7-bin]# cd /root/tools/apache-zookeeper-3.5.7-bin/conf
[root@ruanjianlaowang158 conf]# cp zoo_sample.cfg zoo.cfg
[root@ruanjianlaowang158 conf]# vi zoo.cfg
# Change dataDir from the default:
# dataDir=/tmp/zookeeper
dataDir=/root/tools/apache-zookeeper-3.5.7-bin/data
# Start ZooKeeper
[root@ruanjianlaowang158 conf]# cd /root/tools/apache-zookeeper-3.5.7-bin/bin
[root@ruanjianlaowang158 bin]# ./zkServer.sh start
```
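As an optional sanity check (a small sketch, not part of the original steps), you can ask ZooKeeper for its status; with this single-node zoo.cfg it should report standalone mode:

```
[root@ruanjianlaowang158 bin]# ./zkServer.sh status
# Should report "Mode: standalone" for a single node
[root@ruanjianlaowang158 bin]# jps
# Should list a QuorumPeerMain process for ZooKeeper
```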

(4) Configure kafka and start it

```
# Go to the Kafka directory
[root@ruanjianlaowang158 ~]# cd /root/tools/kafka_2.12-2.4.1
# Create an empty data folder
[root@ruanjianlaowang158 kafka_2.12-2.4.1]# mkdir data
# Edit the configuration file
[root@ruanjianlaowang158 kafka_2.12-2.4.1]# cd /root/tools/kafka_2.12-2.4.1/config
[root@ruanjianlaowang158 config]# vi server.properties
# Change log.dirs from the default:
# log.dirs=/tmp/kafka-logs
log.dirs=/root/tools/kafka_2.12-2.4.1/data
# Change listeners from the default:
# listeners=PLAINTEXT://your.host.name:9092
listeners=PLAINTEXT://192.168.85.158:9092
# Change zookeeper.connect from the default:
# zookeeper.connect=localhost:2181
zookeeper.connect=192.168.85.158:2181
# Start Kafka
[root@ruanjianlaowang158 config]# cd /root/tools/kafka_2.12-2.4.1/bin
[root@ruanjianlaowang158 bin]# ./kafka-server-start.sh ../config/server.properties &
```
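Optionally, a small sketch to confirm the broker came up, assuming the paths and IP configured above:

```
[root@ruanjianlaowang158 bin]# jps
# Should now also list a "Kafka" process
[root@ruanjianlaowang158 bin]# ./kafka-topics.sh --list --zookeeper 192.168.85.158:2181
# An empty list is fine at this point; an error means the broker or ZooKeeper is not reachable
```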

After startup, I did not verify the single-node setup on its own; verification is done directly on the cluster. If you do want a quick single-node check, a sketch follows.
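A minimal single-node smoke test would look roughly like this; the topic name "test" is only an example and is not used later:

```
[root@ruanjianlaowang158 bin]# ./kafka-topics.sh --create --zookeeper 192.168.85.158:2181 \
    --replication-factor 1 --partitions 1 --topic test
# Produce a few messages (type them, then Ctrl+C to exit)
[root@ruanjianlaowang158 bin]# ./kafka-console-producer.sh --broker-list 192.168.85.158:9092 --topic test
# Consume them back in another terminal
[root@ruanjianlaowang158 bin]# ./kafka-console-consumer.sh --bootstrap-server 192.168.85.158:9092 \
    --topic test --from-beginning
```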

2.3.2 Cluster Deployment

(1) Cluster setup: repeat the decompression and configuration from the single-node setup above on servers 192.168.85.168 and 192.168.85.178.

(2) ZooKeeper: changes to zoo.cfg

The configuration is the same on all three servers:

```
[root@ruanjianlaowang158 conf]# cd /root/tools/apache-zookeeper-3.5.7-bin/conf
[root@ruanjianlaowang158 conf]# vi zoo.cfg
# Everything else stays unchanged; add the following three lines at the end.
# They are identical on all three servers. -- Software Lao Wang
server.1=192.168.85.158:2888:3888
server.2=192.168.85.168:2888:3888
server.3=192.168.85.178:2888:3888

# On the 158 server, run:
echo "1" > /root/tools/apache-zookeeper-3.5.7-bin/data/myid
# On the 168 server, run:
echo "2" > /root/tools/apache-zookeeper-3.5.7-bin/data/myid
# On the 178 server, run:
echo "3" > /root/tools/apache-zookeeper-3.5.7-bin/data/myid
```
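Once ZooKeeper has been started on all three servers (see the cluster start step below), a quick way to check that the ensemble formed is to ask each node for its role. This is an extra check, not part of the original steps:

```
[root@ruanjianlaowang158 bin]# cd /root/tools/apache-zookeeper-3.5.7-bin/bin
[root@ruanjianlaowang158 bin]# ./zkServer.sh status
# One of the three servers should report "Mode: leader", the other two "Mode: follower"
```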

(3) Kafka cluster configuration

```
[root@ruanjianlaowang158 config]# cd /root/tools/kafka_2.12-2.4.1/config
[root@ruanjianlaowang158 config]# vi server.properties
# broker.id: set to 1 on the 158 server, 2 on the 168 server, 3 on the 178 server
broker.id=1
# zookeeper.connect is configured the same way on all three servers
zookeeper.connect=192.168.85.158:2181,192.168.85.168:2181,192.168.85.178:2181
```
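For reference, a sketch of what the changed lines would look like on the 168 server (the 178 server is analogous with broker.id=3). The listeners value here is an assumption, following the single-node step where each broker listens on its own IP:

```
# On the 168 server, check the per-broker settings:
grep -E "^(broker\.id|listeners|zookeeper\.connect)" /root/tools/kafka_2.12-2.4.1/config/server.properties
# Expected (assumed) values on 192.168.85.168:
# broker.id=2
# listeners=PLAINTEXT://192.168.85.168:9092
# zookeeper.connect=192.168.85.158:2181,192.168.85.168:2181,192.168.85.178:2181
```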

Key Kafka broker configuration items:

| Configuration item | Default / example value | Description |
| --- | --- | --- |
| broker.id | 0 | Unique identifier of the broker |
| listeners | PLAINTEXT://192.168.85.158:9092 | Listener address; PLAINTEXT means plaintext transmission |
| log.dirs | /root/tools/kafka_2.12-2.4.1/data | Directory where Kafka stores its data; multiple directories can be given, separated by "," |
| message.max.bytes | 1000012 | Limit on the size of a single message, in bytes |
| num.partitions | 1 | Default number of partitions per topic |
| log.flush.interval.messages | Long.MaxValue | Number of messages accumulated before data is flushed to disk and becomes available to consumers |
| log.flush.interval.ms | Long.MaxValue | Maximum time before data is flushed to disk |
| log.flush.scheduler.interval.ms | Long.MaxValue | Interval at which the flush to disk is checked |
| log.retention.hours | 24 | Log retention time, in hours |
| zookeeper.connect | 192.168.85.158:2181, 192.168.85.168:2181, 192.168.85.178:2181 | Addresses of the ZooKeeper servers; multiple addresses are separated by commas |

(4) Starting the cluster

The startup mode is the same as that of a single machine:

```
# Start ZooKeeper (on each of the three servers)
[root@ruanjianlaowang158 bin]# cd /root/tools/apache-zookeeper-3.5.7-bin/bin
[root@ruanjianlaowang158 bin]# ./zkServer.sh start
# Start Kafka (on each of the three servers)
[root@ruanjianlaowang158 bin]# cd /root/tools/kafka_2.12-2.4.1/bin
[root@ruanjianlaowang158 bin]# ./kafka-server-start.sh ../config/server.properties &
```
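A sketch of how to confirm that all three brokers registered with ZooKeeper after startup; zookeeper-shell.sh ships with Kafka, and the exact output formatting may differ by version:

```
[root@ruanjianlaowang158 bin]# cd /root/tools/kafka_2.12-2.4.1/bin
[root@ruanjianlaowang158 bin]# ./zookeeper-shell.sh 192.168.85.158:2181 ls /brokers/ids
# The output should end with the three configured broker ids, e.g. [1, 2, 3]
```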

(5) Note

If Kafka fails to start with an error like "Configured broker.id 2 doesn't match stored broker.id 0 in meta.properties", the cause is a meta.properties file that already exists under the data directory (for example on the 158 server). The broker.id stored in that file must be changed to match the broker.id in server.properties.
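A sketch of one way to apply that fix on the 158 server, whose server.properties above uses broker.id=1; the path comes from the log.dirs setting configured earlier:

```
[root@ruanjianlaowang158 ~]# cat /root/tools/kafka_2.12-2.4.1/data/meta.properties
# If the stored broker.id differs from server.properties, change it (here to 1) and restart Kafka
[root@ruanjianlaowang158 ~]# sed -i 's/^broker.id=.*/broker.id=1/' /root/tools/kafka_2.12-2.4.1/data/meta.properties
```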

(6) Create a topic for the Spring Boot test later.

```
[root@ruanjianlaowang158 bin]# cd /root/tools/kafka_2.12-2.4.1/bin
# Replace <n> with the desired partition count
[root@ruanjianlaowang158 bin]# ./kafka-topics.sh --create \
    --zookeeper 192.168.85.158:2181,192.168.85.168:2181,192.168.85.178:2181 \
    --replication-factor 3 --partitions <n> --topic aaaa
```
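To double-check that the topic was created with 3 replicas spread across the brokers, a describe call can be used with the same connection string:

```
[root@ruanjianlaowang158 bin]# ./kafka-topics.sh --describe \
    --zookeeper 192.168.85.158:2181,192.168.85.168:2181,192.168.85.178:2181 --topic aaaa
# Each partition should list a leader plus replicas and ISR spread over brokers 1, 2 and 3
```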

2.4 Integrating with a Spring Boot project

2.4.1 pom file
```
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>2.2.0.RELEASE</version>
        <relativePath/> <!-- lookup parent from repository -->
    </parent>
    <groupId>com.itany</groupId>
    <artifactId>kafka</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <name>kafka</name>
    <description>Demo project for Spring Boot</description>
    <properties>
        <java.version>1.8</java.version>
    </properties>
    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.kafka</groupId>
            <artifactId>spring-kafka</artifactId>
        </dependency>
    </dependencies>
    <build>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
            </plugin>
        </plugins>
    </build>
</project>
```

Description:

There are two main dependencies (GAVs): spring-boot-starter-web, which brings in the web service, and spring-kafka, which is the core package for Spring Boot's Kafka integration.

2.4.2 application.yml
```
spring:
  kafka:
    bootstrap-servers: 192.168.85.158:9092,192.168.85.168:9092,192.168.85.178:9092
    producer:
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      value-serializer: org.apache.kafka.common.serialization.StringSerializer
    consumer:
      group-id: test
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.apache.kafka.common.serialization.StringDeserializer
```
2.4.3 Producers
```
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class KafkaProducer {

    @Autowired
    private KafkaTemplate template;

    // Software Lao Wang: sends the given message to the given topic, e.g. the "aaaa" topic created above
    @RequestMapping("/sendMsg")
    public String sendMsg(String topic, String message) {
        template.send(topic, message);
        return "success";
    }
}
```
2.4.4 Consumer
```
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component
public class KafkaConsumer {

    // Listen on the "aaaa" topic created earlier and print each record's topic and value
    @KafkaListener(topics = {"aaaa"})
    public void listen(ConsumerRecord record) {
        System.out.println(record.topic() + ":" + record.value());
    }
}
```
2.4.5 Verifying results

(1) Enter the following address in the browser:

```
http://localhost:8080/sendMsg?topic=aaaa&message=bbbb
```
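The same request can also be sent from the command line; this is just a convenience, equivalent to the browser call above:

```
curl "http://localhost:8080/sendMsg?topic=aaaa&message=bbbb"
# The controller should return "success"
```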

(2) Software Lao Wang's IDEA console prints the consumed message; with the consumer above, it appears as aaaa:bbbb.
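As an extra check (a sketch, not in the original article), the consumer group "test" from application.yml can be inspected on any broker to confirm the listener is consuming:

```
[root@ruanjianlaowang158 bin]# ./kafka-consumer-groups.sh --bootstrap-server 192.168.85.158:9092 \
    --describe --group test
# CURRENT-OFFSET should equal LOG-END-OFFSET once the message has been consumed
```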


For more, follow the public account "Software Lao Wang", which shares IT technology and related practical content; reply with a keyword to get the corresponding material!