Installing the JDK is straightforward. Here we install JDK 8 (the current JDK version is 16, but a large share of production environments still run version 8). First, go to the JDK download page and download the installation package for your operating system. I am on macOS, so I download the DMG file.

Once the download is complete, double-click the DMG file and proceed to the next step to complete the installation.

The Scala version installed here is 2.13.1. After the download completes, unzip it directly into the current directory.

Kafka's build is managed by Gradle. We need to download the Gradle 6.8 zip package from the download page and unzip it directly into the current directory.

After installing the JDK, Scala, and Gradle, we open the command line, change to the current user's home directory, and open .bash_profile:

sudo vim .bash_profile

Configure the three environment variables JAVA_HOME, SCALA_HOME, and GRADLE_HOME in the .bash_profile file and add them to the PATH variable, as shown in the figure below:
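As a sketch, the entries in .bash_profile might look like the following; the install paths here are assumptions and must be adjusted to where you actually installed each tool:

```shell
# Assumed install locations -- replace with the paths on your machine
export JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk1.8.0_291.jdk/Contents/Home
export SCALA_HOME=$HOME/scala-2.13.1
export GRADLE_HOME=$HOME/gradle-6.8
# Put all three bin directories on the PATH
export PATH=$PATH:$JAVA_HOME/bin:$SCALA_HOME/bin:$GRADLE_HOME/bin
```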

Next, save the .bash_profile file and reload it with the source command:

source .bash_profile

Finally, execute the java -version, scala -version, and gradle --version commands to check that the environment variables were configured successfully. Output like that shown in the following figure indicates that the configuration succeeded:

Before version 2.8.0, Kafka relied on ZooKeeper to store metadata. Starting with version 2.8.0, Kafka no longer has a hard dependency on ZooKeeper; instead, it implements the Raft protocol itself to store metadata.

Here we set up a pseudo-distributed ZooKeeper environment. First, go to the ZooKeeper official website and download the ZooKeeper 3.6.3 release package. Once the download is complete, unzip the apache-zookeeper-3.6.3-bin.tar.gz package directly into the current directory.

Then go to the ${ZOOKEEPER_HOME}/conf directory and copy the zoo_sample.cfg file and rename it zoo.cfg:

cp zoo_sample.cfg zoo.cfg

Finally, enter the ${ZOOKEEPER_HOME}/bin directory and start the ZooKeeper service:
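For a single-node, pseudo-distributed setup, the defaults in zoo_sample.cfg are sufficient; the key entries in zoo.cfg look roughly like this (the dataDir value is an assumption and should point wherever you want ZooKeeper to keep its data):

```properties
# zoo.cfg -- key entries for a standalone ZooKeeper instance
tickTime=2000            # basic time unit in milliseconds
dataDir=/tmp/zookeeper   # where snapshots are stored; change for real use
clientPort=2181          # port that Kafka will connect to
```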

./zkServer.sh start

With the above basic dependencies installed, we are ready to start building the Kafka source environment.

First, download the Kafka source code by executing the git clone command:

git clone https://github.com/apache/kaf…

When the download is complete, enter the Kafka source directory and execute git checkout to check out the origin/2.8 branch:
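Spelled out in full, and assuming the official Apache Kafka mirror on GitHub, the two steps might look like this:

```shell
# Clone the Kafka source (assumed: the apache/kafka GitHub repository)
git clone https://github.com/apache/kafka.git
cd kafka
# Create a local 2.8 branch tracking origin/2.8
git checkout -b 2.8 origin/2.8
```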

To import the Kafka source code into the IDEA editor, we need to execute the gradle idea command. This command downloads Kafka's dependencies and takes quite a while. The output looks like the following:

Finally, import the Kafka source code into IDEA and get the project results as shown in the figure below:

In Kafka, many request and response classes are generated during compilation. We now need to execute ./gradlew jar to generate these classes.

After executing the ./gradlew jar command, we need to find the generated directory in the following modules in IDEA and add its java directory to the classpath.

Next, let’s verify that the Kafka source code environment has been successfully set up.

First, we copy the log4j.properties configuration file from the config directory into the src/main/scala directory, as shown in the figure below:

Next, we modify the server.properties file in the config directory, changing the log.dirs configuration entry to point to a kafka-logs directory under the Kafka source directory:

log.dirs=/Users/XXX/source/kafka/kafka-logs

The other configuration entries in server.properties do not need to be changed for now.

Finally, we configure the entry class kafka.Kafka in IDEA to start the Kafka broker, as shown in the figure below:
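Outside of IDEA, the same broker can also be started with the startup script shipped in the source tree, which is a quick way to sanity-check the build (run from the Kafka source root after ./gradlew jar has completed):

```shell
# Start the broker with the modified config;
# this script ultimately runs the same kafka.Kafka entry class used in IDEA
./bin/kafka-server-start.sh config/server.properties
```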

If the boot is successful, the console output is normal and you see the following output:

P.S. Possible problems:

Adjust the scope of the slf4j-log4j12 dependency to Runtime, as shown in the figure below:

Sending and consuming messages. Here we use Kafka's built-in script tools to verify the Kafka source environment set up above.

First, we go to the ${KAFKA_HOME}/bin directory and create a topic named test_topic with the kafka-topics.sh command:

./kafka-topics.sh --zookeeper localhost:2181 --create --topic test_topic --partitions 1 --replication-factor 1

The execution effect is shown in the figure below:
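To double-check that the topic was actually created, the same script can list and describe topics (in 2.8, topic metadata still lives in ZooKeeper, so the --zookeeper connection string works here too):

```shell
# List all topics; test_topic should appear in the output
./kafka-topics.sh --zookeeper localhost:2181 --list
# Show partition and replica details for the new topic
./kafka-topics.sh --zookeeper localhost:2181 --describe --topic test_topic
```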

We then launch a consumer on the command line with the kafka-console-consumer.sh command to consume test_topic as follows:

./kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test_topic

The command line hangs, and when a message is received, it is printed on the command line.

Next, we start a producer on the command line with the command kafka-console-producer.sh to generate data for test_topic as follows:

./kafka-console-producer.sh --bootstrap-server localhost:9092 --topic test_topic --property parse.key=true

The command line will hang, and when we type a message and enter, the message will be sent to test_topic.

So far, Kafka's broker, producer, and consumer have all been started. Let's enter a message hello YangSizheng with a key in the producer (the separator is \t, not a space: the part before \t is the key, the part after \t is the value). The effect is as follows:
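As a small sketch of the parsing rule: with parse.key=true the console producer splits each input line on the first tab, which the shell snippet below imitates so you can see exactly which part becomes the key and which the value:

```shell
# Imitate how the console producer splits "key<TAB>value" input lines
msg=$'hello\tYangSizheng'
key=${msg%%$'\t'*}    # everything before the first tab -> key
value=${msg#*$'\t'}   # everything after the first tab -> value
echo "key=$key value=$value"
```

If typing a literal tab is awkward, the separator can be changed with an extra console-producer property, e.g. --property key.separator=: to enter lines as key:value.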

After we type a message and enter, we will receive the message at Consumer, as shown in the figure below:

In summary, we focused on setting up the source environment for Kafka 2.8.0.

First, we downloaded and installed basic software such as the JDK, Scala, and Gradle, and configured their environment variables. We then installed ZooKeeper and started the ZooKeeper service. Next, we downloaded the latest Kafka source code through Git, then compiled it and started the Kafka broker. Finally, we sent and consumed messages through Kafka's command-line producer and consumer, verifying the source environment. Thank you for reading; related articles and videos will be posted later.
