Kafka cluster setup

1. CentOS 7 comes with Java 1.6, which you can use to run Kafka directly. If you think that JDK version is too old, reinstall a newer one yourself.

2. Get the Kafka installation package from the official download page: kafka.apache.org/downloads.h…

3. After downloading the Kafka package, extract it to /usr/local and delete the archive.

4. A three-node Kafka cluster will be set up, with one node on each of the servers 10.10.67.102, 10.10.67.104, and 10.10.67.106.

5. Configure ZooKeeper via the zookeeper.properties file in the Kafka config directory (Kafka ships with a bundled ZooKeeper, so no separate installation is needed).

6. The zookeeper.properties configuration is identical on all three machines. One important point: the directory configured for saving ZooKeeper data and logs is not generated automatically, so create it yourself.
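As a sketch, the zookeeper.properties on each node could look like the following (2888 and 3888 are the conventional ZooKeeper peer-communication and leader-election ports; the dataDir is whatever path you created above):

```properties
# Directory for ZooKeeper data and the myid file; create it manually
dataDir=/usr/local/kafka/zookeeper
# Port the Kafka brokers connect to
clientPort=2181
maxClientCnxns=0
# Ensemble timing limits, in ticks
initLimit=10
syncLimit=5
# server.N=host:peerPort:electionPort, where N matches each node's myid
server.1=10.10.67.102:2888:3888
server.2=10.10.67.104:2888:3888
server.3=10.10.67.106:2888:3888
```

The server.N lines are what let the three nodes find each other; N must match the myid file created in step 7.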

7. In /usr/local/kafka/zookeeper on each of the three servers, create a myid file containing 1, 2, and 3 respectively.

Note: myid is the ID the ZooKeeper ensemble members use to discover each other. It must be created, and it must be different on every node.
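Steps 6 and 7 boil down to two shell commands per node; a minimal sketch for the first server (write 2 and 3 instead of 1 on the other two):

```shell
# dataDir from zookeeper.properties; it is not created automatically
mkdir -p /usr/local/kafka/zookeeper
# This node's ensemble ID: 1 here, 2 and 3 on the other servers
echo 1 > /usr/local/kafka/zookeeper/myid
cat /usr/local/kafka/zookeeper/myid
```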

8. Go to the Kafka directory and start ZooKeeper:
./bin/zookeeper-server-start.sh config/zookeeper.properties &
Run the startup command on all three machines, then check the ZooKeeper log file; if there are no errors, the ZooKeeper cluster has started successfully.

9. Create the Kafka cluster by modifying server.properties.

Most of server.properties keeps its default settings; the changes are mainly at the beginning and end of the file. Two important points: broker.id must be different on the three nodes (0, 1, and 2 respectively), and log.dirs must point to a directory that already exists, because it is not generated automatically from the configuration file.
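A sketch of the per-node edits in server.properties, shown for the first node (the log.dirs path is illustrative, and property names vary across Kafka versions, e.g. older releases use host.name and port instead of listeners, so check your version's configuration reference):

```properties
# Unique per node: 0, 1, and 2
broker.id=0
# Address this broker listens on; use each node's own IP
listeners=PLAINTEXT://10.10.67.102:9092
# Must already exist; Kafka will not create it from this file
log.dirs=/usr/local/kafka/kafka-logs
# All three ZooKeeper nodes
zookeeper.connect=10.10.67.102:2181,10.10.67.104:2181,10.10.67.106:2181
```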

10. Start the Kafka cluster. Go to the Kafka directory on each of the three nodes and run:
./bin/kafka-server-start.sh -daemon config/server.properties &
If all three nodes start without errors, the cluster is built successfully; you can confirm by producing and consuming messages.

How to produce and consume messages, and common commands

  • Start Kafka
./bin/kafka-server-start.sh -daemon config/server.properties &
  • Create a topic named test
./bin/kafka-topics.sh --create --zookeeper 10.10.67.102:2181,10.10.67.104:2181,10.10.67.106:2181 --replication-factor 3 --partitions 3 --topic test
  • List the topics that have been created
./bin/kafka-topics.sh --list --zookeeper localhost:2181
  • Simulate a client sending messages
./bin/kafka-console-producer.sh --broker-list 10.10.67.102:9092,10.10.67.104:9092,10.10.67.106:9092 --topic test
  • Simulate a client receiving messages
./bin/kafka-console-consumer.sh --zookeeper 10.10.67.102:2181,10.10.67.104:2181,10.10.67.106:2181 --from-beginning --topic test
  • View details of the specified topic
./bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic test