Preface

What is EFAK

EFAK (Eagle For Apache Kafka, formerly known as Kafka Eagle) is open-source visualization and management software for Apache Kafka. It can query, visualize, and monitor a Kafka cluster, turning cluster data into graphical views.

Why EFAK

  • Apache Kafka does not officially provide a monitoring system or web console.

  • Existing open-source Kafka monitoring systems have too few features or are no longer maintained.

  • Existing monitoring systems are difficult to configure and use.

  • Some monitoring systems cannot integrate with existing IM tools such as WeChat and DingTalk.

Installation

Download

You can download the EFAK source code from GitHub and compile and install it yourself, or download the binary .tar.gz package.

EFAK repository
Github: Github.com/smartloli/E…
Download: download.kafka-eagle.org/

Ps: You are advised to use the official binary installation package.

Install the JDK

If you have a JDK environment on your Linux server, you can skip this step and proceed to the next installation. If you do not have the JDK, download it from the Oracle official website.

Configure JAVA_HOME: extract the binary package to the target directory and register it in /etc/profile:

cd /usr/java
tar -zxvf jdk-xxxx.tar.gz
mv jdk-xxxx jdk1.8
vi /etc/profile

export JAVA_HOME=/usr/java/jdk1.8
export PATH=$PATH:$JAVA_HOME/bin

We then run source /etc/profile so the configuration takes effect immediately.
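After reloading the profile, a quick sanity check can confirm the JDK is actually on the PATH (the /usr/java/jdk1.8 path is the layout assumed above):

```shell
# Sanity check: is java resolvable after sourcing /etc/profile?
if command -v java >/dev/null 2>&1; then
  java -version 2>&1 | head -n 1
else
  echo "java not found -- re-check JAVA_HOME in /etc/profile"
fi
```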

Extract EFAK

Here we extract EFAK into /data/soft/new:

tar -zxvf efak-xxx-bin.tar.gz

If a previous version was installed, delete the old version and rename the newly extracted directory, as shown in the following figure:

rm -rf efak
mv efak-xxx efak

Then, configure the EFAK environment variables:

vi /etc/profile

export KE_HOME=/data/soft/new/efak
export PATH=$PATH:$KE_HOME/bin

Finally, we run source /etc/profile so the configuration takes effect immediately.
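Before moving on, it is worth verifying that KE_HOME points at the extracted directory and that the ke.sh script is in place (the /data/soft/new/efak path is the one used above):

```shell
# Verify the EFAK environment before configuring it further
echo "KE_HOME=${KE_HOME:-<not set>}"
if [ -x "${KE_HOME:-}/bin/ke.sh" ]; then
  echo "ke.sh found and executable"
else
  echo "ke.sh missing or not executable -- re-check KE_HOME in /etc/profile"
fi
```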

EFAK system file

Configure EFAK according to the actual Kafka cluster, such as the ZooKeeper address, the Kafka cluster version (for example, whether ZooKeeper or Kafka is an older release), and whether the Kafka cluster has security authentication enabled.

cd ${KE_HOME}/conf
vi system-config.properties

# Multi zookeeper&kafka cluster list -- The client connection address of the Zookeeper cluster is set here
efak.zk.cluster.alias=cluster1,cluster2
cluster1.zk.list=tdn1:2181,tdn2:2181,tdn3:2181
cluster2.zk.list=xdn1:2181,xdn2:2181,xdn3:2181

# Add zookeeper acl
cluster1.zk.acl.enable=false
cluster1.zk.acl.schema=digest
cluster1.zk.acl.username=test
cluster1.zk.acl.password=test123

# Kafka broker nodes online list
cluster1.efak.broker.size=10
cluster2.efak.broker.size=20

# Zkcli limit -- the number of client connections the Zookeeper cluster allows
# If you enable distributed mode, you can set value to 4 or 8
kafka.zk.limit.size=16

# EFAK webui port -- WebConsole port access address
efak.webui.port=8048

######################################################################
# EFAK enable distributed
######################################################################
efak.distributed.enable=false
# master worknode set status to master, other node set status to slave
efak.cluster.mode.status=slave
# deploy efak server address
efak.worknode.master.host=localhost
efak.worknode.port=8085

# Kafka offset storage -- offsets are stored in the Kafka cluster; if they are stored in zookeeper, do not use this option
cluster1.efak.offset.storage=kafka
cluster2.efak.offset.storage=kafka

# Whether the Kafka performance monitoring diagram is enabled
efak.metrics.charts=false

# EFAK keeps data for 30 days by default
efak.metrics.retain=30

# If offset is out of range occurs, enable this property -- Only suitable for kafka sql
efak.sql.fix.error=false
efak.sql.topic.records.max=5000

# Delete kafka topic token -- set a token for topic deletion, so that only administrators have the right to delete
efak.topic.token=keadmin

# Kafka sasl authenticate
cluster1.efak.sasl.enable=false
cluster1.efak.sasl.protocol=SASL_PLAINTEXT
cluster1.efak.sasl.mechanism=SCRAM-SHA-256
cluster1.efak.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="admin" password="admin-secret";
# If not set, the value can be empty
cluster1.efak.sasl.client.id=
# Add kafka cluster cgroups
cluster1.efak.sasl.cgroup.enable=false
cluster1.efak.sasl.cgroup.topics=kafka_ads01,kafka_ads02

cluster2.efak.sasl.enable=true
cluster2.efak.sasl.protocol=SASL_PLAINTEXT
cluster2.efak.sasl.mechanism=PLAIN
cluster2.efak.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="admin" password="admin-secret";
cluster2.efak.sasl.client.id=
cluster2.efak.sasl.cgroup.enable=false
cluster2.efak.sasl.cgroup.topics=kafka_ads03,kafka_ads04

# Default use sqlite to store data
efak.driver=org.sqlite.JDBC
# It is important to note that the '/hadoop/kafka-eagle/db' path must exist.
efak.url=jdbc:sqlite:/hadoop/kafka-eagle/db/ke.db
efak.username=root
efak.password=smartloli

# (Optional) set mysql address
#efak.driver=com.mysql.jdbc.Driver
#efak.url=jdbc:mysql://127.0.0.1:3306/ke?useUnicode=true&characterEncoding=UTF-8&zeroDateTimeBehavior=convertToNull
#efak.username=root
#efak.password=smartloli
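Before starting EFAK, it can save time to confirm that the ZooKeeper addresses in cluster1.zk.list are reachable. One way, assuming nc (netcat) is installed and the ruok four-letter command is whitelisted on the ZooKeeper side (newer ZooKeeper releases require it in 4lw.commands.whitelist), is:

```shell
# Probe each ZooKeeper node with the "ruok" four-letter command;
# a healthy node answers "imok". The tdn1..tdn3 hostnames below are
# the sample values from the config and must be replaced with real hosts.
for zk in tdn1:2181 tdn2:2181 tdn3:2181; do
  host=${zk%:*}
  port=${zk#*:}
  reply=$(echo ruok | nc -w 2 "$host" "$port" 2>/dev/null || true)
  if [ "$reply" = "imok" ]; then
    echo "$zk OK"
  else
    echo "$zk unreachable or ruok not whitelisted"
  fi
done
```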

Start the EFAK server (standalone)

In the $KE_HOME/bin directory, there is a ke.sh script file. Run the following startup commands:

cd ${KE_HOME}/bin
chmod +x ke.sh
ke.sh start

After that, to restart or stop the EFAK server, execute the following commands:

ke.sh restart
ke.sh stop
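Once started, the web console should answer on the efak.webui.port configured earlier (8048 here). A quick reachability check, assuming curl is installed:

```shell
# Check that the EFAK web console is listening on port 8048 (as configured above)
if command -v curl >/dev/null 2>&1; then
  code=$(curl -s -o /dev/null -w "%{http_code}" http://localhost:8048/ || true)
  echo "HTTP status: ${code:-none}"
else
  echo "curl not installed"
fi
```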

As shown below:

Start the EFAK server (distributed)

In the $KE_HOME/bin directory, there is a ke.sh script file. Run the following startup command:

cd ${KE_HOME}/bin

# sync efak package to the other worknode nodes
# if $KE_HOME is /data/soft/new/efak
for i in `cat $KE_HOME/conf/works`;do scp -r $KE_HOME $i:/data/soft/new;done

# sync efak server .bash_profile environment
for i in `cat $KE_HOME/conf/works`;do scp -r ~/.bash_profile $i:~/;done

chmod +x ke.sh
ke.sh cluster start
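The sync loops above read $KE_HOME/conf/works, which is assumed to contain one worker hostname (or IP) per line; slave001 and slave002 below are placeholder names, not values from the source:

```shell
# Example contents of $KE_HOME/conf/works -- one worker host per line
# (slave001/slave002 are hypothetical hostnames)
cat <<'EOF'
slave001
slave002
EOF
```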

After that, to restart or stop the EFAK cluster, execute the following commands:

ke.sh cluster restart
ke.sh cluster stop

As shown below:

Usage

Dashboard

View Kafka brokers, topics, consumers, ZooKeeper nodes, and more.

Create a topic

List topics

This module tracks all topics in the Kafka cluster, including partition counts, creation time, and recently modified topics, as shown in the following figure:

Topic details

Each topic has a hyperlink through which you can view its details, as shown in the following figure:

Consumers

Data alerting

Configure the mail server user name and password, and you can see the corresponding alert data.

Finally, the big-screen display

The above covers installing EFAK and its basic usage; it offers many more features in real projects, and EFAK is recommended as visual management software for Kafka.
