
If you want to monitor an enterprise's IT infrastructure, or achieve end-to-end, full-link monitoring of an entire piece of software, you can use the Elastic Stack. As the technology stack of a big data platform, it provides a complete set of solutions for the operations and monitoring vertical: from log analysis to metrics monitoring, and from software performance monitoring to availability monitoring, there are out-of-the-box, product-level solutions.

This article will walk you through installing and using this stack.

Technology selection

Filebeat 7.3.0

Kafka 1.1.0

Logstash 7.3.0

Elasticsearch 7.1.1

Kibana 7.1.1

ZooKeeper

Download the images

```bash
docker pull wurstmeister/kafka:1.1.0
docker pull wurstmeister/zookeeper
docker pull kibana:7.1.1
docker pull logstash:7.3.0
docker pull grafana/grafana:6.3.2
docker pull elasticsearch:7.1.1
docker pull mobz/elasticsearch-head:5-alpine
```
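To confirm the pulls succeeded, you can list the local images (a quick sanity check, not in the original article):

```bash
# List the images we just pulled
docker images | grep -E 'kafka|zookeeper|kibana|logstash|grafana|elasticsearch'
```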

Create the containers

```bash
# elasticsearch-head
docker run -d --name elasticsearch-head --network host mobz/elasticsearch-head:5-alpine
# elasticsearch
docker run -d --name elasticsearch --network host -e "discovery.type=single-node" elasticsearch:7.1.1
# kibana (with --network host, port mapping is unnecessary)
docker run -d --name kibana --network host kibana:7.1.1
# logstash
docker run -d --name logstash --network host logstash:7.3.0
# mysql
docker run -d --name mysql --network host -e MYSQL_ROOT_PASSWORD=root mysql:latest
# grafana
docker run -d --name grafana --network host grafana/grafana:6.3.2

# Endpoints (host IP 192.168.104.102):
# http://192.168.104.102:9100/   elasticsearch-head
# http://192.168.104.102:9200/   elasticsearch
# http://192.168.104.102:3000/   grafana (default login admin/admin)
# 192.168.104.102:3306           mysql (root/root)
```
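You can then check that all containers are up (again just a sanity check):

```bash
# Show container names and status
docker ps --format 'table {{.Names}}\t{{.Status}}'
```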

Modify elasticsearch.yml to append the following (this enables CORS so that elasticsearch-head can connect):

```yaml
http.cors.enabled: true
http.cors.allow-origin: "*"
```
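Since Elasticsearch runs in a container here, one way to apply the change is to copy the file out, edit it, and copy it back (a minimal sketch; it assumes the container name elasticsearch used above and the image's default config path):

```bash
# Pull the config out of the container
docker cp elasticsearch:/usr/share/elasticsearch/config/elasticsearch.yml .
# Append the CORS settings
echo 'http.cors.enabled: true' >> elasticsearch.yml
echo 'http.cors.allow-origin: "*"' >> elasticsearch.yml
# Put it back and restart so the change takes effect
docker cp elasticsearch.yml elasticsearch:/usr/share/elasticsearch/config/elasticsearch.yml
docker restart elasticsearch
```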

For installing Kafka and ZooKeeper, see "Using Docker to Quickly Build a Kafka Development Environment".

Download and install Filebeat

```bash
## Download Filebeat
wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.3.0-linux-x86_64.tar.gz
tar -zxvf filebeat-7.3.0-linux-x86_64.tar.gz

## Test the configuration file
./filebeat test config -c my-filebeat.yml

## Start in the foreground with debug output for the publish selector
./filebeat -e -c my-filebeat.yml -d "publish"

## Start in the background, writing output to a log file
nohup ./filebeat -e -c my-filebeat.yml > /tmp/filebeat.log 2>&1 &

## Start in the background, discarding output
nohup ./filebeat -e -c filebeat.yml > /dev/null 2>&1 &

# Options:
# -e: log to stderr instead of the default syslog/file output
# -c: specify the configuration file
# -d: enable debug output for the given selectors
```

Filebeat configuration files. The first ships logs to Kafka; the second ships them straight to Logstash.

```yaml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /opt/test.log
  fields:
    serviceName: jp-filebeat
  fields_under_root: true

output.kafka:
  enabled: true
  hosts: ["kafka1:9092"]
  topic: 'stream-in'
  required_acks: 1

## Note
# kafka1 is a hostname; map the Kafka cluster IP to kafka1 in /etc/hosts
```
```yaml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /opt/*.log
  fields:
    serviceName: jp-filebeat
  fields_under_root: true

output.logstash:
  hosts: ["192.168.104.102:5045"]
```

Check the TCP connection between Filebeat and Kafka
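The original article does not show the command, so here is one way to do it, assuming Kafka listens on port 9092; `filebeat test output` also reports whether the configured output is reachable:

```bash
# Look for an established connection from filebeat to the broker on 9092
netstat -anp | grep 9092
# Or ask Filebeat itself to probe the configured output
./filebeat test output -c my-filebeat.yml
```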

Write to the test log file (the path must match the one configured in filebeat.inputs):

```bash
echo '321312' >> /opt/test.log
```

logstash

  • Linux installation of logstash
  • Logstash Docker installation

The container's default pipeline configuration file is /usr/share/logstash/pipeline/logstash.conf.
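One way to use your own pipeline file is to mount a local directory over that path when creating the container (a sketch; the local path /opt/logstash/pipeline is an arbitrary choice):

```bash
# Mount a local pipeline directory into the container
docker run -d --name logstash --network host \
  -v /opt/logstash/pipeline:/usr/share/logstash/pipeline \
  logstash:7.3.0
```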

Test Logstash with standard input and output:

```bash
bin/logstash -e 'input { stdin {} } output { stdout {} }'
```
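Once it starts, each line you type should be echoed back as an event, roughly like this (field order and metadata vary by version):

```
hello
{
    "@timestamp" => 2019-08-01T08:00:00.000Z,
       "message" => "hello",
          "host" => "localhost",
      "@version" => "1"
}
```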

Tip:

Logstash could not be started because there is already another instance using the configured data directory. If you wish to run multiple instances, you must change the “path.data” setting

Fix: append --path.data=/usr/share/logstash/jpdata to the end of the original command bin/logstash -f config/logstash-sample.conf.

```bash
# Start Logstash with hot reload and a dedicated data directory
bin/logstash -f config/logstash-sample.conf --config.reload.automatic --path.data=/usr/share/logstash/jpdata

# Command for starting an additional Logstash instance (each instance needs its own path.data)
bin/logstash -f config/test-logstash.conf --config.reload.automatic --path.data=/usr/share/logstash/jpdata

# Check that a configuration file is valid before starting
bin/logstash -f logstash-sample.conf --config.test_and_exit

# Option descriptions:
# --path.data: the directory where instance data is stored
# --config.reload.automatic: hot-reload the configuration file on change
```

Logstash configuration files

test-logstash.conf

```
input {
  kafka {
    bootstrap_servers => "192.168.104.102:9092"
    topics => ["stream-in"]
    codec => "json"
  }
}

# Optional filter: parse the message with grok
# filter {
#   grok {
#     match => { "message" => "%{COMBINEDAPACHELOG}" }
#   }
# }

output {
  stdout {
    codec => rubydebug
  }
}
```

logstash-es.conf

```
input {
  kafka {
    bootstrap_servers => "192.168.104.102:9092"
    topics => ["stream-in"]
    codec => "json"
  }
}

# Output to Elasticsearch, one index per day
output {
  elasticsearch {
    hosts => ["192.168.104.102:9200"]
    index => "logstash-%{+YYYY.MM.dd}"
  }
}
```
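Once events flow through this pipeline, you can check that the daily index was created (a quick check, assuming Elasticsearch is on 192.168.104.102:9200):

```bash
# List indices; look for logstash-YYYY.MM.dd
curl 'http://192.168.104.102:9200/_cat/indices?v'
```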

kafka

Inside the Docker container, Kafka's command-line tools live in /opt/kafka_2.12-2.3.0/bin.
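To reach them, exec into the container first (a sketch; it assumes the Kafka container is named kafka):

```bash
# Open a shell inside the Kafka container
docker exec -it kafka bash
cd /opt/kafka_2.12-2.3.0/bin
```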

```bash
# Install tree (optional, for browsing directories)
yum -y install tree

# List all consumer group ids
kafka-consumer-groups.sh --bootstrap-server kafka1:9092 --list

# Produce data into a topic
kafka-console-producer.sh --broker-list kafka1:9092 --topic stream-in

# Consume a topic from the beginning
kafka-console-consumer.sh --bootstrap-server kafka1:9092 --topic stream-in --from-beginning
```

zookeeper

Start the ZooKeeper CLI inside the ZooKeeper container:

```bash
./zkCli.sh
```
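From there you can check which brokers Kafka has registered with ZooKeeper (a sketch; broker ids depend on your setup):

```bash
./zkCli.sh -server localhost:2181
# then, inside the interactive shell:
#   ls /brokers/ids              -> lists registered broker ids
#   get /brokers/ids/<broker-id> -> shows that broker's host registration
```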

References, in case you run into problems

  1. Kafka failed to connect because the host name was used
  2. How to avoid registering Kafka Broker machine hostname with ZooKeeper
  3. Configure the Kafka output
  4. Kafka query topic and consumer group status
  5. Kafka Shell basic commands
  6. Kibana opens Chinese language
  7. mysql Client does not support authentication protocol
  8. Check the Filebeat process: ps -ef | grep filebeat

For Filebeat and Kafka version compatibility, refer to the official documentation.


New articles are published every week; you can search WeChat for "Ten Minutes to Learn Programming" to read them first. If this article helped you, your support and recognition are the biggest motivation for my writing. See you in the next one!