This is the second day of my participation in the November Gwen Challenge. See the link for details: The last Gwen Challenge 2021

I. Introduction

ELK stands for Elasticsearch, Logstash, and Kibana.

1. Elasticsearch is an open-source distributed search engine that collects, analyzes, and stores data.

Its features include: distributed operation, zero configuration, automatic discovery, automatic index sharding, an index replica mechanism, a RESTful interface, multiple data sources, and automatic search load balancing.

2. Logstash is a tool for collecting, analyzing, and filtering logs, and it supports a large number of data-acquisition methods.

The client is installed on each host whose logs need to be collected; the server filters and transforms the logs received from those nodes and forwards them to Elasticsearch.

3. Kibana provides a log-analysis-friendly Web interface for Logstash and Elasticsearch, helping you aggregate, analyze, and search important log data.

4. Filebeat is a lightweight log collection and shipping tool (agent). Filebeat uses few resources, which makes it well suited for collecting logs on servers and forwarding them to Logstash.

II. Environment preparation

1. Local DNS resolution

cat /etc/hosts
192.168.100.100 node1.stu.com
192.168.100.10  node2.stu.com
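As a quick sanity check, the mapping can be queried with a short awk script. The helper below is illustrative and runs against an inline copy of the file; in practice you would point it at /etc/hosts directly:

```shell
# Look up the IP for a hostname in an /etc/hosts-style file
# (here fed an inline sample copy of the entries above).
hosts_lookup() {
  awk -v name="$1" '$2 == name {print $1}' <<'EOF'
192.168.100.100 node1.stu.com
192.168.100.10 node2.stu.com
EOF
}

hosts_lookup node1.stu.com   # prints 192.168.100.100
hosts_lookup node2.stu.com   # prints 192.168.100.10
```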

2. File descriptor limits

Add the following to both files so that systemd raises the default open-file and process limits for services:

/etc/systemd/system.conf
/etc/systemd/user.conf

DefaultLimitNOFILE=65536
DefaultLimitNPROC=65536
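The configured value can be read back out of the file and compared with the shell's current limit. The helper name and sample path below are examples, run against an inline copy of the config:

```shell
# Extract DefaultLimitNOFILE from a system.conf-style file.
get_nofile_limit() {
  sed -n 's/^DefaultLimitNOFILE=//p' "$1"
}

# Inline sample copy of the settings above.
cat > /tmp/system.conf.sample <<'EOF'
DefaultLimitNOFILE=65536
DefaultLimitNPROC=65536
EOF

get_nofile_limit /tmp/system.conf.sample   # prints 65536
echo "current shell limit: $(ulimit -n)"
```

After a reboot (or systemctl daemon-reexec and a new login), ulimit -n should report the raised value.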

3. Time synchronization

yum install ntpdate -y

# cron entry: synchronize the clock every 5 minutes
*/5 * * * * /usr/sbin/ntpdate ntp.aliyun.com

III. Elasticsearch

1. Installation:

yum install wget java-1.8.0-openjdk -y
wget https://artifacts.elastic.co/downloads/e…
rpm -ivh elasticsearch-6.4.0.rpm

2. Configuration

vim /etc/elasticsearch/elasticsearch.yml

cluster.name: elk-cluster
node.name: node1
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
bootstrap.memory_lock: true
network.host: 192.168.100.100
http.port: 9200
discovery.zen.ping.unicast.hosts: ["192.168.100.100", "192.168.100.10"]

bootstrap.memory_lock: true makes Elasticsearch lock its memory so that it cannot be swapped out; for the lock to succeed, also set LimitMEMLOCK=infinity in the elasticsearch systemd unit. Port 9300 is used to transfer data between the nodes of the cluster, while port 9200 accepts HTTP requests.

vim /etc/elasticsearch/jvm.options

# Set the initial and maximum JVM heap to the same size
-Xms1g
-Xmx1g

systemctl start elasticsearch
systemctl enable elasticsearch
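A common rule of thumb for sizing the heap is roughly half of the machine's RAM, capped below 32 GB so the JVM can use compressed object pointers; exact figures depend on your workload. A small sketch of that calculation (the helper name and the 31 GB cap are illustrative):

```shell
# Suggest an Elasticsearch heap size in MB: about half of RAM,
# capped at 31744 MB (31 GB). Common guidance, not a hard rule.
suggest_heap_mb() {
  half=$(( $1 / 2 ))
  [ "$half" -gt 31744 ] && half=31744
  echo "$half"
}

suggest_heap_mb 2048     # 2 GB RAM  -> prints 1024 (matches -Xms1g/-Xmx1g)
suggest_heap_mb 131072   # 128 GB RAM -> prints 31744 (capped)
```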

3. Test

http://192.168.100.100:9200/
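If the node is up, the root endpoint returns a small JSON document. The response for this cluster would look roughly like the sample below (values are illustrative); the interesting fields can be pulled out with grep and cut, with no need for jq:

```shell
# Sample Elasticsearch root-endpoint response. A live check would be:
#   curl http://192.168.100.100:9200/
response='{"name":"node1","cluster_name":"elk-cluster","version":{"number":"6.4.0"},"tagline":"You Know, for Search"}'

# Extract a top-level string field from the sample JSON.
json_field() {
  printf '%s' "$2" | grep -o "\"$1\":\"[^\"]*\"" | cut -d'"' -f4
}

json_field cluster_name "$response"   # prints elk-cluster
json_field name "$response"           # prints node1
```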

IV. Logstash

1. Installation:

yum install java-1.8.0-openjdk.x86_64 wget -y
wget https://artifacts.elastic.co/downloads/l…
rpm -ivh logstash-6.4.0.rpm

2. Configuration

Collect the system logs; first grant read access so Logstash can open the file: chmod 644 /var/log/messages

vim /etc/logstash/conf.d/syslog.conf

input {
  file {
    path => "/var/log/messages"
    type => "systemlog"
    start_position => "beginning"
    stat_interval => "2"
  }
}
output {
  elasticsearch {
    hosts => ["192.168.100.100:9200"]
    index => "logstash-systemlog-%{+YYYY.MM.dd}"
  }
}
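The %{+YYYY.MM.dd} in the index option is a Logstash date pattern, so each day's events land in their own index. The equivalent name for today can be produced with date, which is handy when querying or cleaning up old indices:

```shell
# Build today's index name the same way
# index => "logstash-systemlog-%{+YYYY.MM.dd}" would.
index_name="logstash-systemlog-$(date +%Y.%m.%d)"
echo "$index_name"   # e.g. logstash-systemlog-2021.11.02
```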

Check the configuration file for syntax errors:

/usr/share/logstash/bin/logstash -t -f /etc/logstash/conf.d/syslog.conf

systemctl start logstash
systemctl enable logstash

curl -XGET 'localhost:9600/?pretty'

Port 9600 exposes the Logstash monitoring API, which reports runtime metrics about the Logstash instance.

V. Kibana

1. Installation:

rpm -ivh kibana-6.4.0-x86_64.rpm

2. Configuration

vim /etc/kibana/kibana.yml

server.port: 5601
server.host: "192.168.100.100"
elasticsearch.url: "http://192.168.100.100:9200"

systemctl start kibana
systemctl enable kibana

3. Validation

http://192.168.100.100:5601/status

4. Nginx + Kibana configuration:

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    include /etc/nginx/conf.d/*.conf;

    upstream kibana {
        server 192.168.100.100:5601 weight=1 max_fails=2 fail_timeout=2;
    }

    server {
        listen 80;
        server_name kibana.stu.com;

        auth_basic "kibana auth password";
        auth_basic_user_file /etc/nginx/kibana.auth;

        #root /usr/share/nginx/html;
        include /etc/nginx/default.d/*.conf;

        location / {
            proxy_pass http://kibana;
        }
    }
}
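The auth_basic_user_file has to exist before nginx will accept logins. One way to create it, assuming openssl is available (the user name, password, and output path below are examples; in production, write to /etc/nginx/kibana.auth and reload nginx):

```shell
# Create an htpasswd-style file for nginx basic auth using openssl's
# Apache MD5 ($apr1$) password hash. User/password/path are examples.
user=admin
pass=changeme
printf '%s:%s\n' "$user" "$(openssl passwd -apr1 "$pass")" > /tmp/kibana.auth
cat /tmp/kibana.auth
```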

Summary: installing and deploying ELK is straightforward, and it is well worth spending some time to learn it for daily work.