Preface

First, what is a log? Logs are programmatically generated text data that follow a format, usually including a timestamp.

Operations staff and other technical personnel use logs to monitor and maintain system applications, and even perform data analysis based on them, in order to keep applications running stably and help developers locate errors quickly.

After a project is migrated to K8S for deployment, elastic scaling and server resource utilization improve, but K8S itself provides no log management: application instances are adjusted dynamically, that is, deployed to different servers, so simply mounting log files cannot solve application log management. A log collection system is urgently needed to collect and manage the application logs on K8S in a unified way and help keep applications stable.

EFK introduction

ELK (Elasticsearch + Logstash + Kibana) and EFK (Elasticsearch + Filebeat + Kibana)

Logstash: data collection and processing engine. It dynamically collects data from various sources, then filters, parses, enriches, and formats it before storing it for later use.

Kibana: Visualization platform. It searches and displays index data stored in Elasticsearch. It makes it easy to display and analyze data with charts, tables and maps.

Elasticsearch: distributed search engine. It has the characteristics of high scalability, high reliability and easy management. It can be used for full-text retrieval, structured retrieval and analysis, and can combine the three. Elasticsearch is based on Lucene and is now one of the most widely used open source search engines, with Wikipedia, StackOverflow, Github and others building their own search engines based on it.

Filebeat: lightweight data collection engine, based on the original logstash-forwarder source code. In other words, Filebeat is the successor of logstash-forwarder and is the first choice for the shipper role in the ELK Stack.

EFK is a variant of the well-known ELK log stack: a lightweight log collection and analysis system. For resource reasons, we chose EFK.

EFK advantage

The purpose of building an EFK log analysis system is to aggregate logs for quick viewing and quick analysis. EFK can aggregate not only daily logs but also logs from different projects, which makes log queries especially convenient for microservice and distributed architectures. And because the logs are stored in Elasticsearch, queries are very fast.

Architecture Description

Since all our applications run on K8S as Docker containers, the log files of the containers (pods) on each worker node are stored under /var/lib/docker/containers/*/*.log on that node. Filebeat is therefore deployed as a DaemonSet on every K8S node to collect these files. Elasticsearch is also deployed on K8S and may be rescheduled across servers, so its data must be persisted in a distributed way: Elasticsearch persistence is shared over NFS, with each K8S node acting as an NFS client of the NFS server. Kibana sits on top for visual log management.

NFS sharing mechanism: blog.csdn.net/qq_38265137…

The specific architecture diagram is as follows:

Environment preparation

Server Description

Server IP        Description                      System
192.168.1.100    K8S master node                  Ubuntu 16.04
192.168.1.101    K8S worker node 1, NFS client    Ubuntu 16.04
192.168.1.102    K8S worker node 2, NFS client    Ubuntu 16.04
192.168.1.103    NFS server                       CentOS 7

Basic Environment Description

Component    Version
K8S          1.16.0
Helm         3.0.1

EFK setup description

Application      Version
Elasticsearch    7.7.0
Filebeat         7.7.0
Kibana           7.7.0

Set up the NFS

NFS Server

Perform the following operations on the NFS server:

1. Install NFS

yum -y install nfs-utils

2. Create an NFS data storage directory and grant permissions to it

mkdir -p /home/nfs/data/
chmod -R 777 /home/nfs/data/

3. Configure the exported directory

vim /etc/exports

Enter the following (rw allows read and write, no_root_squash preserves root privileges for remote root users, and sync writes data to disk synchronously):

/home/nfs/data  *(rw,no_root_squash,sync)

4. Save the settings and start the services

exportfs -r
systemctl restart rpcbind && systemctl enable rpcbind
systemctl restart nfs && systemctl enable nfs

5. Check whether the data directory is exported successfully

showmount -e 192.168.1.103

The console shows the exported directory:

Export list for 192.168.1.103:
/home/nfs/data *
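For reference, a client node can also consume this export directly, outside of K8S; a persistent mount could be sketched as an /etc/fstab entry like the following, where the mount point /mnt/nfs-test is a hypothetical example:

```
# /etc/fstab on an NFS client node (mount point /mnt/nfs-test is hypothetical)
192.168.1.103:/home/nfs/data  /mnt/nfs-test  nfs  defaults  0 0
```

The client node needs the NFS utilities installed (nfs-common on Ubuntu) for any NFS mount to work, including the mounts made later by the provisioner.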

NFS Client

nfs-client-provisioner is a simple external NFS provisioner for Kubernetes. It does not provide NFS itself; an existing NFS server is required to provide the storage.

It creates PVs on the NFS server using the ${namespace}-${pvcName}-${pvName} naming format.

You only need to run the following command on the master node

helm install nfs-client azure/nfs-client-provisioner --set nfs.server=192.168.1.103,nfs.path=/home/nfs/data --namespace logs

nfs.server: the NFS server IP address; nfs.path: the NFS data directory; --namespace: the K8S namespace.

Result

Open the K8S dashboard and click Storage Classes to see the new nfs-client storage class, or run kubectl get sc.
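To sanity-check the provisioner, a minimal PVC bound to the new storage class can be sketched as follows; the claim name test-claim and the 1Mi size are illustrative:

```yaml
# test-claim.yaml: a throwaway claim against the nfs-client storage class
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
  namespace: logs
spec:
  storageClassName: nfs-client
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
```

After kubectl apply -f test-claim.yaml, the claim should become Bound and a matching directory should appear under /home/nfs/data on the NFS server.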

Build Elasticsearch

Download Charts files

Enter the following command on the master node:

helm repo add azure http://mirror.azure.cn/kubernetes/charts/
helm repo add elastic https://helm.elastic.co
helm repo update
helm pull elastic/elasticsearch
tar -zxvf elasticsearch-*.tgz

Modify the configuration

Edit elasticsearch/values.yaml as required (vi elasticsearch/values.yaml):

1. Change the minimum number of master nodes (required for a single-node cluster).

2. Set the storage class to nfs-client.

Full configuration code: github.com/llzz9595/k8…
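For orientation, the two changes above can be sketched as fragments of elasticsearch/values.yaml; the key names follow the elastic/elasticsearch 7.x chart, and the storage size is an assumption:

```yaml
# elasticsearch/values.yaml (relevant fragments only)
replicas: 1
minimumMasterNodes: 1              # required when running a single node

volumeClaimTemplate:
  accessModes: ["ReadWriteOnce"]
  storageClassName: "nfs-client"   # the NFS-backed storage class created earlier
  resources:
    requests:
      storage: 10Gi                # illustrative size
```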

Deployment

Enter the parent directory of elasticsearch on the master node and run:

helm install elasticsearch elasticsearch --namespace logs

You can view the Elasticsearch deployment and mount status on the K8S dashboard, or run kubectl get pv,pvc -n logs to check them.

Build FileBeat

Download Charts files

helm pull elastic/filebeat
tar -zxvf filebeat-*.tgz

Modify the configuration

Edit filebeat/values.yaml as required (vi filebeat/values.yaml):

Since most of our applications are Java applications, multi-line Java exception stack traces need to be merged into a single log event. The specific configuration is as follows:

filebeatConfig:
  filebeat.yml: |
    filebeat.config:
      modules:
        path: ${path.config}/modules.d/*.yml
        reload.enabled: true
    filebeat.autodiscover:
      providers:
        - type: kubernetes
          hints.enabled: true
          templates:
            - condition:
                equals:
                  # Java stack multi-line logs appear in the baas namespace
                  kubernetes.namespace: baas
              config:
                - type: docker
                  containers.ids:
                    - "${data.kubernetes.container.id}"
                  # configure Java stack multi-line matching rules
                  multiline:
                    pattern: '^[[:space:]]+(at|\.{3})\b|^Caused by:'
                    negate: false
                    match: after
            - condition:
                equals:
                  kubernetes.namespace: kube-system
              config:
                - type: docker
                  containers.ids:
                    - "${data.kubernetes.container.id}"
    output.elasticsearch:
      host: '${NODE_NAME}'
      hosts: '${ELASTICSEARCH_HOSTS:elasticsearch-master:9200}'


Full configuration code: github.com/llzz9595/k8…
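Before deploying, the multiline pattern above can be sanity-checked locally; a rough check with GNU grep (the \b word boundary is a GNU extension) might look like this:

```shell
pattern='^[[:space:]]+(at|\.{3})\b|^Caused by:'

# Stack-frame and "Caused by:" lines should match (they get merged into the previous event)
echo '    at com.example.Demo.run(Demo.java:42)' | grep -Eq "$pattern" && echo "frame: match"
echo 'Caused by: java.lang.IllegalStateException' | grep -Eq "$pattern" && echo "cause: match"

# A normal log line should not match (it starts a new event)
echo '2020-06-01 12:00:00 INFO Started app' | grep -Eq "$pattern" || echo "normal: no match"
```

Lines that match the pattern are appended to the previous event (match: after with negate: false), which is exactly what a Java stack trace needs.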

Deployment

Go to the master node and enter the following command in the filebeat parent directory:

helm install filebeat filebeat --namespace logs

Build Kibana

Download Charts files

helm pull elastic/kibana
tar -zxvf kibana-*.tgz

Modify the configuration

Edit kibana/values.yaml as required (vi kibana/values.yaml):

1. Configure the ingress. If no ingress is configured, Kibana can be accessed through a NodePort.

2. Configure the Chinese display.

Full configuration: github.com/llzz9595/k8…
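These two changes can be sketched as fragments of kibana/values.yaml; the hostname is a placeholder, and the kibanaConfig / i18n.locale keys follow the elastic/kibana 7.x chart and Kibana's settings, so verify them against your chart version:

```yaml
# kibana/values.yaml (relevant fragments only)
ingress:
  enabled: true
  hosts:
    - kibana.example.com          # placeholder hostname

kibanaConfig:
  kibana.yml: |
    i18n.locale: "zh-CN"          # display the Kibana UI in Chinese
```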

Deployment

Enter the kibana parent directory on the master node and run the following command:

helm install kibana kibana --namespace logs

Configure the index pattern

Access Kibana through the configured ingress host or through the node IP and NodePort, search for the filebeat index, and add the index pattern.

View the logs

After the configuration is complete, all collected logs can be viewed in log management.

Filter K8S logs

In the preceding Filebeat collection configuration, K8S-related metadata is enabled, so K8S information about each application, including the namespace, pod name, and pod ID, is automatically collected when logs are saved, which makes filtering by K8S fields easy.

Filter results:

Conclusion

The above process quickly builds the EFK log collection and analysis system for K8S; the related code is at github.com/llzz9595/k8… and can be downloaded and deployed directly. EFK enables K8S log collection and analysis and helps keep the system stable. However, Kibana does not support user access control (login authentication) by default, so it is not safe enough for production; a common approach is to configure a password file in Nginx to add login authentication in front of Kibana.
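As an illustration of that workaround, a minimal Nginx reverse proxy with basic authentication in front of Kibana might look like the following sketch; the server name, upstream address, and password file path are all placeholders, and the password file would be created with a tool such as htpasswd:

```nginx
# /etc/nginx/conf.d/kibana.conf (sketch; all addresses are placeholders)
server {
    listen 80;
    server_name kibana.example.com;

    location / {
        auth_basic           "Kibana Login";
        auth_basic_user_file /etc/nginx/htpasswd;        # e.g. htpasswd -c /etc/nginx/htpasswd admin
        proxy_pass           http://192.168.1.101:30601;  # Kibana NodePort
        proxy_set_header     Host $host;
    }
}
```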