
In the previous article, we introduced the background of distributed logging systems, the differences between cloud-native container-based log collection and traditional log collection, and the ELKB distributed log system. This article continues by showing how to build a cloud-native logging platform based on EFK.

Prerequisites: Data storage persistence

Storing logs involves persistence, so Kubernetes persistent storage (PV and PVC) needs to be created before deployment. Our setup involves three hosts:

Role     IP
master   192.168.11.195
node1    192.168.11.196
node2    192.168.11.197

We select node1 (192.168.11.196) as the NFS server.

$ yum -y install nfs-utils rpcbind
$ mkdir -pv /ifs/kubernetes
$ systemctl enable rpcbind
$ systemctl enable nfs
$ systemctl start rpcbind
$ systemctl start nfs
$ vim /etc/exports

/ifs/kubernetes *(rw,no_root_squash)

$ exportfs -r


After executing the command above, test it on another machine:

$ mount -t nfs 192.168.11.196:/ifs/kubernetes /mnt/
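If the mount succeeds, you can also confirm the export is visible from the client (showmount ships with nfs-utils):

$ showmount -e 192.168.11.196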

Create three new folders on the NFS server (that is, node1):

mkdir -pv /ifs/kubernetes/{pv00001,pv00002,pv00003}

We use dynamic provisioning of PVs.

$ kubectl apply -f class.yaml
$ kubectl apply -f deployment.yaml
$ kubectl apply -f rbac.yaml
$ kubectl apply -f deployment-pvc.yaml

After running the preceding commands, you can view the PV and PVC information:
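The inspection commands are simply (output omitted here):

$ kubectl get pv
$ kubectl get pvc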

class.yaml, deployment.yaml, rbac.yaml, and deployment-pvc.yaml are relatively simple, so they are not listed here. If needed, they can be obtained by messaging the official account.
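For reference, here is a minimal sketch of the StorageClass manifest, assuming the community nfs-client-provisioner is used (the provisioner name below comes from that project and must match the PROVISIONER_NAME configured in deployment.yaml):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage      # referenced by elasticsearch.yaml below
provisioner: fuseim.pri/ifs      # must match PROVISIONER_NAME in deployment.yaml
parameters:
  archiveOnDelete: "false"       # discard (do not archive) data when a PVC is deleted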

Install EFK

Based on the PV storage created above, we start the following services:

$ kubectl apply -f elasticsearch.yaml
$ kubectl apply -f filebeat.yaml
$ kubectl apply -f kibana.yaml

With the commands above, we deploy the three components. Let's look at the configuration file of each one.

elasticsearch.yaml

The resource file for Elasticsearch is as follows:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: elasticsearch
  namespace: kube-system
  labels:
    k8s-app: elasticsearch
spec:
  serviceName: elasticsearch
  selector:
    matchLabels:
      k8s-app: elasticsearch
  template:
    metadata:
      labels:
        k8s-app: elasticsearch
    spec:
      containers:
      - image: elasticsearch:7.3.2
        name: elasticsearch
        resources:
          limits:
            cpu: 1
            memory: 2Gi
          requests:
            cpu: 0.5
            memory: 500Mi
        env:
        - name: "discovery.type"
          value: "single-node"
        - name: ES_JAVA_OPTS
          value: "-Xms512m -Xmx2g"
        ports:
        - containerPort: 9200
          name: db
          protocol: TCP
        volumeMounts:
        - name: elasticsearch-data
          mountPath: /usr/share/elasticsearch/data
  volumeClaimTemplates:
  - metadata:
      name: elasticsearch-data
    spec:
      storageClassName: "managed-nfs-storage"
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 20Gi
---
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
  namespace: kube-system
spec:
  clusterIP: None
  ports:
  - port: 9200
    protocol: TCP
    targetPort: db
  selector:
    k8s-app: elasticsearch

This manifest pins the Elasticsearch version to 7.3.2, allocates 1 CPU core and 2 Gi of memory, mounts a volume from the managed-nfs-storage storage class, and exposes Elasticsearch's db port (9200) through a headless Service.
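As a quick sanity check (a sketch: it assumes the first pod follows the StatefulSet naming convention elasticsearch-0 and that curl is available inside the image):

$ kubectl -n kube-system get pods -l k8s-app=elasticsearch
$ kubectl -n kube-system exec elasticsearch-0 -- curl -s 'http://localhost:9200/_cluster/health?pretty'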

filebeat.yaml

Next, we install Filebeat, a lightweight data collection engine, also deployed on the K8s platform. The configuration file is as follows:

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: kube-system
  labels:
    k8s-app: filebeat
data:
  filebeat.yml: |-
    filebeat.config:
      inputs:
        reload.enabled: false
      modules:
        path: ${path.config}/modules.d/*.yml
    output.elasticsearch:
      hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-inputs
  namespace: kube-system
  labels:
    k8s-app: filebeat
data:
  kubernetes.yml: |-
    - type: docker
      containers.ids:
      - "*"
      processors:
        - add_kubernetes_metadata:
            in_cluster: true
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    k8s-app: filebeat
spec:
  selector:
    matchLabels:
      k8s-app: filebeat
  template:
    metadata:
      labels:
        k8s-app: filebeat
    spec:
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      containers:
      - name: filebeat
        image: elastic/filebeat:7.3.2
        args: [
          "-c", "/etc/filebeat.yml",
          "-e",
        ]
        env:
        - name: ELASTICSEARCH_HOST
          value: elasticsearch
        - name: ELASTICSEARCH_PORT
          value: "9200"
        securityContext:
          runAsUser: 0
          # If using Red Hat OpenShift uncomment this:
          #privileged: true
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - name: config
          mountPath: /etc/filebeat.yml
          readOnly: true
          subPath: filebeat.yml
        - name: inputs
          mountPath: /usr/share/filebeat/inputs.d
          readOnly: true
        - name: data
          mountPath: /usr/share/filebeat/data
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      volumes:
      - name: config
        configMap:
          defaultMode: 0600
          name: filebeat-config
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: inputs
        configMap:
          defaultMode: 0600
          name: filebeat-inputs
      - name: data
        hostPath:
          path: /var/lib/filebeat-data
          type: DirectoryOrCreate
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
- kind: ServiceAccount
  name: filebeat
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: filebeat
  labels:
    k8s-app: filebeat
rules:
- apiGroups: [""] # "" indicates the core API group
  resources:
  - namespaces
  - pods
  verbs:
  - get
  - watch
  - list
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    k8s-app: filebeat
---

The Filebeat image version is 7.3.2, and the data sources are configured through ConfigMaps. Filebeat is based on the Logstash Forwarder source code; in other words, Filebeat is the successor to Logstash Forwarder and is the first choice for the shipper role in the ELK Stack.
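To verify that the DaemonSet has scheduled a Filebeat pod on every node and that those pods start cleanly, a couple of checks (a sketch, not part of the original deployment steps):

$ kubectl -n kube-system get daemonset filebeat
$ kubectl -n kube-system logs -l k8s-app=filebeat --tail=20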

kibana.yaml

Kibana is used for visual presentation of logs. It searches and displays index data stored in Elasticsearch. It makes it easy to display and analyze data with charts, tables and maps.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana
  namespace: kube-system
  labels:
    k8s-app: kibana
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: kibana
  template:
    metadata:
      labels:
        k8s-app: kibana
    spec:
      containers:
      - name: kibana
        image: kibana:7.3.2
        resources:
          limits:
            cpu: 1
            memory: 500Mi
          requests:
            cpu: 0.5
            memory: 200Mi
        env:
        - name: ELASTICSEARCH_HOSTS
          value: http://elasticsearch-0.elasticsearch.kube-system:9200
        - name: I18N_LOCALE
          value: zh-CN
        ports:
        - containerPort: 5601
          name: ui
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: kibana
  namespace: kube-system
spec:
  type: NodePort
  ports:
  - port: 5601
    protocol: TCP
    targetPort: ui
    nodePort: 30601
  selector:
    k8s-app: kibana

Kibana is exposed on NodePort 30601, which is the host port we use for access.

At this point, we check the Pods and Services and see that they are all in the Running state; the three components are installed successfully.
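For reference, the check is simply (output omitted):

$ kubectl -n kube-system get pods -o wide
$ kubectl -n kube-system get svc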

Log query results

With the above components installed, we can build indexes and search the log information.

Access port 30601 on the host IP of the K8s master; the result is as follows:

Next, we need to create an index pattern: filebeat-7.3.2-*
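Before creating the index pattern, you can confirm that Filebeat has already written its indices into Elasticsearch; a hedged check, again assuming the elasticsearch-0 pod name:

$ kubectl -n kube-system exec elasticsearch-0 -- curl -s 'http://localhost:9200/_cat/indices?v'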

After the index is created, you can retrieve the corresponding keywords on the search page.

Pod, namespace, and service logs of K8s can all be retrieved this way, as shown above.

Microservice log retrieval

The above collects and searches logs of K8s components; so how do we collect our microservice logs? First, we deploy a sample microservice, webdemo.

kubectl create deployment webdemo --image=nginx

kubectl expose deployment webdemo --port=80 --target-port=80 --name=webdemo --type=NodePort

The process is shown in the figure above. Let's check whether the resources were created successfully:
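The checks below are one way to do it (kubectl create deployment labels the pods with app=webdemo by default):

$ kubectl get deployment webdemo
$ kubectl get pods -l app=webdemo
$ kubectl get svc webdemo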

To generate logs by simulating browser requests, we first need to look at webdemo's port number:

The Service configuration shows that webdemo's NodePort is 32008. We send a request from the command line:

$ curl http://192.168.11.195:32008/

At the same time, we open the container log of webdemo and check whether the corresponding entries are generated:
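One way to follow the container log is by the Deployment name (a sketch; kubectl picks one pod of the Deployment):

$ kubectl logs -f deploy/webdemo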

You can see that the corresponding log entries are generated, and they can then be retrieved in Kibana.
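Besides the Kibana UI, you can also query Elasticsearch directly for the webdemo entries; a sketch that assumes the elasticsearch-0 pod name and the kubernetes.pod.name field added by the add_kubernetes_metadata processor:

$ kubectl -n kube-system exec elasticsearch-0 -- \
    curl -s 'http://localhost:9200/filebeat-*/_search?q=kubernetes.pod.name:webdemo*&size=1&pretty'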

The logs are retrieved successfully. The configuration above shows that this approach collects microservice logs as expected.

Summary

Over these two articles, we introduced the concepts of cloud-native log collection and the practice of building a platform based on the EFK components, from installation to log retrieval.

Containers in K8s output their logs to the console, and Docker itself provides log collection capability. By building an Elasticsearch + Filebeat + Kibana log platform on K8s, we successfully collected container, K8s component, and microservice logs, and displayed them graphically through Kibana.

To read the latest articles, follow the official account: AOHO Qiusuo.