EFK (Elasticsearch + Fluentd + Kibana) is the official log collection solution for Kubernetes. Let's look at how Fluentd collects Kubernetes cluster logs, and celebrate Fluentd's graduation from CNCF along the way. Before you start, it helps to have read the earlier article on Docker container log analysis; this is the second post in a longer series.

Note that this is different from ELK (Elasticsearch + Logstash + Kibana) and the other EFK variant (Elasticsearch + Filebeat + Kibana), which are usually deployed natively, outside Kubernetes.

CNCF stands for Cloud Native Computing Foundation, which also hosts Kubernetes and most of the major cloud-native container projects.

Deploy EFK

The YAML files used to deploy EFK on Kubernetes are at github.com/kubernetes/… ; you can download them with the script provided in the appendix of this article.

After downloading, deploy everything with cd fluentd-elasticsearch && kubectl apply -f . (note the trailing dot).
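To watch the rollout complete, something like the following works as a minimal sketch; the workload names (elasticsearch-logging, fluentd-es-v2.4.0, kibana-logging) are taken from the manifests and listings in this article, so adjust them if your versions differ:

    # Wait for each EFK workload to become ready
    kubectl -n kube-system rollout status statefulset/elasticsearch-logging
    kubectl -n kube-system rollout status daemonset/fluentd-es-v2.4.0
    kubectl -n kube-system rollout status deployment/kibana-logging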

Check the Elasticsearch and Kibana Services:

    $ kubectl get svc -n kube-system
    NAME                    TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
    elasticsearch-logging   NodePort    10.97.248.209    <none>        9200:32126/TCP   23d
    kibana-logging          ClusterIP   10.103.126.183   <none>        5601/TCP         23d
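With those Services in place, here is a quick sketch for reaching the stack from outside the cluster; the NodePort 32126 and the Service name kibana-logging are taken from the listing above, and <node-ip> is a placeholder for any node's address:

    # Elasticsearch is exposed as a NodePort, so any node IP will answer
    curl 'http://<node-ip>:32126/_cat/indices?v'

    # Kibana is ClusterIP-only; port-forward it locally and browse to http://localhost:5601
    kubectl -n kube-system port-forward svc/kibana-logging 5601:5601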

Check the fluentd DaemonSet:

    $ kubectl get ds -n kube-system
    NAME                DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
    fluentd-es-v2.4.0   2         2         2       2            2           <none>          23d

From this we can see that fluentd runs as a DaemonSet, while Elasticsearch and Kibana are exposed as Services.
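Because fluentd runs as a DaemonSet, a quick sanity check is that one collector pod is scheduled on every (schedulable) node. A simple way to see this, assuming the pod names keep the fluentd-es prefix shown above:

    # -o wide shows which node each fluentd pod landed on
    kubectl -n kube-system get pods -o wide | grep fluentd-es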

Note that the default Elasticsearch deployment is not persistent. If you want the data to survive pod restarts and rescheduling, you need to adjust its storage (PVC) settings.
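A quick way to confirm this, assuming the StatefulSet keeps the addon's default name elasticsearch-logging: if the data volume prints as an emptyDir, the indices are lost whenever the pod is rescheduled, and you would replace it with a PersistentVolumeClaim.

    # Print the volumes of the Elasticsearch pod template; an emptyDir here means no persistence
    kubectl -n kube-system get statefulset elasticsearch-logging \
      -o jsonpath='{.spec.template.spec.volumes}'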

Fluentd function analysis

  1. Check fluentd's workload type; nothing surprising here:

    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: fluentd-es-v2.2.1
      namespace: kube-system
  2. Look at how fluentd mounts the directories it collects logs from:

    containers:
    - name: fluentd-es
      image: k8s.gcr.io/fluentd-elasticsearch:v2.2.0
      ...
      volumeMounts:
      - name: varlog
        mountPath: /var/log
      - name: varlibdockercontainers
        mountPath: /var/lib/docker/containers
        readOnly: true
      - name: config-volume
        mountPath: /etc/fluent/config.d

    ...

    volumes:
    - name: varlog
      hostPath:
        path: /var/log
    - name: varlibdockercontainers
      hostPath:
        path: /var/lib/docker/containers
    - name: config-volume
      configMap:
        name: fluentd-es-config-v0.1.6

    Here you can clearly see that fluentd, running as a DaemonSet, mounts the host's /var/lib/docker/containers directory. As covered in the Docker container log analysis article, this is where Docker stores container logs, which is how fluentd gets to read each container's default stdout/stderr logs.

    The fluentd configuration itself is loaded from a ConfigMap mounted at /etc/fluent/config.d (a sketch for inspecting it follows after this list).

  3. The container log collection configuration

    The containers.input.conf section is what collects container logs; it looks like this:

    <source>
      @id fluentd-containers.log
      @type tail
      path /var/log/containers/*.log
      pos_file /var/log/es-containers.log.pos
      tag raw.kubernetes.*
      read_from_head true
      <parse>
        @type multi_format
        <pattern>
          format json
          time_key time
          time_format %Y-%m-%dT%H:%M:%S.%NZ
        </pattern>
        <pattern>
          format /^(?<time>.+) (?<stream>stdout|stderr) [^ ]* (?<log>.*)$/
          time_format %Y-%m-%dT%H:%M:%S.%N%:z
        </pattern>
      </parse>
    </source>

    If you look carefully, you will notice that the mounted container directory is /var/lib/docker/containers, which is where the logs actually live, yet the path fluentd tails is /var/log/containers. The official config thoughtfully explains this in a comment (a verification sketch also follows after this list); the key part is:

        # Example
        # =======
        #...
        #
        # The Kubernetes fluentd plugin is used to write the Kubernetes metadata to the log
        # record & add labels to the log record if properly configured. This enables users
        # to filter & search logs on any metadata.
        # For example a Docker container's logs might be in the directory:
        #
        # /var/lib/docker/containers/997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b
        #
        # and in the file:
        #
        # 997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b-json.log
        #
        # where 997599971ee6... is the Docker ID of the running container.
        # The Kubernetes kubelet makes a symbolic link to this file on the host machine
        # in the /var/log/containers directory which includes the pod name and the Kubernetes
        # container name:
        #
        #   synthetic-logger-0.25lps-pod_default_synth-lgr-997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b.log
        #   ->
        #   /var/lib/docker/containers/997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b/997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b-json.log
        #
        # The /var/log directory on the host is mapped to the /var/log directory in the container
        # running this instance of Fluentd and we end up collecting the file:
        #
        #   /var/log/containers/synthetic-logger-0.25lps-pod_default_synth-lgr-997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b.log
        #
  4. Logs are uploaded to Elasticsearch:

    output.conf: |-
        <match **>
          @id elasticsearch
          @type elasticsearch
          @log_level info
          type_name _doc
          include_tag_key true
          host elasticsearch-logging
          port 9200
          logstash_format true
          <buffer>
            @type file
            path /var/log/fluentd-buffers/kubernetes.system.buffer
            flush_mode interval
            retry_type exponential_backoff
            flush_thread_count 2
            flush_interval 5s
            retry_forever
            retry_max_interval 30
            chunk_limit_size 2M
            queue_limit_length 8
            overflow_action block
          </buffer>
        </match>

    Note that host and port here must match the elasticsearch-logging Service definition; if you change the Service, keep these values consistent. Fluentd can also ship logs to an Elasticsearch cluster outside Kubernetes, i.e. the natively deployed ELK/EFK kind. A sketch for verifying the pieces above end to end follows below.
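As promised above, here is a small end-to-end verification sketch for points 2 to 4. The ConfigMap name comes from the DaemonSet manifest quoted earlier, the log file name under /var/log/containers is only a placeholder (use whatever ls shows on your node), and the NodePort 32126 comes from the Service listing at the start of the article.

    # Point 2: dump the ConfigMap that fluentd mounts at /etc/fluent/config.d
    kubectl -n kube-system get configmap fluentd-es-config-v0.1.6 -o yaml | less

    # Point 3: on a cluster node, the tailed files are symlinks that resolve
    # (possibly via /var/log/pods) into /var/lib/docker/containers
    ls -l /var/log/containers | head
    readlink -f /var/log/containers/<pod>_<namespace>_<container>-<id>.log

    # Point 4: because logstash_format is enabled, fluentd writes daily
    # logstash-YYYY.MM.DD indices; check that they exist and are growing
    curl 'http://<node-ip>:32126/_cat/indices/logstash-*?v'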

Appendix

  1. Architecture diagram

  2. Download script (download.sh):

    for file in es-service es-statefulset fluentd-es-configmap fluentd-es-ds kibana-deployment kibana-service; do
      curl -o $file.yaml https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/fluentd-elasticsearch/$file.yaml
    done
  3. Reference links:

    • kubernetes.io/docs/concep…

    • kubernetes.io/docs/tasks/…

    • Congratulations to Fluentd on graduating from CNCF