preface

A Deployment-managed Pod allows multiple replicas to run on the same node.

When you need to run a container on every node to collect logs or perform monitoring tasks, starting multiple copies of the Pod on one node is clearly not what you want.

In this scenario, we can use the DaemonSet controller to manage the Pod.

Daemon Pod features

  1. The Pod runs on all or some nodes in the cluster
  2. There can only be one such Pod on each node
  3. When a new node is added to the cluster, a Pod is automatically created on the new node
  4. When a node is removed from the cluster, the Pod on the node is automatically reclaimed

Daemon Pod scenarios

  1. Network plugin agents
  2. Storage plugin agents
  3. Monitoring agents
  4. Log collection agents

DaemonSet

Create a DS

cat daemonset.yaml

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-elasticsearch
  namespace: kube-system
  labels:
    k8s-app: fluentd-logging
spec:
  selector:
    matchLabels:
      name: fluentd-elasticsearch
  template:
    metadata:
      labels:
        name: fluentd-elasticsearch
    spec:
      tolerations:
      # this toleration is to have the daemonset runnable on master nodes
      # remove it if your masters can't run pods
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: fluentd-elasticsearch
        image: quay.io/fluentd_elasticsearch/fluentd:v2.5.2
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers

# kubectl apply -f daemonset.yaml
daemonset.apps/fluentd-elasticsearch created
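
You can then wait until the Pod has been rolled out to every eligible node; kubectl rollout status also works for DaemonSets:

# kubectl rollout status ds/fluentd-elasticsearch --namespace kube-system
daemon set "fluentd-elasticsearch" successfully rolled out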

View the DaemonSet

[root@master01 ~]# kubectl get ds --namespace kube-system
NAME                    DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                 AGE
fluentd-elasticsearch   6         6         6       6            6           <none>                        11m

The cluster currently has 3 masters and 3 workers, 6 nodes in total, and the Pod is running on all 6 of them.
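
To confirm the one-Pod-per-node placement, list the Pods together with their node assignment; the label comes from the manifest above, and the -o wide output adds a NODE column:

# kubectl get pods --namespace kube-system -l name=fluentd-elasticsearch -o wide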

The working process

The DaemonSet controller fetches the list of nodes from etcd and traverses all of them.

It then checks whether a Pod carrying the name=fluentd-elasticsearch label is running on each Node.

The check has three possible outcomes:

  1. The Node has no such Pod: create the Pod on that Node.
  2. The Node has more than one such Pod: delete the extra Pods from that Node.
  3. The Node has exactly one such Pod: the Node is healthy and nothing needs to be done.
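
You can watch this reconciliation in action by deleting one of the Pods and observing the controller create a replacement (the Pod name below is from this example cluster; use one from your own):

# kubectl delete pod fluentd-elasticsearch-22g5r --namespace kube-system
pod "fluentd-elasticsearch-22g5r" deleted
# kubectl get pods --namespace kube-system -l name=fluentd-elasticsearch --watch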

The important parameters

spec.affinity.nodeAffinity

Run the following command:

# kubectl edit pod fluentd-elasticsearch-22g5r --namespace=kube-system

You can see that the DaemonSet controller has automatically added a nodeAffinity field to the Pod:

spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchFields:
          - key: metadata.name
            operator: In
            values:
            - master03

This restricts the Pod to run only on the Node master03.
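
Instead of scrolling through the whole object in an editor, you can extract just the affinity field with a jsonpath query (same Pod name as above):

# kubectl get pod fluentd-elasticsearch-22g5r --namespace kube-system -o jsonpath='{.spec.affinity}'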

tolerations

Using the same command as above, you can see that the Pod also carries these tolerations:

  tolerations:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
  - effect: NoSchedule
    key: node.kubernetes.io/disk-pressure

This means the Pod tolerates the following "taints", so it can run on nodes tainted with master, not-ready, unreachable, or disk-pressure.

Normally, Pods are forbidden from being scheduled onto nodes that carry these taints.

But the DaemonSet controller adds these tolerations to its Pods, allowing them to ignore the taints and be scheduled onto the tainted nodes.

If a node fault causes the Pod to fail to start, the DaemonSet keeps retrying until the Pod starts successfully.
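
To check which taints a node actually carries (the node name is from this example cluster):

# kubectl describe node master01 | grep Taints
Taints:             node-role.kubernetes.io/master:NoSchedule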

Run the Pod on only some nodes

You can manually set spec.template.spec.affinity in the YAML, and the Pod will then run only on the specified Node:

nodeAffinity:
  requiredDuringSchedulingIgnoredDuringExecution:
    nodeSelectorTerms:
    - matchFields:
      - key: metadata.name
        operator: In
        values:
        - target-host-name

If this field is not specified, the Pod runs on all nodes.
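
Another common way to limit a DaemonSet to some nodes, not used in the manifest above, is spec.template.spec.nodeSelector, which matches node labels instead of node names; the node name worker01 and the label disktype=ssd below are only examples:

# kubectl label node worker01 disktype=ssd
node/worker01 labeled

  template:
    spec:
      nodeSelector:
        disktype: ssd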

DaemonSet advantages

We could also write our own daemon processes to perform similar tasks, so what are the advantages of using a DaemonSet Pod?

  1. A DaemonSet Pod is monitored and restarted by Kubernetes itself, saving us from writing our own watchdog for the daemon
  2. DaemonSet Pods are described and managed in one unified way, no matter what task they run
  3. DaemonSet Pods support resource limits, which prevents a long-running daemon from occupying too many resources on the host

conclusion

A DaemonSet is about controlling exactly which machines the Pod runs on, and ensuring the Pod keeps running on those machines.

A Deployment is better suited to managing stateless, user-facing Pods such as web services.

What they have in common is that neither wants its Pods to die: when a Pod fails, it is automatically recreated.

Contact me

WeChat official account: IT Struggling Youth