This is the 12th day of my participation in the Gwen Challenge.

Some concepts

Affinity corresponds to the `pod.spec.affinity` field.

Affinity and anti-affinity

Affinity rules come in two directions: affinity, which means the pod prefers to be scheduled to the target node, and anti-affinity, which means the pod prefers not to be scheduled to the target node.

NodeAffinity and PodAffinity

NodeAffinity is an affinity policy based on node labels; PodAffinity assigns a pod to a node based on the labels of the pods already running on that node.
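As a sketch of the PodAffinity side (not demonstrated later in this section; the label `app: app1` reuses the label from the example pod below), a rule that places a pod onto a node already running a pod labeled app=app1 might look like this:

```yaml
# Sketch: schedule this pod onto a node (grouped by the
# kubernetes.io/hostname topology key, i.e. per node) that
# already runs a pod labeled app=app1
affinity:
  podAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
            - key: app
              operator: In
              values:
                - app1
        topologyKey: kubernetes.io/hostname
```

Note that PodAffinity selectors match pod labels, while the NodeAffinity examples below match node labels.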

Soft policy and hard policy

Soft-policy fields start with `preferred`, meaning the rule should be satisfied if possible; if it cannot be satisfied, the pod is scheduled anyway. When several soft policies are defined, each carries its own weight. Hard-policy fields start with `required`, meaning the rule must be satisfied, otherwise the pod is not scheduled.
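A minimal sketch of weighted soft policies (the `disktype` label is a hypothetical example, not from my cluster): the scheduler sums the weights of the rules each candidate node satisfies and prefers the node with the highest total.

```yaml
# Sketch: two weighted soft rules; a node carrying disktype=ssd
# scores 80, a node that is not k8s-master scores 20, and a node
# satisfying both scores 100
affinity:
  nodeAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 80
        preference:
          matchExpressions:
            - key: disktype        # hypothetical node label
              operator: In
              values:
                - ssd
      - weight: 20
        preference:
          matchExpressions:
            - key: kubernetes.io/hostname
              operator: NotIn
              values:
                - k8s-master
```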

Because the concepts in this section are fairly easy to understand, let's get hands-on. Running `kubectl explain pod.spec.affinity` is recommended to check the meaning of each field.

All of the YAML files below are hosted in my GitHub repository.

NodeAffinity in practice

Hard policy

The following YAML file, test-nodeaffinity-hard.yaml, is used to verify the hard policy of nodeAffinity:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-nodeaffinity-hard
  labels:
    app: app1
spec:
  containers:
    - name: mynginx
      image: mynginx:v2
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: kubernetes.io/hostname
                operator: In
                values:
                  - k8s-node3
```

Here matchExpressions is matched against node labels; the three fields under it are as follows:

| field | type | description |
| --- | --- | --- |
| key | string | Key of the node label |
| operator | string | Comparison operator; one of In / NotIn / Exists / DoesNotExist / Gt / Lt |
| values | list | A set of values evaluated together with the operator above |
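A sketch of the other operators (the `disktype` and `cpu-count` labels are hypothetical): Exists and DoesNotExist take no values, while Gt and Lt compare the label value as an integer.

```yaml
# Sketch of a nodeSelectorTerms entry using the remaining operators
- matchExpressions:
    - key: disktype              # hypothetical label
      operator: Exists           # node merely has to carry the label
    - key: cpu-count             # hypothetical label
      operator: Gt               # label value must be greater than "4"
      values:
        - "4"
```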
Node labels can be viewed as follows:

```
[root@k8s-master affinity]# kubectl get node --show-labels
NAME         STATUS   ROLES    AGE   VERSION    LABELS
k8s-master   Ready    master   14d   v1.15.11   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-master,kubernetes.io/os=linux,node-role.kubernetes.io/master=
k8s-node1    Ready    <none>   14d   v1.15.11   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node1,kubernetes.io/os=linux
```

Neither of my two nodes has the hostname k8s-node3, so under this hard policy there is no node for this pod to be assigned to:

```
[root@k8s-master affinity]# kubectl apply -f test-nodeaffinity-hard.yaml
pod/test-nodeaffinity-hard created
[root@k8s-master affinity]# kubectl get pod -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP       NODE     NOMINATED NODE   READINESS GATES
test-nodeaffinity-hard   0/1     Pending   0          6s    <none>   <none>   <none>           <none>
```

The pod stays Pending. Now modify the pod definition:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-nodeaffinity-hard
  labels:
    app: app1
spec:
  containers:
    - name: mynginx
      image: mynginx:v2
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: kubernetes.io/hostname
                operator: In
                values:
                  - k8s-master
```

Recreate the pod; no matter how many times you recreate it, it will only be scheduled to the master node:

```
[root@k8s-master affinity]# kubectl apply -f test-nodeaffinity-hard.yaml
pod/test-nodeaffinity-hard created
[root@k8s-master affinity]# kubectl get pod -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP            NODE         NOMINATED NODE   READINESS GATES
test-nodeaffinity-hard   1/1     Running   0          5s    10.244.0.19   k8s-master   <none>           <none>
```

I chose the master node here only because I do not have enough nodes; normally, ordinary pods are not scheduled to the master node.