Preface

This article introduces the configuration options for rolling updates of services in Kubernetes. It lists the most common configuration items and explains how they affect the way the scheduler performs rolling updates, and it also explains how the Ready status of a Pod is determined: a Pod is not Ready simply because its containers are running. In short, I think it is a very good introductory article on Kubernetes. I translated it by hand and tried to make it smooth and easy to understand; readers comfortable with English can go straight to the original.

Original title: Kubernetes Deployments Rolling Update Configuration

Published: February 26, 2020

Original link: www.bluematador.com/blog/kubern…

Author: Keilan Jackson

Translator: Kevin Yan

Deployment is a common Pod controller in Kubernetes that provides fine-grained control over Pods: how they are configured, how updates are performed, how many Pods should run, and when Pods should be terminated. There are many resources that teach you how to configure a Deployment, but it can be difficult to understand how each option affects how a rolling update is performed. In this blog post we will cover the following topics to help you become an expert on Kubernetes Deployments:

  • Overview of Kubernetes Deployment;
  • Rolling updates of a Kubernetes service;
  • How to determine whether a Pod is Ready;
  • Pod affinity and anti-affinity.

An overview of Deployment

Deployment is essentially a wrapper around ReplicaSet. ReplicaSet manages the number of running Pods, and on top of it Deployment adds the ability to roll out Pod updates, perform health checks on Pods, and easily roll back updates. During normal operation, a Deployment manages only one ReplicaSet to ensure that the desired number of Pods are running.

In Kubernetes, we should not operate directly on the ReplicaSet created by a Deployment. All operations that would be performed on the ReplicaSet should be performed on the Deployment instead, and the Deployment then manages updates to the ReplicaSet. Here are some examples of kubectl commands that are typically run against a Deployment:

# List all Deployments in the default namespace
kubectl get deploy

# Update the Deployment from a definition file
kubectl apply -f test.yaml

# Monitor the update status of the "test" Deployment
kubectl rollout status deploy/test

# Pause the update process of the "test" Deployment
kubectl rollout pause deploy/test

# Resume the update process of the "test" Deployment
kubectl rollout resume deploy/test

# View the update history of the "test" Deployment
kubectl rollout history deploy/test

# Roll back the most recent update of the "test" Deployment
kubectl rollout undo deploy/test

# Roll the "test" Deployment back to a specific revision
kubectl rollout undo deploy/test --to-revision=1

Pod rolling update

One of the main benefits of using a Deployment to control Pods is the ability to perform rolling updates. Rolling updates allow you to update the Pod configuration gradually, and Deployment offers a number of options to control the rolling update process.

The most important option for controlling rolling updates is the update strategy. In a Deployment's YAML definition, the rolling update strategy is specified by the spec.strategy.type field, which has two possible values:

  • RollingUpdate (default): New Pods are created gradually while old Pods are phased out and replaced by the new ones.
  • Recreate: All old Pods must be fully terminated before any new Pods are created.

In most cases, RollingUpdate is the preferred update strategy for a Deployment. The Recreate strategy can be useful if your Pod must run as a singleton and multiple copies are never allowed to exist at the same time.
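For reference, a minimal sketch of the relevant portion of a Deployment spec that switches to the Recreate strategy might look like this (the replica count here is just a placeholder):

replicas: 1
strategy:
    type: Recreate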

There are also two options that let you fine-tune the update process when using the RollingUpdate strategy:

  • maxSurge: The maximum number of Pods that may be created in excess of the desired state during an update.
  • maxUnavailable: The maximum number of Pods that are allowed to be unavailable during an update.

Both maxSurge and maxUnavailable can be specified as an integer (e.g., 2) or as a percentage (e.g., 50%), and they cannot both be zero. When specified as an integer, the value is the number of Pods allowed to be created in excess or to be unavailable. When specified as a percentage, the number of Pods defined in the desired state is used as the base. For example, if the default value of 25% is used for both maxSurge and maxUnavailable, and an update is applied to a Deployment with 8 Pods, then maxSurge works out to 2 Pods and maxUnavailable works out to 2 Pods. This means that during the update process the following conditions will be met:

  • At most 10 Pods (the 8 Pods specified in the desired state plus the 2 extra Pods allowed by maxSurge) will be Ready during the update.
  • At least 6 Pods (the 8 Pods specified in the desired state minus the 2 Pods that maxUnavailable allows to be unavailable) will always be Ready during the update.

It is important to note that when considering the number of Pods the Deployment should run during an update, the replica count specified in the updated version of the Deployment is used, not the one specified in the desired state of the existing Deployment version.
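As a reference, the percentage-based defaults described above can also be written out explicitly; a minimal sketch matching the 8-replica example might look like this:

replicas: 8
strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%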

There is another way to think about these two options: maxSurge is the maximum number of new Pods that will be created at one time, and maxUnavailable is the maximum number of old Pods that will be deleted at one time. Let's walk through the process of updating a Deployment with 3 replicas from "v1" to "v2" using the following update strategy:

replicas: 3  
strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0

With this update strategy, we want to create one new Pod at a time, and three of the Deployment's Pods should always be Ready. The following GIFs illustrate what happens at each step of the rolling update. If the Deployment sees that a Pod is fully deployed it marks the Pod as Ready, a Pod that is being created is marked as NotReady, and a Pod that is being deleted is marked as Terminating.

How to determine whether a Pod is Ready

Kubernetes itself implements the concept of a Ready Pod to assist with rolling updates. In particular, readiness probes enable a Deployment to update Pods gradually and also give you control over when a rolling update can proceed. They are also used by Services to determine which Pods should be included in a Service's endpoints. Readiness probes are similar to, but not the same as, liveness probes: liveness probes let the kubelet decide which Pods need to be restarted according to their restart policy, and they are configured separately from readiness probes and do not affect a Deployment's rolling update process.

Translator's note: A detailed explanation of readiness and liveness probes can be found in my previous article: Brief Analysis of Kubernetes Pod Restart Policy and Health Check.
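To contrast with the readiness probes shown later, here is a minimal liveness probe sketch (the /healthz path, port, and timings are placeholders, not values from the original article); it only governs container restarts and has no effect on rolling updates:

livenessProbe:
  httpGet:
    path: /healthz
    port: 80
  initialDelaySeconds: 10
  periodSeconds: 15
  failureThreshold: 3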

A Ready Pod is one that has passed its readiness probe test and for which spec.minReadySeconds have elapsed since it was created. The default values of these options cause a Pod to enter the Ready state immediately after its containers start.

In fact, there are several reasons why you generally don't want a Pod to become Ready immediately after its containers start:

  • You want the Pod to pass a health check before it receives traffic.
  • The service needs to warm up before it serves traffic.
  • You want to slow down the deployment to reduce the impact on the running system.

For web applications, requiring a health check is very common and is critical to performing updates with minimal disruption. Here is a readiness probe configured as a health check for a web application:

readinessProbe:
  periodSeconds: 15
  timeoutSeconds: 2
  successThreshold: 2
  failureThreshold: 2
  httpGet:
    path: /health
    port: 80

This probe requires that a call to /health on port 80 complete successfully within 2 seconds, runs every 15 seconds, and requires 2 consecutive successful calls before the Pod becomes Ready. This means that in the best case the Pod will be Ready after about 30 seconds. Many applications cannot serve even simple requests immediately after launch, so you should expect the first one or two checks to fail, in which case it actually takes about 60 seconds for the Pod to become Ready.

You can also configure a readiness probe that executes a command in the container. This allows you to write a custom executable script that determines whether the Pod is ready and whether the Deployment can proceed with the rolling update:

readinessProbe:
  exec:
    command:
      - /startup.sh
  initialDelaySeconds: 5
  periodSeconds: 15
  successThreshold: 1

In this configuration, the probe waits 5 seconds after the Pod is successfully created and then runs the command every 15 seconds. A script exit code of 0 is considered a success. The flexibility of a command script allows you to do things like load data into a cache, warm up a JVM, or perform health checks on downstream services, all without modifying application code.
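The /startup.sh script itself is whatever you choose to write. As a rough sketch (the local /health endpoint is hypothetical and this assumes curl is available in the container image), it might simply exit 0 once the application answers locally:

#!/bin/sh
# Hypothetical readiness script: succeed only when the app responds locally.
# The /health endpoint and port 80 are placeholders for your own service.
if curl -sf http://localhost:80/health > /dev/null; then
    exit 0
fi
exit 1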

The final scenario we'll discuss is deliberately slowing down the update process to minimize the impact on the system. It may not seem necessary at first glance, but it can be useful in a number of situations, including event-processing systems, monitoring tools, and Pods that take a long time to warm up. This is easily achieved by specifying the spec.minReadySeconds field in the Deployment definition. When minReadySeconds is specified, a Pod must run for that many seconds without any of its containers crashing before the Deployment considers it Ready.

For example, suppose a Deployment manages five Pod replicas that read from an event stream, process the events, and then store the data in a database. Each Pod needs 60 seconds to warm up before it can handle events at full speed. With the default option values, Pods become Ready immediately after creation, but they will be slow to process events during their first minute, and after the update completes events will pile up in the event stream because all the Pods warmed up at the same time. Instead, you can set maxSurge to 1, maxUnavailable to 0, and minReadySeconds to 60. This ensures that new Pods are created one at a time, that a new Pod is not considered Ready until it has warmed up for a minute, and that an old Pod is not deleted until the new Pod is Ready. This way, all Pods are updated in about five minutes and the service remains stable during the update.
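Put together, the update-related portion of such a Deployment's spec might look like the following sketch (the replica count of 5 matches the example above):

replicas: 5
minReadySeconds: 60
strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0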

Pod Affinity and Anti-Affinity

The podAffinity and podAntiAffinity configurations give you control over which nodes a Deployment's Pods are scheduled to. Although this capability is not specific to the Deployment controller, it can be very useful for many applications. When configuring podAffinity or podAntiAffinity, you must choose how the scheduling preference applies under different conditions. There are two options:

  • requiredDuringSchedulingIgnoredDuringExecution: The Pod will only be scheduled onto a node that matches the affinity configuration; if no node matches, scheduling simply fails rather than placing the Pod on a non-matching node.
  • preferredDuringSchedulingIgnoredDuringExecution: The scheduler will try to schedule the Pod onto a node that matches the configuration, but if it cannot, it will still schedule the Pod onto another node.

podAffinity is used to schedule Pods onto particular nodes. A podAffinity configuration is usually needed if you know that a Pod has specific resource requirements (for example, it can only run on a particular set of GPU-equipped nodes, or on nodes in a certain zone), or if you want Pods to be co-located on a node with certain other Pods:

affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - cache
        topologyKey: "kubernetes.io/hostname"
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: app
              operator: In
              values:
              - web
          topologyKey: kubernetes.io/hostname

In contrast, podAntiAffinity is useful for ensuring that Pods belonging to the same Deployment are not scheduled onto the same node. The example above prefers to deploy Pods labeled app: web onto different nodes, reducing the likelihood that all of the service's Pods fail at the same time because of a node failure.

When using an affinity configuration, it is important to note that affinity rules are evaluated at the time a Pod is scheduled. The scheduler cannot foresee where Pods will be scheduled later, which means the affinity configuration may not have the effect you expect. Consider a three-node cluster, a Deployment with three Pod replicas that uses the sample affinity rules above, and maxSurge configured to 1. Your desired scheduling outcome might be to run one of the Deployment's Pods on each node, but because maxSurge is set to 1 the scheduler can only create one new Pod at a time during a rolling update. This means that over time you may end up with a node that has none of these Pods after an update, and then all or most of the Pods move to that node on the next update. The scheduler does not know that you are about to terminate the old Pods and still counts them when evaluating the affinity rules, which is what leads to the situation described above. If you really need to run exactly one Pod replica on each node, use the DaemonSet controller instead. Another option, if it is acceptable for your application, is to change the update strategy to Recreate; that way, the old Pods are no longer counted when the scheduler evaluates the affinity rules.
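For reference, a minimal DaemonSet sketch that runs one such Pod per node could look roughly like this (the name, label, and image are placeholders, not values from the original article):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: example/web:latest   # placeholder image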

podAffinity and podAntiAffinity have many options that influence how Pods are scheduled, but there is generally no guarantee that a rolling update will satisfy the rules. This is a very useful feature in some cases, but unless you really need to control where Pods run, you should let the Kubernetes scheduler make these decisions. Detailed documentation of podAffinity and podAntiAffinity can be found here.

Conclusion

We have covered the basic usage of Deployment, how rolling updates work, and the many configuration options for fine-tuning updates and Pod scheduling. At this point, you should be able to confidently create and modify Deployment definition files to achieve the desired state for your application using update strategies, readiness probes, and affinity. For a detailed reference to all the options supported by Deployment, see the Kubernetes documentation.

If you like my articles, please give me a thumbs up. Every week I share what I have learned and seen, along with first-hand experience, in technical articles. Thank you for your support. Search WeChat and follow my public account "network management talking BI talking", where I teach one piece of advanced knowledge every week; there is also a Kubernetes tutorial written specifically for development engineers.