This is the 15th day of my participation in the First Challenge 2022.

K8s Learn 25-3: Upgrading Applications with Deployments (Part 2)

Last time we talked about upgrading pods by manipulating ReplicaSets (RS) manually. That approach was complicated and error-prone; in real production it would never be done this way, because it is simply too dangerous.

So today let's look at upgrading an application with a Deployment instead.

Upgrading the application with a Deployment

Thinking back, the previous approach was quite tedious: we had to switch the traffic ourselves, create the new RS ourselves, and finally delete the old RS ourselves. Very troublesome.

Let's play with a higher-level resource that makes this easier. To keep the following example clear, we delete all the ReplicaSets from before and keep only the Service for later use.

A Deployment upgrades the application declaratively, rather than through an RS or RC directly.

Creating a Deployment resource actually creates an RS resource as well, so what is the Deployment for? We can think of it as coordinating those resources.

A hands-on demo

Create a Deployment manifest, mydeploy.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: newkubia
spec:
  replicas: 3
  selector:
    matchLabels:
      app: newkubia
  template:
    metadata:
      name: newkubia
      labels:
        app: newkubia
    spec:
      containers:
      - image: xiaomotong888/newkubia:v1
        name: newkubia
  • We create a Deployment
  • Named newkubia
  • With 3 replicas
  • The matching pod label is app: newkubia
  • The template the RS uses to create pods defines a container named newkubia
  • The pod template's label is also app: newkubia

kubectl create -f mydeploy.yaml --record

Now we create the Deployment with the command above. Like other resources, Deployment can be abbreviated to deploy.

What does --record mean here? With this flag, the Deployment records the command in its revision history, which we can view later.

Of course, we can also use the following command to check the status of deploy

kubectl rollout status deploy newkubia

Of course, kubectl describe, edit, and get all work with deploy as well.

We can check whether an RS has been created for us.
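For example (the hash suffix in the RS name will differ on your cluster; the output below is illustrative):

```shell
kubectl get rs
# NAME                  DESIRED   CURRENT   READY
# newkubia-6597db5b9c   3         3         3
```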

The pods above are actually created by this RS. We can tell from the RS and pod names.

In the above examples:

RS:

newkubia-6597db5b9c

Pod:

newkubia-6597db5b9c-*

That random string is a hash computed from the pod template. The Deployment uses it in the RS name, and each pod name appends a further random suffix to it.
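We can see this hash directly: Kubernetes also stamps it onto the pods as the pod-template-hash label. A quick check (label values on your cluster will differ):

```shell
kubectl get pods --show-labels
# each pod carries a label pair like:
#   app=newkubia,pod-template-hash=6597db5b9c
kubectl get rs --show-labels
```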

Now let's change the selector label of the Service from the first example to newkubia, exec into any pod, access the Service address, and see what happens.
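A sketch of that step, assuming the Service from the earlier article is named kubia (substitute your own Service and pod names):

```shell
# point the Service at the new pods by changing its selector
kubectl patch svc kubia -p '{"spec":{"selector":{"app":"newkubia"}}}'

# exec into any pod and hit the Service by its cluster-internal DNS name
kubectl exec -it <any-pod-name> -- curl http://kubia
```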

That is exactly the result we expected.

Upgrade the application using Deployment

For Deployment upgrade applications, we need to know that Deployment involves two upgrade strategies:

  • RollingUpdate

A rolling upgrade strategy that incrementally removes old pods and creates new ones, all while keeping our service available. This is the default deploy upgrade strategy

  • Recreate

All old pods are deleted at once, and then the new pods are created. This strategy is similar to the approach we discussed before; in effect, our service is interrupted for a period of time, depending on how fast the new pods come up.
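The strategy is configured under spec.strategy; a sketch of both forms (the maxSurge/maxUnavailable values shown are the RollingUpdate defaults):

```yaml
spec:
  strategy:
    type: RollingUpdate        # the default
    rollingUpdate:
      maxSurge: 25%            # how many extra pods may exist during the update
      maxUnavailable: 25%      # how many pods may be unavailable during the update
---
# or, to delete everything first and then recreate:
spec:
  strategy:
    type: Recreate
```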

In order to visually watch the upgrade as it happens, we increase minReadySeconds so that each step pauses long enough for us to see it clearly.

Run kubectl edit deploy newkubia, add minReadySeconds: 10 under spec, and save.

Of course, K8S also provides another patch method to easily modify a few fields in YAML:

kubectl patch deploy newkubia -p '{"spec":{"minReadySeconds":10}}'

Triggering the upgrade (for example, by changing the container image to the v2 version) makes the Deployment first create a pod of the new version, then kill an old one, and repeat until every pod has been upgraded.
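A common way to trigger the rolling update is to point the Deployment at the new image with kubectl set image; a sketch, assuming the v2 image is tagged xiaomotong888/newkubia:v2:

```shell
kubectl set image deploy newkubia newkubia=xiaomotong888/newkubia:v2

# watch the rollout replace pods one by one
kubectl rollout status deploy newkubia
```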

So let's exec into any container, access the Service, and see what happens.

We can see that the Service now responds with the v2 version, which once again shows that our upgrade succeeded.

We never explicitly set an upgrade strategy on the Deployment; the default is RollingUpdate. We can confirm this in the details of the deploy we created.
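For example:

```shell
kubectl describe deploy newkubia
# look for a line like:
#   StrategyType:  RollingUpdate
```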

During the whole upgrade we executed just one command. There was no need to manually add or delete RSs, and no need to modify the Service's selector; the Deployment completed the upgrade for us in one step. Super simple and very efficient.

Let's talk about rollback

We won't only ever upgrade. If something is wrong with the new version, don't we need a way to go back to the old one?

How do we do that? Do we repeat the whole upgrade procedure in reverse? Of course not.

Rollback can be just as simple as the upgrade was.

To roll back, we can simply execute:

kubectl rollout undo deploy newkubia

Then check the status of the rollout:

kubectl rollout status deploy newkubia

The rollback succeeded. Let's check the RSs and pods again; note the patterns in the pod and RS names.

So how do we roll back to a specific revision?

Because we passed --record when creating the Deployment, the revision history was recorded. We can view it with:

kubectl rollout history deploy newkubia

To roll back to a specific revision:

kubectl rollout undo deploy newkubia --to-revision=xx

Rolling back to a specified revision also succeeds. As before, we can check the RS and pod name patterns, then exec into a pod and access the Service address to confirm the result is what we expect.

Looking at all this, you may have these questions:

  • Why does the RS from before the v2 upgrade still exist?
  • Why does the Deployment have upgrade records?

Here is my understanding:

The Deployment has upgrade records because we specified --record when creating it, so it records information each time the version changes.

The original RS still exists after the upgrade, and this is not hard to understand: it is kept so that when we roll back, or jump to a specified revision, the Deployment can reuse the original RS directly, and under the hood it only needs to adjust the replica counts.
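We can watch the Deployment shuffle replicas between the two RSs; a sketch (hash suffixes are illustrative, and the v2 hash is a placeholder):

```shell
kubectl get rs
# after rolling back to v1, the replicas move back to the original RS:
# NAME                  DESIRED   CURRENT   READY
# newkubia-6597db5b9c   3         3         3
# newkubia-<v2-hash>    0         0         0
```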

The whole process is managed like this:

A Deployment manages multiple RSs, and each RS manages multiple pods

The Deployment keeps one RS per version, which we can understand as follows:

Initially the Deployment manages one ReplicaSet, RS1, which manages multiple v1 pods

When we upgrade to v2, the Deployment creates RS2, which manages the v2 pods, and RS1 remains (scaled down to 0)

Rolling back is similar, except that instead of creating a new RS, the Deployment reuses the RS of the target version, for example v1

That's all for today. I'm still learning, so if anything here is off, please correct me.

Feel free to like, follow, and favorite

Friends, your support and encouragement are what keep me sharing and improving.

All right, that’s it for this time

Technology is open, our mentality, should be more open. Embrace change, live in the sun, and strive to move forward.

I am Nezha, welcome to like, see you next time ~