I. Understanding the application deployment process

In this section, building on the Nginx deployment from the previous section, we analyze the phases of the application deployment process; this is also helpful for later operations and troubleshooting.

Let's first review our overall K8s infrastructure:

  • The master node contains:
    • kube-apiserver: the access entry point of the cluster; it receives commands and resource definitions and communicates frequently with etcd
    • etcd: stores the cluster state, node data, and other cluster information
    • kube-scheduler: responsible for scheduling Pods onto nodes and monitoring node status
    • kube-controller-manager: monitors cluster state and controls nodes, replicas, endpoints, service accounts, tokens, etc.
    • cloud-controller-manager: manages the nodes, routes, services, and volumes that interact with the underlying cloud provider
  • The worker nodes contain:
    • kubelet: responsible for running and managing the Pods on the node
    • kube-proxy: the network proxy running on the node, maintaining the node's network forwarding rules
    • Container runtime (CRI): the container technology that actually runs containers; K8s supports multiple CRI implementations (not limited to Docker)

The diagram shows a simple process overview:

The overall steps are roughly as follows (just my personal understanding):

1: Deployment resource object

Deployment is a K8s resource object that provides a convenient mechanism for defining and updating ReplicaSets/RCs and Pods, solving Pod orchestration problems. You can think of a Deployment as a mechanism for managing Pods.

1.1 Deployment Features:

  • You can define the desired number of replicas for a set of Pods

  • You can view the application's deployment time and status

  • The Pod count is maintained through its controller (making it easy to roll back to a previous version, or to automatically recover failed Pods when something goes wrong)

  • You can specify rolling-upgrade and version-control policies

    • A rolling upgrade means upgrading Pods one by one, avoiding the service interruption that would be caused by upgrading all Pods at the same time (a minimal sketch of the corresponding strategy fields is shown after the notes below).
  • Used to deploy stateless applications

  • Manages Pods and ReplicaSets

PS: A stateless application does not depend on client data from the previous session; the next request does not need to carry that data and the application can still be accessed.

A stateful application relies on client session data, and each new request needs to carry the original data.
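As referenced above, here is a minimal sketch of a Deployment that sets replicas and a rolling-update strategy; the name, labels, and values are illustrative assumptions rather than part of the original example:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-deployment          # hypothetical name, for illustration only
spec:
  replicas: 3                    # desired number of Pod replicas
  selector:
    matchLabels:
      app: demo
  strategy:
    type: RollingUpdate          # upgrade Pods gradually instead of all at once
    rollingUpdate:
      maxSurge: 1                # at most 1 extra Pod above the desired count during an upgrade
      maxUnavailable: 1          # at most 1 Pod may be unavailable during an upgrade
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
      - name: demo
        image: nginx:1.14.2
        ports:
        - containerPort: 80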

1.2 Deployment and ReplicaSet:

2: Pod resource object

  • A Pod is a way of organizing containers
  • A Pod can consist of a single container or multiple containers
  • All containers in a Pod share the same network (via the root/pause container)
  • It is the smallest scheduling unit in K8s
  • In Kubernetes, a Pod represents a group of one or more application containers
  • Pods run on worker nodes
  • Pods are managed by the kubelet process running on the worker node
  • Each Pod has its own IP address
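As a concrete illustration of these points, here is a minimal sketch of a Pod with two containers sharing the same network namespace (the names and images are illustrative assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: demo-pod                 # hypothetical name, for illustration
  labels:
    app: demo
spec:
  containers:
  - name: web
    image: nginx:1.14.2
    ports:
    - containerPort: 80
  - name: sidecar                # a second container in the same Pod
    image: busybox:1.36
    # both containers share the Pod's network namespace,
    # so this container can reach nginx at localhost:80
    command: ["sh", "-c", "sleep 3600"]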

2.1 Pod scheduling process:


  • 1. Submit a Replication Controller (RC) creation request via kubectl (essentially the YAML resource description). The request passes through the API Server and is written to the etcd database.

  • 2. The Controller Manager listens for the RC event and analyzes the Pod instances against the current node situation. If the cluster does not yet contain the Pod objects defined in the RC template, it generates the Pod instances described by the template, and the instance information is written to etcd via the API Server.

  • 3. The Scheduler detects the new Pod instances that need placement and runs the scheduling process to select suitable nodes. After placement succeeds, the Scheduler writes the result (the node binding information) back to etcd through the API Server.

  • 4. The kubelet process running on the target node watches the Pod placement through the API Server and starts the Pod according to the information defined in the template.

2.2 Pod creation process:

  • 1: First, the RC template defines our Pod information
  • 2: The Controller Manager listens for RC events, creates the Pods, and stores them in etcd
  • 3: Under the Scheduler's management, the master schedules each Pod and binds it to a node
  • 4: The Pod is instantiated by the kubelet process on that node
  • If a Pod stops abnormally, K8s automatically detects this and restarts the Pod
  • If the node where a Pod resides fails, the system automatically reschedules the Pod from the faulty node to other nodes to keep the expected number of Pod replicas

2.3 Pod access process (the Service object):

Service is one of the K8s resource objects.

  • It exists mainly to provide a single stable access address for a set of Pods with the same identity, and to define the access policy for that group of Pods

  • It uses a unique address (ClusterIP) that, by default, can only be accessed by containers inside the cluster

  • A Service selects its Pod object instances by defining a Label Selector (a label query rule)

  • It acts as a load balancer for Pods, providing one or more stable access addresses for them

  • It supports multiple types (e.g., ClusterIP, NodePort, LoadBalancer)


  • 1. Create a Service resource via kubectl and map it to the Pods

  • 2. The Controller Manager queries the associated Pod instances using the Label we define, generates the Endpoints for the current Pod set, and writes them to etcd via the API Server

  • 3. The kube-proxy process running on each node watches the Service objects and their Pod bindings (the Endpoints information) through the API Server, and sets up something akin to a load balancer to forward traffic from the Service to the backend Pods
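A minimal sketch of such a Service, assuming the backing Pods are labelled app: nginx (the name and ports are illustrative assumptions):

apiVersion: v1
kind: Service
metadata:
  name: nginx-service            # hypothetical name
spec:
  type: ClusterIP                # the default type: an internal virtual IP reachable only inside the cluster
  selector:
    app: nginx                   # Label Selector: selects all Pods labelled app=nginx
  ports:
  - port: 80                     # the Service port on the ClusterIP
    targetPort: 80               # the containerPort on the backing Pods
    protocol: TCP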

2.4 Service and Label binding for Pods:

Functions of Label:

  • Labels tag resource objects so that they can be grouped, queried, and filtered

Label:

  • You can attach labels to Pods, Services, RCs, Nodes, and so on

  • A resource object can define multiple labels

  • A label can be attached to multiple resource objects

  • After a label is created, it can be added and deleted dynamically

  • The associations between Services, Deployments, and Pods are all implemented via labels

  • Each Node can also carry labels

  • Pods can be scheduled to nodes with matching labels by setting a label-based policy (nodeSelector)

2.5 Probes: the health-monitoring mechanism for Pod resource objects:

The official documentation for probes:

kubernetes.io/docs/tasks/…

kubernetes.io/…/docs/tas…

  • StartupProbe:

    • If a startupProbe is configured, all other probes are disabled until it succeeds; once it has succeeded, it is not run again. It is a mechanism for checking the startup state of a Pod's containers.
  • LivenessProbe (liveness check):

    • It detects whether the container is still running

    • The health of the Pod object is determined according to user-defined rules. If the livenessProbe detects that the container is unhealthy, the kubelet decides whether to restart the unhealthy container based on the Pod's restart policy

    • If a container does not define a livenessProbe, the kubelet assumes that the probe always returns success

  • ReadinessProbe (readiness check):

    • It detects whether the application inside the container is healthy. If it returns success, the container has started and the application is ready to accept traffic

    • The health of the Pod object is determined according to user-defined rules. If the probe fails, the endpoints controller removes the Pod from the endpoint list of the corresponding Service, and no requests are dispatched to it until a later probe succeeds
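A minimal sketch showing the three probes configured on a container (the paths, ports, and timing values are illustrative assumptions, not values from the original deployment):

apiVersion: v1
kind: Pod
metadata:
  name: probe-demo               # hypothetical name
spec:
  containers:
  - name: nginx
    image: nginx:1.14.2
    ports:
    - containerPort: 80
    startupProbe:                # other probes are disabled until this succeeds; it never runs again afterwards
      httpGet:
        path: /
        port: 80
      failureThreshold: 30
      periodSeconds: 10
    livenessProbe:               # if this fails, the kubelet restarts the container per the restart policy
      tcpSocket:
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10
    readinessProbe:              # if this fails, the Pod is removed from the Service's endpoint list
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 5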

The role that Labels play between Pods and Services:

  • When a Pod is defined, you can set corresponding Labels on it.

  • When you define a Service, you give it a Label Selector.

  • During access, the Service's load balancing works through the kube-proxy process on each node: it selects the corresponding group of Pods via the Label Selector defined in the Service, and the request-forwarding routing table from the Service to its bound Pods is created automatically.

3: Several forms of controller

  • Deployment: stateless application deployment
  • StatefulSet: stateful application deployment
  • DaemonSet: ensures that every node runs a copy of a specified Pod
  • Job: a one-off task resource type
  • CronJob: a scheduled (cron-style) task resource type
  • ReplicaSet: ensures the expected number of Pod replicas (when using a Deployment, we should not manually manage the ReplicaSets it creates)
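As a small illustration of one of the controller types listed above, here is a minimal sketch of a Job (the name, image, and command are illustrative assumptions):

apiVersion: batch/v1
kind: Job
metadata:
  name: one-off-task             # hypothetical name
spec:
  completions: 1                 # run the task to completion once
  backoffLimit: 3                # retry at most 3 times on failure
  template:
    spec:
      restartPolicy: Never       # Jobs require Never or OnFailure
      containers:
      - name: task
        image: busybox:1.36
        command: ["sh", "-c", "echo hello from the job && sleep 5"]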

II. Understanding the application deployment YAML file

From the previous section we can see that deploying an application and creating the related resources basically comes down to writing a YAML file and then applying it. A YAML file is essentially a description of K8s resource objects, so it is necessary to understand the fields of our YAML files:

1: Nginx example parsing:

Using YAML files for application deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2 # tells the Deployment to run 2 Pods matching the template
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80

Field description:

  • apiVersion Mandatory. A string tied to the version of K8s we are using; it usually needs to match the currently installed Kubernetes version and the resource type

  • kind Mandatory. A string giving the type of the resource. Common types include:

    • Deployment
    • Job
    • Ingress
    • Service
  • metadata Mandatory. The metadata describing the resource of the current kind

    • name The name of the resource
    • namespace Mandatory. A string specifying the namespace the current resource belongs to
    • labels Optional: user-defined labels
      • xxxxx: xxxx the name and value of a user-defined label
    • annotations A list of custom annotations
      • xxxxxxx
  • spec The detailed definition of the resource (the resource specification)

    • selector The resource's selector (uses labels to select the Pod replicas it owns)
      • matchLabels: # the labels used by the selector for matching
        • app: nginx # the label name and value
  • replicas The desired number of Pod replicas for the current resource

    • strategy The replica update strategy:
      • rollingUpdate The rolling-update policy: since replicas is 2, the number of Pods will stay between 2 and 3 during an update
      • maxSurge The maximum number of extra Pods allowed above the desired count during a rolling upgrade
      • maxUnavailable The maximum number of unavailable Pods allowed during a rolling upgrade
  • template The Pod template used by the resource (mainly the image and container metadata description)

    • metadata: the metadata contained in the template

      • labels The template's labels
        • xxxxx: xxxx the template label name and value
    • spec # the container information under the current resource template; it can contain multiple containers

      • containers: all of the container definitions (similar to docker)
        • name: # container name

        • image: # image used by the container

        • imagePullPolicy: [Always | Never | IfNotPresent] # image pull policy: Always means always pull the image, IfNotPresent means prefer the local image and pull only if it is absent, Never means only ever use the local image

        • workingDir: the working directory of the container

        • command: the command to run in the container

        • args: the arguments passed to the container's command

        • ports: the port settings

          • name: xxxxx the container port name
          • containerPort The port number the container listens on
          • hostPort: int # the port number on the container's host; defaults to the same value as containerPort
        • protocol: TCP # the port protocol, TCP or UDP; TCP is the default
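To tie several of these container fields together, here is a minimal sketch of a Pod exercising imagePullPolicy, workingDir, command/args, env, and ports (all names and values are illustrative assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: field-demo               # hypothetical name
spec:
  containers:
  - name: app
    image: busybox:1.36
    imagePullPolicy: IfNotPresent        # prefer the local image, pull only if it is absent
    workingDir: /tmp                     # the container's working directory
    command: ["sh", "-c"]                # the command run in the container
    args: ["echo $GREETING from $PWD && sleep 3600"]   # arguments passed to the command
    env:
    - name: GREETING                     # an environment variable available to the container
      value: "hello"
    ports:
    - name: http                         # container port name
      containerPort: 8080                # the port the container would listen on
      protocol: TCP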

2: Complete YAML-format Pod definition file (reference)

The following is excerpted (thanks to the original author for the summary): blog.csdn.net/w_y_x_y/art…

apiVersion: v1              # Mandatory, the API version
kind: Pod                   # Mandatory, the resource type: Pod
metadata:                   # Mandatory, metadata
  name: string              # Mandatory, the Pod's name
  namespace: string         # Mandatory, the namespace the Pod belongs to
  labels:                   # Custom labels
    - name: string
  annotations:              # Custom annotation list
    - name: string
spec:                       # Mandatory, detailed definition of the containers in the Pod
  containers:               # Mandatory, list of containers in the Pod
  - name: string            # Mandatory, container name
    image: string           # Mandatory, the container's image
    imagePullPolicy: [Always | Never | IfNotPresent]  # Image pull policy: Always means always pull the image, IfNotPresent means prefer the local image and pull only if it is absent, Never means only use the local image
    command: [string]       # The container's startup command
    args: [string]          # Arguments for the startup command
    workingDir: string      # The working directory of the container
    volumeMounts:           # Storage volumes mounted inside the container
    - name: string          # Name of a shared storage volume defined in the Pod's volumes section
      mountPath: string     # Absolute path where the volume is mounted in the container; must be less than 512 characters
      readOnly: boolean     # Whether the volume is mounted in read-only mode
    ports:                  # Ports to expose
    - name: string          # Port name
      containerPort: int    # Port number the container needs to listen on
      hostPort: int         # Port number the container's host listens on; defaults to the same as containerPort
      protocol: string      # Port protocol, supports TCP and UDP; default is TCP
    env:                    # Environment variables to set before the container runs
    - name: string          # Name of the environment variable
      value: string         # Value of the environment variable
    resources:              # Resource limits and requests
      limits:               # Resource limits
        cpu: string         # CPU limit, in cores; used for the docker run --cpu-shares parameter
        memory: string      # Memory limit, in Mib/Gib; used for the docker run --memory parameter
      requests:             # Resource requests
        cpu: string         # CPU request, the initial amount available when the container starts
        memory: string      # Memory request, the initial amount available when the container starts
    livenessProbe:          # Health check for the containers in the Pod; the check methods are exec, httpGet and tcpSocket
      exec:                 # exec mode: run a command inside the container
        command: [string]
      httpGet:              # httpGet mode
        path: string
        port: number
        host: string
        scheme: string
        httpHeaders:
        - name: string
          value: string
      tcpSocket:            # tcpSocket mode
        port: number
      timeoutSeconds: 0     # Timeout for waiting for a response to the health check probe, in seconds; default 1 second
      periodSeconds: 0      # How often the periodic probe runs, in seconds; default 10 seconds
    securityContext:
      privileged: false
  restartPolicy: [Always | Never | OnFailure]  # Pod restart policy: Always means the kubelet restarts it no matter how it terminated, OnFailure restarts only when the Pod exits with a non-zero code, Never never restarts it
  nodeSelector: object      # Schedule the Pod onto nodes carrying these labels, specified in key: value format
  imagePullSecrets:         # Secret names used when pulling the image, specified in key: secretkey format
  - name: string
  hostNetwork: false        # Whether to use the host network; default is false; if set to true, the Pod uses the host's network
  volumes:                  # Shared storage volumes defined on the Pod
  - name: string            # Volume name
    emptyDir: {}            # A volume of type emptyDir: a temporary directory with the same lifecycle as the Pod
    hostPath:               # A volume of type hostPath: mounts a directory from the Pod's host
      path: string
    secret:                 # A volume of type secret
      secretname: string
      items:
      - key: string
        path: string
    configMap:              # A volume of type configMap
      name: string
      items:
      - key: string
        path: string

III. Some validations of the application

1. Failover test validation

Previously, all of our nodes were in a normal state, and our Pods were distributed across their original nodes, for example:

Check the Pod information:


[root@k81-master01 k8s-install]# kubectl get pods -o wide


The main focus is on the Pods running on Node2:

After shutting down our Node2 node (which will take a while), watch our Pods:

And take a look at our corresponding node:

Node2 has failed, and Node3 is now running 4 Pods. After a while:

Even though our Node2 node was later restored, the Pods were not automatically rescheduled back onto Node2.

2. Capacity expansion test verification

Take a look at the current Pods of our nginx-deployment Deployment:

[root@k81-master01 k8s-install]# kubectl get pods -o wide
NAME                                READY   STATUS    RESTARTS   AGE     IP            NODE         NOMINATED NODE   READINESS GATES
chaoge-nginx-586ddcf5c4-bg8f4       1/1     Running   0          12m     10.244.2.13   k81-node03   <none>           <none>
nginx-deployment-574b87c764-4q5ff   1/1     Running   0          3d20h   10.244.2.11   k81-node03   <none>           <none>
nginx-deployment-574b87c764-745gt   1/1     Running   0          3d20h   10.244.2.10   k81-node03   <none>           <none>
nginx-deployment-574b87c764-cz548   1/1     Running   0          9m54s   10.244.2.15   k81-node03   <none>           <none>
nginx-deployment-574b87c764-j9bps   1/1     Running   0          9m54s   10.244.2.16   k81-node03   <none>           <none>
zyx-nginx-5c559d5697-dhz2z          1/1     Running   0          5d22h   10.244.2.3    k81-node03   <none>           <none>
zyx-nginx-5c559d5697-qvkjl          1/1     Running   0          9m54s   10.244.2.14   k81-node03   <none>           <none>
[root@k81-master01 k8s-install]#

We currently have 4 nginx-deployment Pods running on Node3, so now let's scale up.

Scale up by running the command:

[root@k81-master01 k8s-install]# kubectl scale deployment nginx-deployment --replicas=5
deployment.apps/nginx-deployment scaled
[root@k81-master01 k8s-install]#

A new Pod is scheduled onto Node2:

3. Verifying that the replica count is maintained

Delete a Pod on Node3 and observe:


[root@k81-master01 k8s-install]# kubectl delete pods nginx-deployment-574b87c764-4q5ff
pod "nginx-deployment-574b87c764-4q5ff" deleted


To view:

4. Deploying a Pod to a specified node

Sometimes we need certain services to run on a specific node. In that case, we need to use labels to achieve this:

  • Step 1: Label the nodes
  • Step 2: When deploying the application, use nodeSelector to specify the node's label.

Example:

Run the command to view the labels on all of the nodes:

[root@k81-master01 k8s-install]# kubectl get nodes --show-labels
NAME           STATUS   ROLES    AGE    VERSION   LABELS
k81-master01   Ready    master   6d2h   v1.16.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k81-master01,kubernetes.io/os=linux,node-role.kubernetes.io/master=
k81-node02     Ready    <none>   6d2h   v1.16.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k81-node02,kubernetes.io/os=linux
k81-node03     Ready    <none>   6d2h   v1.16.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k81-node03,kubernetes.io/os=linux
[root@k81-master01 k8s-install]#

Find the node we need to label. For example, we want to label our k81-node02.

1: Label the node

[root@k81-master01 k8s-install]# kubectl label node 192.168.219.139 name=xiaozhong-node
Error from server (NotFound): nodes "192.168.219.139" not found
# Correct way
[root@k81-master01 k8s-install]# kubectl label node k81-node02 name=xiaozhong-node
node/k81-node02 labeled
[root@k81-master01 k8s-install]#

PS: To delete the specified label, append a minus sign (-) to the label key:

[root@k81-master01 k8s-install]# kubectl label node k81-node02 name-
node/k81-node02 labeled
[root@k81-master01 k8s-install]#


2: After labeling, check our label information:

[root@k81-master01 k8s-install]# kubectl get nodes --show-labels
NAME           STATUS   ROLES    AGE    VERSION   LABELS
k81-master01   Ready    master   6d2h   v1.16.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k81-master01,kubernetes.io/os=linux,node-role.kubernetes.io/master=
k81-node02     Ready    <none>   6d2h   v1.16.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k81-node02,kubernetes.io/os=linux,name=xiaozhong-node
k81-node03     Ready    <none>   6d2h   v1.16.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k81-node03,kubernetes.io/os=linux
[root@k81-master01 k8s-install]#

To view the corresponding label name:

3: Define our stateless service YAML file information:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: xiaozhong-nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2 # tells deployment to run 2 pods matching the template
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
      nodeSelector:
        name: xiaozhong-node


4: Deploy the application

[root@k81-master01 k8s-install]# kubectl apply -f xiaozhong-ceshinging.yaml
deployment.apps/xiaozhong-nginx-deployment created
[root@k81-master01 k8s-install]#

5: View the deployment result

Delete the Deployment application

PS: When you delete the Deployment, its ReplicaSet and Pods are deleted along with it!

Delete command:


[root@k81-master01 k8s-install]# kubectl delete deployment xiaozhong-nginx-deployment
deployment.apps "xiaozhong-nginx-deployment" deleted
[root@k81-master01 k8s-install]#


IV. Accessing applications through a Service

1. A summary of some key points about the Service

  • Both Pod startup and destruction cause the Pod IP to change
  • Using a Service to front a group of Pods means we don't need to care about Pod IP changes
  • It decouples the associations between Pods
  • It implements service discovery and load balancing for Pods
  • A Service associates with its Pod object instances via labels
  • After passing through kube-proxy, a request to the Service is forwarded to a corresponding Pod

2. Types of Service objects (my personal understanding and summary)

  1. ClusterIP type (for intra-cluster communication only)
  2. NodePort type (for access requests from outside the cluster; exposes the Service on every node)
  3. LoadBalancer type (when running in the cloud, K8s directly uses a load balancer created by the cloud provider)
  4. ExternalName type (maps a service outside the cluster into the current K8s cluster for easy use inside the cluster)
  • ClusterIP type: This is the default type. It provides an internal virtual IP for Pod communication; that virtual IP is effectively a middle layer that gives a group of Pods a fixed address, decoupling consumers from individual Pod IPs.

    In K8s, every Service can be reached via a DNS name of the form:

    <service-name>.<namespace>.svc.cluster.local

    which is how service discovery works.

    You can run curl <service-name>.<namespace>.svc.cluster.local from within the cluster to access the Pods behind the Service.

  • NodePort type (external access to the Pods of a node, using a host port): When we need to access a Pod service from outside the cluster, we can use NodePort (which uses a port on the host). Based on a node's IP address + the port (NodePort), the external request is proxied to the Service port on the ClusterIP of the corresponding Service object, and the Service then proxies the request to the PodIP of a Pod in its group and the port the application listens on (a minimal sketch of a NodePort Service follows after this list).

    The NodePort flow is roughly: Client --> Node IP:NodePort --> ClusterIP:Service port --> a selected Pod's IP + containerPort

    When accessing through the Service's NodePort, the same port (for example 30000) is listened on by every node, and traffic arriving at a node is redirected to the corresponding Service object.

  • LoadBalancer type: With NodePort, access is limited to a specific node's IP + NodePort, so it cannot by itself provide distributed access across the whole cluster. If the whole cluster needs a single external entry point, we also need a LoadBalancer to act as the load balancer in front of the cluster nodes, distributing traffic to the NodePort on the different nodes.

  • ExternalName type: If Pods need to access a service outside the cluster, an ExternalName resource object can map that external service into the cluster. It maps an external domain name to an internal name resolved by CoreDNS, so Pods in the cluster can access the external service through the in-cluster DNS address.
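As promised above, a minimal sketch of a NodePort Service (the name, labels, and nodePort value are illustrative assumptions):

apiVersion: v1
kind: Service
metadata:
  name: nginx-nodeport           # hypothetical name
spec:
  type: NodePort
  selector:
    app: nginx                   # forwards to Pods labelled app=nginx
  ports:
  - port: 80                     # the Service port on the ClusterIP
    targetPort: 80               # the containerPort on the backing Pods
    nodePort: 30080              # the port opened on every node (default range 30000-32767)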


The above is just my own study and practice notes, written around my actual needs. If there are any mistakes, criticism and corrections are welcome. Thank you!

At the end

END

Jianshu: www.jianshu.com/u/d6960089b…

Juejin: juejin.cn/user/296393…

WeChat official account: search for [children to a pot of wolfberry wine tea]

Welcome to learn and exchange | QQ: 308711822