Abstract: A cloud is made up of many small water droplets. Think of each computer as a small water droplet; together the droplets form a cloud. Usually the drops come first, and then the platforms (OpenStack, Kubernetes) that manage them.

Cloud computing — independent universe

1. A cloud is made up of many small water droplets. Think of every computer as a small water drop. The traditional water drop is the VM; the arrival of Docker changed the granularity of the droplets

2. A water droplet can run independently and is internally complete (for example, a VM or a Docker container)

3. Generally the drops come first, then the platforms that manage them (OpenStack, Kubernetes)

Kubernetes

1. Kubernetes is an open-source platform for managing containerized applications across multiple hosts in a cloud. Its goal is to make the deployment of containerized applications simple and efficient, and it provides mechanisms for application deployment, scheduling, updating, and maintenance

2. One of the core features of Kubernetes is its ability to manage containers in the cloud platform and ensure that they run according to the user's expectations. For example, if the user wants dlCatalog to keep running, the user does not need to care how this is done: Kubernetes automatically monitors it, restarts it, and recreates it as needed, so that dlCatalog always provides service
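This "keep it running" behavior is typically expressed with a Deployment, which restarts or recreates Pods to match a desired replica count. A minimal sketch, assuming an illustrative image name and port (not taken from the original article):

```yaml
# Hypothetical Deployment that keeps one dlCatalog replica alive.
# The image tag and container port are illustrative assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dlcatalog
spec:
  replicas: 1                 # Kubernetes keeps exactly one Pod running
  selector:
    matchLabels:
      app: dlcatalog
  template:
    metadata:
      labels:
        app: dlcatalog
    spec:
      containers:
      - name: dlcatalog
        image: dlcatalog:1.0   # hypothetical image
        ports:
        - containerPort: 9083
```

If the Pod crashes or its Node fails, the Deployment controller notices the replica count has dropped below 1 and creates a replacement Pod.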

3. In Kubernetes, all containers run inside Pods, and a Pod can hold one or more related containers

Kubernetes

1.Pod

In Kubernetes, the smallest unit of management is not an individual container but a Pod. A Pod is the "logical host" of a container environment; it is composed of one or more related containers that share the same network namespace and can share volumes. Within a Pod, container ports must not conflict; otherwise the Pod will fail to start or will restart indefinitely
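As a sketch of the port rule, here is a hypothetical two-container Pod (names and images are illustrative). Because both containers share the Pod's network namespace, their container ports must differ:

```yaml
# Hypothetical Pod holding two related containers.
apiVersion: v1
kind: Pod
metadata:
  name: two-container-pod
spec:
  containers:
  - name: app
    image: nginx:1.21
    ports:
    - containerPort: 80      # must be unique within the Pod
  - name: sidecar
    image: busybox:1.34
    command: ["sh", "-c", "sleep 3600"]
    # a second container also listening on port 80 would conflict,
    # and the Pod would fail to start or restart repeatedly
```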

2. Node

A Node is the actual host on which Pods run; it can be a physical machine or a virtual machine. To manage Pods, each Node must run at least a container runtime (such as Docker), kubelet, and kube-proxy. Nodes are not created by Kubernetes itself; Kubernetes only manages a Node's resources. Although it is possible to create a Node object from a manifest (as in the JSON below), Kubernetes simply checks whether such a Node actually exists, and will not schedule Pods onto it if the check fails

{
    "kind": "Node",
    "apiVersion": "v1",
    "metadata": {
        "name": "10.63.90.18",
        "labels": {
            "name": "my-first-k8s-node"
        }
    }
}

3. Service

Service is an abstract concept, and it is the essence of Kubernetes. Each application on Kubernetes can register a "name" within the cluster to represent itself; Kubernetes then assigns the application a Service carrying a virtual "cluster IP". Any access to this IP from within the cluster is equivalent to accessing the application

Suppose we have some Pods, each exposing port 9083 and each carrying the label app=MyApp. The following manifest creates a new Service object named my-dlcatalog-metastore-service that targets port 9083 on Pods labeled app=MyApp. The Service is assigned a cluster IP, which kube-proxy uses; accessing this IP from inside the cluster reaches the application. Note that the actual IP of a Pod in Kubernetes is generally not used directly

kind: Service
apiVersion: v1
metadata:
  name: my-dlcatalog-metastore-service
spec:
  selector:
    app: MyApp
  ports:
  - protocol: TCP
    port: 20403
    targetPort: 9083

4. ConfigMap

A ConfigMap is a set of key-value pairs used to store configuration data. It can hold a single attribute or a whole configuration file. A ConfigMap is similar to a Secret, but it is more convenient for strings that do not contain sensitive information

A ConfigMap can be mounted directly as a file or directory using a volume. The following example mounts a ConfigMap into the /etc/config directory of a Pod:

apiVersion: v1
kind: Pod
metadata:
  name: vol-test-pod
spec:
  containers:
  - name: test-container
    image: 10.63.30.148:20202/ei_cnnroth7a/jwsdlcatalog-x86_64:1.0.1.20200918144530
    command: [ "/bin/sh", "bin/start_server.sh" ]
    volumeMounts:
    - name: config-volume
      mountPath: /etc/config
  volumes:
  - name: config-volume
    configMap:
      name: special-config
  restartPolicy: Never
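The special-config ConfigMap referenced above is not shown in the original; a minimal sketch of what it might look like (the key and value here are assumptions):

```yaml
# Hypothetical ConfigMap consumed by the Pod above.
apiVersion: v1
kind: ConfigMap
metadata:
  name: special-config
data:
  special.how: very    # each key becomes a file under /etc/config when mounted
```

With this ConfigMap mounted, the container would see a file /etc/config/special.how whose content is "very".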

Advanced Kubernetes resource scheduling

Scheduling to specified Nodes

There are three ways to restrict a Pod to run only on specified Nodes

Method 1:

nodeSelector: schedules the Pod only to Nodes that match the specified labels
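A minimal nodeSelector sketch (the disktype label and image are illustrative assumptions):

```yaml
# Hypothetical Pod restricted to Nodes labeled disktype=ssd.
apiVersion: v1
kind: Pod
metadata:
  name: nodeselector-demo
spec:
  nodeSelector:
    disktype: ssd       # scheduled only to Nodes carrying this label
  containers:
  - name: app
    image: nginx:1.21
```

If no Node carries the disktype=ssd label, the Pod stays Pending until one does.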

Method 2:

nodeAffinity: a richer Node selector that supports, for example, set operations

nodeAffinity currently supports two types: requiredDuringSchedulingIgnoredDuringExecution and preferredDuringSchedulingIgnoredDuringExecution, representing hard requirements that must be satisfied and soft preferences, respectively

For example, the following manifest requires a Node whose kubernetes.io/e2e-az-name label has the value e2e-az1 or e2e-az2, and prefers Nodes with the label another-node-label-key=another-node-label-value

apiVersion: v1
kind: Pod
metadata:
  name: with-node-affinity
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/e2e-az-name
            operator: In
            values:
            - e2e-az1
            - e2e-az2
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
          - key: another-node-label-key
            operator: In
            values:
            - another-node-label-value
  containers:
  - name: with-node-affinity
    image: 10.63.30.148:20202/ei_cnnroth7a/jwsdlcatalog-x86_64:1.0.1.20200918144530

Method 3:

podAffinity: schedules the Pod onto Nodes based on the Pods already running there

podAffinity selects Nodes based on the labels of Pods already running on them, scheduling a Pod only to Nodes that satisfy the Pod conditions. Both podAffinity and podAntiAffinity are supported

This feature is a bit convoluted, so here are two examples to illustrate:

The first example shows:

The Pod can be scheduled onto a Node only if that Node's Zone contains at least one running Pod with the label security=S1; and it should preferably not be scheduled onto a Node that hosts a running Pod with the label security=S2 (soft anti-affinity in the manifest below)

apiVersion: v1
kind: Pod
metadata:
  name: with-pod-affinity
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: security
            operator: In
            values:
            - S1
        topologyKey: failure-domain.beta.kubernetes.io/zone
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: security
              operator: In
              values:
              - S2
          topologyKey: kubernetes.io/hostname
  containers:
  - name: with-node-affinity
    image: 10.63.30.148:20202/ei_cnnroth7a/jwsdlcatalog-x86_64:1.0.1.20200918144530

The second example shows:

If a Node's Zone already contains at least one running Pod with the label appVersion=jwsdlcatalog-x86_64-1.0.1.20200918144530, it is preferable not to schedule there; and the Pod must not be scheduled onto a Node that already hosts a running Pod with the label app=jwsdlcatalog-x86_64

spec:
  restartPolicy: Always      # pod restart policy
  runAsUser: 2000
  fsGroup: 2000
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: appVersion
              operator: In
              values:
              - concat:
                - get_input: IMAGE_NAME
                - '-'
                - get_input: IMAGE_VERSION
          # numOfMatchingPods: "2"
          topologyKey: "failure-domain.beta.kubernetes.io/zone"
        weight: 100
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - get_input: IMAGE_NAME
        numOfMatchingPods: "1"
        topologyKey: "kubernetes.io/hostname"
  containers:
  - name: jwsdlcatalog
    image:
      concat:
      - get_input: IMAGE_NAME

Note: this article is purely a personal opinion; any resemblance in the pictures is purely coincidental

Click follow to learn about fresh technologies from Huawei Cloud