1 Pod

A Pod is a set of closely related containers. The containers in a Pod share the network and file system, so they can cooperate through inter-process communication and file sharing, a simple and efficient way to compose a service. The Pod is the basic scheduling unit of Kubernetes, designed around the idea that each Pod gets its own unique IP address.


Pod has the following characteristics:

  • The containers in a Pod share the IPC, Network, and UTS namespaces and can communicate with each other directly over localhost

  • All containers within pods have access to the shared Volume and can access shared data

  • Graceful termination: when a Pod is deleted, its processes first receive SIGTERM, and only after a grace period are any still-running processes forcibly stopped

  • Privileged containers (configured through SecurityContext) have permission to change system configuration (used extensively by network plug-ins)

  • Three restart policies are supported: Always, OnFailure, and Never

  • Three image pull policies are supported: Always, Never, and IfNotPresent

  • Kubernetes uses cgroups to limit a container's CPU and memory resources; request and limit values can be set

  • Health checks: two probe types are provided, livenessProbe and readinessProbe. The former probes whether the container is alive and, on failure, restarts it according to the restart policy; the latter probes whether the container is ready to serve requests

  • Init Containers run to completion before any application container starts and are used to initialize configuration

  • Container lifecycle hooks listen for specific events in the container lifecycle and run registered callbacks when an event occurs. Two hooks are supported: postStart, executed right after the container starts, and preStop, executed just before the container stops (a Pod manifest sketch follows this list)
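
To make these features concrete, here is a minimal Pod manifest sketch that exercises several of them (the image and names are hypothetical, not taken from any particular deployment):

apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  restartPolicy: Always            # one of Always, OnFailure, Never
  initContainers:
  - name: init-config              # runs to completion before the app container starts
    image: busybox
    command: ["sh", "-c", "echo init > /work/ready"]
    volumeMounts:
    - name: shared
      mountPath: /work
  containers:
  - name: app
    image: nginx
    imagePullPolicy: IfNotPresent  # one of Always, Never, IfNotPresent
    resources:
      requests:
        cpu: 100m
        memory: 128Mi
      limits:
        cpu: 500m
        memory: 256Mi
    livenessProbe:                 # restart the container if this fails
      httpGet:
        path: /
        port: 80
    readinessProbe:                # stop routing traffic to the Pod if this fails
      httpGet:
        path: /
        port: 80
    lifecycle:
      postStart:
        exec:
          command: ["sh", "-c", "echo started"]
      preStop:
        exec:
          command: ["sh", "-c", "sleep 5"]
    volumeMounts:
    - name: shared                 # both containers see the same Volume
      mountPath: /data
  volumes:
  - name: shared
    emptyDir: {}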


2 Namespace

A Namespace is an abstract collection of resources and objects; it can be used, for example, to divide the objects in a system into different project groups or user groups. Pod, Service, ReplicaSet, and Deployment all belong to a namespace (by default, the namespace named default), while Node, PersistentVolume, and so on do not belong to any namespace.

Common namespace operations:

# list all namespaces
kubectl get namespace
# to create namespace
kubectl create namespace ns-name 
# to delete the namespace
kubectl delete namespace ns-name

Note the following when deleting a namespace:

  1. Deleting a namespace automatically deletes all resources belonging to the namespace.

  2. The default and kube-system namespaces cannot be deleted.

  3. PersistentVolumes do not belong to any namespace, but PersistentVolumeClaims do belong to a specific namespace.

  4. Whether Events belong to a namespace depends on the object that produced Events.

3 Node

Node is the host on which Pods actually run; it can be a physical machine or a virtual machine. Nodes are not created by Kubernetes itself; Kubernetes only manages the resources on them. To run Pods, each Node must at least run a container runtime (such as Docker), kubelet, and kube-proxy.


Common Node operations:

# query all nodes
kubectl get nodes
# flag node as unschedulable
kubectl cordon $nodename
# mark Node as schedulable
kubectl uncordon $nodename

Taint

The kubectl taint command adds taints to a Node. A taint creates an exclusive relationship between the Node and Pods, allowing the Node to reject Pod scheduling or even evict Pods it is already running. A taint has the form key=value:effect, where effect has three options:

  • NoSchedule: Kubernetes will not schedule Pods onto a Node with this taint

  • PreferNoSchedule: Kubernetes will try to avoid scheduling Pods onto a Node with this taint

  • NoExecute: Kubernetes will not schedule Pods onto a Node with this taint and will evict Pods already running on it


Common commands are as follows:

# add an unschedulable taint to node node0
kubectl taint node node0 key1=value1:NoSchedule
# remove the key1 taint from node0
kubectl taint node node0 key1-
# add an unschedulable taint to the kube-master node
kubectl taint node node1 node-role.kubernetes.io/master=:NoSchedule
# mark the kube-master node as to-be-avoided for scheduling
kubectl taint node node1 node-role.kubernetes.io/master=:PreferNoSchedule

Toleration

Based on the taint's effect (NoSchedule, PreferNoSchedule, or NoExecute), a tainted Node and a Pod repel each other, and to some degree Pods will not be scheduled onto that Node. A toleration, however, can be set on a Pod: a Pod with a matching toleration tolerates the taint and can be scheduled onto the tainted Node.
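
As a minimal sketch, a Pod that tolerates the key1=value1:NoSchedule taint used in the commands above could declare (names are hypothetical):

apiVersion: v1
kind: Pod
metadata:
  name: toleration-demo
spec:
  tolerations:
  - key: "key1"            # matches the taint's key
    operator: "Equal"      # the value must match exactly; "Exists" would ignore the value
    value: "value1"
    effect: "NoSchedule"   # matches the taint's effect
  containers:
  - name: app
    image: nginx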


4 Service

Service abstracts a set of Pods that provide the same functionality and gives them a unified entry point, making it easy to discover applications, balance load across them, and upgrade them with zero downtime. A Service selects its back-end Pods by label and usually works together with a ReplicaSet or Deployment to keep the back-end containers running normally.


Service types are as follows. The default service type is ClusterIP:

  • ClusterIP: The default type. It automatically assigns a virtual IP address that can be accessed only within the cluster

  • NodePort: builds on ClusterIP and additionally binds a port for the Service on every machine, so the Service can be reached via NodeIP:NodePort

  • LoadBalancer: builds on NodePort and creates an external load balancer through the cloud provider, forwarding requests to NodeIP:NodePort

  • ExternalName: forwards the Service to a specified external domain name via a DNS CNAME record


Alternatively, you can bring an existing service into the Kubernetes cluster as a Service: create the Service without a label selector and manually add Endpoints to it after creation, as the sketch below shows.
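
A minimal sketch of that pattern, assuming a hypothetical external database at 192.168.1.100:3306:

apiVersion: v1
kind: Service
metadata:
  name: external-db        # no selector, so Kubernetes creates no Endpoints automatically
spec:
  ports:
  - port: 3306
    targetPort: 3306
---
apiVersion: v1
kind: Endpoints
metadata:
  name: external-db        # must match the Service name
subsets:
- addresses:
  - ip: 192.168.1.100      # the external backend
  ports:
  - port: 3306

Pods inside the cluster can then reach the external database through the external-db Service name.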


5 Volume

By default, container data is not persistent and is lost when the container dies, so Docker provides a Volume mechanism to persist data. Kubernetes provides a more powerful Volume mechanism with a rich set of plug-ins, solving both container data persistence and data sharing between containers.


The life cycle of a Kubernetes storage volume is bound to the Pod:

  • When a container crashes and kubelet restarts it, the Volume's data is still there

  • A Volume is cleaned up only when the Pod is deleted. Whether the data is lost then depends on the Volume type: emptyDir data is lost, for example, while PV data is not


Currently, Kubernetes supports the following Volume types:

  • EmptyDir: exists as long as the Pod exists. A container crash does not lose the data in an emptyDir, but when the Pod is deleted or migrated, the emptyDir is deleted with it

  • HostPath: mounts a file or directory from the Node's file system into the Pod

  • NFS (Network File System): NFS storage can be mounted into a Pod with simple configuration; data in NFS is stored permanently, and NFS supports simultaneous writes

  • GlusterFS: a network file system like NFS; Kubernetes can mount GlusterFS into Pods for permanent storage

  • CephFS: a distributed network file system that can be mounted into Pods for permanent storage

  • SubPath: mounts a sub-path of a Volume; often used when multiple Pods share the same Volume

  • Secret: used for sensitive information such as keys; the data can be stored securely and mounted into a Pod

  • PersistentVolumeClaim: used to mount a PersistentVolume into a Pod

  • …
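
As an illustration of the two simplest types, the following hypothetical Pod shares an emptyDir between two containers and also mounts a hostPath directory from the Node:

apiVersion: v1
kind: Pod
metadata:
  name: volume-demo
spec:
  containers:
  - name: writer
    image: busybox
    command: ["sh", "-c", "echo hello > /cache/data.txt && sleep 3600"]
    volumeMounts:
    - name: cache
      mountPath: /cache            # both containers see the same emptyDir
  - name: reader
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: cache
      mountPath: /cache
    - name: host-logs
      mountPath: /host/log         # the Node's /var/log, visible inside the container
  volumes:
  - name: cache
    emptyDir: {}
  - name: host-logs
    hostPath:
      path: /var/log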


6 PersistentVolume (PV)

A PersistentVolume (PV) is a piece of network storage in the cluster. Like a Node, it is a cluster-level resource. PersistentVolume (PV) and PersistentVolumeClaim (PVC) together provide convenient persistent volumes: the PV supplies the network storage resource, and the PVC requests that storage so it can be mounted into a Pod.


PV access modes (accessModes) come in three kinds:

  • ReadWriteOnce (RWO): the most basic mode; the volume can be mounted read-write, but only by a single node.

  • ReadOnlyMany (ROX): the volume can be mounted read-only by many nodes.

  • ReadWriteMany (RWX): the volume can be mounted read-write by many nodes and shared among them.


Not every storage backend supports all three modes; for example, some do not support the shared (ReadWriteMany) mode, for which NFS is the most commonly used backend. When a PVC is bound to a PV, the binding is usually based on two conditions: the requested storage size and the access mode.


PV reclaim policies (persistentVolumeReclaimPolicy) also come in three kinds (see the sketch after this list):

  • Retain: keep the data; the Volume must be cleaned up manually

  • Recycle: scrub the data automatically (rm -rf /thevolume/*); supported only by NFS and HostPath

  • Delete: delete the underlying storage resource
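
A minimal sketch of the PV/PVC pairing, assuming a hypothetical NFS server at 10.0.0.10:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 5Gi                     # PVCs are matched on size...
  accessModes:
  - ReadWriteMany                    # ...and on access mode
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 10.0.0.10
    path: /exports/data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 5Gi

A Pod then mounts the claim through a volume of type persistentVolumeClaim with claimName: nfs-pvc.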


7 Deployment – stateless applications

Instead of creating Pod instances by hand, we can manage Pods through a higher-level abstraction. For stateless applications, Kubernetes provides the Deployment controller. Typical use cases include:

  • Define Deployment to create Pod and ReplicaSet

  • Rolling updates and rolling back applications

  • Capacity expansion and reduction

  • Suspend and continue Deployment


Common operation commands are as follows:

# generate a Deployment object
kubectl run www --image=10.0.0.183:5000/hanker/www:0.0.1 --port=8080
# list Deployments
kubectl get deployment --all-namespaces
# look at a Deployment
kubectl describe deployment www
# edit the Deployment definition
kubectl edit deployment www
# delete a Deployment
kubectl delete deployment www
# change the number of Pod instances in the Deployment
kubectl scale deployment/www --replicas=2
# update the image
kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1
# roll back to the previous revision
kubectl rollout undo deployment/nginx-deployment
# check the rollout progress
kubectl rollout status deployment/nginx-deployment
# enable HPA (Horizontal Pod Autoscaling): set the minimum and maximum instance counts and the target CPU usage
kubectl autoscale deployment nginx-deployment --min=10 --max=15 --cpu-percent=80
# pause updating the Deployment
kubectl rollout pause deployment/nginx-deployment
# resume updating the Deployment
kubectl rollout resume deployment/nginx-deployment

Update strategy

.spec.strategy specifies the strategy used to replace old Pods with new ones. There are two types:

  • RollingUpdate ensures that the application keeps serving traffic normally during the upgrade (see the sketch after this list).

  • Recreate kills all existing Pods before new ones are created.
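
A sketch of where the strategy sits in a Deployment manifest (replica count and image are illustrative):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1    # at most 1 Pod below the desired count during the update
      maxSurge: 1          # at most 1 extra Pod above the desired count
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.9.1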


Relationship between Deployment and ReplicaSet

  • A Deployment creates a ReplicaSet; the ReplicaSet creates the Pods in the background and reports whether their startup succeeded or failed.

  • When an update is performed, a new ReplicaSet is created, and the Deployment moves Pods from the old ReplicaSet to the new one at a controlled rate


8 StatefulSet – stateful applications

While Deployments and ReplicaSets are designed for stateless services, StatefulSet is designed for stateful services. Its application scenarios include:

  • Stable persistent storage: a Pod can still access the same persistent data after being rescheduled; implemented with PVCs

  • Stable network identity: a Pod keeps the same PodName and HostName after being rescheduled; implemented with a Headless Service (a Service without a Cluster IP)

  • Ordered deployment and ordered scale-up: Pods are ordered and are created in the defined order (from 0 to N-1; all preceding Pods must be Running and Ready before the next Pod starts); implemented with init containers

  • Ordered scale-down and ordered deletion (from N-1 down to 0)


Two update policies are supported:

  • OnDelete: when .spec.template is updated, old Pods are not deleted immediately; new Pods are created automatically only after the user deletes the old ones manually. This is the default update policy and is compatible with v1.6 behavior

  • RollingUpdate: when .spec.template is updated, old Pods are deleted automatically and new Pods are created to replace them. Pods are updated in reverse ordinal order: each Pod is deleted, recreated, and must become Ready before the next one is updated (a combined sketch follows)
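
A compact sketch of a StatefulSet together with the Headless Service it requires (names and sizes are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: web-headless
spec:
  clusterIP: None            # headless: gives each Pod a stable DNS name
  selector:
    app: web
  ports:
  - port: 80
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: web-headless
  replicas: 3                # Pods are created in order: web-0, web-1, web-2
  selector:
    matchLabels:
      app: web
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx
        volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:      # one PVC per Pod gives each replica stable storage
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi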


9 DaemonSet – daemon set

DaemonSet ensures that a Pod instance runs on specific or all nodes. It is often used to deploy cluster log collection, monitoring, or other system management applications. Typical applications include:

  • Log collection, such as Fluentd, Logstash, etc

  • System monitoring, such as Prometheus Node Exporter and CollectD

  • System applications, such as Kube-proxy, Kube-DNS, Glusterd, Ceph, Ingress-Controller, etc


Running on specified Nodes

DaemonSet ignores a Node's unschedulable state. There are several ways to make a Pod run only on specified Nodes (see the sketch after this list):

  • nodeSelector: schedule only onto Nodes that match the specified labels

  • nodeAffinity: a more expressive Node selector that supports, for example, set operations

  • podAffinity: schedule onto Nodes that run the Pods you want to be near
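
A sketch combining a nodeSelector with a toleration so the daemon also runs on tainted master nodes (image and labels are hypothetical):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-agent
spec:
  selector:
    matchLabels:
      app: log-agent
  template:
    metadata:
      labels:
        app: log-agent
    spec:
      nodeSelector:
        logging: "true"            # run only on Nodes carrying this label
      tolerations:
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule         # also run on the tainted master nodes
      containers:
      - name: agent
        image: fluentd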


Currently, two update strategies are supported:

  • OnDelete: The default policy. After a template is updated, a new Pod will be created only after the old one is manually deleted

  • RollingUpdate: Automatically deletes old pods and creates new ones after updating the DaemonSet template


10 Ingress – load balancing

Kubernetes provides load balancing through the following two mechanisms:

  • Service: provides load balancing inside the cluster; kube-proxy balances requests to a Service across its back-end Pods

  • Ingress Controller: provides load balancing from outside the cluster through Ingress


The IP addresses of Services and Pods are reachable only inside the cluster. A request from outside the cluster must be forwarded, through a load balancer or edge router, to a port exposed on a Node, from which kube-proxy forwards it to the relevant Pods. Ingress can give a Service an externally reachable URL and provide load balancing, HTTP routing, and so on. To make Ingress rules take effect, the cluster administrator deploys an Ingress Controller, which watches for Ingress and Service changes, configures load balancing according to the rules, and exposes the access point. A rule sketch follows.
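
A minimal Ingress rule sketch; the API group has changed across Kubernetes releases, and this uses the networking.k8s.io/v1 form (host and service name are hypothetical):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: www-ingress
spec:
  rules:
  - host: www.example.com          # requests for this host...
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: www              # ...are forwarded to Service www
            port:
              number: 8080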


Ingress Controller

  • nginx

  • traefik

  • Kong

  • Openresty


11 Job & CronJob tasks and scheduled tasks

A Job handles short-lived, one-off batch tasks, that is, tasks that run only once; it ensures that one or more Pods of the batch task finish successfully.


A CronJob is a scheduled task. Similar to crontab in Linux, it runs a specified task on a specified schedule, as the sketch below shows.
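
A minimal CronJob sketch (the apiVersion is batch/v1 on recent clusters and batch/v1beta1 on older ones):

apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/5 * * * *"          # standard crontab syntax: every 5 minutes
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure # a Job's Pods may only use OnFailure or Never
          containers:
          - name: hello
            image: busybox
            command: ["sh", "-c", "date; echo hello"]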


12 Horizontal Pod Autoscaling (HPA)

Horizontal Pod Autoscaling automatically scales the number of Pods (supported by ReplicationController, Deployment, and ReplicaSet) based on CPU or memory usage, or on custom application metrics.


  • By default, the controller manager queries resource usage every 30 seconds (configurable with --horizontal-pod-autoscaler-sync-period)

  • Three metric types are supported:

    • Predefined metrics (such as Pod CPU), computed as utilization

    • Custom Pod metrics, compared as raw values

    • Custom object metrics

  • Two metric sources are supported: Heapster and custom REST APIs

  • Multiple metrics can be combined


You can run the following command to create an HPA:

kubectl autoscale deployment php-apache --cpu-percent=50 --min=1 --max=10
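
The same autoscaler can also be declared in YAML; this sketch uses the stable autoscaling/v1 form:

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: php-apache
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: php-apache
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 50   # scale to keep average CPU usage near 50%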



13 Service Account

The Service Account is designed to make it easier for processes inside a Pod to call the Kubernetes API or other external services.


Authorization

A Service Account provides a convenient authentication mechanism for services, but it says nothing about authorization. Role-Based Access Control (RBAC) can be used to authorize Service Accounts by defining Role, RoleBinding, ClusterRole, and ClusterRoleBinding objects, as sketched below.
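
A minimal RBAC sketch granting a hypothetical Service Account read access to Pods in the default namespace:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: demo-sa
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: default
rules:
- apiGroups: [""]                  # "" means the core API group
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: ServiceAccount
  name: demo-sa
  namespace: default
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io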


14 Secret

Secret solves the configuration problem for sensitive data such as passwords, tokens, and keys without exposing them in an image or a Pod spec. A Secret can be consumed as a Volume or as environment variables. There are three types (a sketch follows this list):

  • Service Account: used by Pods to access the Kubernetes API; created automatically by Kubernetes and mounted automatically into the Pod at /run/secrets/kubernetes.io/serviceaccount;

  • Opaque: Secret data in base64-encoded form, used to store passwords, keys, and the like;

  • kubernetes.io/dockerconfigjson: used to store authentication information for a private Docker registry.
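
As a sketch, a hypothetical Opaque Secret and one way of consuming it as an environment variable:

apiVersion: v1
kind: Secret
metadata:
  name: db-secret
type: Opaque
data:
  username: YWRtaW4=               # base64 of "admin"
  password: MWYyZDFlMmU2N2Rm       # base64 of "1f2d1e2e67df"
---
apiVersion: v1
kind: Pod
metadata:
  name: secret-env-demo
spec:
  containers:
  - name: app
    image: nginx
    env:
    - name: DB_USER
      valueFrom:
        secretKeyRef:
          name: db-secret          # inject one key of the Secret as an env variable
          key: username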


15 ConfigMap – configuration center

ConfigMap stores configuration data as key-value pairs; it can hold individual properties or whole configuration files. ConfigMap is similar to Secret but makes it easier to handle strings that do not contain sensitive information. A ConfigMap can be used in a Pod by setting environment variables, setting container command-line arguments, or mounting files or directories directly through a Volume.

A ConfigMap can be created from a file, a directory, or a literal key-value string with kubectl create configmap, or declaratively with kubectl create -f value.yaml, as sketched below.
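
A sketch of creating a ConfigMap and consuming it in a Pod (names and keys are hypothetical):

# create a ConfigMap from a literal key-value pair
kubectl create configmap app-config --from-literal=log.level=info

The result can then be consumed, for example, as an environment variable:

apiVersion: v1
kind: Pod
metadata:
  name: configmap-demo
spec:
  containers:
  - name: app
    image: nginx
    env:
    - name: LOG_LEVEL
      valueFrom:
        configMapKeyRef:
          name: app-config
          key: log.level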


16 Resource Quotas

Resource Quotas are a mechanism for limiting the amount of resources users can consume.

The following types of resource quotas are available:

  • Computing resources, including CPU and memory

    • cpu, limits.cpu, requests.cpu

    • memory, limits.memory, requests.memory

  • Storage resources, including the total amount of storage and per-storage-class totals

    • requests.storage: total requested storage, for example 500Gi

    • persistentvolumeclaims: the number of PVCs

    • <storage-class>.storageclass.storage.k8s.io/requests.storage

    • <storage-class>.storageclass.storage.k8s.io/persistentvolumeclaims

  • Number of objects, the number of objects that can be created

    • pods, replicationcontrollers, configmaps, secrets

    • resourcequotas, persistentvolumeclaims

    • services, services.loadbalancers, services.nodeports


It works as follows:

  • Resource quotas apply to a Namespace, and each Namespace can have at most one ResourceQuota object

  • When quotas are enabled, containers must be created with requests or limits for compute resources (defaults can also be set with a LimitRange)

  • Once a user exceeds the quota, new resources can no longer be created (a ResourceQuota sketch follows)
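
A minimal ResourceQuota sketch for a hypothetical dev namespace:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota
  namespace: dev
spec:
  hard:
    requests.cpu: "4"              # total CPU requested by all Pods in the namespace
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
    pods: "20"                     # at most 20 Pods in the namespace
    persistentvolumeclaims: "10"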