What is Kubernetes

Kubernetes is a leading distributed architecture solution built on container technology. Although the project itself is relatively new, it distills more than a decade of experience running containers at large scale: Kubernetes is essentially an open-source counterpart of Borg, the cluster management system Google kept as a closely guarded secret for over a decade.

Kubernetes is a complete platform for supporting distributed systems. It provides comprehensive cluster management capabilities, including multi-level security protection and access control, multi-tenant application support, transparent service registration and service discovery, built-in intelligent load balancing, powerful fault detection and self-healing, rolling upgrades and online scaling, an extensible automatic resource scheduling mechanism, and multi-granularity resource quota management.

Second, Kubernetes architecture

1. Architecture diagram

2. Node description

  • The master node

The Master is the brain of the Kubernetes Cluster; it manages the cluster and schedules its resources.

  • The work node

Kubernetes supports container runtimes such as Docker and rkt to provide resources. The unit of resources provided by a Work node is the Pod. A Master node can also serve as a Work node.

3. Master node components

  • etcd

etcd is responsible for storing the configuration information of the Kubernetes Cluster and the state of its various resources. When data changes, etcd quickly notifies the relevant Kubernetes components.
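
To make etcd's role concrete, here is a minimal sketch of its key-value-plus-watch model using the third-party etcd3 Python package; the endpoint 127.0.0.1:2379 and the key names are assumptions for illustration, not Kubernetes' actual storage layout.

```python
# Minimal illustration of etcd's key-value store and watch mechanism
# (pip install etcd3); host, port, and keys are assumed for this demo.
import etcd3

etcd = etcd3.client(host="127.0.0.1", port=2379)

# Store and read back a piece of configuration.
etcd.put("/demo/config/feature-flag", "enabled")
value, _meta = etcd.get("/demo/config/feature-flag")
print(value)  # b'enabled'

# Watch the key: consumers are notified as soon as the value changes,
# which is how components learn about state changes quickly.
events, cancel = etcd.watch("/demo/config/feature-flag")
for event in events:
    print(type(event).__name__, event.key, event.value)
    cancel()  # stop watching after the first event in this demo
    break
```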

  • API server

The API Server exposes an HTTP/HTTPS RESTful API, namely the Kubernetes API. It is the front-end interface of the Kubernetes Cluster: client tools (CLI or UI) and the other Kubernetes components manage the Cluster's resources through it.
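
As a minimal sketch of talking to the API Server programmatically, the snippet below lists Pods with the official Kubernetes Python client; it assumes the kubernetes package is installed and a valid kubeconfig (or in-cluster credentials) is available.

```python
# List Pods in the kube-system namespace through the API Server.
from kubernetes import client, config

config.load_kube_config()  # use config.load_incluster_config() when running inside a Pod
v1 = client.CoreV1Api()

for pod in v1.list_namespaced_pod(namespace="kube-system").items:
    print(pod.metadata.name, pod.status.phase)
```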

  • Scheduler

The Scheduler decides which Node each Pod runs on. When scheduling, it takes into account the Cluster topology, the current load of each Node, and the application's requirements for high availability, performance, and data affinity.
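
The sketch below shows the kind of information the Scheduler takes into account: a Pod that requests CPU and memory and restricts itself to Nodes labeled disktype=ssd. The label, image, and resource values are illustrative assumptions.

```python
# Create a Pod whose resource requests and nodeSelector constrain where
# the Scheduler may place it.
from kubernetes import client, config

config.load_kube_config()

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="scheduling-demo"),
    spec=client.V1PodSpec(
        node_selector={"disktype": "ssd"},  # only Nodes carrying this label qualify
        containers=[client.V1Container(
            name="app",
            image="nginx:1.25",
            resources=client.V1ResourceRequirements(
                requests={"cpu": "250m", "memory": "128Mi"},
            ),
        )],
    ),
)
client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```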

  • Controller Manager

The Controller Manager manages Cluster resources and ensures they stay in the desired state. It consists of a variety of controllers, including the Replication Controller, Endpoints Controller, Namespace Controller, ServiceAccount Controller, and so on.

Different controllers manage different resources. For example, the Deployment, StatefulSet, and DaemonSet controllers manage the lifecycle of their respective workloads, and the Namespace Controller manages Namespace resources. A controller's job is essentially a reconciliation loop: compare the actual state of a resource with its desired state and act to close the gap, as sketched below.
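
A rough sketch of that reconciliation idea, watching ReplicaSets and reporting any gap between desired and ready replicas (the real controllers act on the gap rather than just printing it):

```python
# Observe ReplicaSets and compare desired vs. ready replicas, mimicking
# the compare-and-reconcile loop a controller runs.
from kubernetes import client, config, watch

config.load_kube_config()
apps = client.AppsV1Api()

w = watch.Watch()
for event in w.stream(apps.list_replica_set_for_all_namespaces, timeout_seconds=60):
    rs = event["object"]
    desired = rs.spec.replicas or 0
    ready = rs.status.ready_replicas or 0
    if desired != ready:
        print(f"{rs.metadata.namespace}/{rs.metadata.name}: desired={desired}, ready={ready}")
```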

4. Work node components

  • kubelet

The kubelet is the agent running on each Node. When the Scheduler decides to run a Pod on a Node, the Pod's configuration (image, volumes, etc.) is sent to the kubelet on that Node. The kubelet creates and runs the containers based on this information and reports their running status back to the Master.
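
The status the kubelet reports back is visible on the Pod object itself; here is a small sketch reading it with the Python client (scheduling-demo is the hypothetical Pod created in the earlier example).

```python
# Read the status fields that the kubelet populates for a running Pod.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

pod = v1.read_namespaced_pod(name="scheduling-demo", namespace="default")
print(pod.spec.node_name, pod.status.phase)
for cs in pod.status.container_statuses or []:
    print(cs.name, cs.image, "ready:", cs.ready, "restarts:", cs.restart_count)
```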

  • Container Runtime

The kubelet does not manage containers directly; it delegates that work to the Container Runtime, which is responsible for starting and stopping containers. If the required image is not present locally, the runtime pulls it from the designated image registry.
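
The pull behavior is controlled per container via imagePullPolicy; a small sketch with the Python client (the registry path is a made-up example):

```python
# imagePullPolicy tells the runtime when to pull: "IfNotPresent" reuses a
# local copy when available, "Always" pulls from the registry on every start.
from kubernetes import client

container = client.V1Container(
    name="app",
    image="registry.example.com/team/app:1.0",  # hypothetical image in a private registry
    image_pull_policy="IfNotPresent",
)
```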

  • kube-proxy

A Service is the logical abstraction through which the outside world accesses a group of backend Pods. How are requests received by a Service forwarded to a Pod? That is the job of kube-proxy.

Every Node runs the kube-proxy service, which forwards TCP/UDP traffic destined for a Service to the backend containers. If there are multiple replicas, kube-proxy also load-balances across them.
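
Below is a minimal Service definition that kube-proxy would implement, mapping port 80 of the Service to port 8080 of any Pod labeled app=demo; the names, labels, and ports are illustrative assumptions.

```python
# Create a Service; kube-proxy on each Node then forwards traffic hitting
# the Service to the matching backend Pods and load-balances across them.
from kubernetes import client, config

config.load_kube_config()

svc = client.V1Service(
    metadata=client.V1ObjectMeta(name="demo-svc"),
    spec=client.V1ServiceSpec(
        selector={"app": "demo"},  # backend Pods are chosen by this label
        ports=[client.V1ServicePort(port=80, target_port=8080, protocol="TCP")],
    ),
)
client.CoreV1Api().create_namespaced_service(namespace="default", body=svc)
```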

5. Release process example

1) kubectl calls the API Server to create a ReplicaSet; the API Server stores the ReplicaSet in etcd.
2) The Controller Manager watches for ReplicaSet creation and modification, and receives a notification about the ReplicaSet created in the previous step.
3) The Controller Manager compares the existing Pods with the expected Pods; if they do not match, it creates the missing Pod resources through the API Server.
4) The Scheduler notices that new Pods need to be placed, runs its scheduling algorithm to select an idle Work node, and updates the Pod definitions through the API Server, assigning each Pod to a specific Work node.
5) The API Server notifies the kubelet on the corresponding Work node.
6) The kubelet instructs the Container Runtime to create and run the Pod's containers.
7) The Container Runtime downloads the image and starts the container, and the kubelet begins monitoring the container.
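
The whole flow can be triggered and observed from a client. A hedged sketch with the Python client: create a two-replica Deployment, then watch its Pods go from Pending to Running as the Scheduler, kubelet, and Container Runtime do their parts (names, labels, and the image are assumptions).

```python
# Create a Deployment and watch its Pods being scheduled and started.
from kubernetes import client, config, watch

config.load_kube_config()
apps = client.AppsV1Api()
core = client.CoreV1Api()

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="demo-deploy"),
    spec=client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels={"app": "demo"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "demo"}),
            spec=client.V1PodSpec(containers=[
                client.V1Container(name="app", image="nginx:1.25"),
            ]),
        ),
    ),
)
apps.create_namespaced_deployment(namespace="default", body=deployment)

# Each event shows the Pod's assigned Node and phase as the steps above unfold.
w = watch.Watch()
for event in w.stream(core.list_namespaced_pod, namespace="default",
                      label_selector="app=demo", timeout_seconds=120):
    pod = event["object"]
    print(event["type"], pod.metadata.name, pod.spec.node_name, pod.status.phase)
```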

Third, Overall architecture of K8S

Fourth, Summary

This article mainly describes the overall architecture of K8S and the role of each component on the Master and Work nodes, as shown in the architecture diagram above.
