Song Xiao, Cloud Computing Architect of Puyuan
Having won the container orchestration wars, Kubernetes is likely to become the standard layer on top of the cloud, radically changing the way distributed systems are deployed and run, while slowly fading into a taken-for-granted underpinning, much like the Linux kernel.
Why did K8S appear?
Kubernetes originates from Borg, the internal cluster management system that runs Google's production workloads. In 2014 Google open-sourced more than ten years of Borg experience as Kubernetes, starting the Kubernetes craze. That same year Microsoft, Red Hat, IBM, Docker, CoreOS, Mesosphere, and SaltStack joined the Kubernetes ecosystem, and the following year VMware, HP, Intel, and other companies joined as well. In 2015, with the release of Kubernetes 1.0, container orchestration management began to take off. Kubernetes went from being an ordinary contestant (Docker was then the judge of container technology, while Docker Swarm, Apache Mesos, and Google's Kubernetes each competed as container orchestrators) to the leader of the container orchestration competition, and then rose to become the referee itself (Kubernetes now presides over container orchestration, while Docker and CoreOS's rkt compete in container isolation technology), leaving the rest of the container field in the dust.
Next we focus on K8s; we hope this serves as a useful reference. If you find it helpful, don't be stingy with your thumbs up!
What is K8S?
Kubernetes (commonly referred to as K8s, where the "8" replaces the eight letters "ubernete") is an open source system for automatically deploying, scaling, and managing containerized applications. It was designed by Google and donated to the Cloud Native Computing Foundation (now part of the Linux Foundation).
It is designed to provide "a platform for automated deployment, scaling, and running of application containers across host clusters." It supports a range of container tools, including Docker. In 2017, the CNCF announced the first Kubernetes Certified Service Providers (KCSPs), including IBM, Mirantis, Huawei, inwinSTACK, and others.
What can K8S do?
Kubernetes can schedule and run containers on either a physical cluster or a virtual machine cluster, but it can do much more.
To take full advantage of containers and move away from traditional application deployment, containers need to be deployed and run independently of the infrastructure.
However, when a particular container is no longer bound to a particular host, the host-centric infrastructure is no longer suitable: load balancing, automatic scaling, etc., hence the need for a container-centric architecture, which Kubernetes provides.
Kubernetes meets some common requirements for applications in production environments, such as:
- Co-locating helper processes, enabling composite application deployments while preserving the one-application-per-container model
- Mounting storage systems
- Distributed secrets management
- Application health checks
- Replicating application instances
- Horizontal auto-scaling
- Naming and service discovery
- Load balancing
- Rolling updates
- Resource monitoring
- Log access and ingestion
- Introspection and debugging
- Authentication and authorization
These capabilities combine the simplicity of Platform as a Service (PaaS) with the flexibility of Infrastructure as a Service (IaaS), improving portability across infrastructure providers.
In terms of cluster management, K8s divides the machines in a cluster into a Master node and a group of worker Nodes. The Master node runs a set of cluster-management processes: kube-apiserver, kube-controller-manager, and kube-scheduler. These processes automate resource management, Pod scheduling, elastic scaling, security control, system monitoring, and error correction for the whole cluster.
The Pod is the most important and most basic concept in K8s. Each Pod has a special Pause container called the "root container," which is independent of the business containers and does not die easily; its state represents the state of the entire container group. The Pause container image is part of the K8s platform. In addition to the Pause container, each Pod contains one or more user business containers.

There are actually two types of Pods: ordinary Pods and static Pods. A static Pod is not stored in Kubernetes' etcd store; instead it is stored in a specific file on a specific Node and runs only on that Node. An ordinary Pod, once created, is stored in etcd, then scheduled by the Kubernetes Master and bound to a specific Node, where the kubelet process instantiates it as a set of related Docker containers and starts them.

By default, when a container in a Pod stops, Kubernetes automatically detects the problem and restarts the Pod (restarting all of the Pod's containers); if the Node where a Pod resides goes down, all Pods on that Node are rescheduled to other Nodes. The Pod (green box in the figure above) is placed on a Node and contains a group of containers and volumes. Containers in the same Pod share the same network namespace and can communicate with each other using localhost.
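The ideas above can be illustrated with a minimal Pod manifest. This is a sketch: the names and images (`web-pod`, `nginx`, `busybox`) are illustrative assumptions, not from the original text.

```yaml
# A hypothetical Pod with two business containers. Both share one network
# namespace (provided under the hood by the Pause "root container"), so
# the log-agent container can reach the web container via localhost:80.
apiVersion: v1
kind: Pod
metadata:
  name: web-pod            # illustrative name
spec:
  containers:
  - name: web              # business container 1
    image: nginx:1.25
    ports:
    - containerPort: 80
  - name: log-agent        # business container 2
    image: busybox:1.36
    command: ["sh", "-c", "sleep 3600"]
```

Both containers are also reachable from outside the Pod through the single shared Pod IP.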
Endpoint (Pod IP + containerPort). Pod IP: multiple containers in a Pod share the Pod's IP address. K8s requires the underlying network to support direct TCP/IP communication between any two Pods in the cluster, which is typically achieved with Flannel or Open vSwitch. In VMware, the analogous Layer 2 switching technology is the vSwitch; today data-center Layer 2 networking is gradually moving from the vSwitch to Open vSwitch.
A Label is similar to a tag in Docker: one is attached to images, containers, volumes, and other Docker resources, while the other is attached to Kubernetes resource objects such as Node, Pod, Service, and RC. The difference is that a Label is a key-value pair. Labels can be combined with Kubernetes' Label Selectors to select groups of objects.
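As a sketch, labels are plain key-value pairs in an object's metadata; the keys and values below are illustrative assumptions:

```yaml
# Fragment of a hypothetical Pod definition: any number of key-value
# labels may be attached under metadata.labels.
metadata:
  name: web-pod
  labels:
    app: web          # a Label Selector such as "app=web" matches this Pod
    tier: frontend
```

An RC or Service with the Label Selector `app: web` would then automatically match every object carrying that key-value pair.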
What does the Replication Controller (RC for short) do? It is the mechanism for automatically controlling the number of Pod replicas: an RC ensures that a specified number of Pod "replicas" are running at any given time.
If an RC is created for a Pod and three replicas are specified, it creates three Pods and continuously monitors them. If a Pod stops responding, the Replication Controller replaces it, keeping the total at three. If the previously unresponsive Pod recovers and there are now four Pods, the Replication Controller terminates one of them, bringing the total back to three. If you change the replica count to five at run time, the Replication Controller immediately starts two new Pods, ensuring the total is five. You can also shrink the Pod count in the same way, which is useful when performing rolling upgrades.

Note: deleting an RC does not affect the Pods it created; logically, Pod replicas and the RC are decoupled. When creating an RC, you need to specify a Pod template (used to create Pod replicas) and a Label (the Pod Label the RC monitors).

Deployment is derived from the Replication Controller and is about 90% similar to RC; it exists to better solve Pod orchestration, so we won't discuss it for now. The Horizontal Pod Autoscaler (HPA) is, like RC and Deployment, a K8s resource object. It works by tracking and analyzing the load changes of all the target Pods controlled by an RC and deciding whether to adjust the target replica count.
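The three-replica scenario above could be written as a ReplicationController manifest along these lines (a sketch; `web-rc`, the label values, and the image are assumptions):

```yaml
# Hypothetical RC that keeps exactly 3 replicas of a web Pod running.
apiVersion: v1
kind: ReplicationController
metadata:
  name: web-rc
spec:
  replicas: 3              # the RC continuously reconciles toward 3 Pods
  selector:
    app: web               # the Pod Label the RC monitors
  template:                # Pod template used to create each replica
    metadata:
      labels:
        app: web           # must match the selector above
    spec:
      containers:
      - name: web
        image: nginx:1.25
        ports:
        - containerPort: 80
```

Changing `replicas` to 5 (for example with `kubectl scale rc web-rc --replicas=5`) makes the controller immediately start two additional Pods.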
The Service corresponds to a "microservice" in a microservice architecture. She is the real bride, and the Pod, RC, and other resource objects introduced earlier are really just her wedding clothes.
Each Pod is assigned a separate IP address, and each Pod provides a separate Endpoint (Pod IP + containerPort) for clients to access. Multiple Pod replicas then form a cluster to serve the service. Typically you deploy a load balancer (software or hardware), open an external service port such as port 8000 for the Pod group, and add the Endpoints of these Pods to the forwarding list of port 8000. Clients can then access the service through the load balancer's external IP address plus the service port, and the load balancer's algorithm determines which Pod each client request is forwarded to.
A Kubernetes Service defines an entry address for a service. Front-end clients access the cluster of Pod replica instances behind it through this entry address. The Service and its back-end Pod replica cluster are "seamlessly connected" by a Label Selector.
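The entry address plus Label Selector wiring might look like this sketch, which exposes a Pod group on port 8000 (the names, label, and ports are illustrative assumptions):

```yaml
# Hypothetical Service: one stable entry point in front of all Pods
# labeled app=web, forwarding Service port 8000 to container port 80.
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  selector:
    app: web          # "seamlessly connects" to every Pod with this Label
  ports:
  - port: 8000        # the Service's own port (on its ClusterIP)
    targetPort: 80    # the containerPort of the backing Pods
```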
A Service can be abstracted as a special flat bidirectional pipe: through its Label Selector, the Service ensures that the front end reliably reaches the corresponding back-end containers, while the RC ensures that the Service's capacity and quality of service stay at the expected level.
Kubernetes follows the same pattern. The kube-proxy process running on each Node is a smart software load balancer that forwards requests for a Service to one of its back-end Pod instances, implementing load balancing and session affinity internally. But Kubernetes adds a clever and far-reaching design: instead of having Services share a load balancer's single IP address, each Service is assigned a globally unique virtual IP address called the ClusterIP. Each Service thus becomes a "communication node" with a unique IP address, and service invocation becomes a basic TCP network communication problem.
A Pod's Endpoint address changes when the Pod is destroyed and re-created, because the new Pod's IP address differs from the old one's. Once a Service is created, however, Kubernetes automatically assigns it an available ClusterIP, and that ClusterIP does not change throughout the life of the Service. The tricky problem of service discovery is therefore easily solved in Kubernetes' architecture: a DNS mapping from the Service's Name to its ClusterIP is a perfect solution.
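Assuming a Service named `web-service` in the `default` namespace that listens on port 8000 (illustrative names, not from the original text), cluster DNS lets any client Pod reach it by name rather than by ClusterIP:

```yaml
# Hypothetical client Pod: it resolves the Service by its DNS name, so it
# never needs to know the ClusterIP (which stays stable anyway for the
# Service's entire life).
apiVersion: v1
kind: Pod
metadata:
  name: client
spec:
  containers:
  - name: curl
    image: curlimages/curl:8.8.0
    command: ["curl", "http://web-service.default.svc.cluster.local:8000/"]
```

Within the same namespace the short name `web-service` resolves as well.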
K8S Architecture components
Master node and Node work node
(1) Master components
The Master is the cluster's controller node. Every K8s cluster needs a Master node responsible for managing and controlling the entire cluster. The Kubernetes Master provides a unified view of the cluster and runs a set of components.
- Kubernetes API Server (kube-apiserver): the key service process that provides the HTTP REST interface. It is the only entry point for adding, deleting, modifying, and querying all resources in K8s, and also the entry process for cluster control. The API Server exposes REST endpoints that you can use to interact with the cluster.
- Kubernetes Controller Manager (kube-controller-manager): the automatic control center for all resource objects in K8s.
- Kubernetes Scheduler (kube-scheduler): the cluster's "dispatch room." The process responsible for resource scheduling (Pod scheduling); it places Pods, such as the replicas created through a Replication Controller, onto suitable Nodes.
(2) Node components
A Node (orange box in the figure above) is a physical or virtual machine acting as a Kubernetes worker, historically called a Minion. Each Node runs the following key Kubernetes components.
(1) kubelet: collaborates with the Master as its agent on the Node, responsible for creating, starting, and stopping Pod containers and related tasks. By default, the kubelet registers itself with the Master and periodically reports its Node's status to the Master.
(2) kube-proxy: the Kubernetes Service uses this proxy to route connections to Pods, and it also serves as a load balancer spreading traffic across a set of Pods; for example, it is useful for load balancing web traffic.
(3) Docker or rkt: the container runtime Kubernetes uses to actually create and run containers.
Why study K8S?
With Kubernetes, you can quickly and efficiently respond to the following customer requests:
- Dynamic and precise deployment of applications
- Dynamic extension of the application
- Roll out new features seamlessly
- Optimize hardware resources on demand
Kubernetes is:
- Portable: public cloud, private cloud, hybrid cloud, multi-cloud
- Extensible: modular, pluggable, hookable, composable
- Self-healing: auto-placement, auto-restart, auto-replication, auto-scaling
The Kubernetes project was launched by Google in 2014. It builds on 15 years of experience running production workloads at Google, combined with some of the best ideas and practices from the community.
I believe the above has made clear how important K8s is. K8s is used in many large enterprises, so it is well worth our while to learn the technology.
I also collected a lot of material before writing this, but it was relatively scattered and I still had to organize it myself. There are also some interview questions worth accumulating along the way.