This is the 12th day of my participation in the August Challenge

Kubernetes Architecture Fundamentals: Change in the Cloud Era

Container technology is lighter and more elegant than virtualization, and better suited to building and deploying applications in the microservices era. Kubernetes, built on container technology, is not an isolated cloud platform: it aspires to be a specification for cloud computing, and it has already become the de facto standard.

Kubernetes is commonly abbreviated as K8s, and that abbreviation is used throughout the rest of this article.

The advantages of K8s

  • K8s can be seen as a framework for the cloud computing control plane; the three elements of cloud computing (computing, networking, and storage) are integrated with Kubernetes in the form of plug-ins.
  • On top of container technology, K8s builds a more sophisticated model abstraction. With a standardized model as a unified API, K8s breaks down the boundary between cluster operators and application developers: it connects the different roles through a common language and lets the platform layer and the application layer negotiate through well-defined semantics.
  • K8s is highly extensible; its custom resource model has given rise to a rich ecosystem.
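The unified model mentioned above can be sketched in plain Python data standing in for a YAML manifest (the field values here are hypothetical examples, not a real deployment): every K8s object, whatever its kind, shares the same envelope of apiVersion, kind, metadata, and spec.

```python
# A minimal, hypothetical K8s-style object expressed as plain data.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "web", "labels": {"app": "web"}},
    "spec": {"containers": [{"name": "web", "image": "nginx:1.25"}]},
}

def has_unified_envelope(obj: dict) -> bool:
    """Check the shared top-level fields that give all objects one unified API."""
    return all(key in obj for key in ("apiVersion", "kind", "metadata", "spec"))

assert has_unified_envelope(pod)
```

Because every object shares this shape, one set of tooling (kubectl, controllers, admission hooks) can operate on all object kinds uniformly.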

This article introduces the fundamentals of the K8s architecture, including the advantages of container technology, the design principles of K8s objects, and how the K8s control-plane components and node components work together.

Preface

Container technology has gone through several generations of innovation, but its core goal has never changed: confining an application process to a separate runtime environment to meet the needs of encapsulation and isolation.

The emergence of Docker completely changed the landscape of container technology, and even of the cloud computing industry. It introduced the concept of the container image, which packages, stores, and distributes an application together with all of its dependencies and even the operating system, fundamentally changing software delivery and greatly simplifying application deployment.

Building on Docker, Google launched K8s, which focuses on cluster management, container orchestration, and service discovery. K8s abstracts computing, network, and storage at the infrastructure level, as well as the applications and services running on top of it; it controls all the elements of cloud computing through a unified model and provides a set of independent, composable control processes that reduce the workload of developers.

Change in the Cloud Era

Application deployment has gone through three eras: the physical machine era, the virtualization era, and the container era, as shown in the figure below:

Physical Machine Era

In the early days, software mostly used a monolithic architecture: all functions of a system ran in a single process, which required a powerful machine to support it. This approach has many shortcomings:

  • An application's dependencies must be installed on the host in advance. Packages present in the test environment may be missing in production, causing failures and rollbacks.
  • When multiple applications coexist on the same host, they share the same runtime environment, so they can interfere with one another.
  • Different applications live as plain processes on the same compute node, with no isolation between them.
  • In this deployment mode, an application is usually pinned to a fixed node, which makes automated operation and maintenance difficult.
  • Resources are managed per compute node, and critical applications require standby nodes, so additional hardware must be purchased.

Virtualization Era

Hardware could not evolve fast enough to keep pace with software, so virtualization technology arose, bringing stronger isolation of computing resources. A virtual machine (VM) essentially simulates multiple complete operating systems on a single physical machine. Each operating system instance manages its own file system and device drivers and is allocated dedicated computing resources such as CPU, memory, and disk.

With virtualization, a physical node can be split into smaller logical nodes. The next question is how to manage such a large number of application instances. Cloud computing arose to solve the management of massive numbers of applications across large server clusters; it can fairly be called a companion technology of virtualization.

Cloud computing places the tens of thousands of nodes in a cluster under unified control and abstracts system resources such as computing, storage, and network. Cloud users do not need to care about the infrastructure; they only declare the computing resources they need, and the cloud platform automatically selects the most suitable resources and allocates them to meet the business requirements.

However, the core unit managed by a virtualization-based cloud platform is the virtual machine, which is an operating system; deploying the application onto that operating system is still not eliminated. Cloud computing therefore covers several layers of management scope, as shown in the figure:

  • Infrastructure as a Service (IaaS)

At the infrastructure level, IaaS abstracts computing, network, and storage resources, and provides and monitors access to these basic resources. When an IaaS user sends a request, the cloud platform only provisions basic resources such as a VM; the user remains responsible for how the VM is used. In other words, after creating a VM, the IaaS user still has to handle application deployment, including automating it.

  • Platform as a Service (PaaS)

Building on IaaS, the cloud platform allocates storage resources and constructs the application access network for the target deployment environment, including load-balancing and domain name (DNS) configuration.

In addition, beyond installing the operating system, each instance is provisioned with auxiliary software for application deployment and operation, such as middleware like Tomcat and Node.js, application startup scripts, and application distribution agents.

PaaS is application-oriented. Once an application instance is created with PaaS, the network topology is already set up and the middleware and file distribution system are already built into the operating system; users only need to deploy their code to make the application accessible.

  • Software as a Service (SaaS)

In this mode, the software is already deployed, and cloud users are simply its end users. Choosing SaaS, however, means giving up customization and using only the standard services the software vendor provides.

Containerization Era

Virtualization does not solve the application's environment-dependency problem or the application-distribution problem, and it makes deployment more complex, so container technology came into being.

Container technology relies on existing, mature technologies to create a completely isolated runtime environment for applications, to guarantee quality of service through pre-allocated resources, and to achieve incremental distribution through layered file systems and image registries.

Significant advantages of containerization over virtual machine technology:

  • Containers run as processes rather than virtual machines, with no operating system emulation. They therefore start quickly and occupy few resources, leaving more computing resources for the applications themselves and reducing hardware cost.

  • Containers isolate processes with Linux namespace technology, giving each user process an independent network configuration, file system, user space, process space, and so on.

  • Containers limit the resources of user processes with Linux control groups (cgroups). CPU, memory, and disk I/O can be allocated to each container instance, isolating user processes on the same host from one another.

  • Like virtual machines, containers have images, and Docker adds the Dockerfile, letting users manage a container image's source like source code. Container images are application-oriented rather than operating-system-oriented.

  • Containers use a layered file structure. When building an image, Docker turns each instruction in the Dockerfile into a file layer, so a Docker image is a collection of file layers. When a container runs, the layers are loaded bottom-up according to the image hierarchy (the inverse of image packaging), achieving the goal of packing once and running everywhere.

  • Each file layer has a digest computed from its content. If a layer has not changed, it does not need to be pulled again during distribution, which solves incremental file deployment. No matter how large the base image is, as long as it is unchanged, only the modified layers are pulled when the image version is updated, so little bandwidth is consumed.

  • Container images can be pushed to an image registry, then pulled and run on any other compute node, and images can be versioned with tags.
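The incremental-distribution idea above can be sketched in a few lines of Python (a toy model of the mechanism, not Docker's actual implementation): each layer is identified by a digest of its content, and only layers absent from the local cache need to be pulled.

```python
import hashlib

def layer_digest(content: bytes) -> str:
    # Content-addressed identity: the digest depends only on the layer's bytes.
    return "sha256:" + hashlib.sha256(content).hexdigest()

def layers_to_pull(image_layers, local_cache):
    """Return only the layers whose digests are not already cached locally."""
    return [digest for digest in image_layers if digest not in local_cache]

# A hypothetical image: a large base layer plus a small application layer.
base = layer_digest(b"base operating-system files")
app_v1 = layer_digest(b"application v1")
app_v2 = layer_digest(b"application v2")

cache = {base, app_v1}  # this node already ran v1 of the image
# Upgrading to v2 fetches only the changed application layer, not the base.
assert layers_to_pull([base, app_v2], cache) == [app_v2]
```

This is why a new release of a multi-gigabyte image can deploy in seconds: the unchanged base layers never cross the network again.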

What role does K8s play in the containerization era?

  1. Cluster management

K8s is centered on compute nodes, which form a cluster whose members can communicate with each other. K8s monitors and manages the health status and available resources of these nodes.

  2. Job scheduling and job management
  • Multiple storage modes: containers can mount many storage types, such as local storage and the network storage offered by public clouds.
  • Automated, controlled upgrade and rollback: when a new container image is released, new containers can be created from it according to a given strategy while the old ones are removed, until the workload reaches the desired state.
  • High-utilization scheduling: based on the CPU and memory that containers request, the system finds the most suitable node in the cluster to run them, so that no node is overloaded or underused and cluster resources are fully utilized.
  • Effective self-healing: if a container exits or its service becomes unhealthy, the container can be deleted and rebuilt, and it is removed from the service endpoints until the new container is ready to serve.
  • Secret and configuration management: sensitive information such as passwords, along with application configuration, is stored and managed centrally and can be updated at any time without rebuilding the container image.
  3. Service discovery and service governance

Containers can expose services inside and outside the cluster through DNS and cluster IP addresses, and traffic can be load-balanced across them so that no single container is overloaded.
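The load-balancing behavior can be illustrated with a minimal sketch (in real clusters this is done by kube-proxy with iptables/IPVS rules, not application code; the endpoint addresses below are made up): a service distributes requests across its pod endpoints in round-robin fashion.

```python
import itertools

class Service:
    """Toy cluster-IP service: spreads requests across pod endpoints round-robin."""

    def __init__(self, endpoints):
        self._cycle = itertools.cycle(endpoints)

    def route(self):
        # Pick the next endpoint in rotation for the incoming request.
        return next(self._cycle)

svc = Service(["10.0.0.1:8080", "10.0.0.2:8080", "10.0.0.3:8080"])
picks = [svc.route() for _ in range(6)]
# Six requests over three endpoints: each endpoint serves exactly two.
assert picks.count("10.0.0.1:8080") == 2
```

The callers only ever see the stable service address; which pod actually serves a request is decided per-connection behind that address.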

In fact, many cloud computing platforms can provide the functions above, so what is the core competitiveness of K8s?

  1. Declarative system

Declarative systems stand in contrast to imperative systems.

A declarative system pursues eventual consistency: the system keeps trying to make the actual state match the declared state, so the whole system is built on asynchronous calls. The core advantage of K8s is that it abstracts all the objects in its problem domain very well, for example compute nodes as Nodes, running application instances as Pods, and accessible application services as Services.

These object abstractions target different user scenarios: the platform-oriented Node, the application-oriented Deployment, the security-oriented NetworkPolicy, and the traffic-oriented Service. K8s removes the boundaries between the different layers of traditional cloud platforms, integrating everything from infrastructure to service access to application operations and maintenance into one unified platform.

  2. Controller mode

The controller pattern is the key to how the K8s system operates. Every abstract object in K8s has a corresponding controller component. Each controller watches for changes to the objects it cares about, configures the system according to the object's latest desired state, and updates the object's actual status once configuration is complete. Working together, these controllers keep the applications running across the cluster consistent with the user's expectations.
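One pass of such a control loop can be sketched as a toy reconcile function (a simplified illustration of the idea, not the real ReplicaSet controller): compare the desired state with the observed actual state and emit the actions needed to converge them.

```python
def reconcile(desired_replicas, actual_pods):
    """Compare desired state with actual state and return convergence actions."""
    diff = desired_replicas - len(actual_pods)
    if diff > 0:
        # Too few pods: schedule creations to make up the difference.
        return [("create", None)] * diff
    if diff < 0:
        # Too many pods: schedule deletions of the surplus.
        return [("delete", pod) for pod in actual_pods[:-diff]]
    return []  # actual state already matches the desired state

# The loop runs repeatedly, so it self-heals after failures:
assert reconcile(3, ["pod-a"]) == [("create", None), ("create", None)]
assert reconcile(1, ["pod-a", "pod-b"]) == [("delete", "pod-a")]
assert reconcile(2, ["pod-a", "pod-b"]) == []
```

Because the function is driven only by observed state, it does not matter why a pod disappeared (node crash, manual deletion); the next pass converges the cluster again.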

  3. Plug-in framework

K8s provides a plug-in framework. For example, starting a Pod requires creating container instances, mounting storage, and configuring the network, but the underlying runtime, storage, and network environments can differ across users and scenarios. K8s therefore defines the Container Runtime Interface (CRI), the Container Storage Interface (CSI), and the Container Network Interface (CNI), so that different vendors can plug in their own solutions as required.
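The plug-in idea can be sketched with an abstract interface (a hypothetical simplification in Python; the real CRI is a gRPC API with a much richer surface): the platform codes against the interface, and each vendor supplies its own implementation.

```python
from abc import ABC, abstractmethod

class ContainerRuntime(ABC):
    """Hypothetical, simplified runtime plug-in interface."""

    @abstractmethod
    def create_container(self, image: str) -> str: ...

    @abstractmethod
    def stop_container(self, container_id: str) -> None: ...

class FakeRuntime(ContainerRuntime):
    """One vendor's implementation; the platform never needs to know which."""

    def __init__(self):
        self.running = {}
        self._next_id = 0

    def create_container(self, image):
        self._next_id += 1
        container_id = f"c{self._next_id}"
        self.running[container_id] = image
        return container_id

    def stop_container(self, container_id):
        self.running.pop(container_id, None)

runtime: ContainerRuntime = FakeRuntime()
cid = runtime.create_container("nginx:latest")
assert runtime.running == {cid: "nginx:latest"}
runtime.stop_container(cid)
assert runtime.running == {}
```

Swapping containerd for CRI-O (or one CSI driver for another) then means swapping the implementation behind the interface, with no change to the platform code that calls it.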

  4. Standardization push

Besides defining the core objects of cloud computing management, K8s also allows new objects to be defined through custom resources. With the backing of major vendors and the drive of an active community, these technical solutions are very likely to become industry-wide standards in the future.


We will continue to update the K8s series.

If this article helped you, please give it a like! Your encouragement is my greatest support.