Cloud computing has developed for more than a decade. The industry's exploration moved from debating what cloud computing is, and whether it was just new wine in old bottles, to discussing how to build basic cloud capabilities, and then to how applications should be built on cloud platforms. Along the way, our understanding of and expectations for cloud computing have kept growing. Today most enterprises recognize the competitive advantages of cloud computing and have either built their own private cloud infrastructure or moved their data centers to the public cloud. How to make the best use of cloud infrastructure has become the central question of cloud computing technology. Applications built on cloud platforms are what the industry calls cloud native applications.

One. Definition of cloud native

Cloud Native is a way to build and run applications: a set of technologies and a methodology. It is a compound of Cloud + Native. Cloud means the application runs on a cloud platform; Native means the application is designed for the cloud from the start, so that it fits naturally into the cloud environment and takes full advantage of the platform's elasticity and distributed architecture.

1. History of cloud native

In 2013, Matt Stine of Pivotal first proposed the concept of Cloud Native to distinguish applications designed for the cloud from traditional applications merely deployed on top of it.

In 2015, in his book Migrating to Cloud-Native Application Architectures, Matt Stine defined several characteristics of a cloud native architecture: the twelve-factor app, microservices, self-service agile infrastructure, API-based collaboration, and antifragility.

In 2015, the Cloud Native Computing Foundation (CNCF) was established. As a vendor-neutral foundation, CNCF is committed to promoting and popularizing cloud native applications.

In 2017, Matt Stine summarized cloud native architecture as modular, observable, deployable, testable, replaceable, and disposable. Pivotal's website outlines cloud native in four key points: DevOps, continuous delivery, microservices, and containers.

2. CNCF’s definition of cloud native

CNCF (Cloud Native Computing Foundation) was founded in July 2015 under the Linux Foundation, with the original aim of advancing cloud computing centered on the "Cloud Native" approach. CNCF is a vendor-neutral foundation dedicated to promoting fast-growing open source technologies on GitHub, such as Kubernetes, Prometheus, and Envoy, to help developers build great products faster and better.

The establishment of CNCF is a milestone in cloud computing: it marks a shift of focus from building infrastructure to the cloud architecture of applications. CNCF's definition of cloud native has been continuously refined. At present, CNCF defines cloud native as:

“Cloud native technologies empower organizations to build and run scalable applications in modern, dynamic environments such as public, private, and hybrid clouds. Containers, service meshes, microservices, immutable infrastructure, and declarative APIs exemplify this approach.

These techniques enable loosely coupled systems that are resilient, manageable, and observable. Combined with robust automation, they allow engineers to make high-impact changes frequently and predictably with minimal toil.”

The first half of CNCF's description defines cloud native and names the current best-practice technologies; the second half states the goal of building cloud native applications.

CNCF also provides a map of the technology stack for building cloud native systems, along with information about the foundation's incubating projects.

▲ CNCF Cloud Native Landscape

3. Key technologies of cloud native

CNCF identifies the key technologies of cloud native, including containers, service meshes, microservices, immutable infrastructure, and declarative APIs, as the current best practices for cloud native applications.

▲ Cloud native technology stack

– Container

Container technology is a lightweight virtualization technology. The operating system kernel isolates the resources of each process (CPU, memory, disk I/O, network, and so on), so that a process running in a container is isolated from other processes to a certain extent, while avoiding the heavy overhead of virtual machines (VMs).

Containers typically work together with a container orchestration system, which provides the capabilities to deploy and organize containers. A container orchestration system manages a large number of machines (physical or virtual) as a unified cluster, deploys containers onto the machines in that cluster according to configured policies, runs multiple instances of a container with automatically configured application routing, and monitors both the infrastructure and the containers.
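To make the placement policy above concrete, here is a minimal Python sketch of the scheduling step an orchestrator performs: pick a cluster machine with enough free resources for each container. The names (`Machine`, `schedule`) and the "most free CPU wins" policy are illustrative assumptions, not the API or algorithm of any real orchestrator.

```python
from dataclasses import dataclass, field

@dataclass
class Machine:
    name: str
    cpu_free: float            # free CPU cores
    mem_free: int              # free memory in MiB
    containers: list = field(default_factory=list)

def schedule(cluster, container_name, cpu, mem):
    """Toy placement policy: among machines that can satisfy the
    container's resource request, choose the one with the most free CPU."""
    candidates = [m for m in cluster if m.cpu_free >= cpu and m.mem_free >= mem]
    if not candidates:
        raise RuntimeError(f"no machine can fit {container_name}")
    target = max(candidates, key=lambda m: m.cpu_free)
    target.cpu_free -= cpu          # reserve the resources
    target.mem_free -= mem
    target.containers.append(container_name)
    return target.name

cluster = [Machine("node-1", cpu_free=2.0, mem_free=2048),
           Machine("node-2", cpu_free=4.0, mem_free=4096)]
print(schedule(cluster, "web-1", cpu=1.0, mem=512))   # node-2 (most free CPU)
print(schedule(cluster, "web-2", cpu=2.5, mem=512))   # node-2 again (node-1 too small)
```

A real scheduler like Kubernetes' adds filters and scoring plugins on top of this basic filter-then-rank idea, but the shape of the decision is the same.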

Containers and container orchestration are of great significance to cloud native applications. Containers provide a lightweight runtime for them: compared with traditional virtualization, containers are extremely lightweight and can be deployed in seconds, and containerized applications are portable, built once and deployed anywhere. Container orchestration, in turn, deploys containers across a large cluster and provides elastic scaling and failover, giving applications high availability on containers while improving deployment automation and speed.

Linux Containers (LXC) and runC are common container technologies. runC is currently the most widely adopted container implementation; it creates and runs containers according to the OCI standard. The OCI (Open Container Initiative) aims to develop open industry standards around container formats and runtime.

The most popular container orchestration implementation is Kubernetes, an open source system for automatically deploying, scaling, and managing containerized applications. It groups the containers that make up an application into logical units for easy management and service discovery. Kubernetes distills Google's 15 years of experience running production workloads, combined with the best ideas and practices of the community. Current commercial and open source container platforms are largely built on Kubernetes.

– Immutable infrastructure

With traditional O&M infrastructure, a team typically requests one or a group of servers, and O&M staff install binary packages and configure the environment over SSH or through an agent. Changes such as version upgrades or parameter tweaks are made by adjusting configuration files server by server and deploying new code directly onto existing servers. Because the applications and parameters these servers host can be changed in place, this is mutable infrastructure, also known as "Snowflake Servers": like snowflakes, no two servers are exactly alike.

Immutable infrastructure differs from traditional O&M in that servers are never modified after deployment. Any update, whether a version upgrade or a parameter change, is made by building a new server to replace the old one. In an immutable infrastructure, servers are typically built from images, with each change producing a new image. Immutable infrastructure is also known as the "Phoenix Server" pattern: a server should be able to rise from the ashes like a phoenix.
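The contrast with in-place patching can be sketched in Python: servers are modeled as frozen objects built from a versioned image, and an "upgrade" never mutates a running server but replaces the whole fleet from a new image. All names here (`Image`, `Server`, `upgrade`) are illustrative, not any real tool's API.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Image:
    """An immutable, versioned server image (e.g. a machine or container image)."""
    version: str
    packages: tuple

@dataclass(frozen=True)
class Server:
    image: Image                      # frozen: cannot be patched in place

def deploy(image):
    return Server(image=image)

def upgrade(fleet, new_image):
    """Immutable-style upgrade: never modify a running server;
    build fresh servers from the new image and swap out the old ones."""
    return [deploy(new_image) for _ in fleet]

v1 = Image("1.0", packages=("nginx-1.24",))
fleet = [deploy(v1), deploy(v1)]

v2 = Image("1.1", packages=("nginx-1.25",))
fleet = upgrade(fleet, v2)
print([s.image.version for s in fleet])   # every server now runs image 1.1
```

Because `Server` is frozen, any attempt to mutate a deployed server raises an error, which is exactly the discipline the "Phoenix Server" pattern enforces.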

The benefits of immutable infrastructure include greater consistency and reliability across the infrastructure, as well as a simpler, more predictable deployment process. It can reduce or completely eliminate common problems of mutable infrastructure, such as configuration drift, cluster configuration inconsistency, and difficulty replicating environments.

Docker is a well-known way to implement immutable infrastructure. Docker is usually thought of as a container technology, but what it really provides is a container packaging technology, and immutable infrastructure is at its core: software is delivered through Docker images built from Dockerfiles, each release rebuilds the entire runtime environment, and each update produces a new image version. Combined with the lightweight deployment of containers, Docker realizes the benefits of immutable infrastructure particularly well.

– Microservices

As requirements keep growing, a monolithic application runs into many problems: every small change requires redeploying the whole application, and a code defect in one small module can make all services unavailable. Microservices are an architectural pattern that addresses these issues by composing a business application out of small independent services that communicate through well-defined APIs and are owned by small independent teams. A microservices architecture makes applications easier to scale and faster to develop, accelerating innovation and shortening time-to-market for new features.
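The decomposition idea can be sketched with two toy in-process "services" in Python: each owns its own data and exposes only a well-defined API, so either side can be replaced or redeployed independently. This is an illustrative sketch; real microservices communicate over the network (HTTP/gRPC), and the names here are invented.

```python
class InventoryService:
    """Independent service owning stock data; only its API is public."""
    def __init__(self, stock):
        self._stock = dict(stock)     # private to this service

    def reserve(self, sku, qty):
        """Well-defined API: reserve stock, return success or failure."""
        if self._stock.get(sku, 0) < qty:
            return False
        self._stock[sku] -= qty
        return True

class OrderService:
    """Separate service; talks to inventory only through its API,
    never through its internal data structures."""
    def __init__(self, inventory):
        self.inventory = inventory

    def place_order(self, sku, qty):
        return "confirmed" if self.inventory.reserve(sku, qty) else "rejected"

orders = OrderService(InventoryService({"book": 3}))
print(orders.place_order("book", 2))   # confirmed (stock 3 -> 1)
print(orders.place_order("book", 2))   # rejected  (only 1 left)
```

Because the order service depends only on the `reserve` contract, the inventory team could rewrite its service in another language or rescale it without the order team changing a line.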

The way microservices break an application into small, independently deployable services is a natural fit for containers. Applications on the cloud require failover, elastic scaling, and fast start and stop, which are exactly the design requirements of microservice applications. The development of container and container orchestration technology has greatly promoted microservices, and in turn, the growth of microservice applications has driven the spread of container technology.

Because a microservice application is a distributed system, it inherits the complexity of distributed system design. To manage that complexity, various microservice governance frameworks have emerged, the most popular being Spring Cloud, Dubbo, and Istio.

Spring Cloud is a microservice governance suite assembled from excellent open source microservice projects, offering a choice of solutions and components; a relatively complete one is Spring Cloud Netflix. Spring Cloud is the most widely used microservice governance framework: it builds on the mature Spring ecosystem and integrates seamlessly with Spring Boot.

Dubbo is an open source service governance project from Alibaba that also integrates with Spring. Many Internet companies in China choose Dubbo as their microservice framework.

Istio is an open source service mesh project, which we cover in the next section.

– Service mesh

As mentioned earlier, Docker and Kubernetes solve the problems of deploying, scheduling, and updating applications. However, as a distributed system, a microservice application still has to deal with many runtime problems, such as service discovery, circuit breaking, and load balancing. To solve them, the industry gradually developed microservice governance frameworks. Early microservice governance was based on development frameworks such as Spring Cloud and Dubbo. These frameworks handle the runtime problems well, but they have drawbacks: they lock in the development language, intrude into the application code, and blur the boundary between development and operations responsibilities. The service mesh emerged in this environment.

A recent hot concept, the service mesh is a software infrastructure layer for controlling and monitoring service-to-service traffic inside a microservice application. It typically takes the form of a "data plane" of network proxies deployed alongside the application code, and a "control plane" that interacts with those proxies. In this model, the service mesh is transparent to business developers, and platform operators can operate and maintain applications effectively without touching the business logic, ensuring the reliability, security, and observability of applications. The service mesh is also minimally intrusive to the development process and friendly to all languages.
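The "data plane" idea can be sketched in Python: a proxy object sits in front of a service instance and transparently adds retries and request metrics, while the business code contains none of that logic. The `Sidecar` class and the retry policy are illustrative assumptions, loosely modeled on what a mesh proxy such as Envoy does, not its actual API.

```python
class Sidecar:
    """Toy data-plane proxy: intercepts calls to a service instance and
    transparently adds retries and metrics (illustrative names only)."""
    def __init__(self, service, max_retries=2):
        self.service = service
        self.max_retries = max_retries
        self.requests = 0
        self.failures = 0

    def call(self, payload):
        self.requests += 1
        for attempt in range(self.max_retries + 1):
            try:
                return self.service(payload)
            except ConnectionError:
                self.failures += 1       # record the failed attempt
                if attempt == self.max_retries:
                    raise                # retries exhausted

_state = {"calls": 0}
def business_service(payload):
    """Business logic with no retry or metrics code of its own."""
    _state["calls"] += 1
    if _state["calls"] == 1:             # first call hits a transient error
        raise ConnectionError("transient network error")
    return f"handled {payload}"

proxy = Sidecar(business_service)
print(proxy.call("order-42"))            # succeeds on the retry
print(proxy.requests, proxy.failures)    # one request, one recorded failure
```

In a real mesh the proxy is a separate process injected next to the application container, so this resilience logic works identically for services written in any language.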

The main open source service mesh project is Istio, a complete solution built on Kubernetes to meet the various needs of microservice applications. Using the Kubernetes Pod, Istio injects a sidecar into each microservice instance that proxies all traffic in and out of the business instance, providing the insight and control capabilities a microservice governance framework requires, such as service registration and discovery, configuration management, circuit breaking, and distributed tracing. It also provides flexible configuration of gray release (canary) strategies.

– Declarative API

The opposite of a declarative API is an imperative API. With an imperative API, the caller spells out each operation step, the target system simply executes those steps, and the result is returned to the caller for processing. With a declarative API, the caller specifies only the desired final state, and the target system operates on its resources to satisfy it, without the caller having to intervene.
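The contrast can be sketched with a toy reconciliation loop in Python: the caller declares a desired set of instances, and the system repeatedly diffs actual state against desired state and acts until they match. The names are hypothetical, loosely modeled on how a Kubernetes controller works.

```python
def reconcile(actual, desired):
    """One reconciliation step: create missing instances, delete extras.
    The caller never specifies the steps, only the desired end state."""
    to_create = desired - actual
    to_delete = actual - desired
    return (actual | to_create) - to_delete

# Declarative: the caller only states the goal...
desired = {"web-1", "web-2", "web-3"}
actual = {"web-1", "web-4"}

# ...and the system drives the actual state toward it.
while actual != desired:
    actual = reconcile(actual, desired)

print(sorted(actual))   # ['web-1', 'web-2', 'web-3']
```

An imperative caller would instead have to work out and issue the individual steps itself ("delete web-4, create web-2, create web-3") and handle any failures along the way.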

The advantage of declarative APIs is that they simplify interaction between distributed systems: the caller does not need to care about process details, which greatly reduces users' workload and increases development efficiency, because the required code is simpler. The imperative approach, by contrast, offers more flexibility but puts more work on the caller.

One of the best examples of declarative APIs is Kubernetes: the YAML files used to operate Kubernetes are all declarative. There are also other open source deployment projects with declarative APIs, such as Terraform.

Two. Cloud native trends

1. O&M keeps sinking into the platform, the service mesh will become mainstream, and Serverless will be gradually adopted

One direction of cloud computing's development is the sinking of operations: management functions and O&M work unrelated to the business are pushed down into the infrastructure as far as possible, so that applications can focus on developing and running business capabilities. This trend has shaped the evolution of cloud computing: from early virtualization to IaaS and PaaS, part of the O&M responsibility for application systems has shifted to the platform.

▲ Downward shift of operation and maintenance functions

PaaS provides a runtime container for cloud applications and solves the problems of application deployment and runtime management, but plenty of O&M work remains. Microservice applications in particular face many unsolved problems, such as service registration and discovery, load balancing across application instances, failure detection and isolation, and gray release strategies. These are not solved at the PaaS level and are usually handled by a development framework, the microservice governance frameworks mentioned earlier.

Since the value of a business development team lies in delivering business functionality, the team should focus on implementing business features and leave non-functional requirements to the platform. The service mesh emerged from this demand: microservice governance concerns can be operated and managed uniformly by the service mesh, while business applications focus only on business capabilities.

Even with a service mesh, the application team still has to manage the life cycle of its business applications. This evolved into the concept of Serverless, which does not mean there is no server, but that the development team does not care what the server is: the team only submits business code and gets the running instances it needs. For the application development team, the server effectively does not exist.
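The division of responsibility can be sketched as a toy FaaS runtime in Python: the team only submits a function, and the platform owns everything about where and how it runs. The `Faas` class and its methods are invented for illustration and correspond to no real product's API.

```python
class Faas:
    """Toy FaaS runtime: teams submit functions; the platform alone
    decides placement, scaling, and life cycle (illustrative only)."""
    def __init__(self):
        self._functions = {}

    def deploy(self, name, fn):
        # In a real platform this would package the code and provision
        # instances on demand; here we just register the function.
        self._functions[name] = fn

    def invoke(self, name, *args):
        return self._functions[name](*args)

platform = Faas()
platform.deploy("resize", lambda w, h: (w // 2, h // 2))
print(platform.invoke("resize", 800, 600))   # (400, 300)
```

The developer's entire contact surface is `deploy` and `invoke`; no server, container, or cluster concept appears in their code, which is the essence of the Serverless model.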

Judging from current industry trends, the service mesh concept has been accepted by most large cloud enterprises, and its performance problems are gradually being solved. It is predictable that more microservice applications will adopt this basic capability this year. Serverless is still at an early stage of development; it includes fully managed services and FaaS (Function as a Service). Fully managed services have matured on the public cloud and will develop further as hybrid cloud spreads. FaaS will take time to replace existing development models because it requires a shift in how teams develop, but in suitable application scenarios it should see growing use.

2. Combining hardware and software to solve virtualization performance problems

With the development of cloud computing, virtualization technologies are used more and more widely, from compute virtualization to storage and network virtualization. Virtualization brings many benefits: it is the foundation of infrastructure as a service, and through it infrastructure can be managed as code, greatly improving resource manageability and automation. But virtualization also brings problems: performance loss and interference between software processes.

As a result, more resources are required than the actual services consume, raising server costs, and interference between processes can degrade the overall performance of the cloud platform. Both network virtualization and storage virtualization rely on software processes to handle network traffic and I/Os. To achieve distributed high availability and reduce packet forwarding, the underlying SDN and SDS processes are usually deployed in the same cluster as application processes. Consequently, SDN and SDS management processes may fail to handle network and I/O requests in time when CPU or memory usage spikes for whatever reason, dragging down the overall performance of the cloud platform.

One solution to these two problems combines hardware and software: the cloud platform's management processes, such as scheduling management, virtual network switches, and virtual storage gateways, are split off from the host operating system and run on a dedicated network interface card. This card carries a customizable chip (such as an FPGA) that can be programmed, preserving the advantages of virtualization while isolating management processes from business processes. And because the custom chip does the processing, performance improves substantially, greatly reducing the losses of virtualization.

At present, major public cloud vendors already use related products in their own public clouds.

3. Containers and VMs converge further

The relative merits of containers and virtual machines have been debated since the birth of container technology. Containers are lightweight, package well, and deploy easily, and since the emergence of Kubernetes they have threatened to replace virtual machines outright. However, for heavy workloads (such as relational databases and big data), container technology can fall short, and containers cannot yet match virtual machines in resource isolation and security, so many scenarios still use VMs.

Against this background, how to fuse container technology with virtualization technology and exploit the advantages of both has become a development topic in cloud computing. There are currently three main approaches: container-VM hybrids, lightweight virtual machines, and secure containers.

Container-VM hybrids. The orchestration engine for containers or VMs is modified so that both containers and VMs can be deployed through a single set of APIs, while the virtualization layer and the containers communicate with each other more efficiently. This is what some traditional virtualization vendors are doing now; one of the more mature implementations is Red Hat's KubeVirt.

Lightweight VMs. Lightweight VM solutions address the problems of large VM images, slow startup, and high resource consumption. A lightweight VM uses a thin, dedicated library operating system (LibraryOS) that can be compiled from high-level languages and runs directly on the hypervisors of commercial cloud platforms. Lightweight VMs have many advantages over containers, not least VM-grade isolation, along with fast startup times and a small attack surface. Because they use their own operating systems, however, they lag other solutions in language support and compatibility. There are many lightweight VM technologies, such as Unikernel, Drawbridge, MirageOS, and HaLVM.

Secure containers, also called sandboxed containers. To address the weak isolation of containers, a secure container adds a sandbox layer around the container runtime: the application running in the container gets its own kernel and virtualized devices, separated from the host and from other sandboxes. The advantage of secure containers is compatibility with existing container images, usable directly without large changes to Kubernetes orchestration; the disadvantage is that some performance and flexibility are sacrificed. Current open source secure container projects include Kata Containers and Google's gVisor, among others.

The implementations of secure containers and lightweight VMs may overlap to some extent, but both directions point toward the convergence of virtual machines and containers: improving the isolation and security of business applications while retaining the container's lightweight packaging, fast delivery, and flexible scheduling.

Three. Review

Cloud native technology is a collection of the best patterns for enterprise IT systems at the current stage of technology. By following cloud native technologies and design patterns, enterprises can take full advantage of cloud platforms and build stable, efficient systems with minimal impact on development efficiency. Technology keeps evolving, and cloud native is an iterative process in which development habits and methods change along with it. (Article from the TWT Enterprise IT Community, with thanks.)