The cloud native technology landscape may look messy and complicated, but viewed from different perspectives it follows one main thread. From the perspective of the timeline, the development of container technology gave birth to the cloud native movement and solved the resource-supply problem at the bottom layer. Open source Kubernetes then became the standard specification for container orchestration. As open application platforms built on Kubernetes' extensibility gradually matured, Kubernetes became the most important cornerstone of the cloud native ecosystem. Subsequently, the core ideas of Service Mesh and Serverless focused on delivering value on the business side: sinking more capabilities into the infrastructure so that applications stay lightweight and cloud-ready.

From the point of view of technical requirements, microservice architecture is the preferred way to tame the complexity of a monolith, but it significantly increases the overall complexity of the system. Container technology and Kubernetes respectively solve the problem of deploying large numbers of applications under a microservice architecture and the problem of managing and scheduling those containers. At the same time, Kubernetes provides better underlying support for Service Mesh, Serverless, and the further sinking of middleware capabilities into the infrastructure.

Containers

Containers are a technique for partitioning processes into separate spaces and arbitrating resource conflicts between those spaces. In essence, a container is a special process whose core trick is to create a "boundary" by constraining and modifying the process's runtime view of the system. In addition, its resource-limiting ability and the image-based "strong consistency" it provides make container technology one of the most critical underlying technologies of cloud native.

Docker containers are often called "lightweight" virtualization because they provide virtual-machine-like isolation, but that description is not strictly accurate. The hypervisor is the most important part of a virtual machine: through hardware virtualization it simulates CPUs, memory, I/O devices, and other hardware; a new guest operating system (Guest OS) is installed on this virtual hardware, and application processes running on the guest OS are isolated from one another.

The difference between Docker and a virtual machine lies in how processes are isolated. Docker isolates processes by attaching additional namespace parameters when they are created; no real "Docker container" runs inside the host. This sleight of hand makes a process behave as if it were running in an isolated "container" without extra resource consumption and overhead, giving Docker a significant advantage in agility and performance.
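
To make that concrete, the following Go sketch starts a shell in fresh UTS, PID, and mount namespaces, the same kernel primitive Docker builds on (Linux only, requires root; this illustrates the mechanism, not how Docker itself is invoked):

```go
package main

import (
	"os"
	"os/exec"
	"syscall"
)

func main() {
	// Start /bin/sh in new UTS, PID and mount namespaces. To the shell,
	// it looks like it runs in its own "container": its hostname changes
	// are invisible to the host, and it sees itself as PID 1.
	cmd := exec.Command("/bin/sh")
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
	cmd.SysProcAttr = &syscall.SysProcAttr{
		Cloneflags: syscall.CLONE_NEWUTS | syscall.CLONE_NEWPID | syscall.CLONE_NEWNS,
	}
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}
```

Inside the new namespaces, `hostname foo` does not affect the host, and `echo $$` reports PID 1, yet no hypervisor or guest OS is involved.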

In addition, core container features include cgroups-based resource limiting and images. Cgroups limit the resources a process group can use, including CPU, memory, disk I/O, and network bandwidth. Images give container technology its "strong consistency": an image downloaded anywhere is exactly the same and fully reproduces the environment of the original image builder, carrying that consistency through development, testing, and deployment, and making containers the mainstream way to distribute software.
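
As a sketch of the cgroups side (assuming a cgroup v2 hierarchy mounted at /sys/fs/cgroup and root privileges), the program below creates a group, caps its memory at 100 MiB, and moves itself into it:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strconv"
)

func main() {
	// Create a cgroup v2 group; the "demo" name is arbitrary.
	cg := "/sys/fs/cgroup/demo"
	if err := os.Mkdir(cg, 0755); err != nil && !os.IsExist(err) {
		panic(err)
	}
	// Cap the group's memory at 100 MiB.
	if err := os.WriteFile(filepath.Join(cg, "memory.max"), []byte("104857600"), 0644); err != nil {
		panic(err)
	}
	// Move the current process into the group; from now on its memory
	// use is accounted against the limit above.
	pid := strconv.Itoa(os.Getpid())
	if err := os.WriteFile(filepath.Join(cg, "cgroup.procs"), []byte(pid), 0644); err != nil {
		panic(err)
	}
	fmt.Println("process", pid, "now runs under", cg)
}
```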

Kubernetes

As container images became the industry standard for application distribution, "container orchestration", the technology that defines how containers are organized and managed, became the key value node of the container stack. The major container orchestrators included Docker's Compose+Swarm combination, Mesosphere's Mesos+Marathon combination, the Kubernetes project led by Google and RedHat, and the OpenShift and Rancher projects built on top of Kubernetes. In the end, Kubernetes stood out in the container orchestration war and became the de facto standard for distributed resource scheduling and automated operations, thanks to its outstanding openness, extensibility, and active developer community.

**The main design idea of the Kubernetes project is to define, from a broader perspective and in a unified way, the various relationships between tasks, while leaving room to support more kinds of relationships in the future.** Functionally, Kubernetes excels at automatically handling the relationships between containers according to the user's intent and the rules of the whole system, that is, container orchestration, which covers deployment, scheduling, and node-cluster scaling. Projects such as Mesos and Swarm are good at placing a container on the best node according to certain rules, that is, container scheduling. This is why the Kubernetes project ultimately stood out.

Kubernetes core capabilities (a minimal client-go sketch follows this list):

  • Service discovery and load balancing: exposes application services through Service resources, and supports communication between containerized applications via DNS combined with multiple load-balancing mechanisms.

  • Storage orchestration: supports multiple storage types, such as local disks, NFS, Ceph, and public cloud storage, through plugins.

  • Resource scheduling: sets resource requests and limits for Pod scheduling, supports automated application rollout and rollback, and manages application configuration.

  • Self-healing: monitors all hosts in the cluster, automatically detects and handles failures, replaces Pods that need to be restarted, and keeps the container cluster in the state the user expects.

  • Secret and configuration management: uses Secrets to store sensitive information and ConfigMaps to store application configuration, avoiding baking configuration files into an image and increasing the flexibility of container orchestration.

  • Horizontal scaling: scales applications flexibly based on CPU usage or other metrics, and at the platform level can automatically add or remove Nodes.
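
All of the capabilities above are exposed through a single declarative API. As a minimal sketch, the following Go program uses the official client-go library (assuming a reachable cluster and a kubeconfig at the default path) to list the Pods in the default namespace:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the kubeconfig from its conventional location (~/.kube/config).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// List Pods through the same declarative API that the scheduler
	// and controllers themselves use.
	pods, err := clientset.CoreV1().Pods("default").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Println(p.Name, p.Status.Phase)
	}
}
```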

A Kubernetes cluster consists of Master control nodes and worker Nodes. The Master, as the control and management plane, is made up of three closely cooperating independent components: kube-apiserver serves the API, kube-scheduler handles resource scheduling, and kube-controller-manager handles container orchestration. The cluster's persistent data, such as Pod and Service objects, is handled by kube-apiserver and stored in etcd. The worker Node carries the project's workloads; the kubelet is its core component, responsible for creating, starting, and stopping the containers of each Pod while cooperating closely with the Master to implement the basic functions of cluster management.

Today, the Kubernetes project is not only the de facto standard for container technology but also the cornerstone of the entire cloud native architecture, redefining what application orchestration and management can be in the infrastructure space. Kubernetes links the layers above and below it in the cloud ecosystem. Upward, it exposes formatted data abstractions of infrastructure capabilities, such as Service, Ingress, Pod, and Deployment, to users through its native APIs. Downward, it provides standard interfaces for plugging in infrastructure capabilities, such as CNI, CSI, Device Plugin, and CRD, so that a cloud can act as a capability provider and attach its capabilities to Kubernetes in a standardized way. With the development of microservices, DevOps, and related ideas, open application platforms built on Kubernetes' extensibility will replace PaaS as the mainstream, while the value of the cloud returns to the application itself. More and more open source projects will be developed, deployed, and operated according to cloud native principles, ultimately evolving directly into cloud services.
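
To make the CRD extension point concrete: a custom resource is just a new typed object served through the same API machinery. The hypothetical `App` type below is a minimal sketch in the style of controller-runtime/kubebuilder projects; the type and its fields are invented for illustration:

```go
package v1alpha1

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// AppSpec is the user-defined desired state of the hypothetical App resource.
type AppSpec struct {
	Image    string `json:"image"`
	Replicas int32  `json:"replicas"`
}

// App follows the standard Kubernetes object layout (TypeMeta + ObjectMeta
// + Spec); once its CRD is registered, the API server stores and serves
// App objects just like built-in resources.
type App struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec AppSpec `json:"spec"`
}
```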

Microservices

Microservices are the product of the evolution of service architectures. Following the monolithic, vertical, and service-oriented (SOA) architectures, microservice architecture (MSA) can be seen as a distributed implementation of SOA. As businesses grow and requirements accumulate, the functionality of individual applications becomes increasingly complex, and iteration efficiency drops significantly under centralized R&D, testing, release, and communication models.

Microservice architecture is essentially a trade-off: greater agility in exchange for higher operational complexity. Its advantages are small, independently deployable, decentralized services, but it also causes a surge in infrastructure requirements, cost, and complexity.

So far there is no single standard definition of microservices. According to Martin Fowler, microservice architecture is an architectural style that develops a single application as a suite of small services, each running independently in its own process and communicating through lightweight mechanisms such as HTTP APIs. These services are built around specific business capabilities, are deployed independently through fully automated machinery, may be written in different programming languages and use different data storage technologies, and keep centralized management to a bare minimum.

As Dubbo and Spring Cloud converge, more of their functionality will sink into the infrastructure.

  • Spring Cloud

Spring Cloud is the leader of the first generation of microservice architectures, providing a one-stop solution for implementing them. As an all-in-one technology stack, Spring Cloud gives developers tools to quickly build common patterns of distributed systems, including configuration management, service discovery, circuit breakers, intelligent routing, micro-proxies, a control bus, one-time tokens, global locks, leader election, distributed sessions, and cluster state.

  • Dubbo

Dubbo, an open source distributed service framework developed by Alibaba, is committed to providing high-performance, transparent RPC remote service invocation and SOA service governance. Its core parts include remote communication, cluster fault tolerance, and automatic service discovery.

In recent years the Dubbo ecosystem has kept improving. In May 2019, Dubbo-go officially joined Dubbo's official ecosystem, and subsequently implemented REST and gRPC support, connecting the Spring Cloud and gRPC ecosystems and effectively solving interoperability between Go projects and Java/Dubbo projects. Today, with the emergence of Spring Cloud Alibaba, Dubbo can seamlessly integrate with the peripheral products of the Spring Cloud ecosystem.

Both Dubbo and Spring Cloud are, to varying degrees, tied to specific application scenarios and development environments, lack generality and multi-language support, and solve only the Dev side of microservices while leaving Ops unaddressed. This gap set the stage for the rise of Service Mesh.

As complete solutions for microservice governance and communication, Dubbo and Spring Cloud will coexist and converge for a long time, but some of the functionality they provide will gradually be replaced by the infrastructure. For example, for microservices deployed on a Kubernetes cluster, it is much easier to use Kubernetes' own service registration and discovery, as the sketch below shows. Under the Istio architecture, functions such as traffic management and circuit breaking migrate into Envoy proxies, and more and more functions are stripped out of the application and sunk into the infrastructure.
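
For example, a client inside a Kubernetes cluster can discover a Service through cluster DNS alone, with no registry SDK in the application; the service and namespace names here are made up:

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// Inside a cluster, a Service named "orders" in namespace "shop"
	// resolves through cluster DNS; no client-side registry is needed.
	addrs, err := net.LookupHost("orders.shop.svc.cluster.local")
	if err != nil {
		panic(err)
	}
	for _, a := range addrs {
		fmt.Println("orders backend:", a)
	}
}
```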

Service Mesh

A Service Mesh is the infrastructure layer responsible for the reliable delivery of requests across the complex service topology of a cloud native application. By inserting a Sidecar into the request path, the Service Mesh sinks the complex functionality originally implemented in the client into the Sidecar, simplifying the client and taking over control of communication between services. When a system contains a large number of services, the call relationships between them form a mesh; this is how the service mesh gets its name.

The definition of a Service Mesh can be summarized through the following characteristics:

  • Abstraction: the Service Mesh strips communication functionality out of the application into a separate communication layer, which sinks into the infrastructure layer.

  • Function: the Service Mesh is responsible for the reliable delivery of requests, functionally equivalent to what traditional client libraries do.

  • Deployment: the Service Mesh is deployed as a lightweight network proxy, one Sidecar per application instance, and the two communicate over localhost (see the proxy sketch after this list).

  • Transparency: the functionality of the Service Mesh is completely independent of the application; it can be upgraded, extended, and patched independently, and the application need not know its implementation details, i.e. it is transparent to the application.
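
A toy version of the Sidecar in the deployment model above is just a reverse proxy bound to localhost; production proxies such as Envoy add routing, retries, mTLS, and telemetry at this point. Ports here are illustrative:

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	// The application listens on localhost:8080; the sidecar listens on
	// localhost:15001 and forwards every request to it. This forwarding
	// hop is where a real mesh injects its non-business logic.
	app, err := url.Parse("http://127.0.0.1:8080")
	if err != nil {
		log.Fatal(err)
	}
	proxy := httputil.NewSingleHostReverseProxy(app)
	log.Fatal(http.ListenAndServe("127.0.0.1:15001", proxy))
}
```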

The core value of a Service Mesh lies not only in its functions and features but in the separation of business logic from non-business logic. The non-business logic is stripped out of the client SDK and runs as a standalone Proxy process, sinking capabilities that used to live in SDKs into infrastructure based on containers, Kubernetes, or VMs. This enables hosting on the cloud and keeps applications lightweight, helping them become cloud native.

Leading Service Mesh open source software includes Linkerd, Envoy, and Istio. Linkerd and Envoy both directly embody the core concept of Service Mesh and are functionally similar: service discovery, request routing, load balancing, and so on, solving inter-service communication so that applications are unaware of it. Istio takes a higher vantage point and splits the Service Mesh into a data plane and a control plane: the data plane handles all network communication between microservices, while the control plane manages the data plane's proxies. Istio natively supports Kubernetes, bridging the gap between the application scheduling framework and the Service Mesh.

Implementing microservices requires a complete set of infrastructure. When the container becomes the smallest unit of work of a microservice, Kubernetes, as a general-purpose container management platform, can bring out the greatest advantages of microservice architecture, making it a new-generation operating system for cloud computing. Kubernetes supports both cloud native and traditional containerized applications and covers both the Dev and Ops phases; combined with a Service Mesh, it offers users a complete end-to-end microservice experience.

Serverless

Serverless generalizes the application scenarios of Service Mesh: not only synchronous communication between services, but any scenario that is accessed over the network through a client SDK, including compute, storage, databases, middleware, and other services. For example, in Ant Financial's Serverless practice, the Mesh pattern also extends to Database Mesh (database access), Message Mesh (messaging), Cache Mesh (caching), and other scenarios.

Serverless has usually been regarded as the combination of FaaS (Function as a Service) and BaaS (Backend as a Service). However, Serverless defines a user experience rather than a technology; FaaS and BaaS are merely one implementation of it. As Serverless technology matures, more and more applications running on Kubernetes will be transformed into Serverless applications.
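
To make the FaaS half of that definition concrete, here is a minimal Go function for AWS Lambda (a service this article mentions later); the event shape and greeting are invented for illustration:

```go
package main

import (
	"context"

	"github.com/aws/aws-lambda-go/lambda"
)

// Event is a hypothetical input payload for this function.
type Event struct {
	Name string `json:"name"`
}

// handler is the entire "application": the FaaS platform provisions,
// scales, and bills the compute that runs it, per invocation.
func handler(ctx context.Context, e Event) (string, error) {
	return "Hello, " + e.Name, nil
}

func main() {
	lambda.Start(handler)
}
```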

Cloud native middleware

Traditional middleware is like a city's water pipes, driving and managing the flow of data from one application to another; it is highly coupled to the business and delivers no direct value to users. In the cloud era, software heterogeneity and interconnection requirements have grown significantly, and middleware has been given a new functional definition: independent, loosely coupled, modular components that sink into the infrastructure and become the key elements of a distributed application architecture that achieves high performance, high availability, high scalability, and eventual consistency.

From the perspective of functional definition, middleware is computer software that connects software components and applications. It comprises a set of services that let multiple pieces of software running on one or more machines interact across a network, and it belongs to the category of reusable software. Cloud native middleware, including APIs, application servers, transaction processing (TP) monitors, RPC frameworks, and message-oriented middleware (MOM), can also take on the roles of data integration and application integration. Any software that sits between the kernel and user applications can be understood as middleware.

With the rapid development of IoT and cloud computing, event-driven architecture (EDA) is being adopted by more and more enterprises. By abstracting and asynchronizing events, it decouples business logic and accelerates iteration, and it is moving from supporting vertical industries to serving as a general architecture for key business applications, used in packaged applications, development tools, business process management and monitoring, and other fields.

EDA is usually implemented with message middleware, which aims to carry platform-independent data communication over an efficient, reliable message-delivery mechanism. By providing message-passing and message-queuing models, it extends inter-process communication to distributed environments and integrates distributed systems around data communication. Common message middleware includes ActiveMQ, RabbitMQ, RocketMQ, and Kafka, applicable to cross-system data transfer, peak shaving of high-concurrency traffic, asynchronous data processing, and other scenarios.
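
As a sketch of how message middleware decouples producers from consumers, the following Go program publishes one event with the segmentio/kafka-go client; the broker address and topic name are assumptions:

```go
package main

import (
	"context"
	"log"

	"github.com/segmentio/kafka-go"
)

func main() {
	// Publish an event to a Kafka topic.
	w := &kafka.Writer{
		Addr:  kafka.TCP("localhost:9092"),
		Topic: "order-created",
	}
	defer w.Close()

	err := w.WriteMessages(context.Background(),
		kafka.Message{Key: []byte("order-42"), Value: []byte(`{"total": 99.5}`)},
	)
	if err != nil {
		log.Fatal(err)
	}
	// Any number of consumer groups can now read this event at their
	// own pace, decoupled from the producer in time and in deployment.
}
```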

In the cloud computing era, cloud vendors provide encapsulation that sits much closer to the business, and most of them run event loads on their own Serverless services, so middleware capabilities can easily be realized through cloud services. Alibaba Cloud Function Compute, Azure Functions, and AWS Lambda all integrate event processing.

In the future, application middleware will no longer be a capability provider but a standard interface for capability access. This interface will be built on the HTTP and gRPC protocols, with a Sidecar decoupling the service's access layer from the application's business logic, consistent with the Service Mesh idea. Furthermore, the Sidecar model can be applied to all middleware scenarios, "sinking" middleware capabilities into Kubernetes capabilities, as sketched below.
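
From the application's point of view, such a standard interface could be as simple as an HTTP call to a sidecar on localhost; the port and path below are hypothetical, not any specific product's API:

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
)

func main() {
	// Publish an event through a hypothetical middleware sidecar exposed
	// over plain HTTP on localhost. The business code never links a
	// broker SDK; the sidecar owns connectivity, retries, and auth.
	body := bytes.NewBufferString(`{"orderId": 42}`)
	resp, err := http.Post("http://127.0.0.1:3500/publish/order-created", "application/json", body)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("sidecar replied:", resp.Status)
}
```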

DevOps

As the cloud native open source ecosystem matures and complex functionality keeps sinking into the cloud, the basic patterns of software deployment and operations have largely converged. Before DevOps, practitioners used waterfall or agile models for software development. DevOps, a portmanteau of Development and Operations, is defined as a set of practices that automate the processes between software development and IT operations teams, built on a culture of collaboration that bridges the information gap between the development side and the operations side. Because it lets teams build, test, and release software faster and more reliably, it has become the mainstream model for software delivery.

Overall, DevOps consists of three parts: development, testing, and operations. More specifically, it comprises several phases: continuous development, continuous integration, continuous testing, continuous feedback, continuous monitoring, continuous deployment, and continuous operations, collectively known as the DevOps life cycle.

The division and integration of DevOps functions are fully reflected at the information-flow level: across the development, delivery, testing, feedback, and release stages, the producers and consumers of each kind of information rely on high-quality tools and systems to transmit it smoothly and accurately and to automate operations efficiently.

Seen through this development philosophy, DevOps arose because the infrastructure layer was neither strong nor standardized enough, so the business side needed a toolchain to glue together R&D, operations, and the underlying infrastructure. But as Kubernetes and the infrastructure become ever more capable, the cloud native ecosystem is abstracted into layers, with each layer interacting only with its own data abstraction: a separation of concerns between the development side and the operations side. The ever-expanding Serverless will also become a mindset and part of DevOps. On the capability side, "lightweight operations", "NoOps", and "self-service operations" will become the mainstream modes of application operations. On the application side, application descriptions are extensively abstracted on the user side, and the event-driven and Serverless concepts are split and generalized so they can be applied to many scenarios beyond FaaS.