Summary: The DevOps culture and the automation tools and platform capabilities that underpin it play a key role in the growing popularity of cloud native architectures.

Written by Xi Yang

Cloud native is becoming an accelerator for enterprise business innovation and a means of meeting challenges of scale.

The changes brought by cloud native are not limited to technical aspects such as infrastructure and application architecture; they also extend to R&D concepts, delivery processes, and IT organization, driving changes in enterprise organization, process, and culture. Behind the growing popularity of cloud native architectures, the DevOps culture and the automated tools and platform capabilities that underpin it play a key role.

Cloud native reshapes the collaboration interface between development and operations

Compared to cloud native, DevOps is nothing new, and its practices have long been embedded in modern enterprise application architectures. DevOps emphasizes communication and rapid feedback between teams, aiming to respond quickly to business needs, deliver products, and improve delivery quality by building automated continuous delivery and streamlined application release processes. With the adoption of container technology at scale in the enterprise, capabilities such as programmable cloud infrastructure and Kubernetes declarative APIs accelerate the convergence of development and operations roles.

The trend toward cloud native makes the cloud the standard for enterprises, and it is inevitable that the next generation of R&D platforms will be defined around cloud native. This also forces further changes in how IT is organized — new platform engineering teams are beginning to emerge. In this context, how to practice DevOps more efficiently in a cloud native environment has become a new topic and pursuit.

Evolution of next-generation DevOps platforms

As the Kubernetes ecosystem gradually matures from the infrastructure up to the application layer, platform engineering teams can more conveniently build application platforms tailored to business scenarios and the actual needs of end users — but this also brings challenges and friction for the application developers above them.

The Kubernetes ecosystem itself offers a rich pool of capabilities, but there is no scalable, convenient way to introduce a consistent upper-level abstraction that models application delivery for the hybrid, distributed deployment environments of a cloud native architecture. This lack of upper-level abstraction in the application delivery process prevents the complexity of Kubernetes from being shielded from application developers.

The figure below shows the typical flow of a DevOps pipeline in a cloud native setting. First, code is developed and hosted on GitHub, then connected to Jenkins for building and unit testing; at this point the basic development work is done. Next comes building the container image, which involves configuration, orchestration, and so on. In the cloud native world, applications can be packaged with Helm, and the packaged applications are then deployed to various environments. But there are many challenges along the way.

First, different operations capabilities are required in different environments. Second, to create a database on the cloud during configuration, you must open a separate console, and you also need to configure load balancing. After the application starts, you need to configure additional functions, including logging, policies, and security protection. As you can see, the cloud resource and DevOps platform experiences are separate, and the process is full of work stitched together across external platforms. This is very painful for beginners.

The traditional, pre-container DevOps pattern required different processes and workflows. Container technology is built with a DevOps perspective in mind, and the abstraction containers provide influences how we look at DevOps, just as traditional architectural development changes with the advent of microservices. This means following the best practices of running containers on Kubernetes and extending DevOps to GitOps and DevSecOps to make cloud native DevOps more efficient, secure, stable, and reliable.

OAM (Open Application Model) attempts to provide a cloud native application modeling language that separates the concerns of development and operations, so that the complexity of Kubernetes does not have to be passed through to developers and operators. By providing modular, portable, and extensible components, it supports a variety of complex application delivery scenarios and enables agile, platform-independent cloud native application delivery. Its full implementation on Kubernetes, KubeVela, has been recognized by the industry as a core tool for building the next generation of continuous delivery and DevOps practices.
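As a minimal illustration of this separation of concerns, a KubeVela Application declares what developers care about (components) separately from what operators attach (traits). The sketch below uses the common built-in `webservice` component and `scaler` trait; the image and names are illustrative, and the exact types available depend on the definitions installed on a given platform:

```yaml
# A minimal KubeVela Application: developers declare components,
# while operators layer on operational capabilities as traits.
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: demo-app
spec:
  components:
    - name: web
      type: webservice        # common built-in component type
      properties:
        image: nginx:1.21     # illustrative image
        port: 80
      traits:
        - type: scaler        # an operations concern, kept out of dev config
          properties:
            replicas: 3
```

Because the model is declarative, the same Application file can be applied to any Kubernetes cluster where KubeVela is installed, without the developer touching Deployment or Service objects directly.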

Recently, at the 2021 Apsara Conference, Alibaba Cloud released AppStack, the application delivery platform of its Yunxiao DevOps suite, aiming to further accelerate the large-scale adoption of enterprise cloud native DevOps. According to the AppStack R&D team, the platform fully supported native Kubernetes and OAM/KubeVela from the start of its design, so that the application deployment architecture involves no binding and no intrusion, and enterprises need not worry about migration or technical transformation costs. This is a sign that KubeVela is becoming an important cornerstone of application delivery in the cloud native era.

Building an application-centric delivery system on KubeVela

As the cloud native concept rapidly expands today, hybrid-environment deployment (hybrid cloud / multi-cloud / distributed cloud / edge) has become the inevitable choice for most enterprise applications and SaaS services, and application continuous delivery platforms and cloud native technology are likewise striding toward "consistent application delivery across different clouds and environments."

KubeVela, an out-of-the-box application delivery and management platform for modern microservice architectures, has released version 1.1. In this release, KubeVela focuses more on the application delivery process for hybrid environments, bringing multi-cluster delivery, delivery workflow definition, gray (canary) release, public cloud resource access, and many other out-of-the-box capabilities with a more user-friendly experience — helping developers move from the initial stage of "static configuration, templates, and glue code" directly into a next-generation, workflow-centric delivery experience that is "automated, declarative, based on a unified model, and naturally multi-environment oriented."

Based on KubeVela, users can easily handle the following scenarios:

Multi-environment, multi-cluster application delivery

Multi-environment, multi-cluster delivery is a standard requirement for Kubernetes. Since version 1.1, KubeVela not only enables multi-cluster application delivery, but can also work independently to manage multiple clusters directly, and can integrate OCM, Karmada, and other multi-cluster management tools for more complex delivery actions. Beyond the multi-cluster delivery strategy itself, users can define a Workflow to control ordering, conditions, and delivery to different clusters.
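A multi-cluster delivery strategy can be sketched with a policy on the Application. The example below uses the `env-binding` policy following the 1.1-era built-ins; policy and field names may vary across KubeVela versions, and the cluster names are purely illustrative:

```yaml
# Sketch: bind one application to two environments backed by
# different managed clusters (names are illustrative).
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: multi-cluster-demo
spec:
  components:
    - name: web
      type: webservice
      properties:
        image: nginx:1.21
  policies:
    - name: env-policy
      type: env-binding            # maps the app onto named environments
      properties:
        envs:
          - name: staging
            placement:             # which managed cluster receives this env
              clusterSelector:
                name: cluster-staging
          - name: prod
            placement:
              clusterSelector:
                name: cluster-prod
```

The policy only declares where the application may go; the order and conditions of rollout are controlled separately by the workflow, as described next.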

Defining a delivery workflow (Workflow)

Workflows can be used in many scenarios. For example, in a multi-environment application delivery scenario, you can define the delivery sequence and preconditions for different environments. KubeVela's workflow is CD-oriented and declarative, so it can be connected directly to CI systems (such as Jenkins) and act as the CD system, or be embedded into an existing CI/CD system as an enhancement and supplement. This makes adoption very flexible.

A Workflow is composed of a series of steps in the model, and each step is an independent capability module whose type and parameters determine what that step does. In version 1.1, KubeVela's set of built-in steps is already rich and easy to extend, helping users easily connect with existing platform capabilities and migrate seamlessly.
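As a hedged sketch, ordered delivery with a manual gate between environments might look like the following. The step types `deploy2env` and `suspend` are among the built-ins described for the 1.1 era; exact parameters may differ by version, and the environment and policy names here are assumptions:

```yaml
# Sketch: deliver to "test" first, pause for approval, then promote to "prod".
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: workflow-demo
spec:
  components:
    - name: web
      type: webservice
      properties:
        image: nginx:1.21
  policies:
    - name: env-policy
      type: env-binding          # defines the envs the steps refer to
      properties:
        envs:
          - name: test
          - name: prod
  workflow:
    steps:
      - name: deploy-test
        type: deploy2env         # deliver to the "test" environment first
        properties:
          policy: env-policy
          env: test
      - name: approval
        type: suspend            # pause until a human resumes the workflow
      - name: deploy-prod
        type: deploy2env         # runs only after the gate is lifted
        properties:
          policy: env-policy
          env: prod
```

Because each step is an independent module, a platform team can swap the `suspend` gate for a notification or a custom verification step without touching the components themselves.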

Application-centric cloud resource delivery

KubeVela is designed from an "application-centric" perspective, so it helps developers manage cloud resources better and more easily in a completely serverless manner, rather than juggling a variety of cloud products and consoles. In terms of implementation, KubeVela integrates Terraform as a built-in cloud resource orchestration tool, and supports the deployment, binding, and management of hundreds of different types of cloud services from various cloud vendors under a unified application model.

In terms of usage, KubeVela currently divides cloud resources into the following three categories:

  • As components: databases, middleware, SaaS services, etc. For example, the Alibaba-RDS service in KubeVela falls into this category.
  • As operation and maintenance features: for example, log analysis, monitoring visualization, and monitoring and alarm services.
  • As application running infrastructure: Kubernetes managed clusters, SLB load balancers, NAS file storage services, etc.
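The first category — cloud resources as components — can be sketched as below: the Alibaba-RDS database mentioned above is declared next to the workload in the same Application, with its connection details written to a Secret. Field names follow the style of the KubeVela Terraform examples and may differ by provider and version; the values are placeholders:

```yaml
# Sketch: an RDS database delivered as a component of the application,
# via KubeVela's Terraform-backed cloud resource support.
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: app-with-db
spec:
  components:
    - name: web
      type: webservice
      properties:
        image: nginx:1.21
    - name: db
      type: alibaba-rds          # Terraform-backed cloud resource component
      properties:
        instance_name: sample-db # placeholder values
        account_name: oam-user
        password: change-me      # use a real secret mechanism in practice
        writeConnectionSecretToRef:
          name: db-conn          # connection info surfaced as a Secret
```

The point of the design is that the developer never opens a cloud console: the database's lifecycle is tied to the application, and the workload reads its connection string from the generated Secret.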

Easier landing of GitOps continuous delivery practices

As a declarative application delivery control plane, KubeVela can naturally be used in GitOps fashion (either alone or together with tools like ArgoCD), and it provides more end-to-end capabilities and enhancements for GitOps scenarios, helping the GitOps concept land in the enterprise in a more user-friendly and practical way. These capabilities include:

  • Define the application delivery workflow (CD pipeline)
  • Handles dependencies and topologies during deployment
  • Simplify application delivery and management by providing a unified upper level abstraction over the semantics of existing GitOps tools
  • Declare, deploy, and bind cloud services in a unified manner
  • Provide out-of-the-box delivery strategies (Canary, blue-green releases, etc.)
  • Provide out-of-the-box mixed environment/multi-cluster deployment strategies (placement rules, cluster filtering rules, cross-environment Promotion, etc.)
  • Kustomize-style patches are provided in multi-environment delivery to describe deployment differences without the user learning any details of Kustomize itself
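The last point — Kustomize-style patches without learning Kustomize — can be sketched as an environment-level override declared inside the Application itself. This follows the `env-binding` patch style of the 1.1 era; the field layout is an assumption and may differ across versions:

```yaml
# Sketch: per-environment override declared in the app model itself,
# so Git holds one file and the differences stay visible in one place.
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: gitops-demo
spec:
  components:
    - name: web
      type: webservice
      properties:
        image: nginx:1.21
        port: 80
  policies:
    - name: patch-envs
      type: env-binding
      properties:
        envs:
          - name: prod
            patch:                 # only the prod delta is declared here
              components:
                - name: web
                  type: webservice
                  properties:
                    port: 8080     # illustrative prod-only override
```

In a GitOps setup, this single file lives in the repository and a sync tool (or KubeVela itself) reconciles it, so environment differences are reviewed in the same pull request as the application change.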

KubeVela version 1.2 is coming

The goal and vision of the KubeVela project is to continue building a natural enterprise application operating system for hybrid environments and to let developers enjoy the process of delivering applications. In the upcoming 1.2 release, KubeVela will bring an application-centric control-plane UI to facilitate enterprise application assembly, distribution, and delivery, providing developers with a simpler application delivery experience while covering more usage scenarios such as edge application delivery.

KubeVela version 1.2 will be released at KubeCon China in December 2021, so stay tuned to the KubeVela community and Alibaba Cloud native news!


This article is original content from Alibaba Cloud and may not be reproduced without permission.