The concept of DevOps was first proposed in 2007, around the same time that the idea of cloud computing infrastructure emerged. With the growing popularity of the Internet, demand for application software exploded, and software development methodology gradually shifted from waterfall to Agile.

The traditional software delivery model, in which application developers focus on writing software and IT operations staff are responsible for deploying it to servers, can no longer meet the need for rapid iteration of Internet software. As a result, DevOps gained popularity as a cultural concept and a set of best practices for breaking down the wall between R&D and operations, speeding up the software delivery process, and improving the quality of software delivery.

The current situation in enterprises

The popularity of DevOps was driven by the industry’s desire for agile application development and high-quality delivery, which created a “common space” for development and operations to work closely together. Software development was still a young industry at the time, so people were used to learning from mature manufacturing. Manufacturing’s way of solving large-scale production is to build an assembly line: by standardizing each step of the line and the hand-offs between steps, each worker only needs to do their own job, quickly and proficiently completing their part of the production.

So DevOps took a cue from manufacturing and started building continuous integration / continuous delivery (CI/CD) pipelines, which led to a series of automated and semi-automated tools (Puppet, Chef, Ansible, etc.). Combined with the flexibility of scripting, these tools standardized many R&D and operations tasks so the two sides could cooperate. But someone had to invest in building these tools, and that is where the DevOps team came in. The DevOps team builds tools and platforms that bring R&D closer to the production environment: developers can deploy with one click and quickly trial and error through continuous integration and continuous delivery, which exposes, and largely prevents, problems in the actual operation of the software.
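As a minimal illustration of how such tools standardize an operations task (the host group, service name, and paths below are hypothetical), an Ansible playbook might let R&D trigger a deployment with a single command:

```yaml
# Hypothetical playbook: deploy an application artifact to a group of web servers.
- hosts: webservers
  become: true
  tasks:
    - name: Copy the application artifact to the server
      copy:
        src: ./build/app.tar.gz
        dest: /opt/app/app.tar.gz
    - name: Unpack the artifact on the remote host
      unarchive:
        src: /opt/app/app.tar.gz
        dest: /opt/app
        remote_src: true
    - name: Restart the application service
      service:
        name: app
        state: restarted
```

Once a step like this is encoded in a playbook, it behaves like a station on an assembly line: anyone can run it, and the result is the same every time.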

At its core, **DevOps is for operations.** It turns the production environment’s operation and maintenance processes into automated tools, shielding infrastructure details while making software problems easier to surface, so that those problems reach R&D to be fixed as early as possible. All of this, in effect, helps reduce the operations burden.

This model worked well at first, but over time problems emerged. DevOps teams do not generate direct profit for the enterprise, nor do they add features to the product; they are a cost center, so many enterprises are reluctant to invest much in DevOps. Over time, as the cloud and open source communities evolved, DevOps failed to keep pace with developers’ growing demands and became a bottleneck in software development. Think about how many engineers at large companies are actually happy with their internal “R&D efficiency” tools.

The popularity of cloud computing

Smart companies always find their own opportunity in industry demand, and that is how AWS was born. In 2006, responding for the first time to the deployment needs of networked software, it offered computing and storage infrastructure as a service, allowing anyone to build Internet applications without buying physical hardware such as servers; economies of scale made the overall cost lower than building it yourself. It was in that year that the cloud computing concepts of IaaS, PaaS, and SaaS gradually took shape.

In the early stage of cloud computing, users mainly consumed IaaS services such as virtual machines and storage. Enterprises using cloud services still needed O&M staff to manage this infrastructure; the objects under management simply changed from physical machines to virtual machines, with no essential difference.

With the rapid development of cloud computing, the cloud’s capabilities kept growing and improving, gradually turning everything that operations used to provide into cloud services. This naturally includes services covering the full software life cycle: hosted code, continuous integration, continuous delivery, monitoring, alerting, automatic scaling, and so on. For each of these capabilities, a corresponding service can be found on the cloud, in a variety and quantity that is jaw-dropping.

But DevOps still has its uses. Integrating with the cloud is difficult: there are many cloud services, and different vendors offer inconsistent APIs, so using cloud products requires a large investment in learning, and avoiding vendor lock-in requires adapting to multiple vendors. DevOps teams still need to shield development from the complexity of the actual environment, just as before; only this time, the infrastructure they manage is cloud resources.

Kubernetes changed everything

The essence of Kubernetes is modern application infrastructure: it focuses on integrating applications with the cloud naturally, so as to bring out the maximum value of the cloud. Kubernetes emphasizes making infrastructure work better with applications, “delivering” infrastructure capabilities to applications in a more efficient way, rather than the other way around. In the process, Kubernetes, Docker, Operator, and the other open source projects that play key roles in the cloud native ecosystem are pushing application management and delivery into a completely different situation: Kubernetes users simply describe their application’s desired end state declaratively, and that’s it; Kubernetes takes care of the rest.

That’s why Kubernetes places so much emphasis on declarative APIs. The more infrastructure capabilities that are plugged into Kubernetes, the richer the end states its users can declare, and the simpler their responsibilities become. Now we can declare not only an application’s final state, for example “this application needs 10 instances”, but also many of its operational end states, such as “this application is upgraded using the canary release strategy” and “when its CPU usage is greater than 50%, automatically scale out by 2 instances”.
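For instance (all names below are illustrative), the replica and CPU declarations above map directly onto standard Kubernetes objects: a Deployment that asks for 10 instances, and a HorizontalPodAutoscaler that scales out when average CPU utilization exceeds 50% (the HPA declares a replica range rather than a fixed scaling step):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 10            # "this application needs 10 instances"
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-app:v1
---
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 10
  maxReplicas: 20
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50   # scale out when average CPU exceeds 50%
```

Note that the canary release strategy, by contrast, has no native field on a Deployment; it typically relies on an Operator or a service mesh.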

This poses a challenge to traditional DevOps tools and teams: if all a business developer needs to do is declare every final state of their application, including the entire SLA, through a declarative API, and Kubernetes automatically takes care of the rest, why would they bother to learn and integrate with a pile of DevOps pipelines?

In other words, DevOps has long served as the glue layer between R&D and infrastructure, and now Kubernetes is playing that glue role perfectly with its declarative API and its unlimited ability to plug in application infrastructure capabilities. It is also a reminder that the previous “glue layer” to be seriously challenged by the Kubernetes architecture was “traditional middleware”, which is taking a heavy hit from Service Mesh.

Will DevOps disappear?

In recent years, the Kubernetes project has often been described as a “sweet spot” for DevOps. On this view, Kubernetes, like Docker before it, solves the problem of the software runtime; it is essentially a “fancier” IaaS, except that the runtime has changed from a virtual machine to a container. So as long as you can connect existing DevOps ideas and processes to Kubernetes, you can enjoy the lightness and flexibility of container technology. For DevOps, which advocates “agility”, this is obviously the best combination.

However, at least for now, Kubernetes is not taking an IaaS-like path. It cares about plugging in underlying infrastructure capabilities, but it is not itself a provider of those capabilities. Moreover, Kubernetes seems more concerned with the software’s life cycle and state transitions than with the software runtime. Not only that, it also provides a mechanism called the “controller model” for continuously reconciling the software’s actual state toward its desired state, which clearly goes beyond the scope of a “software runtime.”
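The “controller model” can be sketched as a simple reconcile loop. The toy Python simulation below (not real controller-runtime code; the replica-count example is our own) shows the essential shape: observe the actual state, compare it with the desired state, and act to close the gap until the two match:

```python
def reconcile(desired_replicas, actual_replicas):
    """One reconciliation step: return the action that moves the
    actual state one unit closer to the desired state."""
    if actual_replicas < desired_replicas:
        return "create"
    if actual_replicas > desired_replicas:
        return "delete"
    return "noop"

def control_loop(desired_replicas, actual_replicas):
    """Keep reconciling until the actual state matches the desired state.

    Returns the converged state and the list of actions taken."""
    actions = []
    while actual_replicas != desired_replicas:
        action = reconcile(desired_replicas, actual_replicas)
        actions.append(action)
        actual_replicas += 1 if action == "create" else -1
    return actual_replicas, actions

# Declaring "3 replicas" when only 1 exists converges via two create actions.
state, actions = control_loop(3, 1)
```

A real controller re-runs this loop whenever the observed state changes, which is exactly why it exceeds a mere “runtime”: it actively drives the world toward the declaration.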

The Kubernetes project’s “extra focus” on the application itself sets it apart from IaaS-like infrastructure and makes its positioning as “glue” much clearer. If Kubernetes becomes capable enough, will DevOps still be necessary as the existing glue layer between R&D and infrastructure? In the so-called cloud native era, will application development and delivery really come down to “declare once” and “hands off”, making DevOps disappear altogether?

But at least for now, the Kubernetes project has a lot of hurdles to overcome.

Limitations of the “Platform for Platform” API

Kubernetes is a typical “Platform for Platform” project, so its API is far from a pure R&D perspective. A Deployment object, for example, mixes the container image that R&D cares about with resource configuration that belongs to the infrastructure side, and even container security configuration. In addition, the Kubernetes API provides no way to describe and define “operational capabilities”, which leaves the “declare once, hands off” vision far out of reach. That is why DevOps is still needed: most fields in Kubernetes still have to be filled in through collaboration between R&D and operations.
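A sketch of this mixing of concerns (field values are illustrative): in a single Deployment spec, fields owned by R&D sit right next to fields owned by operations and security, so no single role can fill in the whole object alone:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3                      # capacity planning: operations
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-app:v1         # what to run: R&D
          resources:               # CPU/memory quotas: infrastructure
            limits:
              cpu: "1"
              memory: 512Mi
          securityContext:         # container hardening: security/operations
            runAsNonRoot: true
```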

Many cloud resources cannot be described

The native API of K8s covers only a small portion of cloud resources, such as PV/PVC for storage and Ingress for load balancing, and this is completely inadequate for a fully declarative application description. Developers who look for a concept in K8s to express their need for databases, VPCs, message queues, and so on will come up empty. The existing solutions all depend entirely on each cloud vendor’s own implementation, which introduces a new vendor lock-in worry.
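Storage shows what a native declarative abstraction looks like: a developer can request storage with a PersistentVolumeClaim without caring which cloud disk backs it. There is no comparable built-in kind for a database, a VPC, or a message queue (the claim name below is illustrative):

```yaml
# Native: "give me 20Gi of writable storage" -- the cluster binds the claim
# to whatever PersistentVolume (cloud disk, NFS, ...) satisfies it.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-app-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
# There is no analogous native object like "kind: Database" or
# "kind: MessageQueue"; those needs fall back to vendor-specific CRDs.
```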

The Operator system lacks interoperability

Kubernetes’ Operator mechanism is the open secret behind the project’s ability to grow without limit. Unfortunately, today’s Operators stand side by side like chimneys, with no way to interact or collaborate. For example, suppose we expose a cloud RDS instance through the K8s declarative API with a CRD and an Operator; when a third party wants to write another CRD and Operator that periodically backs up the RDS persistence files, there is no way to make the two cooperate. Once again, the DevOps system has to step in to solve the problem.
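A custom resource of the kind described above might look like the following (the `RDSInstance` kind, its group, and its fields are invented purely for illustration; they are not a real upstream API):

```yaml
apiVersion: database.example.com/v1alpha1
kind: RDSInstance            # hypothetical CRD exposing a cloud RDS instance
metadata:
  name: orders-db
spec:
  engine: mysql
  version: "8.0"
  storageGB: 100
# A third party's backup Operator has no standard way to discover this
# object's schema or hook into its lifecycle; each Operator is its own silo.
```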

What about the future?

Obviously, the Kubernetes project still needs the DevOps system to truly achieve efficient iteration and delivery of software. This is inevitable: although Kubernetes claims to be an “application-centric” infrastructure, as a system-level project derived from Google’s Borg, its design and the level at which it works still belong largely to infrastructure territory. On the other hand, we cannot deny that on its critical path Kubernetes has always pursued “NoOps” on the R&D side. This ambition has been evident since the day it proposed “declarative application management”, and the establishment of the CRD and Operator system finally gives this application-level concern a chance to land. We have already seen many DevOps processes “sink” into declarative objects and control loops in Kubernetes, the Tekton CD project being one example.

If the future of Kubernetes is 100% declarative application management, it is reasonable to believe that DevOps will eventually disappear from the technology landscape and turn entirely into a culture. After all, by then any operations engineer will be a writer or designer of Kubernetes Controllers/Operators. And R&D? They may not even realize that Kubernetes was ever so prominent.

OAM

In October 2019, Alibaba Cloud and Microsoft jointly released the Open Application Model (OAM), breaking with the traditional software delivery model. OAM is a standard specification focused on describing the application life cycle, which helps application developers, application O&M staff, and infrastructure O&M teams collaborate better. OAM is still at an early stage, and the Alibaba team is contributing to and maintaining the technology upstream. If you have any questions or feedback, you are welcome to reach out upstream or via DingTalk:

  • Scan the QR code on DingTalk to join the OAM project’s Chinese discussion group



We look forward to your participation!
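As a rough sketch of the OAM idea (based on an early draft of the specification; the apiVersion and field names have evolved since, so treat this as illustrative only): the developer describes what to run in a Component, while operational traits such as scaling and routing are attached separately by the operations role:

```yaml
# Illustrative OAM-style component; exact fields may differ from the
# current specification.
apiVersion: core.oam.dev/v1alpha2
kind: Component
metadata:
  name: my-app
spec:
  workload:                 # the developer's concern: what to run
    apiVersion: apps/v1
    kind: Deployment
    spec:
      template:
        spec:
          containers:
            - name: my-app
              image: my-app:v1
# Scaling, canary rollout, ingress, etc. are expressed as separate
# "traits" owned by operations, rather than fields mixed into this object.
```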
