Today I’ll take a look at the evolution from traditional CI/CD to DevOps. Practices such as daily builds and smoke tests have been around for more than 10 years, for example the classic Ant + CruiseControl approach to automated builds and continuous integration.

Even in current DevOps practice, the underlying thinking behind continuous integration has not changed much, and I have previously written a summary of DevOps.

Overview of the DevOps process

First, let’s review a simple definition of DevOps:

DevOps (a combination of Development and Operations) is a group of processes, methods, and systems used to facilitate communication, collaboration, and integration between Development (application/software engineering), technical Operations, and quality assurance (QA) departments. It comes as the software industry increasingly recognizes that development and operations must work closely together in order to deliver software products and services on time.

DevOps has been around for years, and its main impetus still comes from two sources:

1. Driven by business and requirements, agile methodologies require shorter cycles, faster releases and deliveries.

2. Development and operations need to connect, and the growth of PaaS and container technology further drives the automation of this part of the work.

From this it can be seen that DevOps itself mainly solves two kinds of coordination and automation problems: one is the collaboration between development and QA, and the other is the collaboration between development and operations.

For the first problem, the focus is on CI/CD (continuous integration and continuous deployment) methodology; the second involves automated release and monitoring driven by cloud platforms, microservice architecture, container technology, and process improvement based on agile R&D.

So DevOps is not just CI/CD, but rather a fusion of best practices:

  • CI/CD: continuous integration and continuous deployment

  • Agile R&D and process collaboration

  • Microservices

  • Automated deployment and dynamic resource scaling based on container clouds

  • Automated testing to address collaboration between QA and development

While microservices and container clouds are not strictly necessary for implementing a DevOps process, microservices and containerization have become the default choice in most organizations’ R&D process improvement efforts.

So today’s article focuses on how the various process capabilities and technologies were layered on top of CI/CD to form the current complete DevOps process architecture.

CI/CD continuous integration and continuous deployment

In the earliest team software development, a developer or configuration manager was assigned to manually compile and deploy the software. Put simply, the core steps are:

  • Update to the latest code from the configuration or source code repository

  • Code compilation and build

  • Deploy the compiled deployment package to the test environment

This process itself can be automated, that is, the work done manually can be handed over to a program to complete. You can see that three typical resources are involved: the source code repository, the compile/build environment, and the test environment.

Second, the compilation process needs to be automated. The compile steps, the dependencies required for compilation, and the build order are all described in XML configuration files, and subsequent automatic compilation is driven by that configuration. From the earliest Ant to Maven, the idea is the same.
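As a rough illustration of what that automation looks like, here is a minimal Python sketch of the three steps above, assuming a Git working copy and a Maven project; the repository path, artifact name, and test-server directory are hypothetical placeholders.

import shutil
import subprocess

# Minimal sketch of automating the three manual steps above.
# Repository path, artifact name, and test-server path are hypothetical placeholders.
REPO_DIR = "/build/myapp"                    # local working copy of the source repository
TEST_SERVER_PATH = "/opt/tomcat/webapps"     # deploy target in the test environment

def update_source():
    """Step 1: update to the latest code from the source repository."""
    subprocess.run(["git", "-C", REPO_DIR, "pull"], check=True)

def compile_and_build():
    """Step 2: compile and build; the build logic itself lives in the XML config (pom.xml)."""
    subprocess.run(["mvn", "-f", f"{REPO_DIR}/pom.xml", "clean", "package"], check=True)

def deploy_to_test():
    """Step 3: copy the built deployment package to the test environment."""
    shutil.copy2(f"{REPO_DIR}/target/myapp.war", TEST_SERVER_PATH)

if __name__ == "__main__":
    update_source()
    compile_and_build()
    deploy_to_test()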

Therefore, the simplest automated compile, deploy, and smoke test process is as follows:

Why do we need continuous integration tools?

As you can see from the figure above, even the simplest build and deployment involves multiple steps: first connect to the SCM repository and update to the latest source code; then invoke the automatic compilation script to compile, trigger the automatic deployment script to deploy, call external automated test scripts after deployment, and finally generate a report or send an email when everything completes.

The whole process requires a tool to compose and orchestrate these steps automatically, namely a CI/CD tool.

Daily build and smoke test

The biggest difference between a daily build and a plain automated compile is whether there is a smoke test, which the system must pass for the daily build to be considered successful. Testers only step in for manual testing once the smoke test has passed.

To verify the correctness of the entire build, a smoke test must cover the whole system. However, it only needs to exercise the system’s main functions; passing the smoke test does not mean the system has no bugs, only that the build can be regarded as a stable version, that the daily build has succeeded, and that the system can be handed over to dedicated testers for testing.

Smoke testing usually requires automated test code or scripts prepared in advance. The CI tool simply integrates the automated test scripts and outputs the corresponding test results once the tests are complete.
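As a hedged illustration, a smoke test script integrated by the CI tool might look like the following minimal Python sketch; the base URL and endpoints are hypothetical, and the non-zero exit code is how the CI tool learns the result.

import sys
import requests

# Minimal smoke test sketch: check that the main functions of the freshly deployed
# system respond correctly. The base URL and endpoints are hypothetical examples.
BASE_URL = "http://test-env.example.com/myapp"
KEY_ENDPOINTS = ["/health", "/login", "/api/orders"]

def smoke_test() -> bool:
    for path in KEY_ENDPOINTS:
        try:
            resp = requests.get(BASE_URL + path, timeout=10)
        except requests.RequestException as exc:
            print(f"FAIL {path}: {exc}")
            return False
        if resp.status_code != 200:
            print(f"FAIL {path}: HTTP {resp.status_code}")
            return False
        print(f"PASS {path}")
    return True

if __name__ == "__main__":
    # A non-zero exit code tells the CI tool that the daily build failed the smoke test.
    sys.exit(0 if smoke_test() else 1)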

Binary-based environment migration

The CI/CD tool deploys the binary deployment package to the test environment and notifies testers to test. Once the version passes the SIT test environment, it may need to be deployed to the UAT acceptance environment so that end users can perform acceptance testing.

Should we compile and build again at this point and deploy to the UAT environment? If we do, there is no guarantee that the version the tester verified in SIT is exactly the version pushed to users in the UAT environment.

Continuous integration should build once and run the same binary deployment package in multiple places.

Therefore, the deployment version should not be rebuilt; the package that passed testing should be used directly to deploy the UAT environment. However, different environments often have different configuration, such as interface access addresses and database connection strings, which are environment-specific.

These environment-related configurations need to be pulled out of the WAR package so that only they need to be adjusted during environment migration or when deploying to a new environment.
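A minimal sketch of this idea in Python, assuming the environment-related settings are supplied as OS environment variables (the variable names are hypothetical):

import os

# Minimal sketch of externalizing environment-specific configuration instead of
# baking it into the WAR package. Variable names are hypothetical.
def load_environment_config() -> dict:
    """Read environment-related settings, set separately per environment (SIT, UAT, production)."""
    return {
        # Interface access address of the upstream service for this environment
        "interface_base_url": os.environ["APP_INTERFACE_URL"],
        # Database connection string for this environment
        "db_connection": os.environ["APP_DB_CONNECTION"],
    }

if __name__ == "__main__":
    config = load_environment_config()
    print(f"Deploying the same binary package; environment config: {config}")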

The whole process is as follows:

It can be seen that continuous integration and continuous deployment need to coordinate work across multiple environments, so the process design is best kept separate from those environments, with a dedicated server handling the design, orchestration, scheduling, and execution of the whole continuous integration pipeline.

Further decoupling: the continuous integration node and the artifact repository

As mentioned earlier, continuous integration needs to schedule and orchestrate multiple environments and tools, so it is no longer appropriate to host it inside something like the build environment; it needs to be isolated on its own.

Second, the output of automated compilation should not stay in the build environment. The build environment itself is temporary and is not suited to managing, version-tracing, or configuring the built binary packages. Therefore, the artifact store also needs to be moved out of the build environment into a separate artifact repository.
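A minimal sketch of publishing a build output to such a repository, assuming a simple file-based artifact store; in practice a tool such as Nexus, Artifactory, or a Docker registry would play this role, and the paths and version label here are hypothetical.

import hashlib
import shutil
from pathlib import Path

# Minimal sketch of moving a build output out of the temporary build environment
# into a separate artifact repository. Paths and version labels are hypothetical.
ARTIFACT_REPO = Path("/srv/artifact-repo/myapp")

def publish_artifact(war_file: str, version: str) -> Path:
    """Copy the versioned binary package into the artifact repository with a checksum."""
    target_dir = ARTIFACT_REPO / version
    target_dir.mkdir(parents=True, exist_ok=True)
    target = target_dir / Path(war_file).name
    shutil.copy2(war_file, target)
    # Record a checksum so the exact same binary can be traced across environments.
    digest = hashlib.sha256(target.read_bytes()).hexdigest()
    (target_dir / "sha256.txt").write_text(f"{digest}  {target.name}\n")
    return target

if __name__ == "__main__":
    publish_artifact("/build/myapp/target/myapp.war", "1.0.0-build.42")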

Based on the above two points, the overall process is as follows:

At this point we have nearly complete continuous integration and continuous deployment. The core ideas of the current mainstream Jenkins are still the same as above; what it adds on top of the earliest versions is more flexible pipeline design, integration with container cloud capabilities, integration with automated testing tools, and so on.

In traditional continuous integration there was rarely a separate artifact server, but with DevOps and container clusters there is more emphasis on a standalone artifact and image repository, which is the basis for deploying to the various environments and for dynamic resource scaling.

At the same time, the continuous integration server is the core of the whole setup: it handles design and orchestration, uniformly schedules each capability unit, and coordinates the compile/build process with the test and production environments.

The resources involved thus fall into two categories:

  • Continuous integration category: source code repository, build environment, artifact repository, CI/CD environment

  • Resource category: development environment, test environment, production environment

Continuous integration is ultimately about fully automated management of the entire development to delivery process.

Convergence with containers

For background on container clouds, see other articles. As continuous integration and continuous delivery converge with containers, you can see the following changes.

One is the shift from delivering binary deployment packages to delivering images.

The second is the addition of a new image-building step, commonly called packaging.

Originally the whole process had only two actions: compile/build – deploy.

After moving to container clusters, the overall process changes to three key actions: compile/build – package – deploy. Packaging is the image-building step: once the image is built it is pushed to the image repository, and the deployment task then pulls the specific version from the image repository and deploys it to the test environment.
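A minimal Python sketch of this compile/build – package – deploy flow, driving the standard docker build/push and kubectl set image commands; the registry address, image name, and deployment name are hypothetical placeholders.

import subprocess

# Minimal sketch of the compile/build - package - deploy flow with containers.
# Registry address, image name, and deployment name are hypothetical placeholders.
REGISTRY = "registry.example.com/myteam"
IMAGE = "myapp"

def run(cmd):
    subprocess.run(cmd, check=True)

def package_image(version: str) -> str:
    """Package: build the image (Dockerfile assumed in the current directory) and push it."""
    tag = f"{REGISTRY}/{IMAGE}:{version}"
    run(["docker", "build", "-t", tag, "."])
    run(["docker", "push", tag])        # the image repository now holds this version
    return tag

def deploy_to_test(tag: str):
    """Deploy: point the Kubernetes deployment in the test cluster at the new image."""
    run(["kubectl", "set", "image", f"deployment/{IMAGE}", f"{IMAGE}={tag}"])

if __name__ == "__main__":
    deploy_to_test(package_image("1.0.0-build.42"))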

The overall process is as follows:

You can see that the overall continuous integration and delivery process does not change significantly, except that the delivered unit becomes an image and an image-building step is added.

The underlying environment resources become container resource pools. These resources need to be managed uniformly, allocated, and dynamically scheduled based on resource load. A Docker container cluster therefore needs a unified management and orchestration tool, which today is typically Kubernetes for container cluster management and container orchestration.

As shown in the figure above, the code is built with Maven, and Maven itself is integrated into Jenkins. The resulting deployment package is managed in Jenkins’ own artifact repository. Jenkins is then configured to call the underlying Docker commands to generate the Docker image, which is the key artifact for our subsequent automated deployment and continuous integration.

Once this step is complete, the pipeline hands over to Kubernetes to achieve automatic deployment and dynamic scheduling of the Docker images.

One of the key aspects of continuous integration is environment migration. Note that each migration should use the same image file, while environment-related configurations should be configured separately or placed in OS environment variables. This is the only way to ensure that the final deployment package is exactly the one we developed and tested, with no changes.

If multiple nodes are enabled during dynamic deployment, load balancing capability is also required. Note that Kubernetes itself also provides load balancing and virtual IP routing capabilities. What we end up accessing is the domain name, and the domain name will eventually resolve to the actual compute node.

Collaboration with the agile development process

DevOps best practices are divided into several key process areas: R&D management, continuous delivery, and technology operations.

In practice, however, the most likely problems are not single technology points but cross-domain collaboration. R&D process management and continuous integration and delivery are really two inseparable parts; we only split them into different process areas to make them easier to understand and learn.

Therefore, pipeline design needs to make clear how R&D process management and automated continuous integration work together. The specific relationship between the two is illustrated in the figure below.

Keep in mind that any new build deployment involves testers testing, testers filing bugs when they find problems, and developers checking in code after fixing bugs, then waiting for the next packaged deployment, forming multiple iterations. The best approach is to minimize manual communication and instead coordinate through the toolchain.

For traditional continuous integration, the general best practice is a nightly automated build and smoke test, while in current DevOps processes a pipeline, once designed, can be started manually or automatically, and the interval between pipeline runs can be shortened further.

Let’s take this scenario a step further and assume we run the pipeline automatically every 2 hours, because for most organizations even the most frequent integration does not require every code change to immediately trigger a build and package.

Adding a start-check node to the pipeline

Although the pipeline is triggered every two hours, a check runs before it starts. If there has been no new code check-in within the last two hours, the pipeline is not executed. Second, if the previous pipeline instance is still running or not yet closed, this run is also skipped.
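A minimal sketch of this start-check, assuming a Git repository; the check of the previous pipeline instance is a hypothetical placeholder for a query against the CI server.

import subprocess

# Minimal sketch of the start-check described above. The pipeline-status lookup
# is a hypothetical placeholder; a real setup would query the CI server's API.
def has_new_commits(repo_dir: str, hours: int = 2) -> bool:
    """True if any commit landed in the repository within the last `hours` hours."""
    out = subprocess.run(
        ["git", "-C", repo_dir, "log", "--oneline", f"--since={hours} hours ago"],
        capture_output=True, text=True, check=True,
    )
    return bool(out.stdout.strip())

def previous_run_still_open() -> bool:
    """Hypothetical check of whether the last pipeline instance is still running or unclosed."""
    return False  # replace with a query to the CI server

def should_start_pipeline(repo_dir: str) -> bool:
    if not has_new_commits(repo_dir):
        return False            # no check-ins in the last two hours: skip this run
    if previous_run_still_open():
        return False            # last instance not closed yet: skip this run
    return True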

Whether manual verification is required

A pipeline’s package-and-deploy run matters in two cases: a new feature has been submitted or a bug has been resolved, and only then is manual verification required. Therefore, if a pipeline run involves no new requirements and no bug state changes, the manual validation node should be skipped and the run closed automatically; otherwise it should move on to the manual verification step.
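A minimal sketch of this gate; the requirement/defect lookup is a hypothetical placeholder for a query against the R&D management tool.

# Minimal sketch of the manual-verification gate. The requirement/defect lookup is a
# hypothetical placeholder for a query against the R&D management (ALM) tool.
def items_changed_in_this_run(pipeline_run_id: str) -> list[str]:
    """Hypothetical: return requirements or defects whose state changed for this run."""
    return []  # replace with a query to the requirements/defect tracker

def needs_manual_verification(pipeline_run_id: str) -> bool:
    changed = items_changed_in_this_run(pipeline_run_id)
    if not changed:
        # No new requirements and no bug state changes: skip manual validation and close.
        return False
    # Otherwise hand the run over to the manual verification step.
    return True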

Requirements and defect management

Note that both new requirements and defects should be tracked with explicit states. Requirements are broken down into specific requirement function points, and defects submitted by testers should map to specific requirement function points. When development of a requirement is finished, the requirement itself moves into a to-be-verified state, but a to-be-verified requirement can only be closed once all of its bugs have been resolved.

Changes in requirements and defect states

Developers first finish the requirement or defect, self-test locally, and then check the code into the SCM repository, at the same time manually moving the requirement or defect into the to-be-deployed state. Once the pipeline starts and the whole build, package, and deploy succeeds, requirements or bugs in the to-be-deployed state are moved to the to-be-verified state. After deployment completes, testers can see the bugs or requirements awaiting verification and know that the current test environment is the latest one to use for verification.
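A minimal sketch of these state transitions, with hypothetical state names; in practice this logic lives in the requirements/defect tracking tool and is triggered by the pipeline.

# Minimal sketch of the state transitions described above. State names and the item
# structure are hypothetical; a real implementation would live in the ALM/tracking tool.
ALLOWED_TRANSITIONS = {
    "in-development": ["to-be-deployed"],      # developer finishes and checks in code
    "to-be-deployed": ["to-be-verified"],      # pipeline deploys the build successfully
    "to-be-verified": ["closed", "reopened"],  # tester verifies, or reopens on failure
}

def advance(item: dict, new_state: str) -> dict:
    if new_state not in ALLOWED_TRANSITIONS.get(item["state"], []):
        raise ValueError(f"illegal transition {item['state']} -> {new_state}")
    item["state"] = new_state
    return item

def on_successful_deployment(items: list[dict]) -> None:
    """After a successful deployment, move all to-be-deployed requirements/bugs forward."""
    for item in items:
        if item["state"] == "to-be-deployed":
            advance(item, "to-be-verified")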

Change-driven release development and pipeline design

For a change that involves only one microservice module, the continuous integration process is fairly straightforward, and we can design and run the pipeline on the DevOps platform directly. But the whole process gets much more complicated when more than one module is involved.

For example, suppose we receive one or more user change requests, and requirements analysis shows that three microservice modules are actually affected and need coordinated changes. We can then plan a development mini-release to cover this, first splitting the requirement into tasks mapped to the changes in those three microservice modules.

In this change-driven scenario, we can keep the original large, complete top-level pipeline, but perform no compile/build operations for modules with no code changes.

A large application involves multiple microservices, so it is important to follow the principle of no change, no rebuild. Yet what we actually see in many DevOps practices is that a change release often rebuilds and repackages even the microservices that have not changed.

That is, when the top-level pipeline invokes its sub-pipelines, some sub-pipeline executions are skipped automatically. Alternatively, we can plan a new top-level pipeline that includes only the three changed microservice modules, and define the build order of those three modules according to their dependencies.
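A minimal sketch of this "no change, no rebuild" selection, assuming a Git repository and a hypothetical mapping from microservice modules to source directories; the baseline tag name is also a placeholder.

import subprocess

# Minimal sketch of the "no change, no rebuild" selection. Module-to-directory mapping
# and the baseline tag are hypothetical placeholders.
MODULE_DIRS = {"order-service": "order/", "user-service": "user/", "pay-service": "pay/"}
BUILD_ORDER = ["user-service", "order-service", "pay-service"]  # dependency order

def changed_modules(repo_dir: str, baseline: str = "last-release") -> set[str]:
    """Modules whose directories contain changes since the baseline tag."""
    out = subprocess.run(
        ["git", "-C", repo_dir, "diff", "--name-only", baseline, "HEAD"],
        capture_output=True, text=True, check=True,
    )
    changed = set()
    for path in out.stdout.splitlines():
        for module, prefix in MODULE_DIRS.items():
            if path.startswith(prefix):
                changed.add(module)
    return changed

def modules_to_build(repo_dir: str) -> list[str]:
    """Only changed modules are built, in their dependency order; the rest are skipped."""
    changed = changed_modules(repo_dir)
    return [m for m in BUILD_ORDER if m in changed]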

The top-level pipeline then compiles, builds, packages, and deploys all three changed modules and drives the subsequent automated testing, manual testing, and verification. The whole process of implementing requirements and fixing defects should be fully visible: for this change mini-release we should be able to see clearly which of the submitted requirement change points have been implemented and how many defects are still being addressed.

Integration with test management

Test management is also a key part of the entire DevOps process practice. There are three main categories:

  • Static code compliance and security detection

  • Automated testing (interface, UI)

  • Automated performance testing

Test automation means turning test activities into something a machine can execute: running the system or application under preset conditions, executing the tests, and evaluating the results, so as to save resources and manpower and improve test efficiency and accuracy. It mainly covers automated test design, automated test development, automated execution, and automated analysis.

As you can see, automated testing is relatively easy to implement at the service interface and code levels, but more difficult at the front-end and UI levels. Therefore, in early practice we suggest implementing automated testing at the interface/service and code levels first, and taking on front-end UI automation later.

Performance testing is relatively easy to automate because scripts can be recorded in advance.

Test tasks and pipeline design

In general, static code checks and security checks can be run right after the latest code is pulled from the repository, before the build.

After compilation, if the build succeeds, the subsequent automated test tasks can run. It is recommended to execute unit test scripts such as JUnit and interface-level automated tests first. Performance testing, however, does not need to run on every continuous integration cycle, so performance test scenarios are generally designed and executed as a separate pipeline.
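A minimal sketch of how these test tasks slot into the pipeline stages; the stage and task names are illustrative only.

# Minimal sketch of how the test tasks above slot into the pipeline stages.
# Stage and task names are illustrative; each task would call the corresponding tool.
PIPELINE_STAGES = [
    # Before the build: static and security checks on the freshly pulled code
    ("checkout", ["pull_latest_code"]),
    ("pre-build-checks", ["static_code_check", "security_scan"]),
    ("build", ["compile_and_package"]),
    # After a successful build: unit tests first, then interface-level automated tests
    ("post-build-tests", ["run_unit_tests", "run_interface_tests"]),
    ("deploy", ["deploy_to_test_env", "smoke_test"]),
    # Performance testing is deliberately NOT here; it runs as a separate pipeline.
]

def run_pipeline(executors: dict) -> None:
    """Run each stage in order; `executors` maps task names to callables."""
    for stage, tasks in PIPELINE_STAGES:
        print(f"== stage: {stage} ==")
        for task in tasks:
            executors[task]()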

From continuous integration to full DevOps support

Continuous integration/continuous deployment is still fundamental to the overall DevOps best practices. In order to achieve process management and continuous delivery throughout the R&D life cycle, there needs to be integration and collaboration with agile R&D process management and with back-end container cloud platforms.

When building a complete DevOps capability platform, what matters most is integrating and orchestrating the capabilities mentioned above to provide complete end-to-end process support.

The DevOps platform itself is an integration and collaboration platform for many open-source toolchains. The core of the DevOps platform is not to provide the individual capabilities such as build and deploy itself, but to integrate and coordinate them. At the same time, concerns such as multi-tenancy, resource management, configuration management, and task scheduling have to be addressed when integrating these capabilities.

When it comes to overall architecture, a DevOps support platform is more about integrating and automating capabilities around continuous delivery than about building those capabilities itself. This integration has several key pieces.

The first is integration with the Docker container platform to achieve automatic deployment and dynamic migration between environments, including key capabilities such as gray release, dynamic resource scheduling, and clustering. The second is integration with the microservices platform, similar to the registry and microservice gateway in the open-source Spring Cloud platform. The third is integration with the technical components and services of the PaaS platform mentioned earlier. The fourth is integration with the various toolchains involved in the continuous delivery process, including configuration and code management, static code inspection, automated compilation and build, automated testing, automated operations, automated monitoring, log management, and other tools.

A DevOps platform needs to provide complete support for source code management, development, build, package, deploy, test, and operations, and automate these processes through pipelines. A pipeline can be fully automated or can include manual processing and review nodes, and the pipeline designer can visually arrange and orchestrate the actions and tasks above.