Continuous delivery and continuous deployment are staple terms in CI/CD and DevOps. The term continuous integration was first coined by Grady Booch in 1994. In the article "Microservices," published in 2014, Martin Fowler, one of the proponents of microservices, also laid out reference principles for continuous integration in software development. Continuous integration is the continuous, automated compilation, packaging, building, testing, and releasing of a software project, with the help of tools, to check the quality of what is being delivered. Continuous deployment builds on the benefits of continuous delivery by automatically pushing tested code into production. The key steps of each phase of continuous integration and deployment are described in detail below.

This article walks through the stages of the CI (continuous integration)/CD (continuous deployment) process and explains why the CI/CD pipeline is essential to an organization from the perspective of fast delivery at scale.

The CI/CD pipeline workflow consists of a series of steps covering every phase of the CI/CD process, and it is responsible for creating an automated, coherent software delivery model.

With a CI/CD pipeline, software moves from code check-in through testing, build, and deployment into production. The concept is powerful because, once the pipeline is implemented, it can be partially or fully automated, speeding up the development process and reducing errors. In other words, the CI/CD pipeline makes it easier for organizations to deliver software automatically, rapidly, and continuously.

DevOps engineers often confuse the CI/CD phases with the CI/CD pipeline itself. Although different tools can automate each individual phase, the overall CI/CD software chain can still be broken by unavoidable human intervention. So we first need to understand the stages of the CI/CD process and explore why the CI/CD pipeline is essential to an organization from the perspective of fast delivery at scale.

CI/CD phases: understand the participants, processes, and technologies

The participants in enterprise application development are typically developers, testers/QA engineers, operations engineers, and SREs (site reliability engineers) or IT operations teams. They work closely together toward the goal of high-quality software delivery. CI/CD is a combination of two separate processes: continuous integration and continuous deployment. The main steps in each are listed below:

Continuous integration

Continuous integration (CI) is the process of building software and completing initial testing. Continuous deployment (CD) is the process of integrating the code with the infrastructure, ensuring that all testing is completed and policies are followed, and then deploying the code to the intended environment. Of course, many companies have their own processes, but the main steps are as follows.

CI: Code submission phase

Participants: development engineers, database administrators (DBAs), infrastructure teams

Technology: GitHub, GitLab, SVN, BitBucket

Flow: The code submission phase is also known as version control. A commit is the operation that sends a developer's latest code changes to the repository. Every version of the code a developer writes is stored indefinitely. Developers write code, discuss and review the changes with collaborators, and commit them once software requirements, feature enhancements, bug fixes, or change requests are complete. The tool that manages the editing and committing of changes is called a source control (configuration management) tool. When a developer pushes code, the changes are merged into the mainline branch, which is stored in a central repository such as GitHub.
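To make the hand-off from commit to pipeline concrete, here is a minimal sketch of a push-webhook receiver of the kind a CI server might listen with. The port, endpoint behavior, payload fields, and the run_pipeline.sh helper are illustrative assumptions, not any particular tool's API.

```python
# Minimal sketch of a push-webhook receiver that kicks off a CI build.
# The port, payload fields, and run_pipeline.sh are illustrative assumptions;
# real repositories (GitHub, GitLab, BitBucket) each define their own payload.
import json
import subprocess
from http.server import BaseHTTPRequestHandler, HTTPServer

class PushWebhook(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        # Only trigger the pipeline for pushes to the mainline branch.
        if payload.get("ref", "").endswith("/main"):
            subprocess.Popen(["./run_pipeline.sh", payload.get("after", "HEAD")])
            self.send_response(202)   # accepted: build started asynchronously
        else:
            self.send_response(200)   # ignore feature-branch pushes
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), PushWebhook).serve_forever()
```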

CI: Static code review phase

Participants: development engineers, database administrators (DBAs), infrastructure teams

Technology: GitHub, GitLab, SVN, BitBucket

Flow: After a developer writes code and pushes it to the repository, the system automatically triggers the next step, code analysis. Committed code can sometimes build successfully yet fail during deployment; catching such problems late is slow and expensive, both in machine time and in human effort. Therefore, static policy checks must be run against the code. SAST (Static Application Security Testing) is a white-box testing method that examines code from the inside using tools such as SonarQube, Veracode, or AppScan to find software defects, vulnerabilities, and weaknesses (such as SQL injection). It is a quick check in which the code is also validated for syntax errors. This phase cannot catch runtime errors; those are caught in a later phase.

Adding additional policy checks to the automated pipeline can significantly reduce the number of errors found later in the process.
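Dedicated SAST tools such as SonarQube do this comprehensively; purely to illustrate the idea of a pre-build policy gate, the sketch below scans source files for a couple of risky patterns and fails the stage if any are found. The patterns, file paths, and the assumption of a Java code base are simplifications, not a substitute for a real scanner.

```python
# Toy static policy gate: fail the build if risky patterns appear in the code.
# This only illustrates the idea of a pre-build check; real SAST tools such as
# SonarQube or Veracode perform far deeper analysis.
import pathlib
import re
import sys

POLICIES = {
    "possible SQL built by string concatenation": re.compile(r"execute\(\s*[\"'].*\+", re.IGNORECASE),
    "hard-coded secret": re.compile(r"(password|secret)\s*=\s*[\"'][^\"']+[\"']", re.IGNORECASE),
}

def scan(root: str = "src") -> int:
    violations = 0
    for path in pathlib.Path(root).rglob("*.java"):   # assumes a Java code base
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            for name, pattern in POLICIES.items():
                if pattern.search(line):
                    print(f"{path}:{lineno}: {name}")
                    violations += 1
    return violations

if __name__ == "__main__":
    sys.exit(1 if scan() else 0)   # a non-zero exit code breaks the pipeline stage
```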

CI: build

Participants: Development engineers

Technology: Jenkins, Bamboo CI, Circle CI, Travis CI, Maven, Azure DevOps

Process: The goal of continuous integration is to continuously build committed code into binaries or build artifacts. Continuously checking that newly added modules are compatible with existing ones not only helps find bugs faster but also shortens the time needed to validate new code changes. Build tools can create executables or packages (.exe, .dll, .jar, etc.) from the source code of almost any programming language. During the build, you can also generate SQL scripts and test them together with the infrastructure configuration files. In short, the build phase is where the application is compiled. Artifact storage, build verification tests, and unit tests can also be part of the build process.
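As a rough illustration of how a pipeline stage drives the build tool, the sketch below shells out to Maven (assumed here because the article lists it) and fails the stage if compilation or packaging fails. The goals and flags are ordinary Maven usage; the wrapper itself is only an example of what a CI server automates, not how Jenkins or Bamboo are implemented.

```python
# Sketch of a build stage: compile and package the code, fail fast on error.
# "mvn -B clean package" is standard Maven usage; the wrapper is illustrative.
import subprocess
import sys

def build() -> None:
    # -B = batch mode (no interactive prompts), suitable for CI servers.
    result = subprocess.run(["mvn", "-B", "clean", "package"])
    if result.returncode != 0:
        print("Build failed - notify the committer and stop the pipeline")
        sys.exit(result.returncode)
    print("Build succeeded - artifacts in target/ are ready for validation")

if __name__ == "__main__":
    build()
```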

Build Verification Test (BVT)/Smoke Test/Unit Test:

A smoke test is performed immediately after the build is created. The BVT checks that all modules are properly integrated and that the key functions of the program work correctly. The goal is to reject seriously broken builds so that the QA team does not waste time installing and testing them.

After these checks are completed, UT (unit test) is performed in the pipeline to further reduce failures in production. Unit tests verify that a single unit or component written by a developer performs as expected.
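To make the difference between the two checks concrete, here is a minimal sketch using Python's standard unittest module: the smoke test only asks whether the freshly built service starts and answers at all, while the unit test verifies one small unit in isolation. The /health endpoint and the add() function are invented for illustration.

```python
# Minimal sketch: a smoke test (is the freshly built service alive at all?)
# and a unit test (does one small unit behave as specified?).
# The /health URL and the add() function are hypothetical examples.
import unittest
import urllib.request

def add(a: int, b: int) -> int:
    """Tiny stand-in for a unit of production code."""
    return a + b

class SmokeTest(unittest.TestCase):
    def test_service_answers(self):
        # Reject a seriously broken build before QA invests any time in it.
        with urllib.request.urlopen("http://localhost:8080/health", timeout=5) as resp:
            self.assertEqual(resp.status, 200)

class UnitTest(unittest.TestCase):
    def test_add(self):
        self.assertEqual(add(2, 3), 5)

if __name__ == "__main__":
    unittest.main()
```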

Build artifact storage:

Once built, packages are stored in a central artifact repository using a repository tool. As the daily build volume grows, it becomes harder to keep track of all the build artifacts, so once the artifacts are generated and validated, they are sent to the repository for managed storage. Repository tools such as JFrog Artifactory can store binary files such as .rar, .war, .exe, .msi, and so on. From here, testers can manually pick build artifacts and deploy them to a test environment for testing.
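As a sketch of the hand-off into an artifact repository, the snippet below uploads a packaged file with an HTTP PUT, which is one common way generic repository managers such as JFrog Artifactory accept binaries. The repository URL, credentials, and version label are placeholder assumptions.

```python
# Sketch: push a validated build artifact to a repository manager over HTTP.
# The repository URL, credentials and version label are placeholder assumptions.
import os
import requests

ARTIFACT = "target/myapp-1.0.0.jar"          # produced by the build stage
REPO_URL = "https://repo.example.com/libs-release-local/myapp/1.0.0/myapp-1.0.0.jar"

def upload() -> None:
    with open(ARTIFACT, "rb") as fh:
        resp = requests.put(
            REPO_URL,
            data=fh,
            auth=(os.environ["REPO_USER"], os.environ["REPO_TOKEN"]),
            timeout=60,
        )
    resp.raise_for_status()
    print("Stored", ARTIFACT, "->", REPO_URL)

if __name__ == "__main__":
    upload()
```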

CI: Test phase

Participants: testers, QA

Technology: Selenium, Appium, JMeter, SoapUI, Tarantula

Process: After the build stage, a series of automated tests verifies the correctness of the code. This stage helps prevent errors from reaching production. Depending on the size of the build, these checks can last from seconds to hours. For large organizations with multiple teams committing and building code, the checks run in parallel to save valuable time and to notify developers of errors as early as possible.

Testers (or QA engineers) set up automated test cases based on user-described test cases and scenarios. They run regression and stress tests to check for deviations from the expected output. The activities involved in testing include integrity testing, integration testing, and stress testing. This is a higher level of testing, and at this stage you can discover code problems that developers have overlooked.

Integration test:

Integration testing is performed with tools such as Cucumber or Selenium: individual application modules are combined and tested as a group while their compliance with the specified functional requirements is assessed. After integration testing, someone must approve moving that set of updates to the next phase, which is usually performance testing. This verification step can be cumbersome, but it is an important part of the process, and there are many good solutions in the industry for handling it.
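Since the article names Selenium, here is a rough sketch of an integration-level check that exercises two modules together (login and dashboard) through a real browser. The URL, element IDs, and credentials are invented for illustration, and a real suite would typically be driven by a framework such as Cucumber.

```python
# Sketch of an integration-level UI test with Selenium: the login module and
# the dashboard module are exercised together through a real browser.
# The URL, element IDs and credentials are purely illustrative.
from selenium import webdriver
from selenium.webdriver.common.by import By

def test_login_reaches_dashboard():
    driver = webdriver.Chrome()           # assumes a Chrome driver is available
    try:
        driver.get("https://staging.example.com/login")
        driver.find_element(By.ID, "username").send_keys("qa-user")
        driver.find_element(By.ID, "password").send_keys("qa-password")
        driver.find_element(By.ID, "login-button").click()
        # The two modules integrate correctly if login lands on the dashboard.
        assert "Dashboard" in driver.title
    finally:
        driver.quit()

if __name__ == "__main__":
    test_login_reaches_dashboard()
    print("integration test passed")
```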

Performance and stress testing:

Automated testing tools such as Selenium and JMeter can also perform performance and stress tests to check that applications are stable and perform well under heavy load. This testing typically does not run on every commit, because a full stress test is long-running; instead, when major new features are released, multiple updates are grouped together and complete performance testing is carried out. In cases where a single update moves to the next phase on its own, the pipeline may optionally include canary tests.
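Tools such as JMeter are built for this, but a few lines can illustrate what a stress check measures: fire many concurrent requests at one endpoint and assert that the error rate and latency stay within a budget. The endpoint, request counts, and thresholds below are arbitrary assumptions.

```python
# Toy load test: hammer one endpoint with concurrent requests and check that
# latency and error rate stay inside a budget. JMeter or similar tools do this
# properly; the URL and thresholds here are arbitrary assumptions.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "https://staging.example.com/api/ping"
REQUESTS = 200
WORKERS = 20

def one_call() -> float:
    start = time.perf_counter()
    with urllib.request.urlopen(URL, timeout=10) as resp:
        assert resp.status == 200
    return time.perf_counter() - start

def run() -> None:
    errors, latencies = 0, []
    with ThreadPoolExecutor(max_workers=WORKERS) as pool:
        for future in [pool.submit(one_call) for _ in range(REQUESTS)]:
            try:
                latencies.append(future.result())
            except Exception:
                errors += 1
    p95 = sorted(latencies)[int(len(latencies) * 0.95)] if latencies else float("inf")
    print(f"errors={errors} p95={p95:.3f}s")
    assert errors / REQUESTS < 0.01 and p95 < 0.5, "performance budget exceeded"

if __name__ == "__main__":
    run()
```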

Continuous deployment: Bake and deploy

Participants: Infrastructure engineers, SRE, and O&M engineers

Technology: Spinnaker, Argo CD, Tekton CD

Process: After the testing phase is complete, code that meets the standard for deployment to servers is ready. It is first deployed to a test or beta environment for internal use by the product team before being deployed to production. Before the build moves into these environments, it must go through the bake and deploy sub-phases, both of which are supported by Spinnaker.

CD: Bake

Baking means creating an immutable image instance from the source code together with the current production configuration. These configurations could be database changes and other infrastructure updates. Spinnaker can trigger Jenkins to perform this task, and some organizations prefer to use Packer.
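Spinnaker typically delegates baking to Jenkins or Packer. Purely to show the idea of an immutable image tied to an exact code revision, the sketch below builds a container image tagged with the current git commit, which is a common stand-in for a bake step; the registry name and the presence of a Dockerfile are assumptions.

```python
# Sketch of a "bake" step: produce an immutable, versioned image from the
# current source revision. Spinnaker usually delegates this to Jenkins or
# Packer; building a container image tagged with the git SHA is a common
# equivalent. The registry name and Dockerfile are assumptions.
import subprocess

def bake() -> str:
    sha = subprocess.run(
        ["git", "rev-parse", "--short", "HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    image = f"registry.example.com/myapp:{sha}"
    subprocess.run(["docker", "build", "-t", image, "."], check=True)
    subprocess.run(["docker", "push", image], check=True)
    return image   # the deploy stage consumes this immutable tag

if __name__ == "__main__":
    print("baked image:", bake())
```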

CD: deployment

Spinnaker automatically passes the baked image to the deployment phase. This is where the server group is set up and deployed to the cluster. As in the testing process above, the same functional checks are performed during deployment: the release is moved first to a test environment and then finally to production for approval and inspection. Tools such as Spinnaker support this process.
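The article describes this phase with Spinnaker; as a rough stand-in, the sketch below rolls the baked image out to a Kubernetes cluster, which is one common target for Spinnaker and Argo CD server groups. The deployment name, container name, and cluster contexts are placeholders.

```python
# Sketch of a deploy step: point the running server group (here a Kubernetes
# Deployment) at the newly baked image and wait for the rollout to finish.
# Spinnaker/Argo CD manage this declaratively; the names below are placeholders.
import subprocess
import sys

def deploy(image: str, environment: str) -> None:
    context = f"{environment}-cluster"       # e.g. staging-cluster, prod-cluster
    subprocess.run(
        ["kubectl", "--context", context,
         "set", "image", "deployment/myapp", f"myapp={image}"],
        check=True,
    )
    # Block until the new replicas are up, so a failed rollout fails the stage.
    subprocess.run(
        ["kubectl", "--context", context,
         "rollout", "status", "deployment/myapp", "--timeout=300s"],
        check=True,
    )

if __name__ == "__main__":
    deploy(sys.argv[1], sys.argv[2] if len(sys.argv) > 2 else "staging")
```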

CD: validation

This is also a key point where the team can optimize the overall CI/CD process. Because so much testing has already been done, failures are rare at this stage; however, any failure found here must be resolved as quickly as possible to minimize the impact on end customers. The team should also consider automating this part of the process.

Deploy to production using strategies such as blue-green deployment, canary analysis, and rolling updates. During the deployment phase, the running application is monitored to verify that the new deployment is healthy or whether a rollback is required.
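Here is a minimal sketch of the validation idea behind canary analysis: compare the canary's error rate against the stable baseline using the monitoring system's query API and roll back if it regresses. The Prometheus-style queries, label names, and thresholds are illustrative assumptions.

```python
# Sketch of automated deployment validation: compare the canary's error rate
# with the stable baseline and decide whether to promote or roll back.
# The Prometheus queries, label names and thresholds are illustrative.
import json
import urllib.parse
import urllib.request

PROM = "http://prometheus.example.com/api/v1/query"

def error_rate(version: str) -> float:
    query = (f'sum(rate(http_requests_total{{status=~"5..",version="{version}"}}[5m]))'
             f' / sum(rate(http_requests_total{{version="{version}"}}[5m]))')
    url = PROM + "?" + urllib.parse.urlencode({"query": query})
    with urllib.request.urlopen(url, timeout=10) as resp:
        result = json.load(resp)["data"]["result"]
    return float(result[0]["value"][1]) if result else 0.0

def validate_canary() -> bool:
    canary, baseline = error_rate("canary"), error_rate("stable")
    print(f"canary error rate={canary:.4f} baseline={baseline:.4f}")
    # Promote only if the canary is not noticeably worse than the baseline.
    return canary <= baseline * 1.1 + 0.001

if __name__ == "__main__":
    if not validate_canary():
        print("canary regressed - triggering rollback")
```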

CD: monitoring

Participants: Site reliability engineer (SRE), operation team

Technology: Zabbix, Nagios, Prometheus, Elasticsearch, Splunk, AppDynamics, Tivoli

Process: To make software releases fail-safe and robust, it is critical to track the health of each release in the production environment. Application monitoring tools track performance metrics such as CPU utilization and release latency. Log analyzers scan the large volume of logs generated by the underlying middleware and operating system to identify behavior and trace the root cause of problems. If any problem occurs in production, stakeholders are notified so that the safety and reliability of the production environment is maintained. In addition, the monitoring phase helps the organization gather intelligence on how new software changes contribute to revenue, and helps the infrastructure team track trends in system behavior and plan capacity.
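As a toy version of what the log analyzer does, the sketch below follows an application log, counts error lines per interval, and raises an alert when a threshold is crossed. The log path, pattern, and threshold are assumptions; production setups use the tools listed above rather than a hand-rolled script.

```python
# Toy log analyzer: follow the application log, count error lines per minute,
# and raise an alert when a threshold is crossed. Real stacks use Elasticsearch,
# Splunk or Prometheus; the path, pattern and threshold are assumptions.
import re
import time

LOG_PATH = "/var/log/myapp/app.log"
ERROR_PATTERN = re.compile(r"\b(ERROR|FATAL)\b")
THRESHOLD = 10          # errors per minute that should page someone
WINDOW = 60             # seconds

def follow(path: str):
    with open(path, "r", errors="ignore") as fh:
        fh.seek(0, 2)                     # start at the end of the file
        while True:
            line = fh.readline()
            if line:
                yield line
            else:
                time.sleep(0.5)

def monitor() -> None:
    errors, window_start = 0, time.time()
    for line in follow(LOG_PATH):
        if ERROR_PATTERN.search(line):
            errors += 1
        if time.time() - window_start >= WINDOW:
            if errors >= THRESHOLD:
                print(f"ALERT: {errors} errors in the last minute")  # notify on-call
            errors, window_start = 0, time.time()

if __name__ == "__main__":
    monitor()
```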

Continuous Delivery (CD): Feedback and collaboration tools

Participants: Site reliability Engineer (SRE), operations and maintenance team.

Technology: JIRA, ServiceNow, Slack, email, Hipchat.

Process: The goal of the DevOps team is to deliver faster and more consistently while reducing errors and performance issues. This is done by regularly sending developers and project managers feedback on the quality and performance of new releases, for example by e-mail. Typically, the feedback system is part of the overall software delivery process, so any change in delivery is promptly logged into the system for the delivery team to act on.
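Here is a small sketch of a release feedback message pushed into a chat channel, using the generic incoming-webhook POST pattern that Slack supports. The webhook URL and the message fields are placeholders.

```python
# Sketch of release feedback: post a short summary of the new release to the
# team's chat channel via an incoming webhook (Slack supports this pattern).
# The webhook URL and message fields are placeholders.
import os
import requests

def notify(version: str, passed: bool, p95_latency_ms: float) -> None:
    status = "deployed successfully" if passed else "FAILED validation"
    requests.post(
        os.environ["CHAT_WEBHOOK_URL"],
        json={"text": f"Release {version} {status} "
                      f"(p95 latency {p95_latency_ms:.0f} ms)"},
        timeout=10,
    ).raise_for_status()

if __name__ == "__main__":
    notify("1.4.2", passed=True, p95_latency_ms=230)
```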

Conclusion

Enterprises should evaluate an overall continuous delivery solution that automates, or facilitates the automation of, each of these phases.
