This article is the final delivery document for [the first Chinese DevOps Community Pipeline Competition]; it captures our team's best practices from actual production

1. General description

1.1. Project Description

This project is a relatively complete pipeline distilled by the Baixing DevOps team from its existing work. The application itself simply creates, queries, updates, and deletes user information. It is developed with the SpringBoot framework and uses the H2 database.

The corresponding API is:

API                Path                        Method   Parameters
Project home page  /                           GET      None
Query users        api/user/list               GET      None
Create user        api/user/create             POST     { "name": "messi", "age": 30 }
Update user        api/user/update             POST     { "id": 1, "age": 40 }
Delete user        api/user/remove?id=${id}    POST     ${id} is the user ID

Development environment access: dev-devops.baixing.cn:8088/

Test environment access: test-devops.baixing.cn:8088/

Pre-release environment access: stg-devops.baixing.cn:8088/

Production environment access: prod-devops.baixing.cn:8088/

1.2. Assembly line description

  • Branching strategy

    • Take small, fast steps
    • In the process design, master serves as the release branch and release-* branches serve as short-lived test branches
    • During release testing, if a bug is found in a feature, we check out from the release branch, fix it there, and merge it back into release

  • Pipeline specification

    The pipeline consists of the following steps:

    • compile
    • Code review && unit testing
    • Build and upload the executable
    • Build and push the image file
    • Deploy to the environment
    • Automated testing
    • Release to production (this step is enabled only for go-live)
    • Clean up & tag
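Since the build is driven by GitLab Runner (see section 5), the steps above could map onto a .gitlab-ci.yml skeleton like the one below; the stage and job names are illustrative assumptions, not the project's actual configuration:

```yaml
stages:
  - compile
  - verify          # code review gates && unit testing
  - package         # build and upload the executable
  - docker-build    # build and push the image
  - deploy
  - automated-test
  - release         # production publish, enabled only for go-live
  - cleanup         # clean up & tag

compile:
  stage: compile
  script:
    - mvn compile

unit-test:
  stage: verify
  script:
    - mvn test
```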

1.3. Deployment and delivery

Deployment follows GitOps workflows, combining GitLab and ArgoCD for continuous delivery

1.4. Tool chain

This section lists the tool chain used in this demonstration and the corresponding access addresses

  • Basic environment:
    • EKS
    • ElasticStack: 8a639b1c32ea43df8f6d9157eb6e2ef8.ap-southeast-1.aws.found.io:9243/app/apm#/se…
  • Code
    • Technical framework: SpringBoot
    • Database: H2
  • Continuous integration
    • Code repository: GitLab: gitlab.com/baixingwang… (github.com/eaglewa/dev…)
    • Build tool: Maven
    • Code quality: Sonar: http://39.100.144.36/projects
    • Unit test coverage: Jacoco
    • Artifact management: JFrog: http://52.82.40.171:8082/ui/packages/gav:%2F%2Fcom.baixingwang:devops-user-service?name=devops&type=packages
    • Project management: TAPD
    • Interface testing: YApi
    • Performance testing: JMeter
  • Continuous deployment
    • Container technology: Docker
    • Container manifest management: Kustomize
    • Deployment tool: ArgoCD: a28216abd3b7343ff97a18868cd62626-894254263.cn-northwest-1.elb.amazonaws.com.cn:8888/application…
    • Grayscale (canary) release: Flagger
  • Observability
    • Metrics visualization: Prometheus+Grafana: ad853aa0d7e1c4e8aba01298e8753c3a-1552531560.cn-northwest-1.elb.amazonaws.com.cn:3000/d/vu8e0VWZk…
    • Distributed tracing: Elastic APM: 8a639b1c32ea43df8f6d9157eb6e2ef8.ap-southeast-1.aws.found.io:9243/app/apm#/se…

1.5. Demonstration notes

The following gives a detailed description of the entire pipeline workflow across ten dimensions, with key screenshots attached for reference

2. Demand management

2.1. Requirements Description

This demonstration project uses a simple microservice to demonstrate the pipeline, so the requirements are deliberately simple. A [user management] module must be completed, with the following functions:

  • Manage user information, including adding, modifying, querying, and deleting user information
  • User information includes user name and user age

[Note]: This demonstration project uses SpringBoot, with the in-memory database H2 as the data store
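As a rough illustration of the required behavior (not the project's actual source), the user-management logic behind the API in section 1.1 can be sketched as a plain in-memory store; all class and method names below are assumptions:

```java
import java.util.*;
import java.util.concurrent.atomic.AtomicLong;

// Minimal in-memory sketch of the [user management] module described above.
// Names are illustrative; the real project backs this with H2 via SpringBoot.
public class UserStore {
    public static class User {
        public final long id;
        public String name;
        public int age;
        User(long id, String name, int age) { this.id = id; this.name = name; this.age = age; }
    }

    private final Map<Long, User> users = new LinkedHashMap<>();
    private final AtomicLong nextId = new AtomicLong(1);

    // api/user/create
    public User create(String name, int age) {
        User u = new User(nextId.getAndIncrement(), name, age);
        users.put(u.id, u);
        return u;
    }

    // api/user/list
    public List<User> list() { return new ArrayList<>(users.values()); }

    // api/user/update — only the age is updatable in the sample payload
    public boolean update(long id, int age) {
        User u = users.get(id);
        if (u == null) return false;
        u.age = age;
        return true;
    }

    // api/user/remove?id=${id}
    public boolean remove(long id) { return users.remove(id) != null; }
}
```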

2.2. Requirements management

TAPD is used for requirements management, organized as [requirement] – [sub-requirement] – [task]. Specific tasks are assigned to the relevant R&D staff, and their working hours are estimated.

2.3. Workload assessment

Work is planned with full-team participation in Scrum planning meetings. After understanding the stories and breaking them into feature points, all work must be made transparent and committed to completion within the sprint plan. Expected start and expected end times are entered in TAPD

2.4 Code base association

We associate requirements with the GitLab codebase in TAPD, and team members need to specify the associated requirement when committing code.

You can also see the statistics for the code submitted in the project

3. Code management

3.1 Code management tools

The project uses GitLab for version management; the project code address is: gitlab.com/baixingwang…

3.2. Branch management

This demonstration project's branch strategy is relatively simple, using [master] as the main branch. In brief:

  • feature branches
    • Function development branches
  • release branch
    • Testing branch
    • Compilation, testing, unit-test coverage, code quality checks, test-environment deployment, and so on all happen on this branch
  • master branch
    • Deployment branch
    • After the test process completes, release is merged into this branch for production deployment

3.3 Code review

Code must be merged into the master branch for a production release. A manual code review is required at that point, and the release proceeds only after approval. This merge review is the only manual intervention in the whole demonstration

4. Product management

4.1 Unified artifact repository

Two artifact repositories are used in this project:

  • Project dependency artifacts: Nexus

  • Docker images: GitLab Registry

4.2 Version Number management

The software version number consists of four parts:

  • The first digit is the major version number
  • The second digit is the minor version number
  • The third digit is the phase version number
  • The fourth part is the release stage, which is one of SNAPSHOT and RELEASE

For example: 1.1.1.RELEASE

Major version number (first digit): incremented when a functional module changes significantly, for example when modules are added or the overall architecture changes

Minor version number (second digit): indicates that a function was added or changed, for example adding permission control or custom views

Phase version number (third digit): usually bug fixes or minor changes; revisions can be released frequently at no fixed interval, and fixing a serious bug alone can justify a new revision

RELEASE: indicates the release stage of the current version of the software. When the software enters the RELEASE stage, the version number needs to be updated accordingly
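As a small illustration of this scheme, a version string such as 1.1.1.RELEASE can be parsed into its four parts; the class below is a sketch written for this document, not part of the project:

```java
// Illustrative parser for the four-part version scheme described above
// (major.minor.phase.stage, e.g. "1.1.1.RELEASE"); names are assumptions.
public class VersionNumber {
    public final int major, minor, phase;
    public final String stage; // SNAPSHOT or RELEASE

    public VersionNumber(String version) {
        String[] parts = version.split("\\.", 4);
        if (parts.length != 4) throw new IllegalArgumentException("expected major.minor.phase.stage");
        this.major = Integer.parseInt(parts[0]);
        this.minor = Integer.parseInt(parts[1]);
        this.phase = Integer.parseInt(parts[2]);
        this.stage = parts[3];
    }

    public boolean isRelease() { return "RELEASE".equals(stage); }

    @Override public String toString() {
        return major + "." + minor + "." + phase + "." + stage;
    }
}
```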

4.3 Dependent component management

Configure the repository address in the code so that the corresponding dependencies are downloaded from the artifact repository; see [pom.xml].

5. Build process

5.1. Automated build scripts

The automated build is driven by [GitLab Runner]; the relevant script is [.gitlab-ci.yml] under the project path:

gitlab.com/baixingwang…

5.2 Module-level reuse

There are two levels of reuse throughout the CI process:

  • Dependency JARs are cached via the CI cache to avoid downloading dependencies on every run. The figure below shows the cache taking effect

  • After the project is packaged, [app.jar] must be uploaded so the Docker image can be built from it. The following shows the artifact hand-off between two stages

    At the end of the [Package] stage, app.jar is uploaded

    The docker-build stage downloads app.jar to build the image
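In GitLab CI terms, these two levels of reuse are typically expressed as a cache for the local Maven repository and an artifacts hand-off for app.jar; the snippet below is a sketch under those assumptions, not the project's actual [.gitlab-ci.yml]:

```yaml
package:
  stage: package
  # Level 1: cache the local Maven repository so dependency JARs are not
  # re-downloaded on every run
  cache:
    key: maven-repo
    paths:
      - .m2/repository
  script:
    - mvn -Dmaven.repo.local=.m2/repository package
  # Level 2: hand the packaged jar to the next stage as an artifact
  artifacts:
    paths:
      - target/app.jar

docker-build:
  stage: docker-build
  # app.jar from the package stage is downloaded automatically
  script:
    - docker build -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA .
```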

5.3. Automatic build

A daily automatic build task is scheduled, targeting the dev branch; after the build completes, it is automatically deployed to the test environment
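With GitLab, such a daily build is usually a pipeline schedule in the project settings plus a rule keyed on the schedule source; the job below is illustrative, not the project's actual configuration:

```yaml
# Illustrative nightly job: runs only when triggered by a pipeline
# schedule on the dev branch; later stages then deploy to test
nightly-build:
  script:
    - mvn package
  rules:
    - if: '$CI_PIPELINE_SOURCE == "schedule" && $CI_COMMIT_BRANCH == "dev"'
```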

5.4. Build resource elasticity

GitLab Runner is used for builds, and different runners are selected automatically for parallel building. The following two build stages use different runners

6. Continuous integration

6.1 On-demand integration

The CI process is triggered automatically when a project member pushes a code change. Below are CI runs triggered by two different project members

6.2 Trigger mechanism

CI is triggered automatically after code changes are pushed, and triggering is limited to feature branches
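Assuming feature branches share a common naming prefix (feature/ here, which is an assumption), this restriction can be expressed with GitLab CI rules:

```yaml
# Run this job only for pushes to feature branches (the prefix is an assumption)
unit-test:
  script:
    - mvn test
  rules:
    - if: '$CI_COMMIT_BRANCH =~ /^feature\/.*/'
```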

6.3 Integration result push

Pipeline execution results are sent to project members by email or enterprise WeChat. If a build fails, the notification links directly to the corresponding build task so the cause of failure can be inspected. Below is a screenshot from the enterprise WeChat

  • Build succeeded

  • Build failed

6.4. Automated testing

Automated unit tests run every time a code change triggers continuous integration, with a separate unit-test stage configured in the pipeline configuration

Below are the execution log output and the test results

7. Test automation

7.1 Test plan

Test plans are managed using the TAPD platform, associated with requirements, use cases, and defects.

7.2. Unit Testing and coverage

Unit tests target the Service layer, and unit-test coverage reports are generated with Jacoco

A coverage badge is also generated in the README file

7.3. Interface test

The YApi platform is used for interface contract management and automated test-case execution; other operations are supported by self-developed services.

7.4. Performance test

Performance testing is carried out on a self-developed platform based on JMeter.

7.5. Test Report

Use TAPD to collect and send test report data.

8. Code quality control

8.1 Quality control tools

Sonar is used for code quality control in this project

At the same time, the Alibaba coding guidelines are introduced as code inspection rules:

8.2. Automatic detection

A Sonar automatic inspection plug-in is introduced, and a separate job is created in the pipeline for automated code quality control; to speed things up, it runs in parallel with the unit tests
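One way to get this parallelism is to place the quality scan and the unit tests in the same stage; the snippet below is a sketch using the Sonar Maven plugin, with the host URL taken from section 1.4 (job names are assumptions):

```yaml
stages:
  - verify

# Jobs in the same stage run in parallel on available runners
unit-test:
  stage: verify
  script:
    - mvn test

sonar-scan:
  stage: verify
  script:
    - mvn sonar:sonar -Dsonar.host.url=http://39.100.144.36
```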

8.3 visualization

Check out the code quality report at Sonar

9. Deploy the pipeline

9.1 Automation

The pipeline is triggered automatically after code changes are pushed. Across the entire pipeline lifecycle, only the confirmation step for merging code for release requires human intervention; all other processes are automated

9.2. Multiple environments

This demo includes four environments:

  • Development environment
  • Test environment
  • Pre-release environment
  • Production environment

See Part 10 for details.

9.3 Separating application and configuration

The project is deployed on Kubernetes, and configuration is read through ConfigMaps, separating the application from its configuration
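For example (the names below are illustrative, not the project's manifests), environment-specific settings can live in a ConfigMap and be injected into the pod, so the same image runs unchanged in every environment:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: devops-user-service-config   # illustrative name
data:
  SPRING_PROFILES_ACTIVE: "test"     # environment-specific setting
---
# In the Deployment's pod spec, the ConfigMap is injected as
# environment variables so the image itself stays unchanged:
#   envFrom:
#     - configMapRef:
#         name: devops-user-service-config
```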

9.4 visualization

In Gitlab, the DAG diagram of the pipeline can be displayed

9.5 Grayscale release

The Flagger component is used for grayscale (canary) releases. The canary release schematic and execution log are shown below

10. Environmental management

10.1. Environment definitions

Four environments have been prepared for this demonstration project. The following are the environment descriptions and access addresses

Environment                      Description                                                    Access address
Development environment (DEV)    Functional self-testing                                        dev-devops.baixing.cn:8088
Test environment                 Functional testing, integration testing, and stress testing   test-devops.baixing.cn:8088
Pre-release environment (STG)    Pre-launch regression, essentially identical to production    stg-devops.baixing.cn:8088
Production environment (PROD)    The final production environment                               prod-devops.baixing.cn:8088

10.2. Environmental delivery

Because the number of machines is limited, environment delivery uses Kubernetes namespaces for logical isolation
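A minimal sketch of this isolation, assuming one namespace per environment (the namespace names are illustrative):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: devops-dev    # development environment
---
apiVersion: v1
kind: Namespace
metadata:
  name: devops-test   # test environment
```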

11. Monitoring

The final online monitoring of this project adopts [Metrics] + [Log] + [Tracing].

11.1. Logs

Fluentd is used to collect log files and ship them to the ES cluster; developers can query the logs of each service through Kibana

11.2. Metrics

Instrumented metric data is displayed through Prometheus + Grafana, and dashboards can be customized

At the same time, the data flows into the Elastic Stack and is displayed through Kibana

11.3. Tracing

Distributed tracing relies on the APM components and is presented in Kibana