Preface

Every company has its own internal DevOps platform to standardize requirement creation, development, compilation, deployment, testing, rollout, and so on. Beetle is our internal DevOps platform, and it greatly improves product development efficiency and release quality. To remain general-purpose, Beetle imposes mandatory conventions on continuous integration (CI) build commands. However, the build of the Rubik's Cube project (our internal low-code page-building platform) is complicated and cannot fully comply with those conventions, among other reasons, so the project cannot be onboarded onto the platform, which often leads to problems during development, compilation, deployment, and release. In the end, we decided to tackle these problems with GitLab's own CI/CD first.

A brief overview of the GitLab CI process

1. Add a .gitlab-ci.yml file to the root directory of the repository

2. GitLab listens for the trigger events configured in the .gitlab-ci.yml file

3. When an event fires (for example, a push), the configured jobs are executed
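Put together, a minimal .gitlab-ci.yml for this flow might look like the sketch below (the job name and branch are illustrative, not from our actual setup):

```yaml
# .gitlab-ci.yml — minimal illustrative sketch
stages:
  - build

build-job:
  stage: build
  script:
    - echo "triggered by a push"
  only:
    - master          # run only for pushes to this branch
```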

Design ideas

In general, each repository needs its own .gitlab-ci.yml file.

However, for business reasons we do not place a configuration file in every repository. Instead, I created a new repository dedicated to the build and deployment of all Rubik's Cube projects.

This is not a conventional design; it is offered as one way of thinking about the problem, so use it with care.

The detailed design ideas are as follows:

1. Create a CI/CD repository and add a .gitlab-ci.yml file

2. Create jobs for the projects of multiple repositories, and put the job-specific logic into sub-YAML files

3. When a job is triggered, match the pushed branch or the passed-in variable against a regular expression

4. Branch/variable naming rule: name of the project to deploy + whether to install dependencies + the branch to deploy

5. All of the above can also be triggered through the API by passing in variables
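As a sketch, the naming rule in steps 3 and 4 can be checked with a small shell function. The `+` separator and `module` flag mirror the `/^.*\+module\+.*$/` regex used later in the article; the project names are made up:

```shell
#!/bin/sh
# Decide whether a pushed ref (or API variable) asks for a dependency
# install, following the convention: project + module flag + branch.
matches_module() {
  case "$1" in
    *+module+*) echo yes ;;
    *)          echo no ;;
  esac
}

matches_module "magic-common-component+module+dev"   # yes
matches_module "magic-home+nomodule+master"          # no
```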

The general flow chart is as follows:

Installation and configuration

Take Linux as an example:

1. Install gitlab-runner on the machine where the project is built

sudo wget -O /usr/local/bin/gitlab-runner https://gitlab-runner-downloads.s3.amazonaws.com/latest/binaries/gitlab-runner-linux-amd64

2. Grant execute permission

sudo chmod +x /usr/local/bin/gitlab-runner

3. Create a gitlab-runner user

sudo useradd --comment 'GitLab Runner' --create-home gitlab-runner --shell /bin/bash

4. Install and start as a service

sudo gitlab-runner install --user=gitlab-runner --working-directory=/home/gitlab-runner
sudo gitlab-runner start

5. Register the runner

sudo gitlab-runner register

Command output:

Obtain the information required for registration:

Project → Settings → CI/CD → Runners → Expand

As shown in figure:

Configuration information:

Status after registration (to be activated):

6. Activate

sudo gitlab-runner verify

After the activation:

Creating a Configuration File

Create the repository and add a .gitlab-ci.yml file. Since we are deploying multiple projects, there is a lot of configuration, so we split it into separate tasks using the include keyword. The final repository structure looks like this:

Configuration file code preview

The .gitlab-ci.yml file

stages:
  - module-publish
  - install
  - build
variables:
  NODE_VERSION: "12.22.4"

  # The following are the parameters passed in through the API
  # $TRIGGER_JOB_NAME  # the job name triggered by the API

include:
  - '.gitlab-ci.install.yml'

The .gitlab-ci.install.yml file

.script-common-install: &script-common-install
  - |
    nvm use $NODE_VERSION

dev-install-package-common:
  stage: install
  resource_group: dev-install-package-common
  script:
    - *script-common-install
    - echo "Universal dependency installation complete"
  only:
    refs:
      - /^.*\+module\+.*$/
    variables:
      - $TRIGGER_JOB_NAME =~ /^.*magic-common-component.*$/
  tags:
    - magic-work

Configuration File Analysis

A brief description:

1. Define the pipeline execution stages and the corresponding global variables, and import the external YAML file.

2. Define the dev-install-package-common job: when it is triggered, first check whether a dev-install-package-common job already exists in another pipeline. If so, the job enters the pending state; if not, the job's script is executed on the magic-work runner in the corresponding stage.

stages

Description: Defines the stages of the pipeline and, with them, the order in which jobs run.

If stages is not defined in the.gitlab-ci.yml file, the default pipeline stages are:

.pre → build → test → deploy → .post

Features:

Jobs at the same stage run in parallel.

Jobs in the next stage run only after the jobs in the previous stage complete successfully.

I define three stages:

1. Publish the component to the npm registry

2. Install dependencies

3. Compile the code

variables

Description: Defines variables that can be used globally and within jobs.

Features:

If defined globally, each variable is copied into every job configuration when the pipeline is created.

If a job defines the same variable, the job-level variable takes precedence.
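A small sketch of this precedence rule (the variable and job names are illustrative):

```yaml
variables:
  DEPLOY_ENV: "dev"      # global: copied into every job

build-job:
  stage: build
  variables:
    DEPLOY_ENV: "prod"   # job-level wins: this job sees "prod"
  script:
    - echo "$DEPLOY_ENV"
```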

include

Description: Modularizes the configuration by importing external YAML files

Features:

The included content is merged with the contents of the .gitlab-ci.yml file

Files are merged first, no matter where the include keyword appears

Includes can be nested, up to a maximum of 100 files

.gitlab-ci.yml overrides identical keywords introduced by include

include accepts the following file types:

include:local
include:file
include:remote
include:template
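For example (the remote URL is hypothetical; Auto-DevOps.gitlab-ci.yml is a template shipped with GitLab):

```yaml
include:
  - local: '.gitlab-ci.install.yml'            # file in this repository
  - remote: 'https://example.com/ci/base.yml'  # hypothetical external file
  - template: 'Auto-DevOps.gitlab-ci.yml'      # built-in GitLab template
```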

Creating a job

Description: Pipeline configuration starts with jobs, which are the most basic element of .gitlab-ci.yml

Features:

Jobs in the same stage are executed in parallel, and the next stage starts only when all jobs in the current stage have succeeded

stage

Description: Defines the phase in which the job runs

Features:

If stage is not defined, the job runs in the test stage by default

If the JOB is run on different runners, they can be run in parallel.

If jobs run on the same runner and you want them to run in parallel, you need to set the runner's concurrent option

resource_group

Description: resource_group creates a resource group that ensures jobs are mutually exclusive across different pipelines of the same project.

For example, if multiple jobs belonging to the same resource group are queued at the same time, only one job starts; the others wait until the resource_group is free.
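A sketch (the job name is illustrative):

```yaml
deploy-dev:
  stage: build
  resource_group: deploy-dev   # at most one job in this group runs at a
                               # time, even across pipelines of the project
  script:
    - echo "deploying"
```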

parallel

Description: Runs a job multiple times in parallel within a single pipeline, named job_name 1/N through job_name N/N.

This property is useful for saving execution time by splitting a large task into several smaller tasks
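For instance (job name illustrative; CI_NODE_INDEX and CI_NODE_TOTAL are predefined GitLab variables identifying each instance):

```yaml
split-test:
  stage: build
  parallel: 3    # creates split-test 1/3, 2/3, 3/3 in one pipeline
  script:
    - echo "slice $CI_NODE_INDEX of $CI_NODE_TOTAL"
```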

script

Description: Specifies the commands the runner executes; here we use shell

Features:

Can execute single-line commands

Can execute multi-line commands with the help of `- |`

Supports YAML anchors: use & to define a reusable script and * to reference it.

only

Description: Use only to control when a job is added to a pipeline

Supported types:

  only:refs
  only:variables
  only:changes
  only:kubernetes

We use refs and variables here

The job fires when a matching branch is pushed or a matching variable is passed in

Condition support: regular expressions and some conditional expressions

tags

Description: Selects a particular runner to execute the script, using the tag defined when registering the runner

Execution

Now that the configuration is defined, let's trigger it with a branch push and take a look at the running pipeline

Project → CI/CD → Pipelines

As shown in figure:

Above we can see there are two jobs, but the second stays pending until the first completes.

This is unacceptable to us and will seriously affect our efficiency.

Jobs should be executed in parallel. Why are jobs still pending?

By default, gitlab-runner runs only 1 job at a time. We need to modify its configuration file:

vi /etc/gitlab-runner/config.toml

Change the concurrent value:

concurrent = 20
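For context, concurrent is a top-level setting in the runner's config file; the runner section below it is an illustrative excerpt (the name and executor values are assumptions, not taken from the article):

```toml
# /etc/gitlab-runner/config.toml (excerpt)
concurrent = 20   # max number of jobs this runner process runs at once

[[runners]]
  name = "magic-work"   # illustrative
  executor = "shell"
```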

Run it again, and the jobs execute in parallel

Triggering via the API

With the implementation above we can trigger jobs through branch pushes, but that still carries some operational cost. To make things easier, we decided to trigger jobs through the API.

Enable the API trigger

Access Token

Once enabled, we can trigger the pipeline through the API

There are four main ways:

1. Use cURL

curl -X POST \
     -F token=TOKEN \
     -F ref=REF_NAME \
     http://gitlab.xxx.com/api/v4/projects/id/trigger/pipeline

2. Use .gitlab-ci.yml

trigger_build:
  stage: deploy
  script:
    - "curl -X POST -F token=TOKEN -F ref=REF_NAME http://gitlab.xxx.com/api/v4/projects/id/trigger/pipeline"

3. Use a webhook

http://gitlab.xxx.com/api/v4/projects/id/ref/REF_NAME/trigger/pipeline?token=TOKEN

4. Pass job variables

curl -X POST \
     -F token=TOKEN \
     -F "ref=REF_NAME" \
     -F "variables[RUN_NIGHTLY_BUILD]=true" \
     http://gitlab.xxx.com/api/v4/projects/id/trigger/pipeline
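As an illustrative sketch of how a job script might react to such a variable (the name DEPLOY_TARGET and the paths are hypothetical, not part of the article's setup):

```shell
#!/bin/sh
# DEPLOY_TARGET would arrive via -F "variables[DEPLOY_TARGET]=staging";
# fall back to "dev" when the job is triggered by an ordinary push.
select_deploy_dir() {
  case "${1:-dev}" in
    prod)    echo "/var/www/prod" ;;
    staging) echo "/var/www/staging" ;;
    *)       echo "/var/www/dev" ;;
  esac
}

select_deploy_dir "${DEPLOY_TARGET:-}"
```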

Previously we triggered deployments by pushing different branch names; now we can pass different variables through the API to trigger different jobs and deploy different projects

Conclusion

This covers the basic principle of our implementation, though some powerful features are not mentioned in this article, such as cache, Kubernetes, and the more comprehensive APIs. If you're interested, check out the official documentation, which is very detailed. Continuous integration helps us reduce a lot of rework, lower risk, build quickly, and more, so do use it. I'll stop here; if anything is wrong, I hope you'll correct me so we can make progress together ~