Let's start by looking at the most primitive build-and-deploy process. I'm sure you're all familiar with it:

  • The developer compiles, compresses, and packages the source code locally to produce the build artifacts
  • The generated files are then uploaded to the server

Obviously, this process is not only cumbersome but also inefficient: developers spend a long time building and deploying every release.

Later, to solve this problem, CI/CD came into being.

So let's first talk about what CI/CD is.

CI/CD is short for Continuous Integration / Continuous Deployment. The CD can also be interpreted as Continuous Delivery.

To be more specific,

  • Continuous integration: when code in the repository changes, it is automatically tested and built, and the result is reported back.
  • Continuous delivery: building on continuous integration, the integrated code can be deployed in turn to the test environment, the pre-release environment, and the production environment.

Having said all this, I'm sure many of you will say:

  • Isn't that usually the job of operations?
  • It has nothing to do with the business code. What's the point?
  • It's all server stuff: Docker, Nginx, cloud servers and so on. How am I supposed to learn it?

For a long time I thought the same way: it had nothing to do with my business code, so there was no great need to understand it.

But I recently encountered these problems while working on a full-stack project (which I did to break my own bottleneck) and found myself in a knowledge blind spot.

I had no choice but to catch up.

But as I learned these things and put them into practice on projects, my knowledge expanded, and I gained a new understanding of operating systems, real-world building and deployment, and even engineering in general.

Let me also put the structure diagram of the whole full-stack project here:

This large project, with low code at its core, includes nine major systems: the editor front end, editor back end, C-side H5, component library, component platform, back-end management system front end, back-end management system back end, statistics service, and a self-developed CLI.

The front end of the editor has already been described in detail in "How to design and implement an H5 marketing page building system".

At present, about 70% of the whole project has been completed. Many problems were encountered along the way, and a lot was improved and gained. There will be a series of articles covering the highlights of this project, and they will be packed with practical content.

Back to the topic of this article: implementing automated deployment of a front-end project to a test machine using Docker Compose, Nginx, SSH, and GitHub Actions. This article takes the front end of the back-end management system as an example to describe in detail how to use Docker, Nginx, and GitHub CI/CD capabilities to automatically release a pure front-end project. This project was chosen to illustrate automated release to the test machine for two reasons:

  • The back-end management system's services are simple, so we can focus on the automated deployment process itself
  • A pure front-end project better matches the current situation of most front-end developers, so it can be picked up and applied right away

The overall idea

The front-end code is built into static files, which can be served by Nginx. The idea:

  • Build a Docker container (with Nginx inside)
  • Copy the dist/ directory into the Docker container
  • Start the Nginx service
  • Map a host port to the Docker container's port so the page can be accessed

Core code changes:

  • nginx.conf (used by the Nginx inside the Docker container)
  • Dockerfile
  • docker-compose.yml

⚠️ This article combines theory with practice: each knowledge point is described first, followed by the project code or configuration file related to that knowledge point.

The topics involved include Docker, docker-compose, SSH, GitHub Actions, and so on.

Docker

Who says the front end doesn't need to learn Docker? There are plenty of detailed introductions out there; here's just a quick explanation.

Docker can be regarded as a high-performance virtual machine, mainly used to virtualize Linux environments. Developers can package their applications and dependencies into a portable container and distribute it to any popular Linux machine. Containers are fully sandboxed and have no interfaces with each other.

You can do anything inside a container that you can do on a server, such as running npm run build in a container with a Node environment to build the package, or deploying the project in a container with an Nginx environment, and so on.
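For example, here is a minimal sketch (the image tag and commands are illustrative, not from the project) of building a front-end project inside a disposable Node container, so Node does not even need to be installed on the host:

# mount the current project directory into the container and build it there
docker run --rm -v "$(pwd)":/app -w /app node:14 sh -c "npm install && npm run build"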

Installing Docker on CentOS

Because the cloud server runs CentOS, here is how to install Docker on CentOS:


$ sudo yum remove docker docker-client docker-client-latest docker-common docker-latest docker-latest-logrotate docker-logrotate docker-engine

$ sudo yum install -y yum-utils device-mapper-persistent-data lvm2

$ sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

$ sudo yum install docker-ce docker-ce-cli containerd.io

$ sudo systemctl start docker

$ sudo docker run hello-world

dockerfile

Docker uses a Dockerfile as the configuration file for building an image. Let's take a brief look at the Dockerfile of a Node application:

FROM node:12.10.0
WORKDIR /usr/app
COPY package*.json ./
RUN npm ci -qy
COPY . .
EXPOSE 3000
CMD ["npm", "start"]

Explain the meaning of each keyword.

FROM

Specifies the base image to build from.

WORKDIR

Sets the working directory.

COPY

Copies files into the image.

RUN

Executes commands in a new layer.

EXPOSE

Declares the port the container listens on.

CMD

The default command to run when the container starts.
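With a Dockerfile like the one above, building and running the image would look roughly like this (the image name my-node-app and the port mapping are just examples):

# build an image named my-node-app from the Dockerfile in the current directory
docker build -t my-node-app .
# run it in the background, mapping host port 3000 to the container's exposed port 3000
docker run -d -p 3000:3000 --name my-node-app my-node-app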

Now take a look at the Dockerfile in this project:

# Dockerfile
FROM nginx

# Copy the dist files into /usr/share/nginx/html/
# so npm run build must be executed first to produce the dist directory. Important!!
COPY dist/ /usr/share/nginx/html/

# copy nginx configuration files
COPY nginx.conf /etc/nginx/nginx.conf

# Set time zone
RUN ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime && echo 'Asia/Shanghai' >/etc/timezone

# create /admin-fe-access.log corresponding to nginx.conf
CMD touch /admin-fe-access.log && nginx && tail -f /admin-fe-access.log

In this file, we do the following:

1. We used the nginx Docker image as the base image.

2. Copied the packaged dist/ directory into /usr/share/nginx/html/ of the nginx Docker container.

3. Put the custom Nginx configuration file nginx.conf into the container's configuration path /etc/nginx/nginx.conf.

4. Set the time zone.

5. Created /admin-fe-access.log, started nginx, and used tail -f to simulate a pm2-like blocking process so the container keeps running.

Here is the nginx.conf file just mentioned:

# number of nginx worker processes, usually set equal to the number of CPUs
worker_processes  1;

#error_log  logs/error.log;
#error_log  logs/error.log  notice;
#error_log  logs/error.log  info;

#pid  logs/nginx.pid;

events {
    # maximum number of connections per worker process
    worker_connections  1024;
}

http {
    include       mime.types;
    default_type  application/octet-stream;

    # $remote_addr, $http_x_forwarded_for: client IP address;
    # $remote_user: client user name;
    # $time_local: access time and time zone;
    # $request: the requested URL and HTTP protocol;
    # $status: request status, 200 on success;
    # $body_bytes_sent: size of the response body sent to the client;
    # $http_referer: the page the request came from;
    # $http_user_agent: information about the client's browser;
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    #access_log  logs/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    #keepalive_timeout  0;
    keepalive_timeout  65;

    #gzip  on;

    client_max_body_size 20m;

    server {
        # listening port
        listen       80;
        server_name  admin-fe;

        #charset koi8-r;

        access_log  /admin-fe-access.log  main;

        location / {
            # root directory of the entry file
            root   /usr/share/nginx/html;
            index  index.html index.htm;
            try_files $uri $uri/ /index.html;
        }

        #error_page  404              /404.html;

        # redirect server error pages to the static page /50x.html
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }
    }
}

The core points are: listen on port 80, define the log file as admin-fe-access.log, and set the root directory of the entry file to /usr/share/nginx/html. These correspond one-to-one with the settings in the Dockerfile.

Having said Dockerfile and its associated configuration files, let’s take a look at some of the core concepts in Docker.

Docker core concepts

There are three very important concepts in Docker:

  • Image
  • Container
  • Repository

Here is a picture showing the relationship between them:

If containers are compared to lightweight servers, then images are the template for creating them. A Docker image can create multiple containers, and their relationship is similar to the relationship between classes and instances in JavaScript.
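To make the analogy concrete, here is a small sketch (the container names and ports are arbitrary) that starts two independent containers from the same nginx image:

# one image...
docker pull nginx
# ...can produce multiple containers, like instances created from a class
docker run -d --name web1 -p 8081:80 nginx
docker run -d --name web2 -p 8082:80 nginx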

Common commands for images:

  • Pull an image: docker pull <image-name>:<tag>
  • List all images: docker images
  • Delete an image: docker rmi <image-id>
  • Push an image: docker push <username>/<repository>:<tag>

If a Docker image shows up as <none>, you can run docker image prune to delete it.

Common commands for containers:

  • Start a container: docker run -p xxx:xxx -v=hostPath:containerPath -d --name <container-name> <image-name>
    • -p port mapping
    • -v data volume, file mapping
    • -d run in the background
    • --name defines the container name
  • View all containers: docker ps (add -a to also show stopped containers)
  • Stop a container: docker stop <container-id>
  • Delete a container: docker rm <container-id> (add -f to force deletion)
  • View container information (such as the IP address): docker inspect <container-id>
  • View container logs: docker logs <container-id>
  • Enter the container console: docker exec -it <container-id> /bin/sh
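Putting several of these flags together, a hedged example (the paths and names here are illustrative, not from the project) of running an Nginx container:

# map host port 8080 to container port 80, mount a local html directory,
# run in the background and give the container a name
docker run -p 8080:80 -v /home/work/html:/usr/share/nginx/html -d --name my-nginx nginx
# check its logs, then stop and remove it
docker logs my-nginx
docker stop my-nginx && docker rm my-nginx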

After the image is built, it can be easily run on the current host. However, if the image needs to be used on other servers, we need a centralized service to store and distribute the image, and Docker Registry is such a service.

A Docker Registry can contain multiple repositories; each repository can contain multiple tags, and each tag corresponds to an image. So an image repository is the place where Docker stores image files centrally, similar to the code repositories we already use.
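For example, pushing a locally built image to a registry looks roughly like this (the <username> placeholder and the admin-fe image name are illustrative):

# log in to the registry (Docker Hub by default)
docker login
# tag the local image as <username>/<repository>:<tag>, then push it
docker tag admin-fe <username>/admin-fe:latest
docker push <username>/admin-fe:latest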

docker-compose

docker-compose is an official open-source project from Docker, responsible for fast orchestration of Docker container clusters. It allows users to define a group of associated application containers as a project through a single docker-compose.yml template file (YAML format).

The big advantage of using Compose is that you only need to define your application stack (that is, all the services your application needs) in a single file and place that YAML file in the root of your project, where it is versioned along with the source code. Others can then simply clone your project and quickly start the services.

It is usually suitable for scenarios where the project needs many runtime environments (corresponding to multiple Docker containers), for example depending on Node.js, MySQL, MongoDB, Redis, and so on.

Here is the project's docker-compose.yml file:

version: '3'
services:
  admin-fe:
    build:
      context: .
      dockerfile: Dockerfile
    image: admin-fe
    container_name: admin-fe
    ports:
      # the host can access the nginx service inside the container via 127.0.0.1:8085
      - 8085:80

An image is built based on the Dockerfile above, and the port mapping is 8085:80, where 8085 is the host port and 80 corresponds to port 80 exposed by Nginx inside the container.

Common commands

  • Build the containers: docker-compose build <service-name>
  • Start all services: docker-compose up -d (start in the background)
  • Stop all services: docker-compose down
  • View services: docker-compose ps
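For this project, the typical sequence on the test machine would roughly be the following (a sketch, assuming the docker-compose.yml above sits in the current directory):

# build the admin-fe image defined in docker-compose.yml
docker-compose build admin-fe
# start the service in the background
docker-compose up -d
# verify that the page responds on the mapped host port
curl http://127.0.0.1:8085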

SSH and cloud server

First of all, the cloud server: since we want one-click deployment to a test machine, there has to be a test machine, that is, a cloud server. Here I use an Alibaba Cloud server running the CentOS 8.4 64-bit operating system.

With a server, how do you log in?

There are generally two ways to log in to the cloud server from your local machine: password login and SSH login. With password login you have to type the password every time, which is tedious, so SSH login is used here. For details about how to log in to a remote server without a password, see "SSH passwordless login configuration".

From then on, every login can simply use ssh <username>@<server-ip> to log in directly without a password.
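For reference, a minimal sketch of that passwordless setup (the placeholders stand for your own user name and server IP):

# generate an SSH key pair locally (press Enter to accept the defaults)
ssh-keygen -t rsa
# copy the public key into the server's ~/.ssh/authorized_keys
ssh-copy-id <username>@<server-ip>
# from now on this logs in without prompting for a password
ssh <username>@<server-ip>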

Installing the required packages on the cloud server

The base packages on the cloud server are all installed with yum, which is a different tool from the npm we are used to.

docker

# Step 1: uninstall old versions
sudo yum remove docker docker-client docker-client-latest docker-common docker-latest docker-latest-logrotate docker-logrotate docker-engine
# Step 2: install the required tools
sudo yum install -y yum-utils
# Step 3: add the software source information, using the Aliyun mirror
sudo yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Step 4: install docker-ce
sudo yum install docker-ce docker-ce-cli containerd.io
# Step 5: start docker
sudo systemctl start docker
# Step 6: run hello-world
sudo docker run hello-world

If, like me, you see "Hello from Docker!", then Docker has been installed successfully!

docker-compose

Visit https://github.com/docker/compose/releases/latest to find the latest docker-compose version (for example 1.27.4), then run the following commands to install docker-compose:

# download the latest version of docker-compose to the /usr/bin directory
curl -L https://github.com/docker/compose/releases/download/1.27.4/docker-compose-`uname -s`-`uname -m` -o /usr/bin/docker-compose
# grant execute permission
chmod +x /usr/bin/docker-compose

After the installation, type docker-compose version on the command line to verify that the installation succeeded:

node

First make sure the EPEL repository is available by installing it with the following command:

sudo yum install epel-release

You can now install Node.js using the yum command:

sudo yum install nodejs

Verify:
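A quick way to verify the installation (the exact versions printed will depend on what yum installed):

node -v
npm -v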

nginx

Installing Nginx with yum is very simple, just one command:

$ sudo yum -y install nginx   # install nginx

git

Also use yum to install:

yum install git

Finally, let's take a look at GitHub Actions, which ties together all the points mentioned above.

github actions

As you know, continuous integration consists of many operations, such as pulling code, running test cases, logging in to remote servers, publishing to third-party services, and so on. GitHub calls these operations actions.

Let’s start with some terminology:

  • Workflow: one run of the continuous-integration process is a workflow.

  • Job: a workflow consists of one or more jobs, meaning a single continuous-integration run can complete multiple tasks.

  • Step: Each job consists of multiple steps.

  • Action: Each step can execute one or more commands (actions) in turn.

The workflow file

The GitHub Actions configuration file is called the workflow file and is stored in the .github/workflows directory of the repository.

The workflow file is in YAML format. The file name can be arbitrary, but the extension must be .yml, for example deploy.yml. A repository can have multiple workflow files; GitHub automatically runs any .yml file it finds in the .github/workflows directory.

The Workflow file has many configuration fields. Here are some basic fields.

name

The Name field is the name of the Workflow.

If omitted, this defaults to the file name of the current Workflow file.

name: deploy for feature_dev

on

The on field specifies the conditions that trigger the workflow, typically push or pull_request.

You can qualify branches or labels when specifying triggering events.

on:
  push:
    branches:
      - master

The code above specifies that the workflow is triggered only when a push event occurs on the master branch.

jobs

The jobs field represents one or more tasks to be performed. The runs-on field specifies the virtual machine environment required for the run.

runs-on: ubuntu-latest

steps

The steps field specifies the steps to run for each job and can contain one or more steps. Each step can specify the following three fields (a minimal example follows the list):

  • jobs.<job_id>.steps.name: Step name.
  • jobs.<job_id>.steps.run: Command or action that the step runs.
  • jobs.<job_id>.steps.env: Environment variables required for this step.
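As a minimal illustration (the step names and commands here are made up, not from the project), a job's steps might look like this:

steps:
  - name: install dependencies
    run: npm install
  - name: build
    run: npm run build
    env:
      NODE_ENV: production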

Here is the project's .github/workflows/deploy-dev.yml:

name: deploy for feature_dev

on:
  push:
    branches:
      - 'feature_dev'
    paths:
      - '.github/workflows/*'
      - '__test__/**'
      - 'src/**'
      - 'config/*'
      - 'Dockerfile'
      - 'docker-compose.yml'
      - 'nginx.conf'

jobs:
  deploy-dev:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Use Node.js
        uses: actions/setup-node@v1
        with:
          node-version: 14
      - name: lint and test   # test
        run: |
          npm i
          npm run lint
          npm run test:local
      - name: set ssh key     # temporarily set up the ssh key
        run: |
          mkdir -p ~/.ssh/
          echo "${{secrets.COSEN_ID_RSA}}" > ~/.ssh/id_rsa
          chmod 600 ~/.ssh/id_rsa
          ssh-keyscan "106.xx.xx.xx" >> ~/.ssh/known_hosts
      - name: deploy
        run: |
          ssh work@106.xx.xx.xx "
            cd /home/work/choba-lego/admin-fe;
            git remote add origin https://Cosen95:${{secrets.COSEN_TOKEN}}@github.com/Choba-lego/admin-fe.git;
            git checkout feature_dev;
            git config pull.rebase false;
            git pull origin feature_dev;
            git remote remove origin;

            # build prd-dev
            # npm i;
            # npm run build-dev;

            # start docker
            docker-compose build admin-fe;
            docker-compose up -d;
          "
      - name: delete ssh key
        run: rm -rf ~/.ssh/id_rsa

Here’s an overview:

1️⃣ The whole process is triggered when code is pushed to the feature_dev branch.

2️⃣ There is only one job, running in the ubuntu-latest virtual machine environment.

3️⃣ The first step is the basic action actions/checkout@v2, which allows our workflow to access the repo.

4️⃣ The second step is to install Node on the machine executing the workflow, using the action actions/setup-node@v1.

5️⃣ The third step is to run lint and test.

6️⃣ The fourth step is to temporarily set up the SSH key, in preparation for logging in to the server in the next step.

7️⃣ The fifth step is deployment: first SSH into the server, pull the latest branch code, then install dependencies and build, and finally start Docker to generate the image. At this point the Docker service is running on the test machine.

8️⃣ The last step is to delete the SSH key.

Finally, go to GitHub and take a look at the complete process:

Among them, the deploy stage is the core:

conclusion

I've rambled on for quite a while; I'm not sure whether it all made sense 😂

If you have any questions, please leave them in the comments section and I will answer them 😊

There will be many more articles about this project in the future, so stay tuned!