Preface

I have recently been learning back-end development and built a full-stack project of my own. When it came time to release it online, I started with a manual workflow on the server: git pull the code, compile and build, then restart the service. With frequent releases, or several things to release at once, this felt very cumbersome. So I wondered: how can I automate all of this with scripts so that I have to type fewer commands?

The idea

Alibaba Cloud has a CodePipeline service that can compile and build much like Jenkins, but at the last step, deploying containers apparently requires opening a cluster and so on. Since the money needs to be saved for my girlfriend, I gave up researching that route and could only think of a relatively simple, brute-force alternative. Given my actual situation (I do everything from one laptop), a set of shell scripts seemed like the most appropriate solution.

Raw material:

ssh, shell, docker, node

Front-end code:

Build the dist directory locally, then use the scp command to copy it into the corresponding site directory on the server

Back-end code:

Build the image with a Dockerfile and push it to my own registry, then trigger the corresponding script on the server to restart the corresponding container

Server:

Running Ubuntu 16.04. Create a folder holding the project's scripts, and execute them from the local machine via SSH

Implementation

Docker is used to make environment setup easy. Since we only have one server, Docker lets us deploy and manage multiple projects on it calmly. The Docker layout is shown in the following figure

Server Configuration

Configure SSH password-free login
  1. Append your local ~/.ssh/id_rsa.pub to the server's ~/.ssh/authorized_keys

  2. Log in again to confirm that no password is required
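The two steps above can be sketched as follows (a minimal sketch; `<server IP address>` is a placeholder, and ssh-copy-id assumes the OpenSSH client is installed):

```shell
# Generate a key pair locally if one does not exist yet
# (no passphrase here; add one if you prefer).
mkdir -p "$HOME/.ssh"
[ -f "$HOME/.ssh/id_rsa" ] || ssh-keygen -t rsa -N "" -f "$HOME/.ssh/id_rsa" -q

# Copy the public key to the server; ssh-copy-id appends it to the
# server's ~/.ssh/authorized_keys for you (replace the placeholder):
#   ssh-copy-id root@<server IP address>

# Afterwards this should log in without a password prompt:
#   ssh root@<server IP address>
```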

Install docker and save scripts
  1. Install Docker (its usage is covered below)
  2. Create a directory for the scripts in a convenient location on the server, such as /server_sh in the root directory, and give it 777 permissions
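These server-side steps might look like the following (a sketch; the Docker convenience script is one of several install options on Ubuntu 16.04):

```shell
# Install Docker (official convenience script; run as root):
#   curl -fsSL https://get.docker.com | sh

# Create the script directory in the filesystem root and open up
# its permissions, as described above.
mkdir -p /server_sh
chmod 777 /server_sh
```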

Local machine configuration

  1. Create a folder server_sh on the local machine and give it 777 permissions. Inside it, create a folder aliyun to hold the scripts destined for the Alibaba Cloud server; this folder corresponds to /server_sh on the server
  2. Under server_sh, create a script named update_aliyun.sh that uses the scp command to synchronize the shell scripts in the aliyun folder to the server. After modifying any script, run ./update_aliyun.sh to complete the synchronization
#!/bin/sh
# Update the scripts on the server
scp -r ./aliyun/* root@<server IP address>:/server_sh/
  3. In the aliyun folder, create the script that runs Docker for the first time, along with the scripts that install the databases. Nginx, MongoDB and MySQL are used here
Note 1: a DB container must store its data on a data volume (internal, or a mounted host directory); otherwise the data is lost when the container is deleted
Note 2: the containers need an internal network to communicate with each other; otherwise every port has to be exposed. All ports are exposed here for debugging purposes; in a production environment, only port 80 of Nginx should be exposed, with everything else kept to internal communication
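Before the first run it is worth pre-creating the host directories that the containers mount (a sketch; the paths match the -v options used in the run scripts in this article):

```shell
# Host-side data volumes: deleting a container leaves these intact.
mkdir -p /dockerVolume/db/mongo/data   # mapped to /data/db in the mongo container
mkdir -p /dockerVolume/db/mysql/data   # mapped to /var/lib/mysql in the mysql container
mkdir -p /dockerVolume/web             # static files, served read-only by nginx
```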
  • docker_create.sh
#!/bin/sh
# Docker run script for the first installation

# Create the internal Docker network
docker network create --subnet 192.168.0.0/16 local-network

# Start the web service
./webserver.sh

# Start the MongoDB database
./mongodb.sh

# Start the MySQL database
./mysql.sh
  • webserver.sh (a custom image based on the official nginx image, described below)
#!/bin/sh
# Start the Nginx service

# Stop the running container
docker container stop webserver

# Remove the old container
docker container rm webserver

# Remove the image so the latest one is pulled on startup
docker image rm masonchow/webserver

# Pull the latest image and start it
docker run -d -p 80:80 --name webserver -v /dockerVolume/web:/web:ro masonchow/webserver
  • mongodb.sh
#!/bin/sh
# Stop the running container
docker container stop mongo

# Remove the old container
docker container rm mongo

# Remove the image so the latest one is pulled on startup; skip this if you don't need the latest image
docker image rm mongo

# Install and start MongoDB
docker run -d -p 27017:27017 -v /dockerVolume/db/mongo/data:/data/db --name mongo --network local-network --ip 192.168.0.101 mongo
  • mysql.sh
#!/bin/sh
# Stop the running container
docker container stop mysql

# Remove the old container
docker container rm mysql

# Remove the image so the latest one is pulled on startup; skip this if you don't need the latest image
# docker image rm mysql

# Install and start MySQL
docker run -d -p 3306:3306 -v /dockerVolume/db/mysql/data:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=root --name mysql --network local-network --ip 192.168.0.100 mysql
  4. Run ./update_aliyun.sh to synchronize the scripts to the server, then run ./docker_create.sh on the server. After that, docker container ls shows the information of the running containers

Project configuration

Nginx

Because there is only one server, every additional front-end project would normally require another change to the Nginx configuration. So, starting from the official Nginx image, I bake my own configuration into a custom image with a Dockerfile, push it to my Docker registry, and pull and use it on the server

  • Dockerfile
FROM nginx:latest

# General Nginx configuration
ADD conf/nginx.conf /etc/nginx/nginx.conf

# Nginx site configuration
ADD conf/server_app.conf /etc/nginx/conf.d/default.conf

ADD conf/gzip.conf /etc/nginx/conf.d/gzip.conf

RUN cd / && mkdir /web

WORKDIR /web

CMD ["nginx", "-g", "daemon off;"]

The /web directory served here maps to a directory on the host, because webserver.sh above started Nginx with -v /dockerVolume/web:/web:ro
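For reference, a hypothetical conf/server_app.conf along these lines would serve each project from its own sub-directory of /web (an illustrative assumption, not the author's actual configuration):

```nginx
server {
    listen 80;
    server_name _;

    # All front-end projects live under /web; e.g. the "wallet" build
    # synced by publish.sh is reachable at http://<host>/wallet/
    root /web;
    index index.html;

    location / {
        try_files $uri $uri/ =404;
    }
}
```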

The front-end project

After building locally, the front-end project only needs its dist folder synchronized to the corresponding folder on the server. For example, I create a publish.sh script in the project folder and give it 777 permissions

#!/bin/sh
# Remove the local dependency packages
rm -rf ./node_modules

# Reinstall
npm i

# Start the build
npm run build

# Delete the old files on the remote end
ssh root@<server IP address> "rm -rf /dockerVolume/web/<project folder>"

# Copy the locally built files to the remote end
scp -r ./wallet root@<server IP address>:/dockerVolume/web/

The back-end project

The idea for the back-end project: on every release, run the project's publish script, which builds the backend into a new image via a Dockerfile and pushes it to the Docker registry, then has the server's Docker run the latest image

  • publish.sh
#!/bin/sh
# Start the build
npm run build

# Build the latest image
docker build -t <user name>/<image name> .

# Push the image
docker push <user name>/<image name>

# Delete the build output afterwards
rm -rf ./dist

# Have the server actually run the new image
ssh root@<server IP address> "cd /server_sh && ./<script on the server>.sh"
  • Dockerfile
FROM node:8.9.3

COPY /dist /server

WORKDIR /server

ENV NODE_ENV=production

COPY /package.json /server

RUN npm install

CMD ["node", "./app.js"]
  • Scripts on the server
#!/bin/sh
# Start the back-end service

# Stop the running container
docker container stop <container name>

# Remove the old container
docker container rm <container name>

# Remove the image so the latest one is downloaded
docker image rm <user name>/<image name>

# Pull the latest image and start it
docker run -d -p 3005:3005 --name <container name> --network local-network --ip 192.168.0.105 <user name>/<image name>

Finally

It has been just over a year since I graduated, and this is my first article; please point out anything lacking. Thank you very much.