This is not professional ops work: the configuration is not complicated, but there were plenty of pits and constant experimenting and tweaking, so don't take it as best practice. The end goal is to package, deploy and migrate a project (Vue & Node). Technologies involved:

  • Docker containers
  • Jenkins project management and automation
  • Nginx proxying
  • MySQL database
  • Node.js server
  • Yarn/npm front-end package management
  • Git (already available, no installation required)
  • PM2 static/dynamic services + process daemon

A few years ago, when I was just getting into web development, I started renting a place online. My needs were modest back then, so I rented Alibaba Cloud's student machine and paid 9.9 yuan a month in rent.

Then Alibaba watched me graduate and the rent skyrocketed: the 9.9 yuan a month turned into a few hundred, and I was forced to pack my bags and move out. Around then I learned that Tencent Cloud was running a promotion, with machines for students at 120 for a year. No longer a student, I tried my luck anyway and, to my surprise, the purchase went through; figuring a deal like that would be hard to find again, I signed up for 3 years in one go. Fast-forward to the year 1202, and my server is about to expire again. Just like Alibaba Cloud back then, Tencent Cloud has its abacus ready after three years: renew 1 year at 20% off, 3 years at half price, and so on. But one look at that amount on the order, a string running past six digits, and I knew I couldn't afford it.

So recently I started looking for a new place to rent. This time I want somewhere I can stay for a while, and moving is no longer a matter of grabbing a bag and walking out: I have a lot of toys, and hauling them around is a real hassle. After going through all of this, I know that sooner or later I will be moving from one place to another again; moving has become a problem I have to face all the time. So I went looking for a moving company that would let me move cheaply and with minimal effort, and I finally found Docker.

Docker is a platform for running applications in an isolated environment called a container.

I split things into three containers:

  • One holds MySQL, so the database can be migrated on its own.

  • One bundles the Node environment; Jenkins manages the projects in it.

  • A standalone Nginx container used purely as a proxy.

The original plan was to migrate the containers straight from server A to server B. In the end only the Nginx container made it across; the other two refused to start. Even so, Docker saved me a lot of environment setup: MySQL and Nginx were easy to reinstall by hand on server B, so the actual migration came down to the SQL files and the nginx configuration files, and for Jenkins I simply packed up the entire Jenkins_home.

First, install Docker

curl -fsSL https://get.docker.com | bash -s docker --mirror aliyun

Start it after installation:

sudo systemctl start docker
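If you also want Docker to come back up after a reboot (on a systemd-based host), you can optionally enable it as well:

sudo systemctl enable docker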

Common operations

Docker ps view docker ps - a running container list all container docker stop < name > | docker start < name > | docker restart < name > literally docker rm <id> Delete container docker RMI <id> delete imageCopy the code

Common parameters

-d Daemon -uroot Log in to the container as root -p xx:xx host xx port mapping xx port of the container -v Directory mappingCopy the code

Install Jenkins

Update: in the actual migration this step boils down to packaging Jenkins_home and unpacking it into the Jenkins mapped directory on the new server.

docker pull jenkinsci/blueocean

I added --net=host when actually running it, because my services will run inside this container:

docker run -d --name back-place -u root -p 9090:8080  -v /var/jenkins_home:/var/jenkins_home jenkinsci/blueocean

--name is followed by the container name (it can be changed freely after creation). -p 9090:8080 means Jenkins is exposed on host port 9090 while occupying port 8080 inside the container.
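For reference, the host-network variant mentioned above would drop the port mapping and look roughly like this (a sketch; with --net=host Jenkins listens directly on the host's port 8080):

docker run -d --name back-place -u root --net=host -v /var/jenkins_home:/var/jenkins_home jenkinsci/blueocean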

If you look at the current Docker process, you can see that the back-place container is already running:

docker ps

The back-place can then be treated as a virtual machine, and we can enter the bash environment of the container:

docker exec -it back-place /bin/bash

After some wrestling, Jenkins was initialized: generate a new GitHub token, set up SSH and the Git configuration (the Jenkins Docker image already ships with Git; which git shows its path), and recreate the build configuration project by project, and so on.

If GitHub is too slow, you can use generic-webhook-trigger to hook Gitee (码云) instead. Generally speaking GitHub does not need an HTTPS connection for this; that one is a pit.
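A per-project build configuration can be as simple as an "Execute shell" step; a minimal sketch, assuming yarn and a default Vue CLI layout, might be:

cd "$WORKSPACE"    # Jenkins sets this to the job's workspace
yarn install
yarn build         # output lands in ./dist for a default Vue CLI project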

Leaving the bash environment:

exit

Install MySQL and migrate the database

Note: in practice this step is just backing up the MySQL files on the old server.

Log in to mysql and enter the password

mysql -u root -p 

Type status to see the version number:

Alternatively, from the Linux shell, mysql -V shows it directly (note the capital V). I simply copied the version number and installed that exact tag, but I still recommend checking the official registry for the matching image version, which is a bit more reliable:

docker pull mysql:5.7.34

Log in to the old MySQL and dump your own SQL files, then shut down the MySQL running on the host to free port 3306.
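The dump could look something like this (the database name is just a placeholder):

mysqldump -u root -p my_database > my_database.sql    # a single database
mysqldump -u root -p --all-databases > all.sql        # or everything at once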

service mysqld stop    # the exact command varies by version/distro


Run docker images to check that the pull succeeded.

Set the root password and start the container:

docker run -itd --name mysql -p 3306:3306 -e MYSQL_ROOT_PASSWORD=123456 mysql:5.7.34

123456 is the root password. Change 5.7.34 to your own version

Import the database file using bash or an external tool.
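One way to do it from bash, assuming the container name and password from the run command above and a placeholder database name:

docker cp my_database.sql mysql:/tmp/my_database.sql
docker exec -it mysql bash
mysql -uroot -p123456 -e "CREATE DATABASE IF NOT EXISTS my_database"
mysql -uroot -p123456 my_database < /tmp/my_database.sql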

Install Node (Alpine)

docker exec -it back-place /bin/bash

Inside the container, I went through a whole sequence of steps (a heads-up: this is where I hit the pit, and the build-from-source steps below turned out to be the wrong path):

wget https://nodejs.org/dist/v14.17.1/node-v14.17.1.tar.gz
...

nodejs.org can be pinged from inside the container, but the download just fails. It seems to be a problem with the container's network mode: the default is bridge, and switching the container to the host's network namespace would mean recreating it. So, back on the host Linux:

docker inspect -f '{{.Id}}' back-place

Then copy the file into the container: docker cp <local file path> <full container Id>:<container path>

docker cp node-v14.17.1.tar.gz afe6f75d24f72b48826a5396c62e8efeb35c2cba30b5460bf20378817a8881f1:/usr/local/src/node.tar.gz

Back inside the back-place container:

docker exec -it back-place /bin/bash
cd /usr/local/src
tar zxvf node.tar.gz
cd node-v14.17.1
./configure --prefix=/usr/local/node/14.17.1

I had always installed Node on CentOS, which already comes with a Python environment, so this step used to just work. This time something felt off, so I checked which Linux distribution the Docker container was actually running:
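A quick way to check from inside the container:

cat /etc/os-release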

Good grief, it was Alpine. So I looked up the package manager for each distribution:

  • CentOS: yum
  • Ubuntu: apt-get
  • Alpine: apk

----- a helpless dividing line -----

The following steps must be performed in order, otherwise the npm and Node versions may end up mismatched.

# Recommended: USTC mirror
sed -i 's/dl-cdn.alpinelinux.org/mirrors.ustc.edu.cn/g' /etc/apk/repositories
apk update && apk upgrade

# ----------- Tsinghua mirror below; the Node version it ships has a problem
echo "https://mirror.tuna.tsinghua.edu.cn/alpine/v3.8/main/" > /etc/apk/repositories
apk update && apk upgrade

To install node, execute:

apk add nodejs

npm has to be installed manually:

apk add --update npm
npm config set registry https://registry.npm.taobao.org

Node Installation Complete
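A quick check of what actually got installed:

node -v
npm -v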


Check the Node version: if it is stuck at 8 (the USTC mirror gives a fine version), manually upgrade to the latest stable Node:

npm install -g n
n stable

If the previous step errored out, try the following command (there really are a lot of pits):

npm config set unsafe-perm true

Then run the command it prints when it finishes, otherwise the global node command will keep pointing at the old version:

PATH="$PATH"    # substitute the exact PATH line that n prints

Install PM2

PM2 is used to daemonize the individual services running inside the Jenkins container.

npm install pm2 -g

Common commands

pm2 list
pm2 start ./xxx
pm2 restart all
pm2 flush        # delete all log files
pm2 delete 0

Run a static service

pm2 serve <path> <port> --name  <name>

If your Vue project uses history-mode routing, run it this way:

pm2 serve --spa <path> <port> --name  <name>

Run a dynamic service

pm2 start <path>

e.g. pm2 start /home/service.js
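For instance, a history-mode Vue build plus a Node API might be brought up like this (the paths, port and names here are hypothetical):

pm2 serve --spa /var/jenkins_home/workspace/my-vue-app/dist 8081 --name my-vue-app
pm2 start /var/jenkins_home/workspace/my-api/app.js --name my-api
pm2 list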

Nginx installation (Alpine failed)

I was halfway through the installation when the Linux environment fell over. After fiddling with it for a long time without getting anywhere, I finally decided to make Nginx a separate container.

Several containers got wrecked along the way. Unconvinced as I was, I eventually chose to give up TvT

Install Nginx (Docker)

First stop the Nginx service on the host, then have Docker pull an image:

docker pull nginx

Instantiate the image and bind port 80 (bridge mode):

docker run --name nginx -p 80:80 -d nginx

Bridge mode requires every port to be mapped explicitly. Since more ports may need to be configured later, it is more convenient to run this instance with host networking directly:

docker run --name nginx -d --net=host nginx

Go to the Nginx container and view the configuration directory

docker exec -it nginx /bin/bash
nginx -t

You can see that the configuration lives in the /etc/nginx directory; edit it as needed, or use docker cp to bring over the original nginx configuration.

docker inspect -f '{{.Id}}' nginx
docker cp <local file path> <full container Id>:<container path>
docker cp /conf/nginx.conf xxxxxxxxxxxx:/etc/nginx
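After copying the config in, it can be validated and reloaded in place (assuming the stock nginx image):

docker exec nginx nginx -t          # validate the configuration
docker exec nginx nginx -s reload   # reload without restarting the container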

Import/export (abandoned)

Before exporting, check the container's COMMAND, which you will need again when importing:

docker ps -a --no-trunc

Docker export


docker export 7691a814370e > /home/test.tar

That gives us the image file test.tar. Put it behind a static file service, and server B can pull server A's image remotely and install it (that was the ideal; in practice only the nginx container was grafted over successfully, so the approach was abandoned).

Docker import


docker import http://xxx.com/nginx.tar test/nginx

Since the exported image comes from a container instance, append the container's original COMMAND when running it:

docker run --name=xxx -d xxx /docker-entrypoint.sh nginx -g 'daemon off;'

Large file transfer

My first attempt packed far too much, and once the archive was put on a static service it simply would not download.

In fact I only migrated the Jenkins_home directory (which can be treated as the whole of the Jenkins configuration; mine compressed to roughly 200 MB). Remember to empty the workspace first, since that is what takes up most of the space, then compress with gzip. The difference was striking.
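If you would rather not empty the workspace by hand, tar can skip it while packing (a sketch using GNU tar's --exclude):

tar --exclude='jenkins_home/workspace' -zcvf jk.tar.gz /var/jenkins_home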

Compress:

gzip -c mysql.tar > mysql.tar.gz

Decompress:

gunzip mysql.tar.gz

Large file transfer between servers can be implemented using SCP:

scp mysql.tar.gz root@<server B IP>:/data/

gzip only compresses single files; use tar to pack directories:

tar -zcvf jk.tar.gz /var/jenkins_home

In -zcvf, z means gzip compression, c create, v verbose, and f write to the named file. To unpack:

tar -xzvf jk.tar.gz

At the end

The original plan was to put every online service into its own Docker container and migrate them all at once by cloning images, to cut down the time spent on redeployment, installation and configuration. The result fell short of that ideal, but the goal was met: move at the lowest cost, and keep my original setup with as few changes as possible (unified project management, automatic deployment, and so on). Setting up the environment with Docker on the new server is not complicated, and only a small amount of configuration (such as the ssh-key) has to be redone, so the next time I need to change servers it should be much easier.