Let me first explain the application scenario. I suspect most readers of this article could not find a tutorial online covering the Docker + Node + Nginx + MongoDB combination; when I researched this project I searched for a long time and found only a few valuable foreign articles, which I have listed as references at the end so everyone can learn from them. The project started because our team wanted a cloud platform of its own, carrying services such as image uploading, OSS management, release management, DingTalk push, and speech synthesis, and in our vision these services would not require back-end participation. Team members have servers of widely varying quality, and ten people building the same architecture can end up with ten different environments (I believe this is a common problem for most front-end, and even back-end, teams). With that opportunity, I tried to wrap our system in Docker.

If you already know Docker, are just looking for a solution, and want to see the Dockerfiles, you can jump straight to the docker-compose section.

In addition, I have prepared a template library. You can use it as the base for building your own project, and everyone is welcome to take a look, learn from it, open issues, and star it (click me).

Just a quick word about Docker

Remember VMware Workstation from the school computer room? It let us run virtual machines of many operating systems on Windows or macOS, and an important concept in VMware Workstation is the relationship between the host and the virtual machine.

Beginners can start by thinking of Docker as something like VMware Workstation, but if you need to dig deeper into the differences, remember that they are alike only in that both provide an isolated system to run software in.

Under normal circumstances, virtual machines perform relatively poorly and demand a lot from the computer or server configuration, whereas Docker lowers that threshold and delivers near-native performance.

But what do we need a virtual machine for in the first place? Can't you just dump all your apps on the server directly? Where does Docker fit in?

I’ll start by explaining why Docker is used:

  • Consistent operating environments (between development and testing, testing and production, and any other pair of environments)
  • More efficient delivery and deployment processes
  • Easy expansion and migration (an existing system can be packaged directly into an image, making distributed expansion easy)
  • Easier maintenance: your Nginx configuration, Node configuration, and Mongo configuration all live together and are managed uniformly
  • Higher performance, faster startup, and better system utilization than other virtual machine products

Set against these advantages, the problems of deployment and environment unification alone are enough to drive up the engineering cost of non-Docker systems.

Install Docker

If you are using macOS or Windows, just download Docker Desktop (if the download is slow, get it from DaoCloud). For Linux, the following steps are required. If your server does not have yum, install yum first. Then clean up any old Docker installation, just in case:

sudo yum remove docker \
        docker-client \
        docker-client-latest \
        docker-common \
        docker-latest \
        docker-latest-logrotate \
        docker-logrotate \
        docker-selinux \
        docker-engine-selinux \
        docker-engine

Install dependencies

sudo yum install -y yum-utils device-mapper-persistent-data lvm2

Set the yum source (any mirror will do; I'm using Aliyun's here)

sudo yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

Update the cache

sudo yum makecache fast

Install Docker CE (CE stands for Community Edition, which is free to use; Docker also has an EE, Enterprise Edition)

sudo yum -y install docker-ce

Start Docker

sudo systemctl start docker
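
If you also want Docker to start automatically whenever the server boots (optional, but usually what you want on a server), systemd makes that a one-liner:

sudo systemctl enable docker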

Test the installation

docker -v

The installation is complete at this point. Let's move on to installing docker-compose.

Docker-Compose

With plain Docker, every container needs a Dockerfile to describe it. If a project is fairly large and uses many technologies, there will be many containers, and having to build and start every one of them by hand wears out development and operations alike. docker-compose solves this problem: each project gets a single description file, and all of the project's containers are managed as a batch.
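
To make that concrete before we reach the real file below, here is a minimal hypothetical docker-compose.yml; the service names web and db are placeholders, not part of this project:

version: "3"
services:
  web:                # a hypothetical app container
    build: ./web      # built from ./web/Dockerfile
    ports:
      - 8080:8080
    depends_on:
      - db            # make sure db starts first
  db:                 # a hypothetical database container
    image: mongo:latest

A single docker-compose up -d then builds and starts both containers.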

Install docker-compose

curl -L https://get.daocloud.io/docker/compose/releases/download/1.22.0/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose

Validate the installation

$ docker-compose -v
docker-compose version 1.24.1, build 4667896b

The project architecture

First of all, let's look at how our project directory is arranged. Note that where the project files live has an impact on the Docker configuration. Our Node framework is Alibaba's egg.js.

.
├── docker-compose.yml            ## docker-compose configuration entry
├── logs                          ## log location
│   └── nginx
│       ├── access.log
│       └── error.log
├── mongo                         ## mongo configuration
│   ├── Dockerfile
│   └── mongo.conf
├── nginx                         ## nginx configuration
│   ├── Dockerfile
│   ├── cert
│   ├── conf.d
│   └── nginx.conf
└── node                          ## node configuration
    ├── Dockerfile
    └── Microservice              ## the egg.js project
        ├── app
        │   ├── database
        │   │   ├── init.js
        │   │   └── schemas
        │   ├── extend
        │   │   └── application.js
        │   ├── middleware
        │   │   ├── gzip.js
        │   │   └── jwt_error_handler.js
        │   ├── public
        │   └── router.js
        ├── app.js
        ├── appveyor.yml
        ├── config
        │   ├── config.default.js
        │   └── plugin.js
        ├── config.js
        ├── jsconfig.json
        ├── logs
        ├── node_modules
        ├── package.json
        ├── README.md
        ├── test
        └── typings

Next, what is a Dockerfile? It is where you configure a container image: which base system it builds on, which ports it uses, which command it runs, and so on. Docker then generates an image package from these instructions.
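
If you have never built an image by hand, these two commands show the bare workflow a Dockerfile feeds into (my-image is a placeholder tag); docker-compose will run the builds for us later, so this is just for orientation:

docker build -t my-image .              # build an image from the Dockerfile in the current directory
docker run -d -p 8080:8080 my-image     # start a background container from that image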

The overall architecture is fairly clear: we create a separate container for each application, one for Node, one for Nginx, and one for Mongo. Let's do it.

The Nginx Dockerfile

# Use nginx:alpine as the base image
FROM nginx:alpine

# Switch the apk repositories to the Aliyun mirror
RUN sed -i 's/dl-cdn.alpinelinux.org/mirrors.aliyun.com/' /etc/apk/repositories

# Install extra packages
RUN apk update \
    && apk upgrade \
    && apk add --no-cache openssl \
    && apk add --no-cache bash

# Copy the nginx configuration to /etc/nginx/
COPY nginx.conf /etc/nginx/nginx.conf

# Start nginx (note: the process must stay in the foreground, so make sure
# your nginx.conf contains "daemon off;" or use CMD ["nginx", "-g", "daemon off;"])
CMD nginx

# Expose ports 80 and 443 to the outside of the container
EXPOSE 80 443

The Node Dockerfile

# The image I use here is a stable version of Node
FROM node:10.16.3-alpine

# Set the working directory for the build
WORKDIR /app/Microservice

# Copy package.json first, so the dependency layer can be cached
COPY ./Microservice/package.json ./

# Install dependencies from the Taobao registry
RUN npm install --registry=https://registry.npm.taobao.org

# Copy the project files
COPY ./Microservice ./

# Start the service
CMD ["npm", "run", "dev"]

# Expose port 7001 to the host
EXPOSE 7001

Note that we run npm run dev here, because Docker requires your program to run in the foreground: if npm run start daemonizes the process, nothing in the container keeps occupying the foreground, and the container assumes the program has finished and exits. Also note that we copy package.json separately, because we will create a dedicated volume mapping for node_modules; why that is needed is explained in the node_modules section below.
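
For reference, the relevant scripts in an egg.js package.json look roughly like this (a sketch based on egg's standard boilerplate, not copied from this project): egg-bin dev stays in the foreground, while egg-scripts start --daemon detaches, which is exactly what breaks inside a container.

{
  "scripts": {
    "dev": "egg-bin dev",
    "start": "egg-scripts start --daemon --title=egg-server-microservice",
    "stop": "egg-scripts stop --title=egg-server-microservice"
  }
}

Dropping --daemon should also keep egg-scripts in the foreground, if you prefer that for production inside a container.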

The Mongo Dockerfile

# Use the latest mongo image
FROM mongo:latest

# Copy the mongo configuration file from the host into the container
COPY mongo.conf /usr/local/etc/mongo/mongo.conf

# Start mongod
CMD ["mongod"]

# Expose port 27017 to the host
EXPOSE 27017

Now wire the Node/Nginx/Mongo configuration together in docker-compose.yml

Version: "3" networks: # custom network my-network: # network name driver: bridge # Custom volume node_modules: # mongo_data: services: # mongo_data: services: ## nginx ################# nginx: # nginx /nginx /nginx /nginx /nginx /nginx /nginx /nginx /nginx / # mount folder, configuration we can write in the host, And then into the mount -. / nginx/conf. D: / etc/nginx/conf., d -. / nginx/cert: / etc/nginx/cret - / logs/nginx: / var/log/nginx restart: Depends_on: # Define the order in which the container is started - nodejs Networks: # use we defined above network - my - network # # # the node # # # # # # # # # # # # # # nodejs: build: context: Dockerfile ports: -127.0.0.1:7001:7001 -. / node/Microservice: / app/Microservice # project file mapping - node_modules: / app/Microservice node_modules/node_modules # separate treatment restart: always depends_on: - mongo networks: - my-network ### mongoDB ######################## mongo: build: context: /mongo ports: -127.0.0.1:27017:27017 Volumes: -mongo_data :/data/db always networks: - my-networkCopy the code

There are a few points to note:

1. Internal ports and the firewall

If you prefix a port mapping with 127.0.0.1, Docker binds it to the loopback interface only, so the port cannot be reached from outside the host and is only accessible internally. Here Mongo is the most dangerous service, so its port is protected this way. Node sits behind the Nginx proxy anyway, so it does not need direct external access either. Decide for yourself which ports to open.
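
The difference is just the host-address prefix on the compose ports entry:

ports:
  - 127.0.0.1:27017:27017   # bound to loopback: reachable only from the host itself
  - 27017:27017             # bound to 0.0.0.0: reachable from the outside world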

2. node_modules

node_modules for the Node service needs special handling, and the reason is local development. Once the container is up, we want to write business code without rebuilding the container: Docker automatically syncs the mounted project directory into it. But node_modules is generated by npm install inside the container; the folder exists only in the container, not on the host. If the project directory were synced as-is, the host directory (which has no node_modules) would wipe out the container's node_modules, and the code would stop working. With node_modules declared as a standalone volume, we only need to rebuild the image when updating dependencies; normal changes to application code just need to be saved. See the snippet below.
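
These are the two relevant lines from the nodejs service above; the named volume on the second line takes precedence over the bind mount for that sub-path, which is what shields node_modules:

volumes:
  - ./node/Microservice:/app/Microservice          # bind mount: host code syncs into the container
  - node_modules:/app/Microservice/node_modules    # named volume: excluded from that sync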

Run it

OK, now that your environment is basically ready, execute

docker-compose up -d

Then use docker ps to check that everything is running normally:

CONTAINER ID   IMAGE                           COMMAND                   CREATED          STATUS          PORTS                                      NAMES
4bedfab2a306   front-end-microservice_nginx    "/bin/sh -c nginx"        18 seconds ago   Up 15 seconds   0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp   front-end-microservice_nginx_1
d1d539672df5   front-end-microservice_nodejs   "docker-entrypoint.s…"    20 seconds ago   Up 18 seconds   127.0.0.1:7001->7001/tcp                   front-end-microservice_nodejs_1
8f1b1401a4dc   front-end-microservice_mongo    "docker-entrypoint.s…"    24 seconds ago   Up 20 seconds   127.0.0.1:27017->27017/tcp                 front-end-microservice_mongo_1

Done. Your project should now be running. To see things through to the end, let's configure Mongo. Enter the container:

docker exec -it front-end-microservice_mongo_1 /bin/sh

Once inside the container, open the Mongo shell:

$ mongo
> use admin
> db.createUser(                    // create an administrator account (pick your own credentials)
    {
      user: "admin",
      pwd: "admin",
      roles: [ { role: "userAdminAnyDatabase", db: "admin" } ]
    }
  )
> use test                          // create the test database
> db.createUser(                    // create the test database account
    {
      user: "test",
      pwd: "test",
      roles: [ { role: "readWrite", db: "test" } ]
    }
  )

OK, now open your mongo connection tool (I'm using the official MongoDB Compass here). Once it connects successfully, use Postman locally to hit any one of your interfaces on 127.0.0.1:7001. The configuration is complete.
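
For reference, a Compass connection string for the test account created above would look something like this (adjust the credentials and the authSource to whatever you actually chose; the user was created in the test database, so that is its authentication database):

mongodb://test:test@127.0.0.1:27017/test?authSource=test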

Deploying online

Thanks to Docker, deploying online is very easy: we only need to install Docker and docker-compose on the server (in fact, your server may well have them installed already), pull the project, and run

docker-compose up -d

The other things to change are configuration values, such as the Node service's database host, port, and password from above. As a side note: because docker-compose registers each service under its service name on the shared network, we can use the name mongo directly as the host name in Node.js. For example, my configuration file in Node:

DB_USER=test
DB_PASSWD=#test
DB_HOST=mongo
DB_PORT=27017
DB_NAME=test

With this, Node automatically connects to the mongo container running alongside it.
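
As an illustration of how these variables might be consumed, here is a sketch of config/config.default.js, assuming the egg-mongoose plugin is used and the DB_* variables are exposed via process.env; this project's actual config may differ:

// config/config.default.js (sketch, assuming egg-mongoose)
module.exports = () => {
  const config = {};

  config.mongoose = {
    client: {
      // DB_HOST resolves to "mongo", the docker-compose service name
      url: `mongodb://${process.env.DB_USER}:${process.env.DB_PASSWD}@${process.env.DB_HOST}:${process.env.DB_PORT}/${process.env.DB_NAME}`,
      options: { useNewUrlParser: true },
    },
  };

  return config;
};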

For the Nginx part, you only need to modify the configuration files in the nginx folder to define your own setup. Note that any path you write in those files must be the container path. For example, when we reference an SSL certificate, its full path on the host is /data/front-end-microservice/nginx/cert/xxx.crt, but after Docker mounts the folder, the path inside the container is /etc/nginx/cert/xxx.crt, and that is the path your Nginx config must use. The relevant volume mappings are in the nginx service definition:

 nginx:
    build:
      context: ./nginx
    ports:
      -  80:80
      -  443:443
    volumes:
      - ./nginx/conf.d:/etc/nginx/conf.d
      - ./nginx/cert:/etc/nginx/cert
      - ./logs/nginx:/var/log/nginx
    restart: always
    depends_on:
      - nodejs
    networks:
      - my-network
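
To tie it together, here is a sketch of what a server block under nginx/conf.d might look like; it is an illustration, not this project's actual config: the domain and certificate file names are placeholders, and nodejs resolves through the compose network just as mongo did for the database.

server {
    listen 443 ssl;
    server_name example.com;                       # placeholder domain

    ssl_certificate     /etc/nginx/cert/xxx.crt;   # container path, not the host path
    ssl_certificate_key /etc/nginx/cert/xxx.key;

    location / {
        proxy_pass http://nodejs:7001;             # "nodejs" is the compose service name
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}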

When you change the Nginx configuration later, you do not need to restart everything; restarting just the nginx service is enough:

docker-compose restart nginx

That covers the whole development-to-production flow. All the code can be found in the template library: egg-docker-template.

If there are mistakes, corrections are welcome. I also hope to exchange ideas with more architects about front-end architecture and engineering; feel free to add me on WeChat (T1556207795) to chat about the front end.

References:

  • Using Docker for Node.js in Development and Production
  • How To Build a Node.js Application with Docker