How to play with Docker

Background

Recently, we have been working on a project to report and analyze the performance of our React-family front ends. The front end is packaged and deployed by the company, which has a dedicated release system.

The front-end team has no back-end specification or established process, so we were free to choose our own stack from scratch. We decided on egg + MongoDB, and may later add nginx + Redis + Kafka.

  • Question: how do we simplify the deployment and configuration process?
  • Answer: Docker.

Goals

The overall goal boils down to three things: simple, fast, and secure.

  • Simple
  1. One configuration runs in every environment; this is Docker's key advantage.
  2. Deployment takes a few commands, or even one, making CI integration easy.
  3. Local development stays convenient.
  • Fast
  1. Development compiles and hot reloads quickly.
  2. Image and deployment packages stay small, so uploading, downloading, and deploying are fast.
  3. Rollback is fast.
  • Secure
  1. No risk of source-code leakage.
  2. MongoDB authentication is enabled.

Practice

PS: this took Google + GitHub issues + Stack Overflow + the official Docker documentation + English blogs. Limited English really hurts efficiency.

Next, let's walk through the problems encountered in practice.

Without Docker

Steps:

  1. Download Node and MongoDB.
  2. Configure Node and MongoDB.
  3. Start egg development.

When you change computers or onboard a teammate, all of this has to be repeated, and differences in operating system, Node version, or database version can break the setup.

You've heard the phrase: "It works on my machine."

A first taste of Docker

The team cannot dictate the installation and version of every piece of software, but it is easy to require that everyone install Docker within a certain version range.

For example, to start the MongoDB service:

```shell
docker run -p 27017:27017 -v <LocalDirectoryPath>:/data/db --name docker_mongodb -d mongo:4.2.6
```

This starts a Docker container running mongo 4.2.6. A quick note:

  • run: runs the image; if it does not exist locally, it is pulled automatically.
  • -p: port mapping; local port 27017 maps to container port 27017, so the Mongo service in Docker is reachable through the local port.
  • -v: maps the local <LocalDirectoryPath> onto the container directory /data/db to persist the database; otherwise the data is lost when the container is deleted.
  • --name: gives the container a name; anonymous containers can be cleaned up with docker container prune.
  • -d: run in the background (detached).
  • mongo:4.2.6: the official Docker Hub image, in image:version form.

Local development then behaves exactly as it did without Docker.

An egg image

Write a Dockerfile:

```dockerfile
# Base image
FROM node:12.16.3

# Pitfall 1: make sure a directory exists before using it
RUN mkdir -p /usr/src/egg

# Do not use cd; Docker builds in layers, so change the context's pwd with WORKDIR
WORKDIR /usr/src/egg

# Copy the contents of the Dockerfile's directory into /usr/src/egg
COPY . .

# Install the npm packages
RUN npm install

# The actual mapping uses -p xxx:7001
EXPOSE 7001

# The default command executed after the container starts
CMD [ "npm", "run", "start" ]
```

Write a .dockerignore file:

```
node_modules
npm-debug.log
.idea
```

Ignoring node_modules:

  1. build sends the directory contents to the Docker daemon; excluding node_modules reduces I/O.
  2. npm packages installed under the local OS and Node version may not suit the Docker environment; excluding them avoids conflicts.
  • Build:
```shell
docker build -t node:egg .
```
  • View the image:
```shell
docker images
```
```
REPOSITORY  TAG  IMAGE ID      CREATED         SIZE
node        egg  ae65b8012120  28 seconds ago  1.12GB
```
  • Run:
```shell
docker run -p 7001:7001 -d node:egg
```
  • View the running container:
```shell
docker ps  # lists running containers and their CONTAINER IDs; -a also shows stopped containers
```
  • View container logs:
```shell
docker logs b0d0c3df5eed
```
  • Enter the container:
```shell
docker exec -it b0d0c3df5eed bash
du -a -d 1 -h  # check file sizes in the container directory
```

Pitfall 2: understand the difference between foreground and background processes; a shell script run in Docker must stay in the foreground, or the container exits.

Optimize the image size

As seen above, the source code may be a few hundred KB, yet the image is over 1 GB. Let's see what can be optimized.

  • Start from a smaller base image

You probably don't need the full toolset (bash, git, etc.) that the default Node image ships with. If you only need a basic Node environment to run the app, use the alpine image.

```diff
- FROM node:12.16.3
+ FROM node:12.16.3-alpine
```
  • npm package optimization
```diff
- RUN npm install
+ # Dependencies not needed at runtime belong in devDependencies
+ RUN npm install --production
```
  • Build:
```shell
docker build -t node:simplfy .
```
  • The result:
```
REPOSITORY  TAG      IMAGE ID      CREATED         SIZE
node        simplfy  8ccafec91d90  28 seconds ago  132MB
```

The image shrank from 1.12 GB to 132 MB, and node_modules from 214 MB to 44.5 MB.

Pitfall 3: alpine-based containers do not ship with bash.

If you need bash or git, you can add them:

```dockerfile
RUN apk update && apk upgrade && \
    apk add --no-cache bash git openssh
```

Or, to avoid bloating the image, use sh instead:

```shell
docker exec -it container_id sh
```

Do you really need an egg image?

For docker-free local development, you can use egg-mongoose to connect to the database:

```
mongodb://127.0.0.1/your-database
```

With Docker, 127.0.0.1 or localhost inside the container refers to the container itself, not to your local machine.

There are two ways to connect to the Docker Mongo:

  1. Use a reachable real IP address, for example mongodb://192.1.2.3/your-database.
  2. Let the containers communicate over a Docker network.

For example, set the real IP in the Dockerfile:

```dockerfile
ENV docker_db=192.1.2.3
```

And build the connection URL from it:

```javascript
`mongodb://${process.env.docker_db}/your-database`
```

Having to keep updating the IP address for every developer and network is not development-friendly.

Think about:

  1. How do we automatically distinguish the local environment from the Docker environment?
  2. How do we isolate local from Docker, e.g. to avoid node_modules conflicts?
  3. How do we keep the efficiency of local tools and environment, yet still quickly spin up Docker to check the deployed result?
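For the first question, one simple approach (a sketch of my own, not from the original project; the helper name is an assumption) is to fall back to localhost whenever the `docker_db` variable is absent:

```javascript
// Sketch: build the MongoDB connection URL from the environment.
// `docker_db` is set only inside Docker (Dockerfile ENV or
// docker-compose `environment`); locally it is absent, so the
// URL falls back to 127.0.0.1 automatically.
function mongoUrl(database) {
  const host = process.env.docker_db || '127.0.0.1';
  return `mongodb://${host}/${database}`;
}

module.exports = { mongoUrl };
```

Locally this yields `mongodb://127.0.0.1/your-database`; inside Compose, with `docker_db=db`, it yields `mongodb://db/your-database`.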

Images also need somewhere to be stored; accidentally pushing one to the public Docker Hub can leak your source code.

Run our own registry?

Most tutorials inevitably open with, or devote large chapters to, building images from a Dockerfile. Given all of the above, can we change our thinking and give up building an image altogether?

docker-compose

docker-compose orchestrates the startup and deployment of multiple containers.

Mongo configuration

  • Create a docker-compose.yml file:
```yaml
version: "3"

services:
  db:
    image: mongo:4.2.6 # image:version
    environment:
      - MONGO_INITDB_ROOT_USERNAME=super # superuser name; log in with e.g. mongo -u godis -p godis@admin --authenticationDatabase admin
      - MONGO_INITDB_ROOT_PASSWORD=xxx # superuser password; sensitive data could also use `secrets`, not covered here
      - MONGO_INITDB_DATABASE=admin # the default db object in the *.js init scripts
    volumes:
      - ./init-mongo.js:/docker-entrypoint-initdb.d/init-mongo.js:ro
      - ./mongo-volume:/data/db
    ports:
      - "27017:27017"
    restart: always
```

A brief description:

  • version: "3" is not your application's version but the Compose file format version Docker supports; see the docs for details.

  • The MONGO_INITDB_ROOT_USERNAME and MONGO_INITDB_ROOT_PASSWORD environment variables enable authorization; Docker automatically creates a database superuser role.

  • *.js scripts placed in /docker-entrypoint-initdb.d/, such as init-mongo.js, run on first start to initialize database roles.

  • MONGO_INITDB_DATABASE is the default db object in the *.js scripts, here pointing at admin.

  • ./mongo-volume:/data/db maps a directory (or volume) to persist the database files.

  • init-mongo.js

```javascript
// https://stackoverflow.com/questions/42912755/how-to-create-a-db-for-mongodb-container-on-start-up
// Create access roles on the user and staff databases.
// db is the database specified by MONGO_INITDB_DATABASE.
db.getSiblingDB('user')
  .createUser({
    user: 'user',
    pwd: 'xx',
    roles: ['readWrite', 'dbAdmin'],
  });
db.getSiblingDB('staff')
  .createUser({
    user: 'staff',
    pwd: 'yy',
    roles: ['readWrite', 'dbAdmin'],
  });
```
  • Password handling in docker-compose is a bit messy: how should the application consume the credentials? (Compose supports `secrets` files, and Node could read those files, but it is a bit of a hassle.) If you have a better approach, please share it.

Node configuration

```yaml
services:
  ...
  server:
    image: node:12.16.3-alpine
    depends_on:
      - db
    volumes:
      - ./:/usr/src/egg
    environment:
      - NODE_ENV=production
      - docker_db=db
    working_dir: /usr/src/egg
    command: /bin/sh -c "npm i && npm run start" # plain "npm i && npm run start" does not work, and bash is unavailable
    ports:
      - "7001:7001"
```

Description:

  • depends_on declares which containers this one depends on; Docker starts the dependencies first.
  • volumes maps a local directory into the container, so local changes are reflected inside it.
  • environment values can be read from process.env.
  • working_dir sets the pwd; unlike Dockerfile, it is created automatically if it does not exist.
  • command is the command line executed after the container starts.

Pitfall 4: command: npm i && npm run start does not support &&, and bash is unavailable (Error: Cannot find module '/bin/bash').

  • Note: how does the Docker Node container talk to the Docker Mongo container?
```yaml
environment:
  ...
  - docker_db=db # db is the service name defined for mongo under services
```
```javascript
`mongodb://${process.env.docker_db}/your-database`
```

Most tutorials solve this with links, but links are officially deprecated and due to be removed; networks are recommended instead. No networks section is configured here, because docker-compose creates a default network named projectname_default for communication between its containers.

  • How do we isolate the local and Docker node_modules mappings?
```diff
services:
  ...
  server:
    image: node:12.16.3-alpine
    depends_on:
      - db
    volumes:
+     - nodemodules:/usr/src/egg/node_modules
      - ./:/usr/src/egg
    environment:
      - NODE_ENV=production
      - docker_db=db
    working_dir: /usr/src/egg
    command: /bin/sh -c "npm i && npm run start"
    ports:
      - "7001:7001"

+ volumes:
+   nodemodules:
```
  • Run docker-compose in the directory containing docker-compose.yml:
```shell
docker-compose up -d
```
  • Why not just use an anonymous volume?
```yaml
volumes:
  - /usr/src/egg/node_modules
  - ./:/usr/src/egg
```
  • Answer: because we may want additional docker-compose files to differentiate environments. In development, for example, there is no need to run npm i on every startup.

Create a docker-compose.notinstall.yml file:

```yaml
version: "3"

services:
  server:
    environment:
      - NODE_ENV=development # overridden
      - NEW_ENV=add # added
    command: npm run start # overridden
```
  • From the second run onward, you can execute the following command to avoid the cost of npm i. With an anonymous volume this fails, because each container gets its own independent node_modules that cannot be shared.
```shell
docker-compose -f docker-compose.yml -f docker-compose.notinstall.yml up -d
```

For more on multiple files, see the Compose documentation on sharing configurations between files and projects.

Finally, wrap the long command lines in package.json scripts so nobody has to memorize them.
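For example (the script names here are my own; adjust to taste):

```json
{
  "scripts": {
    "compose:up": "docker-compose up -d",
    "compose:up:fast": "docker-compose -f docker-compose.yml -f docker-compose.notinstall.yml up -d",
    "compose:down": "docker-compose down"
  }
}
```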

Results

Development and deployment issues are settled for now. The project is still under development, and I will share more after it has run in production for a while. My knowledge is limited, so corrections and better suggestions are welcome.

References

  1. A Better Way to Develop Node.js with Docker
  2. Dockerizing a Node.js web app
  3. docker_practice
  4. YAML
  5. Managing MongoDB on docker with docker-compose
  6. mongoosejs
  7. docker mongo
  8. egg-docker-template
  9. Reduce the image size of node.js applications in two simple steps
  10. Cannot pass any ENV variables from docker to Node process.env