In the previous article we covered containers and images in detail. This article looks at container networking, how to build images with a Dockerfile, and how to deploy front-end projects with Docker.

This article was first published on the WeChat official account [front-end one read]; follow the account for the latest updates.

Network

Many applications deployed in containers need their ports to be reachable from the outside, such as MySQL on port 3306, MongoDB on 27017, and Redis on 6379. Besides external access, containers often need to communicate with each other: a web application container may need to connect to a MySQL or MongoDB container, which involves network communication.

Port mapping

To allow external access to an application inside a container, we can expose ports with either the -P or the -p argument:

$ docker run -d -P nginx
9226326c42067d282f80dbc18a8a36bf54335b61a84b191a29a5f59d25c9fbc3

-P binds a random host port to a port exposed inside the container. Inspecting the container we just created, we can see that random port 49154 on the host is mapped to port 80 inside the container:

$ docker ps -l
CONTAINER ID   IMAGE   CREATED              STATUS              PORTS
9226326c4206   nginx   About a minute ago   Up About a minute   0.0.0.0:49154->80/tcp, :::49154->80/tcp

With docker logs we can see the nginx access log:

$ docker logs 9226326c4206
10.197.92.41 - - [16/Mar/2022:01:40:32 +0000] "GET / HTTP/1.1" 304 0 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/99.0.4844.51 Safari/537.36" "-"

The docker port command lets us quickly see how a container's ports are bound:

$ docker port 9226326c4206
80/tcp -> 0.0.0.0:49154

You can specify a particular port mapping with the -p argument:

$ docker run -d -p 3000:80 nginx

You can also use the ip:hostPort:containerPort format to bind the mapping to a specific IP:

$ docker run -d -p 127.0.0.1:3000:80 nginx

If hostPort is omitted, the host automatically allocates a port, similar to the -P argument:

$ docker run -d -p 127.0.0.1::80 nginx

You can also append /udp to map a UDP port:

$ docker run -d -p 3000:80/udp nginx

Sometimes we want to map several container ports; we can simply repeat the -p argument:

$ docker run -d \
            -p 8000:8000 \
            -p 8010:8010 \
            nginx

Or map a whole range of ports at once:

$ docker run -d -p 8080-8090:8080-8090 nginx

Docker network mode

We often want multiple containers to communicate with each other. To avoid interference between unrelated containers, we can place groups of containers on separate internal networks, so that only containers on the same network can reach one another.

To understand Docker's network modes, let's first look at which networks Docker has. When Docker is installed, it automatically creates three networks: none, host, and bridge.

$ docker network ls
NETWORK ID     NAME      DRIVER    SCOPE
c64d7d519c22   bridge    bridge    local
6306a0b1d150   host      host      local
d058571d4197   none      null      local

Let's look at each of these networks. none turns off the container's networking entirely; we use --network=none to select it:

$ docker run -itd --name=busybox-none --network=none busybox
49f88dd75ae774bea817b27c647506eda5ad581403bfbad0877e8333376ae3b0

$ docker exec 49f88dd75ae7 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop qlen 1000
    link/ipip 0.0.0.0 brd 0.0.0.0
3: ip6tnl0@NONE: <NOARP> mtu 1452 qdisc noop qlen 1000
    link/tunnel6 00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00 brd 00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00

BusyBox is a package that bundles more than 300 common Linux commands and tools into a single binary, and is known as the Swiss Army knife of Linux. Here we mainly use its ip command to inspect the container's network.

We can see that apart from the lo loopback interface, the container has no network card: it can neither receive nor send traffic. Let's confirm with the ping command:

$ docker exec 49f88dd75ae7 ping xieyufei.com
ping: bad address 'xieyufei.com'

This network is an isolated island, which makes us wonder: what is such a sealed-off network good for?

Closed means isolated. Applications that require high security and no networking can use the none network. For example, a container whose sole purpose is to generate random passwords can run on the none network to reduce the risk of the passwords being stolen.

When Docker is installed, a virtual bridge named docker0 is created on the host. If --network is not specified, new containers are attached to docker0 by default. We can list the host's bridges with brctl:

$ brctl show
bridge name   bridge id           STP enabled   interfaces
docker0       8000.02426b8674a4   no

The bridge can be thought of as a router: it connects networks of the same kind and forwards the traffic between them, while isolating them from outside access. Containers attached to the same bridge can communicate with each other. Again, let's use BusyBox to check the container's network:

$ docker run -itd --name=busybox-bridge --network=bridge busybox
f45e26e5bb6f94f50061f22937abb132fb9de968fdd59fe7ad524bd81eb2f1b0

$ docker exec f45e26e5bb6f ip a
181: eth0@if182: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue 
    link/ether 02:42:ac:11:00:06 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.6/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever

We can see an eth0 interface in the container, with IP address 172.17.0.6.

Finally, host mode disables Docker's network isolation: the container shares the host's network stack. We again check with BusyBox:

$ docker run -itd --name=busybox-host --network=host busybox
2d1f6d7a01f1afe1e725cf53423de1d79d261a3b775f6f97f9e2a62de8f6bb74

$ docker exec 2d1f6d7a01f1 ip a
2: enp4s0f2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether 74:d0:2b:ec:96:8a brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.100/24 brd 192.168.0.255 scope global dynamic noprefixroute enp4s0f2
       valid_lft 37533sec preferred_lft 37533sec

We find that the container's IP address is 192.168.0.100, the same as the host's. Host mode is similar to VMware's bridged mode: the container has no independent IP address or ports, but uses the host's directly.

Note that in host mode the -p parameter has no effect, because the container uses the host's IP address and ports; if you pass it anyway, Docker warns:

WARNING: Published ports are discarded when using host network mode

Because host mode shares the host's network, it is the simplest and lowest-latency network model: container processes talk to the host's network interfaces directly, with performance on par with the physical machine. On the other hand, host mode offers no room for custom network configuration and management, and all containers compete for the same IP address and port space, which hurts host resource utilization. It is therefore best suited to containers with demanding network-performance requirements.

Container interconnection

Let's test how two containers communicate on the same bridge. First we create a custom bridge network:

$ docker network create -d bridge my-net

If you no longer need the bridge, you can remove it with the rm subcommand:

$ docker network rm my-net

Now create two containers and attach them to the my-net network:

$ docker run -itd --name busybox1 --network my-net busybox
$ docker run -itd --name busybox2 --network my-net busybox

We ping each container from the other and find that they can reach each other:

$ docker exec busybox1 ping busybox2
PING busybox2 (172.23.0.3): 56 data bytes
64 bytes from 172.23.0.3: seq=0 ttl=64 time=0.139 ms
64 bytes from 172.23.0.3: seq=1 ttl=64 time=0.215 ms

$ docker exec busybox2 ping busybox1
PING busybox1 (172.23.0.2): 56 data bytes
64 bytes from 172.23.0.2: seq=0 ttl=64 time=0.090 ms
64 bytes from 172.23.0.2: seq=1 ttl=64 time=0.224 ms

Dockerfile

We briefly mentioned the FROM and RUN instructions in the previous article, but Dockerfile provides other powerful instructions, which we'll now cover in depth. First, recall that a Dockerfile is a text file used to build an image; it contains the instructions needed for the build. With docker build, the -f argument can point to a Dockerfile anywhere on the filesystem:

$ docker build -f /path/to/Dockerfile .

The FROM instruction

The FROM instruction specifies the base image, which determines what the image is built on and what environment the Dockerfile has to work with. Most Dockerfiles start with a FROM instruction; its syntax is:

FROM <image> [AS <name>]
FROM <image>:<tag> [AS <name>]

A Dockerfile must begin with FROM, with one exception: an ARG instruction may appear before FROM to define a variable that FROM uses:

ARG NG_VERSION=1.19.3
FROM nginx:${NG_VERSION}
CMD /bin/bash

Multi-stage builds

Building an image often naturally splits into stages. For example, when building an image for a Vue project, the compile stage packages the dist files, and the production stage serves those dist files as static resources. Without multi-stage builds, we would need two Dockerfiles producing two images, and keeping the extra build image around is plainly wasteful.

Starting from version 17.05, Docker supports multi-stage builds: we can use multiple FROM instructions in one Dockerfile, each with a different base image, and each starting a new build stage. In a multi-stage build we can copy artifacts from one stage into another, keeping only what we need in the final image.

FROM node

# ... some operations

FROM nginx

# ... some operations

COPY --from=0 . .

The second FROM instruction starts a new build stage. COPY --from=0 means copying files from the previous stage (stage 0); by default, build stages are unnamed and are referenced by integer index starting from 0. We can also name a stage by adding AS <name> to the FROM instruction:

FROM node as compile

FROM nginx as serve

COPY --from=compile . .

In a later example, we will demonstrate how to optimize our build process using a multi-stage build.

Base image selection

Since the base image determines the size of the final image, choosing an appropriate one matters. If you look up the node tags on hub.docker.com, you'll find version numbers accompanied by obscure suffixes such as alpine and slim. What do they mean? Let's take a quick look.

The difference between Docker images lies in the underlying operating system

First, if you specify no tag at all, you get latest, the full image; it is a safe default if you don't yet know anything about the other variants.

Next is the slim variant, a minimal installation containing only the specific tools needed to run the given container. By dropping rarely used tools, the image is smaller. If server space is tight and the full version isn't needed, this image is a good fit; however, test thoroughly when using it.

Then there is the alpine variant we often see. It is based on the Alpine Linux project, a community-developed, security-oriented lightweight distribution. The advantage is that the operating system is tiny, so images built on it are very small. Its disadvantages are just as obvious: it ships with almost no extra packages, and it replaces glibc with the lighter musl libc, which can cause compatibility issues. So if we use this variant, thorough testing is required.
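To make this concrete, here is a minimal sketch (not from the original article) of how missing tools are added on an alpine-based image via Alpine's apk package manager; the chosen packages are arbitrary examples:

```dockerfile
# Sketch: alpine images ship almost no extra packages, so tools the
# application needs (curl and bash here, as examples) must be
# installed explicitly via apk, Alpine's package manager.
FROM node:alpine

# --no-cache keeps the apk package index out of the image layer
RUN apk add --no-cache curl bash

CMD ["node", "--version"]
```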

We can also find differences in the sizes of these three versions:

$ docker image ls node
REPOSITORY   TAG       IMAGE ID       CREATED        SIZE
node         slim      ffedf4f28439   5 days ago     241MB
node         alpine    d2b383edbff9   3 months ago   170MB
node         latest    a283f62cb84b   3 months ago   993MB

Then there are tags named after releases of Debian, the famously stable free operating system; these labels correspond to Debian release codenames:

  • Bullseye: Debian 11
  • Buster: Debian 10
  • Stretch: Debian 9
  • Jessie: Debian 8

The RUN command

The RUN instruction executes commands while the image is being built, and supports two formats:

# shell format
RUN <command>
# exec format
RUN ["executable", "param1", "param2"]

For example, suppose we install curl on Ubuntu:

FROM ubuntu:18.04
RUN apt-get update
RUN apt-get install -y curl

We know that Dockerfile instructions build in layers, each with its own cache. Suppose next time we add another package, wget:

FROM ubuntu:18.04
RUN apt-get update
RUN apt-get install -y curl wget

On the next build, apt-get update is not executed again because its cached layer is reused; the install step may then pull in outdated curl and wget versions, since the package lists were never refreshed.

Therefore, we usually put update and install in the same instruction, ensuring that each build installs the latest packages; it also reduces the number of image layers and the image size:

RUN apt-get update && apt-get install -y curl wget

WORKDIR: the working directory

The WORKDIR instruction specifies the working directory; the current directory of every subsequent layer is changed to it. If the directory does not exist, WORKDIR creates it automatically.

Newcomers often write a Dockerfile as if it were a shell script, which leads to mistakes like this:

FROM node:10.15.3-alpine

RUN mkdir /app && cd /app

RUN echo "hello" > world.txt

The intent is for echo to redirect the string hello into /app/world.txt, but the file actually ends up in the root directory, not /app. In a shell script, consecutive commands run in the same process environment, so an earlier command (like cd) affects the later ones. A Dockerfile, however, builds layer by layer: the two RUN instructions execute in completely different containers, so the cd in the first RUN has no effect on the second.

So when we need to change the working directory for subsequent layers, we use the WORKDIR instruction, preferably with an absolute path:

FROM node:10.15.3-alpine

WORKDIR /app

RUN echo "hello" > world.txt

The resulting world.txt is in the /app directory.

The COPY instruction

The COPY instruction copies files from the build context into a target path in the image, similar to Linux's cp command. Its syntax is:

COPY [--chown=<user>:<group>] <src>... <dest>
COPY [--chown=<user>:<group>] ["<src1>",... "<dest>"]

The source can be a single file, multiple files, or a wildcard pattern:

COPY package.json /app

COPY package.json app.js /app

COPY src/*.js /app

Note that COPY copies the contents of a directory, not the directory itself, unlike Linux's cp. For example, copying the src folder like this:

COPY src /app

we find that the files under src are copied directly into /app, without the src directory itself; so we need to write:

COPY src /app/src

CMD command

The CMD instruction sets the default program (and its arguments) that the image runs. It also has two formats:

CMD <command>
CMD ["executable", "param1", "param2", ...]

Both CMD and RUN execute commands and look very similar; what is the difference? RUN executes commands during docker build, while the image is being built, for things like creating folders (mkdir) or installing software (apt-get).

CMD, by contrast, runs at docker run time, when the container starts, not during docker build. Its purpose is to specify the container's default main process; when that process exits, the container exits.

A container starts only one main process, so a Dockerfile can have only one effective CMD instruction. For example, a container running a Node program needs to start it at the end:

CMD ["node", "app.js"]
# or
CMD npm run start

The ENTRYPOINT instruction

The ENTRYPOINT instruction, like CMD, specifies the container's startup program and arguments; a Dockerfile can likewise have only one effective ENTRYPOINT. Once ENTRYPOINT is specified, the meaning of CMD changes: instead of being run directly, CMD's content is passed to ENTRYPOINT as arguments, equivalent to:

<ENTRYPOINT> "<CMD>"

So what is the benefit? Let's look at a concrete example: an image that uses curl to query the current public IP address.

FROM ubuntu:18.04

# Switch to an Aliyun Ubuntu mirror
RUN sed -i s@/archive.ubuntu.com/@/mirrors.aliyun.com/@g /etc/apt/sources.list
RUN apt-get clean
RUN apt-get update \
    && apt-get install -y curl \
    && rm -rf /var/lib/apt/lists/*

CMD ["curl", "-s", "http://myip.ipip.net"]

We build the image with docker build -t myip .; then, whenever we want to know our IP address, we just run:

$ docker run --rm myip
Current IP: 218.4.251.37  From: Suzhou, Jiangsu, China Telecom

This lets us use the image like a command. But if we also want to display the HTTP headers, we would naturally add curl's -i argument:

$ docker run --rm myip -i
docker: Error response from daemon: failed to create shim: OCI runtime create failed: container_linux.go:380: starting container process caused: exec: "-i": executable file not found in $PATH: unknown.

The -i argument is not appended to CMD; it replaces CMD entirely, so Docker tries to execute -i as the container's command, which does not exist, hence the error. To add -i we would have to retype the full curl command. The ENTRYPOINT instruction solves this problem:

FROM ubuntu:18.04

RUN sed -i s@/archive.ubuntu.com/@/mirrors.aliyun.com/@g /etc/apt/sources.list
RUN apt-get clean
RUN apt-get update \
    && apt-get install -y curl \
    && rm -rf /var/lib/apt/lists/*

ENTRYPOINT ["curl", "-s", "http://myip.ipip.net"]

Let’s try the -i argument again:

$ docker run --rm myip -i
HTTP/1.1 200 OK
Date: Fri, 01 Apr 2022 07:24:21 GMT
Content-Type: text/plain; charset=utf-8
Content-Length: 67
Connection: keep-alive
X-Via-JSL: fdc330b,-
Set-Cookie: __jsluid_h=9f0775bbcb4cc97b161093b4c66dd766; max-age=31536000; path=/; HttpOnly
X-Cache: bypass

Current IP address: 218.4.251.37  From: Suzhou, Jiangsu, China Telecom

You can see that the HTTP header information is also displayed.
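As a variation, ENTRYPOINT can be combined with CMD so that CMD supplies overridable default arguments; below is a hedged sketch based on the myip example (behavior as I understand the ENTRYPOINT/CMD interaction, not from the original article):

```dockerfile
# Sketch: ENTRYPOINT fixes the program; CMD provides default arguments
# that any arguments after `docker run myip` replace wholesale.
FROM ubuntu:18.04
RUN apt-get update \
    && apt-get install -y curl \
    && rm -rf /var/lib/apt/lists/*

ENTRYPOINT ["curl", "-s"]
# default argument; e.g. `docker run --rm myip -i http://myip.ipip.net`
# would replace it to also print the response headers
CMD ["http://myip.ipip.net"]
```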

The VOLUME instruction

The VOLUME instruction declares mount points for data that should live outside the image layers, such as database storage files, configuration files, or files and directories the container creates. The syntax is:

VOLUME ["<path1>", "<path2>", ...]
VOLUME <path>

We can declare in advance that certain directories are to be mounted as anonymous volumes. That way, even if the user does not specify a mount at runtime, the application still runs normally without writing large amounts of data into the container's storage layer. For example:

VOLUME /data

The /data directory is automatically mounted as an anonymous volume when the container runs, and anything written to /data is not recorded in the container's storage layer, keeping the storage layer stateless.

$ docker run -d -v mydata:/data xxxx

When running the container, a named volume or local directory can override the anonymous-volume mount point. Note that mounting differs slightly between Windows and Linux (and macOS). On Linux, with its single tree-shaped directory structure, we refer to the directory directly; if it doesn't exist, Docker creates it automatically:

$ docker run -d -v /home/root/docker-data:/data xxxx

On Windows, the path must include the drive letter:

$ docker run -d -v d:/docker-data:/data xxxx

The EXPOSE instruction

The EXPOSE instruction declares the ports the containerized application listens on. It is only a declaration: it does not actually publish the ports when the container runs. The syntax is:

EXPOSE <port1> [<port2> ...]

Writing this declaration in a Dockerfile has two benefits: it helps image users understand which port the service listens on, making port mapping easier to configure; and when random port mapping is used at runtime (docker run -P), the EXPOSEd ports are the ones mapped automatically.
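A small sketch tying EXPOSE back to the -P flag from the networking section; the comments describe expected behavior under these assumptions, and the host port shown is only an example:

```dockerfile
# Sketch: EXPOSE only documents the listening port; publishing still
# requires -p or -P at run time.
FROM nginx:latest
EXPOSE 80

# Build and run (illustrative):
#   docker build -t my-nginx .
#   docker run -d -P my-nginx    # -P maps a random host port to 80
#   docker port <container-id>   # e.g. 80/tcp -> 0.0.0.0:49153
```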

ENV command

The ENV instruction sets environment variables, with two syntaxes:

ENV <key> <value>
ENV <key1>=<value1> <key2>=<value2>...

The variables can be used directly by later instructions such as RUN, and are also visible to the application at runtime:

ENV NODE_VERSION 7.2.0

RUN curl -SLO "https://nodejs.org/dist/v$NODE_VERSION/node-v$NODE_VERSION-linux-x64.tar.xz" \
  && curl -SLO "https://nodejs.org/dist/v$NODE_VERSION/SHASUMS256.txt.asc"

The environment variable NODE_VERSION is defined once and used multiple times in the RUN instruction; to upgrade the Node version later, we only need to update 7.2.0 in one place.

ARG instruction

The ARG instruction, like ENV, sets variables; the difference is that ARG variables exist only in the build environment and are gone when the container runs.
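A minimal sketch of ARG in action (the variable name and values here are made up for illustration):

```dockerfile
# Sketch: NODE_ENV exists only while the image is being built;
# it is not an environment variable in the running container
# (use ENV for that).
FROM node:alpine
ARG NODE_ENV=production

# build-time instructions can read the value
RUN echo "built for ${NODE_ENV}" > /build-info.txt

# Override at build time (illustrative):
#   docker build --build-arg NODE_ENV=staging -t my-app .
```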

Deploying front-end projects

Vue project

After developing a front-end project locally, we need to deploy it on a server so others can access the pages. Usually, operations configures Nginx on the server to serve our packaged project as static resources. In an earlier Nginx deep-dive we explained how to build a static server with Nginx; here we write the Nginx configuration ourselves and deploy the project with Docker.

First create nginx configuration file default.conf in our project directory:

server {
    listen 80;
    server_name _;
    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
        try_files $uri $uri/ /index.html =404;
    }
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}

The configuration serves our packaged static resources from /usr/share/nginx/html, so we need to copy the dist folder into that directory; try_files is there to support Vue's history routing mode.

Create another Dockerfile in the project directory and write the following:

FROM nginx:latest

COPY default.conf /etc/nginx/conf.d/

COPY dist/ /usr/share/nginx/html/

EXPOSE 80

After the project is packaged and the dist directory is generated, we can build the image:

$ docker build -t vue-proj .

Next start our server based on this image:

$ docker run -itd -p 8080:80 vue-proj

Our application is now up; visit http://localhost:8080 to see the deployed site.

Express project

We also have Node projects, such as Express, Egg.js, or Nuxt, that can be deployed with Docker; for these we need to copy all of the project's files into the image.

First, let's mock up a minimal Express entry file, app.js:

const express = require("express");

const app = express();
const PORT = 8080;

app.get("/".(req, res) = > {
  res.send("hello express");
});

app.listen(PORT, () = > {
  console.log(`listen on port:${PORT}`);
});

Since we need to copy the entire project’s files below, we can ignore some files with the.dockerignore file:

.git
node_modules

Then write our Dockerfile:

FROM node:10.15.3-alpine

WORKDIR /app

COPY package*.json ./

RUN npm install --registry=https://registry.npm.taobao.org

COPY . .

EXPOSE 8080

CMD npm run start

Notice that we copy package*.json first, install the dependencies, and only then copy the whole project. Why? As you've probably guessed, it comes back to Docker's layered build and its cache.

Right: if we copied package*.json together with the code, then any code change, even without new dependencies, would invalidate the layer and force Docker to reinstall everything. By splitting the copies, the dependency layer stays cached and the hit ratio improves. Building the image and starting the container work as before, so we won't repeat them.

Multi-stage build for the Vue project

Above, we packaged the Vue project manually to generate the dist files and then deployed them with Docker. We also mentioned multi-stage builds under the FROM instruction; let's see how they can optimize the process.

We still keep the nginx configuration file default.conf in the project, but this time, instead of generating the dist files by hand, we move the build step into the Dockerfile:

FROM node:12 as compile

WORKDIR /app

COPY package.json ./

RUN npm i --registry=https://registry.npm.taobao.org

COPY . .

RUN npm run build

FROM nginx:latest as serve

COPY default.conf /etc/nginx/conf.d/

COPY --from=compile /app/dist /usr/share/nginx/html/

EXPOSE 80

In the first stage, compile, the npm run build command generates the dist files; the second stage copies them into the nginx directory. The final image is the one produced by the last FROM instruction: the nginx server.

A multi-stage build runs a lot of commands, and you might wonder whether the final image ends up large; let's check with docker images:

$ docker images
REPOSITORY            TAG                  IMAGE ID       CREATED          SIZE
multi-compile         latest               a37e4d71562b   11 seconds ago   157MB

You can see that it is about the same size as building with Nginx alone.

If you found this useful, check out my Nuggets (Juejin) page, or visit Xie Xiaofei's blog for more articles.