Preface

In the first Docker article we got familiar with the common Docker commands and successfully containerized our first application. In this article we will containerize the back-end API server and the database.

Getting familiar with the workflow

Let's get started with today's content:

Project preparation

# If you followed the previous tutorial, just fetch and check out the network-start branch
git fetch origin network-start
git checkout network-start

# Otherwise, clone the network-start branch directly
git clone -b network-start https://github.com/tuture-dev/docker-dream.git
cd docker-dream

Tip

I initially reused the project I had downloaded for the previous article, but containerization kept failing later on. It turned out that the project from the previous article differs from the one used here, so I recommend cloning the repository above directly.

Compared with containerizing the front-end static page server earlier, there is one extra difficulty: the server and the database are two independent containers, yet the server needs to connect to and access the database. How do we achieve communication between containers?

Network types

A network, as the name implies, enables different containers to communicate with each other. First, let's list the five network drivers that Docker provides:

  • Bridge: The default driver mode, known as “bridge,” is typically used on a single machine (more specifically, a single Docker daemon).
  • Overlay: An overlay network can connect multiple Docker daemons, and is usually used in clusters; it will be covered in detail in the Docker Swarm article.
  • Host: Uses the host's network directly (that is, the network of the machine running Docker); for cluster (swarm) services this requires Docker 17.06 or higher.
  • Macvlan: A macvlan network assigns a MAC address to each container so that it appears as a physical device on the network, which suits applications that need to connect directly to the physical network (such as embedded systems and IoT devices).
  • None: Disables all networks for this container.

Today we will focus on the default Bridge network driver.

Network preparation

Just reading about the drivers is not enough to understand them, so let's get a feel for bridge networks by experimenting with the Alpine Linux image.

Bridge networks fall into two categories:

  • Default network (Docker runtime built-in, not recommended for production).
  • Custom network (recommended).

The default network

We will connect two containers, alpine1 and alpine2, to the default bridge network. Run the following command to view the existing networks:

docker network ls

You should see output similar to the following (note that the IDs on your machine will probably differ):
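For reference, on a machine with no extra networks it should look something like this:

NETWORK ID          NAME                DRIVER              SCOPE
4741cab870f1        bridge              bridge              local
a5ccbe18d2ab        host                host                local
a18954d078b5        none                null                local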

These three default networks correspond to the bridge, host, and none network drivers above. Next we create two containers named alpine1 and alpine2 with the following commands:

docker run -dit --name alpine1 alpine
docker run -dit --name alpine2 alpine

-dit is the combination of -d (detached, or background, mode), -i (interactive mode), and -t (allocate a pseudo-terminal). With this combination, the containers keep running in the background without exiting (effectively just "idling").

Use the docker ps command to confirm that both containers are running in the background:
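The output should look roughly like the following (container IDs and timestamps will differ on your machine, and you may also see containers left over from the previous article, such as client):

CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
27783577c5b6        alpine              "/bin/sh"           2 minutes ago       Up 2 minutes                            alpine2
e7a69f15fa39        alpine              "/bin/sh"           2 minutes ago       Up 2 minutes                            alpine1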

To view details of the default Bridge network, run the following command:

docker network inspect bridge

Network details should be output in JSON format:

[ { "Name": "bridge", "Id": "4741cab870f168d59a3b5eb1d67251d177dd3af14b8cccc39c130a3f0ec5e3a0", "Created": 2020-10-07T11:18:54.983568882+08:00", "Scope": "local", "Driver": "bridge", "EnableIPv6": false, "IPAM": { "Default", "Options", null, "Config" : [{" Subnet configures ":" 172.17.0.0/16 ", "Gateway" : "172.17.0.1}]}," Internal ": false, "Attachable": false, "Ingress": false, "ConfigFrom": { "Network": "" }, "ConfigOnly": false, "Containers": { "1af67ce67ec14f57199a6b4e9183330e017014ad33cad0f783b4a3ab9e763bd0": { "Name": "client", "EndpointID": "1b800afa81ba8e7d15c806c349166c5f24619d71710825860f4d4eef706badf8", "MacAddress": "02:42:ac:11:00:02", "IPv4Address": "" IPv6Address 172.17.0.2/16", ":" "}, "27783577 c5b621ff8d2dfdc2982bad4635ac093ed261aa4f4189788f8efe0807" : {" Name ": "alpine2", "EndpointID": "8539c75c6cc44dceb143aac1936e215e2f80f6b6a412c6f10b33722c4f9cf74b", "MacAddress": "02:42: ac: 11:00:04", "IPv4Address" : "172.17.0.4/16", "IPv6Address" : "" }, "e7a69f15fa39cdd2434dac9ec9e84f7d7b133f062dfe110cfcde297546830043": { "Name": "alpine1", "EndpointID": "0c0310de6ec103386a9485166b54826efa574aa6b3da7996c16385f1d0bb16c5", "MacAddress": "02:42:ac:11:00:03", "IPv4Address": "" IPv6Address 172.17.0.3/16", ":" "}}, "Options" : {" com.docker.net work. Bridge. Default_bridge ": "true", "com.docker.network.bridge.enable_icc": "true", "com.docker.network.bridge.enable_ip_masquerade": "True", "com.docker.net work. Bridge. Host_binding_ipv4" : "0.0.0.0", "com.docker.net work. The bridge. The name" : "docker0", "com.docker.network.driver.mtu": "1500" }, "Labels": {} } ]Copy the code

In this long string of information we should focus on two fields:

  • IPAM: IP Address Management; the gateway address of this network is 172.17.0.1. If you want to learn more about gateways, refer to material on computer networking and TCP/IP. (A one-liner for printing just this field is shown after this list.)
  • Containers: all the containers connected to this network. You can see the alpine1 and alpine2 containers we just created, with IP addresses 172.17.0.3 and 172.17.0.4 respectively (/16 is the subnet mask length, which we can ignore for now).
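For example, if you only want the gateway address, you can ask docker network inspect to print just that field using the --format option (the same Go template mechanism we will use again later):

docker network inspect --format '{{(index .IPAM.Config 0).Gateway}}' bridge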

Enter the alpine1 container:

docker attach alpine1

Note: the attach command can only enter containers that were started in interactive mode (that is, with the -i flag added at startup).

If the command prompt changes to / #, we are inside the container. Let's test network connectivity with the ping command, first by pinging baidu.com (the -c flag sets the number of packets to send; here we set it to 5):
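ping -c 5 baidu.com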

As the ping output shows, all packets arrive with no loss (assuming your network connection is normal). In other words, the container can reach all the networks the host is connected to, including the public internet.

Then, still inside alpine1, ping the IP address of alpine2 that we obtained earlier from docker network inspect (172.17.0.4):

ping -c 5 172.17.0.4

Perfect! We can access the alpine2 container from alpine1. As an exercise, see for yourself whether you can ping alpine1 from the alpine2 container.

Note: if you don't want alpine1 to stop, remember to "detach" (the opposite of attach) with Ctrl + P followed by Ctrl + Q (hold Ctrl, then press P and then Q), instead of exiting with Ctrl + D.

Custom network

The default bridge network has one big problem: containers can only reach each other by IP address. This is cumbersome, becomes hard to manage when there are many containers, and the IP addresses may change every time a container is recreated.

Custom networks solve this problem nicely. Within the same custom network, containers can reach each other by name, because Docker handles the DNS resolution for us; this mechanism is called service discovery. Concretely, we will create a custom network my-net and then create two containers, alpine3 and alpine4, connected to my-net. First create the custom network my-net:

docker network create my-net
# which is equivalent to specifying the bridge driver explicitly:
# docker network create --driver bridge my-net

View all current networks:

docker network ls

You can see the my-net you just created:

NETWORK ID          NAME                DRIVER              SCOPE
4741cab870f1        bridge              bridge              local
a5ccbe18d2ab        host                host                local
5d8856725e7f        my-net              bridge              local
a18954d078b5        none                null                local

Create two new containers alpine3 and alpine4:

docker run -dit --name alpine3 --network my-net alpine
docker run -dit --name alpine4 --network my-net alpine

The --network parameter specifies the network the container should connect to (that is, the my-net we just created).

Note: if you forget to specify the network when first creating and running a container, you can connect it afterwards with docker network connect. Its first argument is the network name (my-net), and the second is the container to connect (alpine3):

docker network connect my-net alpine3

Enter alpine3 and test whether we can ping alpine4 by name:
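The steps are the same as before (run ping from inside the alpine3 shell, and detach with Ctrl + P, Ctrl + Q when you are done):

docker attach alpine3
ping -c 5 alpine4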

You can see that alpine4 is automatically resolved to 172.18.0.3. We can verify this with docker network inspect:

docker network inspect --format '{{range .Containers}}{{.Name}}: {{.IPv4Address}} {{end}}' my-net
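The output should list both containers with their addresses, something like this (the exact addresses depend on your machine):

alpine3: 172.18.0.2/16 alpine4: 172.18.0.3/16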

Cleaning up

We have now walked through the general workflow of both the default and the custom bridge network, so let's finish by removing the containers we created for these experiments:

docker rm -f alpine1 alpine2 alpine3 alpine4

Delete my-net as well:

docker network rm my-net

Hands-on practice

Now let's get to today's main task:

Containerizing the server

Let's start by containerizing the back-end server. Create server/Dockerfile with the following content:

FROM node:10

# Specify the working directory /usr/src/app
WORKDIR /usr/src/app

# Copy package.json into the working directory
COPY package.json .

# Configure the npm registry mirror and install dependencies
RUN npm config set registry https://registry.npm.taobao.org && npm install

# Copy the source code
COPY . .

# Set environment variables (database connection string, server host and port)
ENV MONGO_URI=mongodb://dream-db:27017/todos
ENV HOST=0.0.0.0
ENV PORT=4000

# Open port 4000
EXPOSE 4000

# Start command
CMD [ "node", "index.js" ]

As you can see, this Dockerfile is quite a bit more complex than the one in the previous tutorial. The meaning of each line is explained in the comments; let's look at what's new:

  • The RUN instruction runs arbitrary commands in the image being built; here we install all project dependencies with npm install (and since we configured the npm registry mirror first, installation is faster).
  • The ENV instruction injects environment variables into the container. Here we set the database connection string MONGO_URI (note that we name the database container dream-db; we will create it later) and configure the server's HOST and PORT.
  • The EXPOSE instruction declares that the container listens on port 4000. We did not have to do this when containerizing the front-end project with Nginx, because the Nginx base image already exposes its port; the Node base image used here does not, so we configure it ourselves.
  • The CMD instruction specifies the container's start command (what you see in the COMMAND column of docker ps), which here simply starts the server and keeps it running.

Note

A common mistake first-time container users make is forgetting to change the server’s host from localhost (127.0.0.1) to 0.0.0.0, making the server unreachable from outside the container.

Create server/.dockerignore and ignore the server logs access.log and node_modules as follows:

node_modules
access.log

Then run the following command from the project root directory to build a server image with the name dream-server:

docker build -t dream-server server

The build then starts by downloading the base image, so we may have to wait a while.
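Once the build finishes, you can confirm that the image was created by listing it:

docker image ls dream-server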

Connect the server to the database

Start by creating a custom network dream-net:

docker network create dream-net

Then use the official Mongo image to create and run the MongoDB container. The command is as follows:

docker run --name dream-db --network dream-net -d mongo

We name the container dream-db, connect it to the dream-net network, and run it in detached mode (-d).
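If you want to double-check that the database container is attached to the network, you can reuse the inspect trick from earlier:

docker network inspect --format '{{range .Containers}}{{.Name}}: {{.IPv4Address}} {{end}}' dream-net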

Tip

All containers in the same custom network expose all of their ports to each other, which makes it easy for different applications to communicate. At the same time, those ports cannot be reached from outside the network unless they are explicitly published with -p (--publish), which gives good isolation. Interoperability inside the network combined with isolation from the outside is a major advantage of Docker networks.

Danger

We did not set up any authentication (such as a username and password) when starting the MongoDB container, so anything that can connect to the database can modify the data at will, which is extremely dangerous in a production environment. We will explain how to manage confidential information (such as passwords) in containers in a future article.

Then run the server container:

docker run -p 4000:4000 --name dream-api --network dream-net -d dream-server

Check the log output of the server container to confirm that the MongoDB connection is successful:

docker logs dream-api   

If you see the following output, the connection succeeded:

Server is running on http://0.0.0.0:4000
Mongoose connected.

Containerizing the front-end pages

In the project root directory, build the front-end image with the following command:

docker build -t dream-client client

Then run the container:

docker run -p 8080:80 --name client -d dream-client

Use the docker ps command to check that all three containers are up and running:
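If you prefer a compact view, you can ask docker ps to print only the relevant columns; you should see the three containers client, dream-api, and dream-db with their published ports, roughly like this:

docker ps --format 'table {{.Names}}\t{{.Image}}\t{{.Ports}}'

NAMES               IMAGE               PORTS
client              dream-client        0.0.0.0:8080->80/tcp
dream-api           dream-server        0.0.0.0:4000->4000/tcp
dream-db            mongo               27017/tcp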

Finally, visit localhost:8080 (or your server's domain name on port 8080). Even after refreshing the page several times, the data records are still there, which shows that our full-stack application, database and all, is up and running!