In practice, different services need to communicate with each other, for example a back-end API server and a database. Fortunately, Docker provides a Network mechanism that makes it easy to connect containers. In this article you will learn how Docker networks work, how to use both the default network and custom networks, and become a dream architect who can connect multiple “dreams”!

In the last tutorial, we introduced the two key concepts of images and containers, familiarized you with common Docker commands, and successfully containerized your first application. But that was just the prologue to our dream-building journey. Next, we will containerize the back-end API server and the database.

We have prepared the application code for you; run the following commands:

# If you followed the last tutorial, the repository has already been cloned
cd docker-dream
git fetch origin network-start
git checkout network-start

# If you plan to start directly with this tutorial
git clone -b network-start https://github.com/tuture-dev/docker-dream.git
cd docker-dream

Compared with containerizing the front-end static page server earlier, there is a new difficulty: the server and the database are two independent containers, yet the server needs to connect to and query the database. How do we make containers communicate with each other?

In Inception, connecting different dreams is impossible; luckily, with the help of Docker Network, Docker can do exactly that for containers.

Tip

In the early days, Docker containers could be connected via the --link option of the docker run command, but Docker has officially declared this method obsolete, and it may be removed entirely (see the documentation). This article explains the approach Docker officially recommends for connecting containers: user-defined networks.

Network types

A network, as the name implies, enables different containers to communicate with each other. Let's first list the five network drivers Docker provides (a short sketch of creating networks with each driver follows the list):

  • bridge: the default driver, namely a "bridge" network, usually used on a single host (more precisely, with a single Docker daemon)
  • overlay: an overlay network connects multiple Docker daemons into a cluster; we will highlight this in future articles on Docker Swarm
  • host: uses the network of the host (that is, the machine running Docker) directly; for cluster (swarm) services this requires Docker 17.06+
  • macvlan: macvlan networks make each container appear as a physical device by assigning it a MAC address, suitable for applications that want to connect directly to the physical network (such as embedded systems, Internet of Things devices, and so on)
  • none: disables all networking for the container
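As a quick illustration, here is a sketch of choosing the driver at network creation time. The network names are made up, and overlay and macvlan need extra setup (swarm mode, a real parent interface and subnet) that goes beyond this article:

# The driver is picked at creation time with --driver (bridge is the default)
docker network create --driver bridge my-bridge-net

# Requires swarm mode to be active first (docker swarm init)
docker network create --driver overlay my-overlay-net

# macvlan needs a parent interface and subnet details (values here are hypothetical)
docker network create --driver macvlan \
  --subnet 192.168.1.0/24 --gateway 192.168.1.1 \
  -o parent=eth0 my-macvlan-net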

This article will focus on the default Bridge network driver. That’s right, the bridge that connects different dreams.

A first taste

We will get a feel for bridge networks through a few small experiments. Unlike the previous article, we will use the Alpine Linux image as our experimental raw material because it is:

  • Very lightweight and compact (the entire image is only about 5MB)
  • More fully featured than the Swiss Army knife BusyBox

Bridge networks can be divided into two categories:

  1. The default network (built into the Docker runtime; not recommended for production)
  2. Custom networks (recommended)

Let’s try it out separately.

The default network

Here’s what this little experiment looks like:

We will connect two containers, alpine1 and alpine2, to the default bridge network. Run the following command to view the existing networks:

docker network ls

You should see the following output (note that the IDs on your machine will probably differ):

NETWORK ID          NAME                DRIVER              SCOPE
cb33efa4d163        bridge              bridge              local
010deedec029        host                host                local
772a7a450223        none                null                local

These three default networks correspond to the bridge, host, and none network types above. Next we create two containers named alpine1 and alpine2 with the following commands:

docker run -dit --name alpine1 alpine
docker run -dit --name alpine2 alpine

-dit is the combination of -d (detached mode), -i (interactive mode), and -t (allocate a pseudo-terminal). With this combination, the container keeps running in the background without exiting (yes, "idling").
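For reference, here is the same command spelled out with the long-form flags; this is simply an equivalent way of writing -dit:

docker run --detach --interactive --tty --name alpine1 alpine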

Use the docker ps command to confirm that the two containers are running in the background:

CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
501559d2fab7        alpine              "/bin/sh"           2 seconds ago       Up 1 second                             alpine2
18bed3178732        alpine              "/bin/sh"           3 seconds ago       Up 2 seconds                            alpine1

To view details of the default Bridge network, run the following command:

docker network inspect bridge

Network details should be output in JSON format:

[{"Name": "bridge"."Id": "cb33efa4d163adaa61d6b80c9425979650d27a0974e6d6b5cd89fd743d64a44c"."Created": "The 2020-01-08 T07:29:11. 102566065 z"."Scope": "local"."Driver": "bridge"."EnableIPv6": false."IPAM": {
      "Driver": "default"."Options": null."Config": [{"Subnet": "172.17.0.0/16"."Gateway": "172.17.0.1"}},"Internal": false."Attachable": false."Ingress": false."ConfigFrom": {
      "Network": ""
    },
    "ConfigOnly": false."Containers": {
      "18bed3178732b5c7a37d7ad820c111fac72a6b0f47844401d60a18690bd37ee5": {
        "Name": "alpine1"."EndpointID": "9c7d8ee9cbd017c6bbdfc023397b23a4ce112e4957a0cfa445fd7f19105cc5a6"."MacAddress": "02:42:ac:11:00:02"."IPv4Address": "172.17.0.2/16"."IPv6Address": ""
      },
      "501559d2fab736812c0cf181ed6a0b2ee43ce8116df9efbb747c8443bc665b03": {
        "Name": "alpine2"."EndpointID": "da192d61e4b2df039023446830bf477cc5a9a026d32938cb4a350a82fea5b163"."MacAddress": "02:42:ac:11:00:03"."IPv4Address": "172.17.0.3/16"."IPv6Address": ""}},"Options": {
      "com.docker.network.bridge.default_bridge": "true"."com.docker.network.bridge.enable_icc": "true"."com.docker.network.bridge.enable_ip_masquerade": "true"."com.docker.network.bridge.host_binding_ipv4": "0.0.0.0"."com.docker.network.bridge.name": "docker0"."com.docker.network.driver.mtu": "1500"
    },
    "Labels": {}}]Copy the code

We focus on two fields:

  • IPAM: IP Address Management; you can see that the gateway address is 172.17.0.1 (space is limited here, so readers who want to learn more about gateways can consult material on computer networking and TCP/IP)
  • Containers: all the containers connected to this network; you can see the alpine1 and alpine2 we just created, with IP addresses 172.17.0.2 and 172.17.0.3 respectively (the trailing /16 is the subnet mask length)

Tip

If you are familiar with Go template syntax, you can use the -f (--format) parameter to filter out unwanted information. For example, to view only the gateway address of the bridge network:

$ docker network inspect --format '{{json .IPAM.Config }}' bridge
[{"Subnet":"172.17.0.0/16","Gateway":"172.17.0.1"}]

Let's go into the alpine1 container:

docker attach alpine1

Note

The attach command can only enter containers that were set up to run interactively (that is, started with the -i parameter).

If you see the command prompt change to / #, we are now inside the container. Let's test network connectivity with the ping command. First, ping the tuture.co site (the -c parameter sets the number of packets to send, here 5):

/ # ping -c 5 tuture.co
PING tuture.co (150.109.19.98): 56 data bytes
64 bytes from 150.109.19.98: seq=2 ttl=37 time=65.294 ms
64 bytes from 150.109.19.98: seq=3 ttl=37 time=65.425 ms
64 bytes from 150.109.19.98: seq=4 ttl=37 time=65.331 ms

--- tuture.co ping statistics ---
5 packets transmitted, 3 packets received, 40% packet loss
round-trip min/avg/max = 65.294/65.350/65.425 ms

OK, a few packets were lost, but we can reach the outside world (depending on your network environment, it is even normal for all packets to be lost). As you can see, the container has access to all the networks the host is connected to (including localhost).

Then ping the alpine2 container by the IP address 172.17.0.3 that we found earlier with docker network inspect:

/ # ping -c 5 172.17.0.3
PING 172.17.0.3 (172.17.0.3): 56 data bytes
64 bytes from 172.17.0.3: seq=0 ttl=64 time=0.147 ms
64 bytes from 172.17.0.3: seq=1 ttl=64 time=0.103 ms
64 bytes from 172.17.0.3: seq=2 ttl=64 time=0.102 ms
64 bytes from 172.17.0.3: seq=3 ttl=64 time=0.125 ms
64 bytes from 172.17.0.3: seq=4 ttl=64 time=0.123 ms

--- 172.17.0.3 ping statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max = 0.102/0.120/0.147 ms

Perfect! We can reach the alpine2 container from alpine1. As an exercise, check for yourself whether you can ping alpine1 from the alpine2 container.

Note

If you don't want alpine1 to stop, "detach" from the container (the opposite of attach) with Ctrl + P followed by Ctrl + Q (hold Ctrl, then press P and then Q) instead of Ctrl + D.
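Incidentally, the detach key sequence is configurable. A minimal sketch, with an arbitrary key choice:

docker attach --detach-keys="ctrl-x" alpine1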

Custom network

If you followed along, you may have noticed that the default bridge network has one big problem: containers can only be reached by IP address. This is cumbersome, hard to manage once there are many containers, and the IP addresses may change every time a container is recreated.

Custom networks solve this problem nicely. On the same custom network, containers can reach each other by name, because Docker does the DNS resolution for us, a mechanism called service discovery. Specifically, we will create a custom network my-net and attach two new containers, alpine3 and alpine4, to it, as shown below.

Let’s get started. First create a custom network my-net:

docker network create my-net
# Since the default network driver is bridge, this is equivalent to:
# docker network create --driver bridge my-net

View all current networks:

docker network ls

You can see the my-net you just created:

NETWORK ID          NAME                DRIVER              SCOPE
cb33efa4d163        bridge              bridge              local
010deedec029        host                host                local
feb13b480be6        my-net              bridge              local
772a7a450223        none                null                local

Create two new containers, alpine3 and alpine4:

docker run -dit --name alpine3 --network my-net alpine
docker run -dit --name alpine4 --network my-net alpine

As you can see, we specify the network the container should connect to (the my-net we just created) with the --network parameter.

Tip

If you forgot to specify the network when creating and running the container, you can attach it afterwards with the docker network connect command (the first argument is the network name my-net, the second is the container to connect, alpine3):

docker network connect my-net alpine3
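The reverse operation also exists: docker network disconnect detaches a container from a network without stopping it. A minimal sketch:

docker network disconnect my-net alpine3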

Enter alpine3 and test whether we can ping alpine4 by name:

$ docker attach alpine3
/ # ping -c 5 alpine4
PING alpine4 (172.19.0.3): 56 data bytes
64 bytes from 172.19.0.3: seq=0 ttl=64 time=0.247 ms
64 bytes from 172.19.0.3: seq=1 ttl=64 time=0.176 ms
64 bytes from 172.19.0.3: seq=2 ttl=64 time=0.180 ms
64 bytes from 172.19.0.3: seq=3 ttl=64 time=0.176 ms
64 bytes from 172.19.0.3: seq=4 ttl=64 time=0.161 ms

--- alpine4 ping statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max = 0.161/0.188/0.247 ms

You can see that alpine4 is automatically resolved to 172.19.0.3. We can verify this with docker network inspect:

$ docker network inspect --format '{{range .Containers}}{{.Name}}: {{.IPv4Address}} {{end}}' my-net
alpine4: 172.19.0.3/16 alpine3: 172.19.0.2/16

Indeed, alpine4's IP is 172.19.0.3.

Some finishing touches

Now that the experiments are done, let's destroy all the containers we created:

docker rm -f alpine1 alpine2 alpine3 alpine4

Delete my-net as well:

docker network rm my-net
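Alternatively, if you are sure no other custom networks are in use, docker network prune removes all networks not used by at least one container. A sketch (it asks for confirmation before deleting):

docker network prune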

Hands-on practice

Containerizing the server

Let's start by containerizing the back-end server. Create server/Dockerfile with the following content:

FROM node:10

# Specify the working directory as /usr/src/app
WORKDIR /usr/src/app

# Copy package.json into the working directory
COPY package.json .

# Install npm dependencies
RUN npm config set registry https://registry.npm.taobao.org && npm install

# Copy the source code
COPY . .

# Set environment variables (database connection string, server host and port)
ENV MONGO_URI=mongodb://dream-db:27017/todos
ENV HOST=0.0.0.0
ENV PORT=4000

# Open port 4000
EXPOSE 4000

# Set the image's run command
CMD [ "node", "index.js" ]

As you can see, this Dockerfile is quite a bit more complex than the one in the last tutorial. The meaning of each line is explained in the comments; let's walk through what's new:

  • The RUN instruction runs any command inside the image being built; here we install all project dependencies with npm install (we configure an npm registry mirror first so installation is faster)
  • The ENV instruction injects environment variables into the container; here we set the database connection string MONGO_URI (note that the database host name is dream-db, the container we will create shortly) and configure the server's HOST and PORT (see the sketch after this list)
  • The EXPOSE instruction declares that port 4000 is open. We did not need it when containerizing the front-end project with Nginx, because the Nginx base image already exposes port 80; the Node base image used here does not, so we configure it ourselves
  • The CMD instruction specifies the container's start command (the COMMAND column in docker ps), which naturally means keeping the server running. This is covered in more detail in the "Recall and sublimation" section below
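To make the ENV point concrete, here is a sketch (with hypothetical values) showing that the defaults baked in by ENV can be overridden at container start with -e; the container would still need the network and database from the next section to fully work:

# Override the PORT default from the Dockerfile at run time
docker run -e PORT=5000 -p 5000:5000 -d dream-server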

Note

A common mistake for first-time container users is forgetting to change the server's host from localhost (127.0.0.1) to 0.0.0.0, which makes the server unreachable from outside the container (I wasted a lot of time on this when I was learning).

Create server/.dockerignore to ignore the server log access.log and node_modules:

node_modules
access.log

Build the server image by running the following command in the project root directory, specifying the name dream-server:

docker build -t dream-server server

Connecting the server to the database

Using what we just learned, let's create a custom network dream-net for our dream list application:

docker network create dream-net

Create and run the MongoDB container using the official Mongo image as follows:

docker run --name dream-db --network dream-net -d mongo

We name the container dream-db (remember that name), connect it to the dream-net network, and run it in the background (-d).

Tip

You might ask: why is no port mapping specified when starting this container? Containers on the same custom network expose all of their ports to one another, so applications can communicate with each other more conveniently. At the same time, no port of a container is reachable from outside the network unless it is explicitly published with -p (--publish), which gives good isolation. Interoperability within the network plus isolation between inside and outside is a major advantage of Docker networks.
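You can verify this isolation yourself. A minimal sketch (the nc utility on the host is an assumption; any TCP client would do):

# A throwaway container on dream-net reaches dream-db by name
docker run --rm --network dream-net alpine ping -c 2 dream-db

# From the host, 27017 was never published, so this check should fail
nc -z -w 3 localhost 27017 || echo "27017 not reachable from the host"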

Danger!

We did not set up any authentication (such as a username and password) when starting the MongoDB container, so any client that can connect to the database can modify the data at will, which is extremely dangerous in a production environment. We will explain how to manage confidential information (such as passwords) in containers in a future article.
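As a small preview (a sketch with placeholder credentials; do not run it alongside the existing dream-db, since the name would conflict), the official mongo image can create a root user from environment variables. With authentication enabled, MONGO_URI would also need to carry the credentials:

docker run --name dream-db --network dream-net \
  -e MONGO_INITDB_ROOT_USERNAME=admin \
  -e MONGO_INITDB_ROOT_PASSWORD=change-me \
  -d mongo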

Then run the server container:

docker run -p 4000:4000 --name dream-api --network dream-net -d dream-server

Check the log output of the server container to confirm that the MongoDB connection is successful:

$ docker logs dream-api
Server is running on http://0.0.0.0:4000
Mongoose connected.

You can then test the server API with Postman or curl (at localhost:4000); we omit this here to save space. Of course, you can also skip it, because soon we will be able to manipulate the data through the front end!
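If you do want to try it, here is a sketch; the /todos route is an assumption, so check the routes actually defined in server/index.js:

curl http://localhost:4000/todos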

Containerizing the front-end page

Just as in the previous article, containerize the front end from the project root directory:

docker build -t dream-client client

Then run the container:

docker run -p 8080:80 --name client -d dream-client

Run docker ps to check that all three containers are up and running:

Finally, access localhost:8080:

As you can see, even after refreshing the page several times, the data records are still there, which shows that our full-stack application with a database is up and running! Let's enter the dream-db container interactively and query some data with the mongo shell:

$ docker exec -it dream-db mongo
MongoDB shell version v3.4.10
connecting to: mongodb://127.0.0.1:27017
MongoDB server version: 3.4.10
Welcome to the MongoDB shell.
For interactive help, type "help".
> use todos
switched to db todos
> db.getCollection('todos').find()
{ "_id" : ObjectId("5e171fda820251a751aae6f5"), "completed" : true."text" : "Learn about Docker Network"."timestamp" : ISODate("The 2020-01-09 T12:43:06. 865 z"), "__v": 0} {"_id" : ObjectId("5e171fe08202517c11aae6f6"), "completed" : true."text" : "Set up the default network"."timestamp" : ISODate("The 2020-01-09 T12: take 205 z"), "__v": 0} {"_id" : ObjectId("5e171fe3820251d1a4aae6f7"), "completed" : false."text" : "Build a Custom Network"."timestamp" : ISODate("The 2020-01-09 T12: the men. 962 z"), "__v": 0}Copy the code

Perfect! Then press Ctrl + D to exit.

Recall and sublimation

Understanding commands: the theme of the dream

Each container, from the moment it is created, is destined to run one command, just as every dream is destined to have a theme, a tone. When running docker ps, you probably noticed that the COMMAND column shows the command each container runs. So how do we specify a container's command? And can we run new commands in it afterwards?

First, there are two main ways to specify a container's command:

Providing a default command through the Dockerfile

When building an image, we can specify a command at the end of the Dockerfile with the CMD instruction, such as the ["node", "index.js"] command when building the back-end server. The command can be written in three forms:

  • CMD ["executable","param1","param2"](exec format,recommended)
  • CMD ["param1","param2"](Used in conjunction with Entrypoint)
  • CMD command param1 param2(Shell format)

Here executable stands for the path to an executable file, such as node or /bin/sh, and param1 and param2 are its parameters. We will discuss the use of ENTRYPOINT when we cover advanced Dockerfile usage later in this series.

Note

When using the first (exec) form, you must use double quotes, because the entire command is parsed as a JSON array.

Tip

Unlike the shell form, the exec form does not invoke a command shell, so environment variables are not substituted. For example, CMD ["echo", "$HOME"] will print the literal text $HOME; if you need shell processing, write CMD ["sh", "-c", "echo $HOME"] instead.

Specifying the command when creating or running a container

When creating or running a container, you can override the command specified at image build time by appending command arguments, for example:

docker run nginx echo hello

By passing the command argument echo hello, the container prints hello and exits instead of running the default nginx -g 'daemon off;'.

Of course, as we did in the first article, we can also specify the command as bash (or sh, mongo, node, or another interactive program) and, together with the -it options, enter the container and operate it interactively.

Running new commands with exec

With docker exec, we can make an already-running container execute a new command. For example, in our earlier dream-db container, we can create a database backup with the mongodump command:

docker exec dream-db mongodump

We can then enter dream-db interactively with docker exec -it and see that a dump directory has been created:

$ docker exec -it dream-db bash
root@c51d9355d8da:/# ls dump/
admin  todos

Again, press Ctrl + D to exit.
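To actually get the backup out of the container, docker cp can copy the dump directory to the host. A sketch (./backup is an arbitrary target path):

docker cp dream-db:/dump ./backup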

Tip

You may be wondering why pressing Ctrl + D during an interactive docker run stops the container, while exiting a docker exec session does not. The reason is that docker exec -it starts a new terminal process in the existing container without affecting the original main command process; as long as the main process does not end, the container keeps running.
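You can observe this with docker top, which lists the processes running inside a container; an exec session shows up as an extra process alongside the original main command. A sketch:

# Compare the output before and after running: docker exec -it dream-db bash
docker top dream-db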

Tip: How to easily remember dozens of Docker commands?

In this hands-on section we encountered many new Docker commands. How can we remember them all? In fact, most Docker commands follow this format:

docker <object type> <operation> [other options and arguments]
  • Object type: the Docker object types we have met so far include container, image, and network
  • Operation: operations fall into two broad categories: 1) operations that apply to all object types, such as ls, rm, inspect, and prune; 2) object-specific operations, such as run for containers, build for images, and connect for networks
  • Other options and arguments: use the help command or --help to view the specific options and arguments of each command

Because some commands are so common, Docker also provides convenient shorthand; for example, docker container ls, which lists all running containers, can be shortened to docker ps.
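A few of these equivalences, for reference (both forms are valid commands):

docker container ls     # same as: docker ps
docker image ls         # same as: docker images
docker container rm     # same as: docker rm
docker image rm         # same as: docker rmi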

Let's first review the commands on container objects (red represents operations that apply to all objects, blue represents container-specific operations):

A review of the commands on the Image object:

Finally, a review of commands on Network objects:

This concludes this tutorial. But our journey is only beginning; there are still plenty of problems: 1) the front-end application cannot be used from any environment other than the local machine (because the back-end API address it accesses is hard-coded as localhost); 2) we have not actually deployed to a remote machine; 3) MongoDB is still running naked (no password set). Don't panic, we will solve these in the next tutorials.

Want more exciting hands-on tutorials? Come and visit the Tuture community.