One of the things that makes Docker containers and services so powerful is that you can wire them together or connect them to non-Docker workloads. Docker containers and services do not even need to know that they are deployed on Docker or that their peers are also Docker workloads. Whether your Docker host runs Linux, Windows, or a combination of the two, you can use Docker to manage them in a platform-independent way.

This topic defines some basic Docker networking concepts and helps you design and deploy applications to take full advantage of these capabilities.

Network drivers

Docker’s networking subsystem is pluggable and driver-based. Several drivers exist by default and provide core networking functionality:

  • bridge: The default network driver. If you do not specify a driver, this is the type of network you are creating. Bridge networks are typically used when your applications run in standalone containers that need to communicate (see the creation examples after this list).
  • host: For standalone containers, removes network isolation between the container and the Docker host, and uses the host’s networking directly. host is only available for swarm services on Docker 17.06 and later.
  • overlay: Overlay networks connect multiple Docker daemons together and enable swarm services to communicate with each other. You can also use overlay networks to facilitate communication between a swarm service and a standalone container, or between two standalone containers on different Docker daemons. This strategy removes the need to do operating-system-level routing between these containers.
  • macvlan: Macvlan networks allow you to assign a MAC address to a container, making it appear as a physical device on your network. The Docker daemon routes traffic to containers by their MAC addresses. The macvlan driver is sometimes the best choice when dealing with legacy applications that expect to be directly connected to the physical network rather than routed through the Docker host’s network stack.
  • none: Disables all networking for this container. Usually used together with a custom network driver. none is not available for swarm services.
  • Network plugins: You can install and use third-party network plugins with Docker. These plugins are available from Docker Hub or from third-party vendors. See the vendor’s documentation for installing and using a given network plugin.
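As a quick illustration of how a driver is selected, each of these is passed to docker network create with the --driver (or -d) flag. The network names below are placeholders, and the macvlan example assumes a host interface named eth0:

$ docker network create --driver bridge my-bridge-net
$ docker network create --driver overlay --attachable my-overlay-net   # requires swarm mode (covered later in this topic)
$ docker network create --driver macvlan \
    --subnet=172.16.86.0/24 --gateway=172.16.86.1 \
    -o parent=eth0 my-macvlan-net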

Summary of Network Drivers

  • User-defined bridge networks are the best choice when you need multiple containers to communicate on the same Docker host.
  • Host networks are the best choice when the network stack should not be isolated from the Docker host, but you want other aspects of the container to be isolated.
  • Overlay networks are the best choice when you need containers running on different Docker hosts to communicate, or when multiple applications work together as swarm services.
  • Macvlan networks are the best choice when you are migrating from a VM setup or need your containers to look like physical hosts on your network, each with a unique MAC address.
  • Third-party network plugins allow you to integrate Docker with specialized network stacks.

Networking of standalone Docker containers

This section contains three different guides. You can run them on different systems, but for the last two, you need an additional Docker host running elsewhere.

  • Use the default bridge network: demonstrates how to use the default bridge network that Docker sets up for you automatically. This network is not the best choice for production systems.
  • Use user-defined bridge networks: shows how to create and use your own custom bridge networks to connect containers running on the same Docker host. This is recommended for standalone containers running in production.

Although overlay networks are generally used for swarm services, Docker 17.06 and later also allow you to use an overlay network for standalone containers. That is covered as part of the overlay networks tutorial below.

Use the default bridge network

In this example, you start two different alpine containers on the same Docker host and run some tests to see how they communicate with each other. You need to have Docker installed and running.

1. Open a terminal window. List the current networks before you do anything else. Here is what you should see if you have never added a network or initialized a swarm on this Docker daemon. You may see different networks, but you should at least see these (the network IDs will differ):

$ docker network ls

NETWORK ID          NAME                DRIVER              SCOPE
17e324f45964        bridge              bridge              local
6ed54d316334        host                host                local
7092879f2cc8        none                null                local

The default bridge network is listed, along with host and none. The latter two are not fully fledged networks, but are used to start a container connected directly to the Docker daemon host’s networking stack, or to start a container with no network devices. This tutorial connects two containers to the bridge network.
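As a quick aside (not part of this walkthrough), you can see the difference between these drivers by listing the interfaces a throwaway container gets: on the none network only a loopback device appears, while --network host exposes the host’s own interfaces (on a Linux Docker host):

$ docker run --rm --network none alpine ip addr show
$ docker run --rm --network host alpine ip addr show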

2. Start two alpine containers running ash, which is Alpine’s default shell rather than bash. The -dit flags mean to start the container detached (in the background), interactive (with the ability to type into it), and with a TTY (so you can see the input and output). Because the containers are started detached, you will not be connected to them right away. Instead, their container IDs are printed. Because you have not specified any --network flags, the containers connect to the default bridge network.

$ docker run -dit --name alpine1 alpine ash

$ docker run -dit --name alpine2 alpine ash

Check that both containers are actually started:

$ docker container ls

CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
602dbf1edc81        alpine              "ash"               4 seconds ago       Up 3 seconds                            alpine2
da33b7aa74b0        alpine              "ash"               17 seconds ago      Up 16 seconds                           alpine1

3. Check the bridge network to see which containers are connected.

$ docker network inspect bridge

[
    {
        "Name": "bridge"."Id": "17e324f459648a9baaea32b248d3884da102dde19396c25b30ec800068ce6b10"."Created": "The 2017-06-22 T20:27:43. 826654485 z"."Scope": "local"."Driver": "bridge"."EnableIPv6": false."IPAM": {
            "Driver": "default"."Options": null,
            "Config": [{"Subnet": "172.17.0.0/16"."Gateway": "172.17.0.1"}},"Internal": false."Attachable": false."Containers": {
            "602dbf1edc81813304b6cf0a647e65333dc6fe6ee6ed572dc0f686a3307c6a2c": {
                "Name": "alpine2"."EndpointID": "03b6aafb7ca4d7e531e292901b43719c0e34cc7eef565b38a6bf84acf50f38cd"."MacAddress": "02:42:ac:11:00:03"."IPv4Address": "172.17.0.3/16"."IPv6Address": ""
            },
            "da33b7aa74b0bf3bda3ebd502d404320ca112a268aafe05b4851d1e3312ed168": {
                "Name": "alpine1"."EndpointID": "46c044a645d6afc42ddd7857d19e9dcfb89ad790afb5c239a35ac0af5e8a5bc5"."MacAddress": "02:42:ac:11:00:02"."IPv4Address": "172.17.0.2/16"."IPv6Address": ""}},"Options": {
            "com.docker.network.bridge.default_bridge": "true"."com.docker.network.bridge.enable_icc": "true"."com.docker.network.bridge.enable_ip_masquerade": "true"."com.docker.network.bridge.host_binding_ipv4": "0.0.0.0"."com.docker.network.bridge.name": "docker0"."com.docker.network.driver.mtu": "1500"
        },
        "Labels": {}}]Copy the code

Near the top, information about the bridge network is listed, including the IP address of the gateway between the Docker host and the bridge network (172.17.0.1). Under the Containers key, each connected container is listed along with its IP address (172.17.0.2 for alpine1 and 172.17.0.3 for alpine2).
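If you only want the name-to-address mapping, you can filter the inspect output with a Go template. This is a small convenience sketch; the field names match the JSON shown above:

$ docker network inspect --format '{{range .Containers}}{{printf "%s -> %s\n" .Name .IPv4Address}}{{end}}' bridge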

4. The containers are running in the background. Use the docker attach command to connect to alpine1.

$ docker attach alpine1

/ #

The prompt changes to # to indicate that you are the root user within the container. Use the ip addr show command to view alpine1’s network interfaces from within the container:

# ip addr show

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
27: eth0@if28: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.2/16 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:acff:fe11:2/64 scope link
       valid_lft forever preferred_lft forever

The first interface is the loopback device. Ignore that for now. Note that the IP address of the second interface is 172.17.0.2, which is the same address as Alpine1 shown in the previous step.

5. From within alpine1, ping baidu.com to make sure you can reach the internet. The -c 2 flag limits the command to two ping attempts.

# ping -c 2 baidu.com

PING baidu.com (39.156.69.79): 56 data bytes
64 bytes from 39.156.69.79: icmp_seq=0 ttl=46 time=34.864 ms
64 bytes from 39.156.69.79: icmp_seq=1 ttl=46 time=41.880 ms

--- baidu.com ping statistics ---
2 packets transmitted, 2 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 34.864/38.383/41.880/3.497 ms

6. Now try to ping the second container. First, ping it by its IP address, 172.17.0.3:

# ping -c 2 172.17.0.3

PING 172.17.0.3 (172.17.0.3): 56 data bytes
64 bytes from 172.17.0.3: seq=0 ttl=64 time=0.086 ms
64 bytes from 172.17.0.3: seq=1 ttl=64 time=0.094 ms

--- 172.17.0.3 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.086/0.090/0.094 ms

It works. Next, try pinging the alpine2 container by container name. This will fail.

# ping -c 2 alpine2

ping: bad address 'alpine2'

7. Detach from Alpine1 without stopping it by using the detach sequence CTRL + P CTRL + Q (hold CTRL and type P followed by Q). If you wish, attach to Alpine2 and repeat steps 4, 5, and 6 there, replacing Alpine1 with Alpine2.

8. Stop and delete both containers.

$ docker container stop alpine1 alpine2
$ docker container rm alpine1 alpine2

Keep in mind that it is not recommended to use the default bridge network for production.

Use a user-defined bridge network

In this example, we again start two alpine containers, but this time we attach them to a user-defined network called alpine-net which we create first. These containers are not connected to the default bridge network at all. We then start a third alpine container that is connected to the default bridge network but not to alpine-net, and a fourth alpine container that is connected to both networks.

1. Create the alpine-net network. You do not need the --driver bridge flag since bridge is the default driver, but this example shows how to specify it:

$ docker network create --driver bridge alpine-net
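If you need the network to use a specific address range, you can also pass --subnet and --gateway. This is just an illustrative sketch; the range and the name alpine-net-custom are arbitrary, and the rest of this tutorial uses the plain alpine-net created above:

$ docker network create \
  --driver bridge \
  --subnet 172.20.0.0/16 \
  --gateway 172.20.0.1 \
  alpine-net-custom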

2. List Docker networks

$ docker network ls

NETWORK ID          NAME                DRIVER              SCOPE
e9261a8c9a19        alpine-net          bridge              local
17e324f45964        bridge              bridge              local
6ed54d316334        host                host                local
7092879f2cc8        none                null                local

Inspect the alpine-net network. This shows you its IP address and the fact that no containers are connected to it:


$ docker network inspect alpine-net

[
    {
        "Name": "alpine-net"."Id": "e9261a8c9a19eabf2bf1488bf5f208b99b1608f330cff585c273d39481c9b0ec"."Created": "The 2017-09-25 T21: pining sickness. 620046142 z"."Scope": "local"."Driver": "bridge"."EnableIPv6": false."IPAM": {
            "Driver": "default"."Options": {},
            "Config": [{"Subnet": "172.18.0.0/16"."Gateway": "172.18.0.1"}},"Internal": false."Attachable": false."Containers": {},
        "Options": {},
        "Labels": {}}]Copy the code

Note that the gateway for this network is 172.18.0.1, whereas the gateway for the default bridge network is 172.17.0.1. The exact IP address on your system may differ.

3. Create your four containers. Notice the --network flags. You can only connect to one network during the docker run command, so you need to use docker network connect afterward to connect alpine4 to the default bridge network as well.

$ docker run -dit --name alpine1 --network alpine-net alpine ash

$ docker run -dit --name alpine2 --network alpine-net alpine ash

$ docker run -dit --name alpine3 alpine ash

$ docker run -dit --name alpine4 --network alpine-net alpine ash

$ docker network connect bridge alpine4
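For reference, docker network connect can be undone at any time with docker network disconnect. Do not run this now, since the remaining steps assume alpine4 stays connected to both networks:

$ docker network disconnect bridge alpine4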

Verify that all containers are running:

$ docker container ls

CONTAINER ID        IMAGE               COMMAND             CREATED              STATUS              PORTS               NAMES
156849ccd902        alpine              "ash"               41 seconds ago       Up 41 seconds                           alpine4
fa1340b8d83e        alpine              "ash"               51 seconds ago       Up 51 seconds                           alpine3
a535d969081e        alpine              "ash"               About a minute ago   Up About a minute                       alpine2
0a02c449a6e9        alpine              "ash"               About a minute ago   Up About a minute                       alpine1

4. Inspect the default bridge network and the alpine-net network again:

$ docker network inspect bridge

[
    {
        "Name": "bridge"."Id": "17e324f459648a9baaea32b248d3884da102dde19396c25b30ec800068ce6b10"."Created": "The 2017-06-22 T20:27:43. 826654485 z"."Scope": "local"."Driver": "bridge"."EnableIPv6": false."IPAM": {
            "Driver": "default"."Options": null,
            "Config": [{"Subnet": "172.17.0.0/16"."Gateway": "172.17.0.1"}},"Internal": false."Attachable": false."Containers": {
            "156849ccd902b812b7d17f05d2d81532ccebe5bf788c9a79de63e12bb92fc621": {
                "Name": "alpine4"."EndpointID": "7277c5183f0da5148b33d05f329371fce7befc5282d2619cfb23690b2adf467d"."MacAddress": "02:42:ac:11:00:03"."IPv4Address": "172.17.0.3/16"."IPv6Address": ""
            },
            "fa1340b8d83eef5497166951184ad3691eb48678a3664608ec448a687b047c53": {
                "Name": "alpine3"."EndpointID": "5ae767367dcbebc712c02d49556285e888819d4da6b69d88cd1b0d52a83af95f"."MacAddress": "02:42:ac:11:00:02"."IPv4Address": "172.17.0.2/16"."IPv6Address": ""}},"Options": {
            "com.docker.network.bridge.default_bridge": "true"."com.docker.network.bridge.enable_icc": "true"."com.docker.network.bridge.enable_ip_masquerade": "true"."com.docker.network.bridge.host_binding_ipv4": "0.0.0.0"."com.docker.network.bridge.name": "docker0"."com.docker.network.driver.mtu": "1500"
        },
        "Labels": {}}]Copy the code

Containers alpine3 and alpine4 are connected to the default bridge network.

$ docker network inspect alpine-net

[
    {
        "Name": "alpine-net"."Id": "e9261a8c9a19eabf2bf1488bf5f208b99b1608f330cff585c273d39481c9b0ec"."Created": "The 2017-09-25 T21: pining sickness. 620046142 z"."Scope": "local"."Driver": "bridge"."EnableIPv6": false."IPAM": {
            "Driver": "default"."Options": {},
            "Config": [{"Subnet": "172.18.0.0/16"."Gateway": "172.18.0.1"}},"Internal": false."Attachable": false."Containers": {
            "0a02c449a6e9a15113c51ab2681d72749548fb9f78fae4493e3b2e4e74199c4a": {
                "Name": "alpine1"."EndpointID": "c83621678eff9628f4e2d52baf82c49f974c36c05cba152db4c131e8e7a64673"."MacAddress": "02:42:ac:12:00:02"."IPv4Address": "172.18.0.2/16"."IPv6Address": ""
            },
            "156849ccd902b812b7d17f05d2d81532ccebe5bf788c9a79de63e12bb92fc621": {
                "Name": "alpine4"."EndpointID": "058bc6a5e9272b532ef9a6ea6d7f3db4c37527ae2625d1cd1421580fd0731954"."MacAddress": "02:42:ac:12:00:04"."IPv4Address": "172.18.0.4/16"."IPv6Address": ""
            },
            "a535d969081e003a149be8917631215616d9401edcb4d35d53f00e75ea1db653": {
                "Name": "alpine2"."EndpointID": "198f3141ccf2e7dba67bce358d7b71a07c5488e3867d8b7ad55a4c695ebb8740"."MacAddress": "02:42:ac:12:00:03"."IPv4Address": "172.18.0.3/16"."IPv6Address": ""}},"Options": {},
        "Labels": {}}]Copy the code

Containers alpine1, alpine2, and alpine4 are connected to the alpine-net network.

5. On user-defined networks like alpine-net, containers can not only communicate by IP address but can also resolve a container name to an IP address. This capability is called automatic service discovery. Let’s connect to alpine1 and test it out. alpine1 should be able to resolve alpine2 and alpine4 (and alpine1 itself) to IP addresses.

$ docker container attach alpine1

# ping -c 2 alpine2

PING alpine2 (172.18.0.3): 56 data bytes
64 bytes from 172.18.0.3: seq=0 ttl=64 time=0.085 ms
64 bytes from 172.18.0.3: seq=1 ttl=64 time=0.090 ms

--- alpine2 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.085/0.087/0.090 ms

# ping -c 2 alpine4

PING alpine4 (172.18.0.4): 56 data bytes
64 bytes from 172.18.0.4: seq=0 ttl=64 time=0.076 ms
64 bytes from 172.18.0.4: seq=1 ttl=64 time=0.091 ms

--- alpine4 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.076/0.083/0.091 ms

# ping -c 2 alpine1

PING alpine1 (172.18.0.2): 56 data bytes
64 bytes from 172.18.0.2: seq=0 ttl=64 time=0.026 ms
64 bytes from 172.18.0.2: seq=1 ttl=64 time=0.054 ms

--- alpine1 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.026/0.040/0.054 ms
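You can also run the same checks from a second terminal on the Docker host, without attaching, by using docker exec. On user-defined networks, name resolution is handled by Docker’s embedded DNS server, which the containers reach at 127.0.0.11:

$ docker exec alpine1 ping -c 2 alpine2
$ docker exec alpine1 cat /etc/resolv.conf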

6. From alpine1, you should not be able to connect to alpine3 at all, since it is not on the alpine-net network.

# ping -c 2 alpine3

ping: bad address 'alpine3'

Not only that, but you cannot connect to alpine3 from alpine1 by its IP address either. Look back at the docker network inspect output for the bridge network, find alpine3’s IP address (172.17.0.2), and try to ping it.

# ping -c 2 172.17.0.2

PING 172.17.0.2 (172.17.0.2): 56 data bytes

--- 172.17.0.2 ping statistics ---
2 packets transmitted, 0 packets received, 100% packet loss

Detach from Alpine1 using the detach sequence CTRL + P CTRL + Q (hold CTRL and type P followed by Q).

7. Remember that alpine4 is connected to both the default bridge network and alpine-net. It should be able to reach all of the other containers. However, you will need to address alpine3 by its IP address. Attach to alpine4 and run the tests.

$ docker container attach alpine4

# ping -c 2 alpine1

PING alpine1 (172.18.0.2): 56 data bytes
64 bytes from 172.18.0.2: seq=0 ttl=64 time=0.074 ms
64 bytes from 172.18.0.2: seq=1 ttl=64 time=0.082 ms

--- alpine1 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.074/0.078/0.082 ms

# ping -c 2 alpine2

PING alpine2 (172.18.0.3): 56 data bytes
64 bytes from 172.18.0.3: seq=0 ttl=64 time=0.075 ms
64 bytes from 172.18.0.3: seq=1 ttl=64 time=0.080 ms

--- alpine2 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.075/0.077/0.080 ms

# ping -c 2 alpine3

ping: bad address 'alpine3'

# ping -c 2 172.17.0.2

PING 172.17.0.2 (172.17.0.2): 56 data bytes
64 bytes from 172.17.0.2: seq=0 ttl=64 time=0.089 ms
64 bytes from 172.17.0.2: seq=1 ttl=64 time=0.075 ms

--- 172.17.0.2 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.075/0.082/0.089 ms

# ping -c 2 alpine4

PING alpine4 (172.18.0.4): 56 data bytes
64 bytes from 172.18.0.4: seq=0 ttl=64 time=0.033 ms
64 bytes from 172.18.0.4: seq=1 ttl=64 time=0.064 ms

--- alpine4 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.033/0.048/0.064 ms

8. As a final test, make sure all of your containers can connect to the internet by pinging baidu.com. Since you are currently attached to alpine4, start there. Next, detach from alpine4 and attach to alpine3 (which is connected only to the default bridge network) and try again. Finally, attach to alpine1 (which is connected only to the alpine-net network) and try again.

# ping -c 2 baidu.com

PING baidu.com (172.217.3.174): 56 data bytes
64 bytes from 172.217.3.174: seq=0 ttl=41 time=9.778 ms
64 bytes from 172.217.3.174: seq=1 ttl=41 time=9.634 ms

--- baidu.com ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 9.634/9.706/9.778 ms

CTRL+p CTRL+q

$ docker container attach alpine3

# ping -c 2 baidu.com

PING baidu.com (172.217.3.174): 56 data bytes
64 bytes from 172.217.3.174: seq=0 ttl=41 time=9.706 ms
64 bytes from 172.217.3.174: seq=1 ttl=41 time=9.851 ms

--- baidu.com ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 9.706/9.779/9.851 ms

CTRL+p CTRL+q

$ docker container attach alpine1

# ping -c 2 baidu.com

PING baidu.com (172.217.3.174): 56 data bytes
64 bytes from 172.217.3.174: seq=0 ttl=41 time=9.606 ms
64 bytes from 172.217.3.174: seq=1 ttl=41 time=9.603 ms

--- baidu.com ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 9.603/9.604/9.606 ms

CTRL+p CTRL+q

9. Finally, stop and remove all of the containers, then remove the alpine-net network.

$ docker container stop alpine1 alpine2 alpine3 alpine4

$ docker container rm alpine1 alpine2 alpine3 alpine4

$ docker network rm alpine-net

Networking with overlay networks

This series of tutorials deals with networking for swarm services. The topic contains four sections that you can run on different platforms, but for the last two, you need an additional Docker host.

  • Use the default overlay network: demonstrates how to use the default overlay network that Docker sets up for you automatically when you initialize or join a swarm. This network is not the best choice for production systems.
  • Use user-defined overlay networks: shows how to create and use your own custom overlay networks to connect services. This is recommended for services running in production.
  • Use an overlay network for standalone containers: shows how to use an overlay network to communicate between standalone containers on different Docker daemons.
  • Communicate between a container and a swarm service: sets up communication between a standalone container and a swarm service, using an attachable overlay network. This is supported in Docker 17.06 and later.

To prepare

These require that you have at least a single-node swarm, which means that you have started Docker and run docker swarm init on the host. You can run the examples on a multi-node swarm as well. The last example requires Docker 17.06 or higher.

Use the default overlay network

In this example, you start an alpine service and examine the characteristics of the network from the point of view of the individual service containers.

This tutorial does not go into operating-system-specific details about how overlay networks are implemented, but instead focuses on how overlay networks function from the point of view of a service.

To prepare

This tutorial requires three physical or virtual Docker hosts that can communicate with each other and are all running a new installation of Docker 17.03 or higher. This tutorial assumes that all three hosts are running on the same network and that no firewalls are involved.

These hosts will be referred to as manager, worker-1, and worker-2. The manager host acts as both a manager and a worker, which means it can both run service tasks and manage the swarm. worker-1 and worker-2 act only as workers.

If you do not have three hosts, an easy solution is to set up three Ubuntu hosts on a cloud provider such as Amazon EC2, all on the same network, and to allow all communication between hosts on that network (for example, using an EC2 security group). Then follow the installation instructions for Docker Engine - Community on Ubuntu.

demo

Create a swarm

At the end of this procedure, all three Docker hosts will be joined to the swarm and connected together using an overlay network called ingress.

1. On manager, initialize the swarm. If the host has only one network interface, the --advertise-addr flag is optional.

$ docker swarm init --advertise-addr=<IP-ADDRESS-OF-MANAGER>

Make a note of the text that is printed, as it contains the token used to add worker-1 and worker-2 to the swarm. It is a good idea to store the token in a password manager.
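If you lose the token, you can print it again at any time from the manager:

$ docker swarm join-token worker
$ docker swarm join-token manager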

2. On worker-1, join the swarm. If the host has only one network interface, the --advertise-addr flag is optional.

$ docker swarm join --token <TOKEN> \
  --advertise-addr <IP-ADDRESS-OF-WORKER-1> \
  <IP-ADDRESS-OF-MANAGER>:2377

3. On worker-2, join the swarm. If the host has only one network interface, the --advertise-addr flag is optional.

$ docker swarm join --token <TOKEN> \
  --advertise-addr <IP-ADDRESS-OF-WORKER-2> \
  <IP-ADDRESS-OF-MANAGER>:2377

4. On manager, list all the nodes. This command can only be run from a manager.

$ docker node ls

ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS
d68ace5iraw6whp7llvgjpu48 *   ip-172-31-34-146    Ready               Active              Leader
nvp5rwavvb8lhdggo8fcf7plg     ip-172-31-35-151    Ready               Active
ouvx2l7qfcxisoyms8mtkgahw     ip-172-31-36-89     Ready               Active

You can also filter by role using the –filter flag:

$ docker node ls --filter role=manager

ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS
d68ace5iraw6whp7llvgjpu48 *   ip-172-31-34-146    Ready               Active              Leader

$ docker node ls --filter role=worker

ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS
nvp5rwavvb8lhdggo8fcf7plg     ip-172-31-35-151    Ready               Active
ouvx2l7qfcxisoyms8mtkgahw     ip-172-31-36-89     Ready               Active

5. List the networks on manager, worker-1, and worker-2. Notice that each of them now has an overlay network called ingress and a bridge network called docker_gwbridge. Only the listing for manager is shown here:

$ docker network ls

NETWORK ID          NAME                DRIVER              SCOPE
495c570066be        bridge              bridge              local
961c6cae9945        docker_gwbridge     bridge              local
ff35ceda3643        host                host                local
trtnl4tqnc3n        ingress             overlay             swarm
c8357deec9cb        none                null                local

The docker_gwbridge network connects the ingress network to the Docker host’s network interface so that traffic can flow to and from swarm managers and workers. If you create swarm services and do not specify a network, they are connected to the ingress network. It is recommended to use separate overlay networks for each application or group of applications that work together. In the next procedure, you create two overlay networks and connect a service to each of them.

Create a service

1. On the manager, create a new overlay network called nginx-net:

$ docker network create -d overlay nginx-net

You do not need to create an overlay network on other nodes because the overlay network is created automatically when one of the nodes starts running a service task that requires the overlay network.

2. On manager, create a 5-replica nginx service connected to nginx-net. The service publishes port 80 to the outside world. All of the service task containers can communicate with each other without opening any ports.

Services can only be created on the manager

$ docker service create \
  --name my-nginx \
  --publish target=80,published=80 \
  --replicas=5 \
  --network nginx-net \
  nginx

ingress is the default publish mode, used when you do not specify a mode for the --publish flag. It means that if you browse to port 80 on manager, worker-1, or worker-2, you are connected to port 80 on one of the 5 service tasks, even if no task is currently running on the node you browse to. If you want to publish the port using host mode instead, you can add mode=host to the --publish flag. However, you should then also use --mode global instead of --replicas=5, because only one service task can bind a given port on a given node.
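For illustration, a host-mode variant of the service would look roughly like the sketch below. The name my-nginx-host is arbitrary, and you should not run this alongside the my-nginx service above, because port 80 on every node is already bound by the ingress-mode publication:

$ docker service create \
  --name my-nginx-host \
  --publish mode=host,target=80,published=80 \
  --mode global \
  --network nginx-net \
  nginx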

3. Run docker service ls to monitor service startup progress, which may take a few seconds.
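To see where each of the five tasks was scheduled, docker service ps is handy as well:

$ docker service ls
$ docker service ps my-nginx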

4. Inspect the nginx-net network on manager, worker-1, and worker-2. Remember that you did not need to create it manually on worker-1 and worker-2, because Docker did it for you. The output is long, but notice the Containers and Peers sections. Containers lists all of the service tasks (or standalone containers) connected to the overlay network from that host.

5. From the manager, run docker service inspect my-nginx and notice the information about the ports and endpoints used by the service.
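If you only want the published ports, you can narrow the output with a format filter. This is a sketch; the .Endpoint.Ports path assumes the default JSON layout of docker service inspect:

$ docker service inspect --format '{{json .Endpoint.Ports}}' my-nginx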

6. Create a new network nginx-net-2, then update the service to use this network instead of nginx-net:

$ docker network create -d overlay nginx-net-2
$ docker service update \
  --network-add nginx-net-2 \
  --network-rm nginx-net \
  my-nginx

7. Run docker service ls to verify that the service has been updated and that all tasks have been redeployed. Run docker network inspect nginx-net to verify that no containers are connected to it. Run the same command for nginx-net-2 and notice that all the service task containers are connected to it.

Overlay networks are created automatically on demand on swarm worker nodes, but they are not removed automatically.

8. Remove the service and the networks. From the manager, run the following commands. The manager will instruct the workers to remove the networks automatically.

$ docker service rm my-nginx
$ docker network rm nginx-net nginx-net-2

Use user-defined overlay networks

To prepare

This tutorial assumes that the cluster is set up and that you are on a manager.

demo

1. Create a user-defined overlay network.

$ docker network create -d overlay my-overlay

2. Start a service using the overlay network and publishing container port 80 to port 8080 on the Docker host:

$ docker service create \
  --name my-nginx \
  --network my-overlay \
  --replicas 1 \
  --publish published=8080,target=80 \
  nginx:latest

3. Run docker network inspect my-overlay and verify, by looking at the Containers section, that the my-nginx service task is connected to it.
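Because port 8080 is published on the Docker host, you can also check the service end to end from the host (assuming curl is installed):

$ curl http://localhost:8080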

4. Remove the service and the network.

$ docker service rm my-nginx

$ docker network rm my-overlay

Use an overlay network for standalone containers

This example demonstrates DNS container discovery – specifically, how to use overlay networks to communicate between separate containers on different Docker daemons. The steps are as follows:

  • On host1, initialize the node as a swarm (manager).
  • On host2, join the node to the swarm (as a worker).
  • On host1, create an attachable overlay network (test-net).
  • On host1, run an interactive alpine container (alpine1) on test-net.
  • On host2, run an interactive, detached alpine container (alpine2) on test-net.
  • On host1, from within a session of alpine1, ping alpine2.

To prepare

For this test, you need two different Docker hosts that can communicate with each other. Each host must have Docker 17.06 or later and have the following ports open between the two Docker hosts:

  • TCP port 2377
  • TCP and UDP port 7946
  • UDP port 4789

An easy way to set this up is to have two VMs (local or on a cloud provider such as AWS), each with Docker installed and running. If you are using AWS or a similar cloud platform, the easiest configuration is to use a security group that opens all incoming ports between the two hosts and the SSH port from your client's IP address.
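For example, on Ubuntu hosts protected by ufw, opening these ports would look roughly like this (adapt to whatever firewall you actually use; cloud security groups make this unnecessary):

$ sudo ufw allow 2377/tcp
$ sudo ufw allow 7946/tcp
$ sudo ufw allow 7946/udp
$ sudo ufw allow 4789/udp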

This example refers to the two nodes in our swarm as host1 and host2. This example also uses Linux hosts, but the same commands work on Windows.

demo

1. On host1, initialize a swarm (if prompted, use --advertise-addr to specify the IP address of the interface that communicates with the other hosts in the swarm, for instance the private IP address on AWS):

$ docker swarm init
Swarm initialized: current node (vz1mm9am11qcmo979tlrlox42) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-5g90q48weqrtqryq4kj6ow0e8xm9wmv9o6vgqc5j320ymybd5c-8ex8j0bc40s6hgvy5ui5gl4gy 172.31.47.252:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

On host2, join the swarm as instructed above:

$ docker swarm join --token <your_token> <your_ip_address>:2377
This node joined a swarm as a worker.

If the node fails to join the swarm, the docker swarm join command times out. To resolve this, run docker swarm leave --force on host2, verify your network and firewall settings, and try again.

2. On host1, create an attachable overlay network called test-net:

$ docker network create --driver=overlay --attachable test-net
uqsof8phj3ak0rq9k86zta6ht

Notice the returned NETWORK ID; you will see it again when you connect to this network from host2.

3. On host1, start an interactive container alpine1 that connects to test-net:

$ docker run -it --name alpine1 --network test-net alpine
/ #

4. On host2, list the available networks. Notice that test-net does not yet exist:

$ docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
ec299350b504        bridge              bridge              local
66e77d0d0e9a        docker_gwbridge     bridge              local
9f6ae26ccb82        host                host                local
omvdxqrda80z        ingress             overlay             swarm
b65c952a4b2b        none                null                local

5. On host2, start a detached (-d) and interactive (-it) container (alpine2) that connects to test-net:

$ docker run -dit --name alpine2 --network test-net alpine
fb635f5ece59563e7b8b99556f816d24e6949a5f6a5b1fbd92ca244db17a4342

Automatic DNS container discovery applies only to unique container names.

6. On host2, verify that test-net was created (and has the same NETWORK ID as on host1):

 $ docker network ls
 NETWORK ID          NAME                DRIVER              SCOPE
 ...
 uqsof8phj3ak        test-net            overlay             swarm

7. On host1, within the interactive terminal of alpine1, ping alpine2:

/ # ping -c 2 alpine2

PING alpine2 (10.0.0.5): 56 data bytes
64 bytes from 10.0.0.5: seq=0 ttl=64 time=0.600 ms
64 bytes from 10.0.0.5: seq=1 ttl=64 time=0.555 ms

--- alpine2 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.555/0.577/0.600 ms

The two containers communicate over the overlay network connecting the two hosts. If you run another alpine container on host2 that is not detached, you can ping alpine1 from host2 (here we add the --rm option so the container is cleaned up automatically):

$ docker run -it --rm --name alpine3 --network test-net alpine
/ # ping -c 2 alpine1
/ # exit

8. On host1, close the alpine1 session (which also stops the container):

/ # exit

9. Clean up your containers and networks. You must stop and remove the containers on each host independently, because the Docker daemons operate independently and these are standalone containers. You only have to remove the network on host1, because test-net on host2 disappears when you stop alpine2.

On host2, stop alpine2, check that test-net was removed, then remove alpine2:

$ docker container stop alpine2
$ docker network ls
$ docker container rm alpine2

On host1, remove alpine1 and test-net:

$ docker container rm alpine1
$ docker network rm test-net

Communicate between a container and a swarm service

To prepare

Docker 17.06 or higher is required for this example.
