The advent of Docker was revolutionary, changing the way we develop and deploy projects. This article focuses on one aspect of the community’s ongoing efforts to standardize container technology: networking.

Network layer: virtual machines vs. container technology

Before we explore the different container network standard models, let’s compare virtual machines and Docker from a network perspective.

A VM virtualizes an entire operating system on top of virtualized hardware. As part of that, it creates virtual network interface cards (NICs) that connect to the physical machine’s real NICs.

Docker, by contrast, is essentially a process managed by the container runtime that shares the host’s Linux kernel. Containers therefore have more flexible network options (a short sketch follows the list below); a container can either:

  • Share the host’s network interface and network namespace, or
  • Join a separate virtual network, using that virtual network’s own interface and network namespace.
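A quick way to see the difference between these two options is to look at which network interfaces a process can observe. The minimal Go sketch below simply prints the interfaces visible in its own network namespace: run inside a container that shares the host’s namespace, it lists the host’s NICs; run inside a container attached to a virtual network, it typically lists only lo and a single virtual interface.

```go
// List the network interfaces visible to this process. In a container that
// shares the host's network namespace this shows the host's NICs; in a
// container with its own namespace it usually shows only lo and eth0.
package main

import (
	"fmt"
	"net"
)

func main() {
	ifaces, err := net.Interfaces()
	if err != nil {
		panic(err)
	}
	for _, iface := range ifaces {
		addrs, _ := iface.Addrs()
		fmt.Printf("%s: %v\n", iface.Name, addrs)
	}
}
```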

Container network Design

Early container network designs focused on how to connect containers on a host so that they could interact with the outside world.

In “host” mode, a container running on a host uses the host’s network namespace and the host’s IP address. To expose a service, the container occupies a port on the host and communicates with the outside world through that port. Consequently, you have to manage port allocation manually, and two container services cannot run on the same port.
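To make that port-allocation constraint concrete, here is a minimal Go sketch (port 8080 is an arbitrary example): because all host-mode containers share the host’s single set of ports, a second attempt to listen on an already-used port fails.

```go
// Illustrates the host-mode constraint: two services sharing the host's
// network namespace cannot listen on the same port. The second Listen call
// fails with "address already in use".
package main

import (
	"fmt"
	"net"
)

func main() {
	first, err := net.Listen("tcp", ":8080")
	if err != nil {
		fmt.Println("first listener failed:", err)
		return
	}
	defer first.Close()

	// A second service (e.g. another host-mode container) trying the same port.
	_, err = net.Listen("tcp", ":8080")
	fmt.Println("second listener error:", err)
}
```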

The “bridge” mode is an improvement on “host” mode. In this mode, each container is attached to a virtual LAN, gets its own network namespace, and is assigned its own IP address. Because the IP addresses are independent, the “host”-mode problem of different container services not being able to run on the same port goes away. One problem remains: if a container wants to communicate with the outside world, it still has to go through the host’s IP address. NAT is used to translate host-ip:port to private-ip:port, and these NAT rules are maintained with Linux iptables, which affects performance to some extent (though not much).
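For illustration only, the Go sketch below mimics that host-ip:port to private-ip:port mapping in userspace, copying bytes between a host port and an assumed container address (10.22.0.2:80 is made up). In reality Docker installs iptables DNAT rules so the kernel rewrites packets directly; this is just a way to visualize the translation.

```go
// A userspace illustration of the host-ip:port -> private-ip:port mapping that
// bridge mode relies on. Real deployments use iptables DNAT rules in the
// kernel; this sketch only mimics the idea with a simple TCP forwarder.
package main

import (
	"io"
	"log"
	"net"
)

const (
	hostPort      = ":8080"        // port exposed on the host
	containerAddr = "10.22.0.2:80" // private address of the container (example)
)

func forward(client net.Conn) {
	defer client.Close()
	backend, err := net.Dial("tcp", containerAddr)
	if err != nil {
		log.Println("cannot reach container:", err)
		return
	}
	defer backend.Close()
	go io.Copy(backend, client) // host -> container
	io.Copy(client, backend)    // container -> host
}

func main() {
	ln, err := net.Listen("tcp", hostPort)
	if err != nil {
		log.Fatal(err)
	}
	for {
		conn, err := ln.Accept()
		if err != nil {
			log.Fatal(err)
		}
		go forward(conn)
	}
}
```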

Neither of the above modes solves a further problem: networking containers across multiple hosts.

Comparison of CNI and CNM

  • CNM (Container Network Model): proposed by Docker, the company behind the Docker container runtime.
  • CNI (Container Network Interface): proposed by CoreOS.

Kubernetes did not set out to invent yet another network model; it chose CNI for its network plug-ins. (See the official explanation of why Kubernetes doesn’t use libnetwork.) The most important reason Kubernetes does not use CNM is that CNM is tightly coupled to the Docker container runtime and cannot easily be decoupled from it. After Kubernetes made its choice, many other projects choosing between CNM and CNI also picked CNI.

Here is a detailed comparison of the two network models:

  1. CNM

The CNM API consists of two parts: an IPAM plug-in and a network plug-in. The IPAM plug-in is responsible for creating and deleting address pools and for allocating network addresses; the network plug-in is responsible for creating and deleting networks and for assigning or revoking containers’ IP addresses. In practice, either plug-in can implement all of the APIs, so you can choose to use the IPAM plug-in, the network plug-in, or both. However, the container runtime uses different plug-ins in different situations, which introduces complexity. In addition, CNM requires a distributed storage system, such as etcd, to store network configuration information.
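The split between the two plug-in roles can be sketched as a pair of Go interfaces. The names below are assumptions made for illustration; they are not libnetwork’s actual remote-plugin API. The point is the division of labor: IPAM handles address pools and addresses, while the network plug-in handles networks and container endpoints.

```go
// A rough sketch of the two plugin roles the CNM API defines. Interface and
// method names are illustrative only, not libnetwork's real API.
package main

import "fmt"

// IPAMPlugin manages address pools and allocates addresses from them.
type IPAMPlugin interface {
	CreatePool(subnet string) (poolID string, err error)
	DeletePool(poolID string) error
	AllocateAddress(poolID string) (ip string, err error)
	ReleaseAddress(poolID, ip string) error
}

// NetworkPlugin creates networks and connects container endpoints to them.
type NetworkPlugin interface {
	CreateNetwork(networkID string) error
	DeleteNetwork(networkID string) error
	CreateEndpoint(networkID, containerID, ip string) error
	DeleteEndpoint(networkID, containerID string) error
}

// toyIPAM is an in-memory stand-in, just to make the example runnable.
type toyIPAM struct {
	next int
}

func (t *toyIPAM) CreatePool(subnet string) (string, error) { return "pool-" + subnet, nil }
func (t *toyIPAM) DeletePool(poolID string) error           { return nil }
func (t *toyIPAM) AllocateAddress(poolID string) (string, error) {
	t.next++
	return fmt.Sprintf("10.22.0.%d", t.next), nil
}
func (t *toyIPAM) ReleaseAddress(poolID, ip string) error { return nil }

func main() {
	var ipam IPAMPlugin = &toyIPAM{}
	pool, _ := ipam.CreatePool("10.22.0.0/16")
	ip, _ := ipam.AllocateAddress(pool)
	// A network plugin would now wire this IP to a container endpoint.
	fmt.Println("allocated", ip, "from", pool)
}
```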

  2. CNI

CNI exposes an interface for adding containers to and removing them from a network. CNI uses a JSON configuration file to hold network configuration information. Unlike CNM, CNI does not require an additional distributed storage engine.
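The Go sketch below illustrates both ideas from this paragraph: the shape of a CNI-style JSON network configuration and a minimal “add/remove a container from a network” contract. The struct fields mirror common CNI config keys (cniVersion, name, type, ipam), but the interface and its method names are simplified assumptions, not the real libcni API.

```go
// Parse a CNI-style JSON network configuration and show the core contract
// (add/delete a container). Types here are simplified illustrations.
package main

import (
	"encoding/json"
	"fmt"
)

// NetConf mirrors the shape of a typical CNI network configuration file.
type NetConf struct {
	CNIVersion string `json:"cniVersion"`
	Name       string `json:"name"`
	Type       string `json:"type"` // which plugin handles this network, e.g. "bridge"
	IPAM       struct {
		Type   string `json:"type"`   // e.g. "host-local"
		Subnet string `json:"subnet"` // pool container IPs are allocated from
	} `json:"ipam"`
}

// CNI captures the core contract: add a container to a network, or remove it.
type CNI interface {
	AddContainer(conf NetConf, containerID, netnsPath string) error
	DelContainer(conf NetConf, containerID, netnsPath string) error
}

func main() {
	raw := []byte(`{
	  "cniVersion": "0.4.0",
	  "name": "example-net",
	  "type": "bridge",
	  "ipam": { "type": "host-local", "subnet": "10.22.0.0/16" }
	}`)

	var conf NetConf
	if err := json.Unmarshal(raw, &conf); err != nil {
		panic(err)
	}
	fmt.Printf("network %q handled by plugin %q, addresses from %s\n",
		conf.Name, conf.Type, conf.IPAM.Subnet)
}
```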

Conclusion

CNI has been embraced by many open source projects, such as Kubernetes, Mesos, and Cloud Foundry. It has also been recognized by the Cloud Native Computing Foundation (CNCF). With a number of tech giants behind the CNCF, it is reasonable to predict that CNI will become the standard for container networking in the future.