A developer built an e-commerce project whose Jar depends on Redis, MySQL, ES, Hadoop, and several other components. The developer tested it locally and handed it over to QA for pre-production testing.

Tester: This service of yours keeps throwing unknown bugs during unit testing and data checks! Would you take a look? Dev: How did you test it? Did you follow the instructions step by step? Tester: Absolutely by the document! Dev: Did you reboot? Did you clear the cache? Is the code up to date? Are you using Chrome? Did you change anything? Tester: This... this... I didn't do anything!

The love-hate relationship between development and testing has officially begun!

1 Docker overview

1.1 The origin of Docker

Docker is a container engine written in Go. It acts as an isolation layer between applications and the system. Applications often have strict requirements on the environment they run in, which is cumbersome to configure when deploying to many servers. Docker lets an application stop caring about the host environment: each application is installed into a Docker image, and the Docker engine is responsible for running the image that wraps the application.

The idea behind Docker is to make it easy for developers to package applications and their dependencies into containers and deploy them anywhere. Docker has the following features:

  1. Docker containers are a lightweight virtualization technology and consume fewer system resources.
  2. With Docker containers, collaboration between different teams (e.g., development, test, operations) is easier.
  3. Docker containers can be deployed anywhere, on any physical and virtual machine, even in the cloud.
  4. Because Docker containers are very lightweight, they are highly scalable.

1.2 Basic composition of Docker

Image:

A Docker image is like a template from which container services are created; it can be loosely understood as a class in a programming language.

Container:

Docker uses container technology to run one application or a group of applications independently. Containers are created from images, and basic commands such as start, stop, and delete can be executed on them. Ultimately, services or projects run inside containers, which can be understood as instances of the class.

Repository:

A repository is where images are stored! Repositories are divided into public and private ones, similar to Git. In general, we use a domestic registry mirror to speed up image downloads.
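A minimal sketch of configuring such a mirror on Linux (the accelerator URL is a placeholder; use the one your provider gives you):

$ sudo tee /etc/docker/daemon.json <<'EOF'
{
  "registry-mirrors": ["https://<your-mirror-address>"]
}
EOF
$ sudo systemctl restart docker   # restart the daemon so the mirror takes effect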

1.3 VM vs. Docker


Virtual machine:

A traditional virtual machine has to emulate an entire machine, including the hardware, and each VM needs its own operating system. Once a VM starts, all the resources allocated to it are occupied. Each virtual machine includes the application, the necessary binaries and libraries, and a complete guest operating system.

Docker:

Container technology shares hardware resources and the operating system with the host, achieving dynamic allocation of resources. Containers contain the application and all of its dependencies but share the kernel with other containers, running as isolated processes in user space on the host operating system.

| Comparison | Container | Virtual machine (VM) |
| --- | --- | --- |
| Startup | Seconds | Minutes |
| Runtime performance | Near native | Some loss |
| Disk usage | MB | GB |
| Quantity per host | Hundreds to thousands | Generally dozens |
| Isolation | Process level | System level |
| Operating system | Linux only | Almost any |
| Packaging | Only the project code and dependencies; shares the host kernel | A complete operating system |

1.4 Docker and DevOps

DevOps is a collective term for a set of processes, methods, and systems that facilitate communication, collaboration, and integration between development (application/software engineering), technical operations, and quality assurance (QA) departments.

DevOps is a combination of two traditional roles, Dev (Development) and Ops (Operations). Dev is responsible for development and Ops for deployment, but Ops doesn’t have enough knowledge of the application, while Dev doesn’t know how to deploy and operate the service. There is a clear gap between the two, and DevOps is designed to bridge that gap. The work of DevOps leans toward Ops, but the people who do it lean toward Dev; to put it plainly, DevOps means people who know Dev can also do Ops. And Docker is made for DevOps.

1.5 Docker and K8s

K8s, whose full name is Kubernetes, is a container-based cluster management platform, a tool for managing the full life cycle of applications. Creating, deploying, and serving applications, scaling them up and down, and updating them are all very convenient, and it can self-heal from failures: if a server goes down, the services on it are automatically rescheduled onto another host without manual intervention. K8s is built on Google's years of strong production practice with containers and has overtaken Docker's Swarm in market share.

If you have a lot of Docker containers to start, maintain, and monitor, turn to K8s!

1.6 hello world

The general flow chart of docker run hello-world:
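The flow, sketched as comments (this is the standard client/daemon/registry interaction):

$ docker run hello-world
# 1. The Docker client asks the Docker daemon to run the hello-world image.
# 2. The daemon looks for the image locally; if it is absent, it pulls it from Docker Hub.
# 3. The daemon creates a container from the image and starts it.
# 4. The daemon streams the container's output to the client, which prints it to the terminal.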

2 Common Docker commands

Official documentation:

Docs.docker.com/engine/refe…
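A few of the most frequently used commands, for quick reference (all standard Docker CLI):

$ docker pull ubuntu          # download an image
$ docker images               # list local images
$ docker run -it ubuntu bash  # create and start a container interactively
$ docker ps -a                # list containers, including stopped ones
$ docker exec -it <id> bash   # open a shell inside a running container
$ docker logs <id>            # show a container's output
$ docker stop <id>            # stop a container
$ docker rm <id>              # remove a container
$ docker rmi <image>          # remove an image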

3 Operating Principles of Docker

Docker only provides a running environment. Unlike a VM, it does not need to run an independent OS: the kernel inside the container is the host's kernel, and a Docker container is essentially a host process. For the Docker project, the core principle is to perform the following operations for the user process to be created:

  1. Enable the Linux Namespace configuration.
  2. Set the specified Cgroups parameters.
  3. Switch the root directory of the process (Change Root), preferring the pivot_root system call and falling back to chroot if the system does not support it.

3.1 Namespace process isolation

The Linux namespaces mechanism provides a solution for process resource isolation. System resources such as PIDs, IPC objects, and the network are no longer global; they belong to a particular namespace. Resources in one namespace are transparent and invisible to other namespaces, so two sets of processes numbered 0, 1, and 2 can exist in the system at the same time; they do not conflict because they belong to different namespaces.

PS: the Linux kernel provides six kinds of namespace isolation system calls: Mount, UTS, IPC, PID, Network, and User.
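You can see namespace isolation directly with the unshare tool from util-linux; a quick sketch of a new PID namespace:

$ sudo unshare --pid --fork --mount-proc /bin/bash
# echo $$
1                  # inside the new namespace, this shell is PID 1
# ps aux           # only this namespace's processes are visible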

3.2 Cgroups resource allocation

Docker uses Cgroups to control the quota of resources used by a container; once the quota is exceeded, an OOM is triggered. Quotas include CPU, memory, and disk quotas, covering the common resource quota and usage controls.

Cgroup is short for Control Groups, a Linux kernel mechanism that limits, records, and isolates the physical resources (such as CPU, memory, and disk I/O) used by groups of processes. It is used by many projects, such as Linux Containers (LXC) and Docker, to implement process resource control. Cgroup itself is infrastructure that provides the functionality and interface for grouping and managing processes; concrete resource management functions, such as I/O and memory allocation control, are implemented on top of it. These concrete resource management functions are called Cgroup subsystems.
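On the Docker side, the quota flags of docker run map straight onto Cgroups. A minimal sketch (the path below is the typical cgroup-v1 layout; it differs under cgroup v2):

$ docker run -itd -m 100m --cpus 0.5 ubuntu /bin/bash   # 100 MB of memory, half a CPU core
$ cat /sys/fs/cgroup/memory/docker/<container-id>/memory.limit_in_bytes
104857600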

3.3 Chroot and Pivot_root File systems

The chroot (change root file system) command changes the root directory of a process to a specified location. Suppose we have a $HOME/test directory that we want to use as the root of a /bin/bash process:

  1. First, create the directories: mkdir -p $HOME/test/{bin,lib64,lib}
  2. Copy the bash and ls binaries into the test directory's bin path: cp -v /bin/{bash,ls} $HOME/test/bin
  3. Copy all the .so libraries those binaries need into the corresponding lib paths of the test directory.
  4. Run the chroot command to tell the operating system to use $HOME/test as the root and start /bin/bash there (see the combined sketch below).
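Putting the four steps together, a minimal runnable sketch (run as root; ldd is used to discover the shared libraries each binary needs):

$ mkdir -p $HOME/test/{bin,lib64,lib}
$ cp -v /bin/{bash,ls} $HOME/test/bin
$ for bin in /bin/bash /bin/ls; do
>   for lib in $(ldd $bin | grep -o '/[^ ]*'); do
>     cp -v --parents "$lib" $HOME/test   # --parents keeps the /lib64/... structure
>   done
> done
$ sudo chroot $HOME/test /bin/bash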

Inside the chroot, ls / returns the contents of the $HOME/test directory. This is how Docker implements the container root directory. To make the container's root look more realistic, it is common to mount a full operating-system file system, such as the Ubuntu 16.04 ISO, under the container's root. After the container starts, running ls / inside it shows all the directories and files of Ubuntu 16.04.

The file system mounted at the root of a container to provide an isolated execution environment for container processes is called a container image. Its technical name is rootfs (root file system). A typical rootfs contains directories and files like these:

$ ls /
bin dev etc home lib lib64 mnt opt proc root run sbin sys tmp usr var

chroot changes only the / of the current process, while pivot_root changes the / of the current mount namespace. pivot_root can be considered an improved version of chroot.

3.4 Consistency

Because rootfs packages not just the application but the files and directories of an entire operating system, the application and all the dependencies it needs to run are packaged together. With this ability of container images to package the operating system, the most basic dependency environment finally becomes part of the application's sandbox. This gives containers their so-called consistency:

Whether locally, in the cloud, or on a machine anywhere, the user simply unpacks the packaged container image and the complete execution environment that the application needs to run is recreated.

3.5 UnionFS Union file system

How can rootfs be reused efficiently? Docker introduced the concept of layers in its image design: every step a user takes while building an image generates a layer, i.e., an incremental rootfs. Before we get to layering, let's talk about one important foundation: the union file system.

UnionFS is a layered, lightweight, high-performance file system. It supports treating changes to the file system as successive commits that stack layer upon layer, while mounting different directories into the same virtual file system. For example, directory fruits contains apple and tomato, and directory vegetables contains carrots and tomato:

$ tree .
.
├── fruits
│   ├── apple
│   └── tomato
└── vegetables
    ├── carrots
    └── tomato

Then union-mount the two directories onto a common directory, mnt:

$ mkdir mnt
$ sudo mount -t aufs -o dirs=./fruits:./vegetables none ./mnt

Looking at the contents of the mnt directory, you can see the files from fruits and vegetables merged together:

$ tree ./mnt
./mnt
├── apple
├── carrots
└── tomato

There are three files in the mnt directory: apple, carrots, and tomato. The fruits and vegetables directories have been unioned into the mnt directory. Now write to apple through the mount point:

 $ echo mnt > ./mnt/apple
 $ cat ./mnt/apple
 mnt
 $ cat ./fruits/apple
 mnt

The content of ./mnt/apple has changed, and ./fruits/apple has changed along with it. Next, write to carrots, which comes from the second directory:

 $ echo mnt_carrots > ./mnt/carrots
 $ cat ./vegetables/carrots
 old
 $ cat ./fruits/carrots
 mnt_carrots

./vegetables/carrots did not change; instead, a ./fruits/carrots file appeared, holding the content we wrote to ./mnt/carrots.

Conclusion:

When mounting aufs without specifying permissions for vegetables and fruits, the first directory on the command line is read-write by default and the rest are read-only. If there are duplicate file names, the one earlier on the mount command line has higher priority.

3.6 Layers

With the union file system covered, let's talk about layers in Docker. Images can be inherited layer by layer: based on a base image (one with no parent image), users can build various concrete application images. Different Docker containers can share the basic file-system layers while each adding its own unique change layer, which greatly improves storage efficiency.

Docker uses a union file system called AUFS (Another UnionFS). AUFS supports setting different read/write permissions for each member directory:

  1. rw means read-write, writable and readable.
  2. ro means read-only; if no permission is specified, ro is the default for every directory except the first. An ro branch never receives write operations, nor lookups for whiteouts.
  3. rr stands for real-read-only. Unlike read-only, rr marks a branch as inherently read-only, so AUFS can improve performance, for example by not setting inotify to watch for file-change notifications.

What if we want to modify a file in an ro layer? ro does not allow modification! In Docker, however, a normal ro layer also has wh (whiteout) capability: to "delete" a file from an ro directory, AUFS creates a corresponding whiteout hidden file in the upper writable directory. For example, suppose we have the following three directories and files:

$ tree .
.
├── fruits
│   ├── apple
│   └── tomato
├── test            # empty directory
└── vegetables
    ├── carrots
    └── tomato

Execute as follows:

 $ mkdir mnt
 $ mount -t aufs -o dirs=./test=rw:./fruits=ro:./vegetables=ro none ./mnt
 $ ls ./mnt/
 apple  carrots  tomato

Now suppose we want to delete apple, which lives in the read-only fruits branch. rm ./mnt/apple cannot truly remove the underlying file; AUFS expresses the deletion as a whiteout file in the writable test branch, which we can reproduce by hand:

 $ touch ./test/.wh.apple
 $ ls ./mnt
 carrots  tomato

For AUFS, each base layer of an image is placed under /var/lib/docker/aufs/diff; the information about which layers are union-mounted together can be viewed through /sys/fs/aufs. The base layers are finally union-mounted under /var/lib/docker/aufs/mnt, where the finished product lives.

Union file systems currently supported by Docker include OverlayFS, AUFS, Btrfs, VFS, ZFS, and Device Mapper. The overlay2 storage driver is recommended; it is currently Docker's default storage driver (previously AUFS).

3.6.1 Read-only layers

Take Ubuntu as an example. Running docker image inspect ubuntu:latest shows that the bottom four layers of the container's rootfs correspond to the four layers of the ubuntu:latest image. They are all mounted read-only (ro+wh), each containing an incremental part of the Ubuntu operating system, and the four layers combine into one finished product.
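You can list those layers yourself with a Go template on docker image inspect (digests abbreviated here):

$ docker image inspect ubuntu:latest --format '{{json .RootFS.Layers}}'
["sha256:…", "sha256:…", "sha256:…", "sha256:…"]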

3.6.2 Read-write layer

The top layer of rootfs has rw permission and is empty until a file is written. Once you write inside the container, the modified content appears incrementally in this layer. And what if you want to delete a file that lives in a read-only layer? That problem was explained above: a whiteout file.

The top read-write layer is dedicated to storing the increments generated by changes to rootfs, whether additions, deletions, or modifications. After modifying a container, we can use the docker commit and docker push commands to save the modified read-write layer and upload it to Docker Hub for others to use, while the contents of the original read-only layers remain unchanged. That is the benefit of an incremental rootfs.
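A quick sketch of that flow (repository and tag names are placeholders):

$ docker commit <container-id> myname/myimage:v1   # freeze the read-write layer into a new image layer
$ docker push myname/myimage:v1                    # upload to Docker Hub (requires docker login first)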

3.6.3 Init layer

It is a layer whose name ends in -init, sandwiched between the read-only layers and the read-write layer. The init layer is an internal layer created by the Docker project to store information such as /etc/hosts.

The reason this layer exists is that these files are originally part of the read-only Ubuntu image, but users often need to write specific values such as the hostname when starting the container, which requires modifying them in the read-write layer. However, these changes usually apply only to the current container, so they go into the init layer, which we do not want committed along with the read-write layer when performing a docker commit.

Finally, these six layers are union-mounted together to form a complete Ubuntu operating system for the container to use.

4 Docker network

Docker uses Linux namespaces to isolate resources: the PID namespace isolates processes, the Mount namespace isolates file systems, and the Network namespace isolates the network. A Network namespace provides an independent network environment (including network cards, routes, and iptables rules) that is isolated from other network namespaces, and each Docker container is normally allocated its own Network namespace.

After installing Docker, three networks are created automatically; docker network ls shows them:

[root@server1 ~]$ docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
0147b8d16c64        bridge              bridge              local
2da931af3f0b        host                host                local
63d31338bcd9        none                null                local

When creating a container with docker run, we can use the --net option to specify the container's network mode. Docker has the following four network modes:

| Network mode | Notes |
| --- | --- |
| host | Shares the network with the host |
| none | No network is configured |
| bridge | The Docker default; custom bridges can also be created |
| container | Shares the network with a specified container; rarely used |

4.1 Host mode

It is similar to bridged mode in VMware. When a container is started in host mode, it does not virtualize its own network card or configure its own IP address; it uses the host's IP address and ports directly. Other aspects of the container, such as the file system and process list, are still isolated from the host.
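For example, an nginx container started in host mode listens on the host's port 80 directly, with no -p mapping (nginx is just an example image):

$ docker run -d --net host nginx
$ curl http://localhost/    # answered by the container through the host's network stack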

4.2 Container mode

Container mode makes a newly created container share a Network namespace with an existing container, rather than with the host. The new container does not create its own network adapter or configure its own IP address; instead it shares the IP address and port range of the specified container. Apart from the network, the two containers remain isolated in everything else, such as the file system and the process list. The processes of the two containers can communicate through the lo loopback device.
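A short sketch (container names are placeholders): the second container reuses the first one's network stack, so it sees the same interfaces and IP:

$ docker run -d --name web nginx
$ docker run -it --net container:web busybox ip addr   # shows web's eth0 and IP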

4.3 None mode

None mode gives the container its own network stack but performs no configuration at all. In effect, this mode turns off networking for the container, which is useful when the container does not need a network (for example, a batch job that only writes to a disk volume).

4.4 Bridge mode

Bridge mode is Docker's default network setting. This mode allocates a Network namespace and an IP address for each container. When the Docker server starts, a virtual bridge named docker0 is created on the host, and Docker containers started on this host connect to this virtual bridge. A virtual bridge works like a physical switch: all the containers on the host are connected to a layer-2 network through it.

Docker picks an unused IP address and subnet for docker0 from the private ranges defined by RFC 1918, and containers connected to docker0 then pick an unused address from that subnet. Typically Docker uses 172.17.0.0/16 and assigns 172.17.0.1/16 to the docker0 bridge, which serves as a virtual network card on the host.

The network configuration process consists of three steps:

  1. Create a veth pair of virtual NICs on the host. veth devices always come in pairs and form a data channel: data that goes in one device comes out the other, so veth devices are often used to connect two network devices.
  2. Docker puts one end of the veth pair into the newly created container and names it eth0. The other end stays on the host with a name like veth65f9 and is added to the docker0 bridge, which can be checked with the brctl show command (see the sketch below).
  3. Assign the container an IP address from docker0's subnet and set docker0's IP address as the container's default gateway.
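What this looks like on the host (output is illustrative; brctl comes from the bridge-utils package):

$ brctl show
bridge name   bridge id           STP enabled   interfaces
docker0       8000.0242a1b2c3d4   no            veth65f9
$ ip addr show docker0   # the bridge holds 172.17.0.1/16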

Container communication in Bridge mode

  1. The container accesses the outside world

Assume the host NIC is eth0 with IP address 10.10.101.105/24 and gateway 10.10.101.254. Ping Baidu (180.76.3.151) from a container in the 172.17.0.0/16 subnet on this host. The packet first goes from the container to its default gateway, docker0. Once it reaches docker0, the host's routing table is consulted and the packet is forwarded out of the host's eth0 toward the host's gateway 10.10.101.254. Along the way, an iptables rule takes effect and replaces the source address with eth0's address. Since the packet leaves as if sent from 10.10.101.105, the Docker container is invisible to the outside world.

  2. The outside world accesses the container

Create a container and map the container's port 80 to the host's port 80. When traffic arrives at the host's eth0 with destination port 80, an iptables rule performs DNAT and redirects it to 172.17.0.2:80, the Docker container created above. So the outside world only needs to access 10.10.101.105:80 to reach the service in the container.
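A sketch of that setup and of the DNAT rule Docker installs (values mirror the example above):

$ docker run -d -p 80:80 nginx        # map host port 80 to container port 80
$ sudo iptables -t nat -L DOCKER -n   # the DNAT rule to 172.17.0.2:80 appears here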

4.5 --link

After a container is created, we'd like to ping it by its name. To do this, use --link, as follows:

$ docker run -d -P --name linux03 --link linux02 linux
$ docker exec -it linux03 ping linux02   # OK
$ docker exec -it linux02 ping linux03   # cannot be pinged: --link is one-way

If you look at /etc/hosts inside linux03, you'll see that --link is essentially a host mapping:

172.17.0.3    linux03 12Ft4tesa   # same as the Windows hosts file: just an address binding

4.6 Custom bridge

When we start a container directly (without specifying a network), it uses --net bridge by default, which is our docker0. So these two commands are equivalent:

docker run -d -P --name linux01 LinuxSelf
docker run -d -P --name linux01 --net bridge LinuxSelf

docker0 does not support access by container name by default. With a custom network, Docker maintains the name-to-IP mapping for us, so containers can reach each other by name.

# --driver bridge           network mode: bridge
# --subnet 192.168.0.0/16   define the subnet; usable range 192.168.0.2 ~ 192.168.255.255
# --gateway 192.168.0.1     set the subnet gateway to 192.168.0.1
$ docker network create --driver bridge --subnet 192.168.0.0/16 --gateway 192.168.0.1 mynet

Then:

$ docker run -d -P --name linux-net-01 --net mynet LinuxSelf
$ docker run -d -P --name linux-net-02 --net mynet LinuxSelf
$ docker exec -it linux-net-01 ping linux-net-02   # OK
$ docker exec -it linux-net-02 ping linux-net-01   # OK

5 Visual Interface

5.1 Portainer

Portainer is a graphical management tool for Docker, providing a status dashboard, rapid deployment from application templates, basic operations on containers, images, networks, and data volumes (including uploading and downloading images), centralized management of Swarm clusters and services, user management and access control, and more. Its functionality is very comprehensive and can basically meet all the container-management needs of small and medium-sized organizations.
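For reference, the classic single-host way to launch it (the image name and port are Portainer's long-standing defaults; check its docs for the current recommendation):

$ docker run -d -p 9000:9000 \
    -v /var/run/docker.sock:/var/run/docker.sock \
    --name portainer portainer/portainer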

5.2 DockerUI

DockerUI is based on the Docker API and provides most functions equivalent to the Docker command line, supporting container management and image management. However, a fatal drawback of DockerUI is that it does not support multiple hosts.

5.3 Shipyard

Shipyard is a system that integrates management of Docker containers, images, and registries, and it can simplify management of Docker container clusters spanning multiple hosts. Through its web interface, you can see how much processor and memory each container is using, which containers are running, and the event logs across all clusters.

6 Docker Study Guide

I also wanted to cover common commands, Dockerfile, Docker Compose, and Docker Swarm, but in the end the official documentation really is the best place for those! Several study guides are recommended.

  1. The official document: docs.docker.com/engine/refe…
  2. From Beginner to Practice: github.com/yeasy/docke…
  3. Online tutorial: vuepress.mirror.docker-practice.com
  4. PDF: Reply 1412