Author: Small

A Minimalist Introduction to Docker

I. Introduction to Docker

Docker is an open-source application container engine that lets developers package their applications and dependencies into a portable image, which can then be distributed to any popular Linux machine; it also provides a form of virtualization. Containers are fully sandboxed and have no interfaces to one another.

As the description above makes clear, Docker is not itself a container but an engine that manages containers.

Docker's own slogan puts it this way:

Build and Ship any Application Anywhere

To summarize: build once, run anywhere. A single build can run in any environment, which greatly simplifies application distribution, deployment, and upgrades. Docker is a platform for packaging and deploying applications, not a pure virtualization technology.

Here's a look at Docker's architecture and how it works.

II. Docker Architecture and How It Works

Docker uses a client-server (C/S) architecture. The Docker client simply sends a request to the Docker server or daemon, and the server or daemon does all the work and returns the result. Docker provides a command-line tool, docker, and a set of RESTful APIs.

As shown in the figure below, a running Docker installation is split into the Docker daemon and the client tool. The Docker commands we use day to day are really interactions between the client tool and the Docker daemon.
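A quick way to see this split on a machine with Docker installed is the version command, whose output has separate Client and Server sections; the Server details are answered by the daemon over the API:

    $ docker version
    # The "Client" block describes the CLI tool itself; the "Server"
    # block is returned by the Docker daemon over its API.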

1. Client

Docker users interact with Docker mainly through the client. For example, when you run the "docker run" command, the Docker client receives the command and forwards it to the daemon for execution. The Docker client can communicate with the Docker daemon in several ways: tcp://host:port, unix://path_to_socket, and fd://socketfd. The Docker daemon receives and processes the request; once the Docker client receives the response and performs any simple follow-up processing, one complete client request cycle ends (a complete request: send the request → process the request → return the result).
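For illustration, the client's -H (--host) flag selects the transport explicitly; the endpoints below are examples and assume a daemon is actually listening on them:

    # Default local Unix socket (what the client uses out of the box):
    $ docker -H unix:///var/run/docker.sock ps
    # A daemon exposed over TCP (host and port are placeholders):
    $ docker -H tcp://127.0.0.1:2375 ps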

2. Docker daemon

The daemon listens for Docker API requests and manages Docker objects such as images, containers, networks, and data volumes. A daemon can also communicate with other daemons to manage Docker services.
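Because the daemon exposes a RESTful API, you can also talk to it without the docker CLI at all. A minimal sketch, assuming the default Unix socket:

    # Ask the daemon for its version directly over the Engine API:
    $ curl --unix-socket /var/run/docker.sock http://localhost/version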

3. Container

A container is a running instance created from an image. It can be created, started, stopped, and deleted. Each container is an isolated, secure platform. When a container is deleted, any changes to its state that are not kept in persistent storage disappear.
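That lifecycle in CLI terms, as a small sketch (the public nginx image and the name "web" are just examples):

    $ docker create --name web nginx   # create a container from an image
    $ docker start web                 # start it
    $ docker stop web                  # stop it
    $ docker rm web                    # delete it; unpersisted changes are lost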

The writable container layer. When a container starts, a new writable layer is loaded on top of the image. This layer is often referred to as the "container layer", and everything below it is called the "image layers".



All changes to the container, whether adding, deleting, or modifying files, happen only in the container layer. Only the container layer is writable; all image layers below it are read-only.

There may be many image layers, and together they are combined into a unified file system. If a file exists at the same path in different layers, say /a, the upper layer's /a hides the lower layer's /a, so users can only access the upper layer's /a. From the container layer, the user sees a single superimposed file system.

To summarize: the container layer records the changes made on top of the image. All image layers are read-only and cannot be modified by a container, which is why an image can be shared by multiple containers.
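The docker diff command makes the container layer visible: it lists only the files added (A), changed (C), or deleted (D) relative to the read-only image layers. A small sketch using the public nginx image:

    $ docker run -d --name demo nginx
    $ docker exec demo sh -c 'echo hello > /tmp/note.txt'
    $ docker diff demo    # shows only container-layer changes, e.g.:
    # C /tmp
    # A /tmp/note.txt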

4. Image

A Docker image is a read-only template used to create Docker containers. In real development we usually generate images from a Dockerfile, where each instruction corresponds to a layer in the image. When you change the Dockerfile and rebuild, only the changed layers (and the layers above them) are rebuilt; the rest come from the build cache. This is one of the reasons Docker images are so lightweight, small, and fast compared with other virtualization technologies.
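A minimal sketch of this layering (the tag "layer-demo" is illustrative). Each instruction below produces one layer, and an unchanged instruction is served from the build cache on rebuild:

    $ cat > Dockerfile <<'EOF'
    # The base image contributes the initial read-only layers
    FROM ubuntu:22.04
    # Each RUN adds a new layer containing its filesystem changes
    RUN apt-get update && apt-get install -y curl
    # CMD records metadata only; it adds no filesystem layer
    CMD ["curl", "--version"]
    EOF
    $ docker build -t layer-demo .
    $ docker history layer-demo   # lists the layers that make up the image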

Image layering. Docker supports creating new images by extending existing ones. In fact, 99% of the images on Docker Hub are built by installing and configuring the required software on top of a base image.



As can be seen from the figure above, the new image is generated from the base image layer by layer; each piece of software installed adds another layer on top of the existing image.

One of the biggest benefits of image layering is resource sharing. For example, if multiple images are built from the same base image, the Docker host only needs to keep one copy of the base image on disk, and only one copy is loaded into memory to serve all containers. Every layer of an image can be shared in this way.

If multiple containers share the same base image and one container changes the base image's content, for example a file under /etc, the other containers' /etc is not modified; the change stays confined to that single container. This is the container copy-on-write feature.
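A small sketch of copy-on-write, using two containers from the public nginx image (container names are arbitrary):

    $ docker run -d --name a nginx
    $ docker run -d --name b nginx
    # Change a file that came from an image layer, in container "a" only:
    $ docker exec a sh -c 'echo changed > /usr/share/nginx/html/index.html'
    # Container "b" still sees the original file from the shared image:
    $ docker exec b cat /usr/share/nginx/html/index.html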

5. Registry

A repository is where image files are stored centrally. A registry server often hosts multiple repositories, each repository contains multiple images, and each image has a different tag. Currently the largest public registry is Docker Hub, which hosts a huge number of images for users to download.
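In CLI terms an image reference is repository:tag, with Docker Hub as the default registry. A sketch (the "myuser" account is a placeholder, and pushing requires docker login first):

    $ docker pull nginx:1.25                    # repository "nginx", tag "1.25"
    $ docker tag nginx:1.25 myuser/nginx:test   # retag into your own repository
    $ docker push myuser/nginx:test             # upload it to the registry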

6. Volume (Data Volume)

After a container is deleted, any changes to its state that are not kept in persistent storage disappear. Volumes are designed to solve this problem: a volume persists data to the host and shares it with the container. In simple terms, a host directory is mapped into a container directory, and the data the application reads and writes under the container directory is synchronized to the host. This makes the data a container generates persistent; for a database container, for example, the data ends up stored on the host's real disk.
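A sketch of both common forms, with illustrative names and paths (the public postgres image requires the POSTGRES_PASSWORD variable shown):

    # Named volume managed by Docker; survives container deletion:
    $ docker volume create pgdata
    $ docker run -d --name db -e POSTGRES_PASSWORD=example \
        -v pgdata:/var/lib/postgresql/data postgres:16
    # Bind mount: map a host directory straight into the container:
    $ docker run -d --name worker -v /srv/app-data:/data alpine sleep 3600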

III. Containers vs. VMs

When it comes to containers, it is inevitable to compare them with traditional virtual machine technology. Let’s take a look at the architecture diagram of virtual machines and containers:

1. Virtual machines

A virtual machine is a complete computer system simulated in software, with full hardware functionality, running in a completely isolated environment. Everything that can be done on a physical computer can be done in a virtual machine. When a VM is created, part of the physical machine's hard disk and memory capacity is set aside as the VM's hard disk and memory. Each VM has its own CMOS, hard disk, and operating system, and can be operated like a physical machine. Popular virtual machine software includes VMware (VMware ACE), VirtualBox, and Virtual PC, all of which can create multiple virtual computers on a Windows system.

Advantages of virtual machines

  • Maximum utilization: resources can be allocated to different VMs, maximizing the utilization of the hardware.
  • Easy to expand: it is easier to scale an application on VMs than on physical machines.
  • Cloud services: physical resources can be divided into different VMs to create cloud services quickly.

Disadvantages of virtual machines

  • High resource usage: a VM occupies memory and hard disk space, and while it is running other programs cannot use those resources. Even if the application inside the VM actually uses only 1 MB of memory, the VM still needs several hundred MB of memory just to run.
  • Redundant steps: a VM is a complete operating system (OS), so some system-level steps, such as user login, cannot be skipped.
  • Slow startup: starting a VM takes as long as booting an OS; it may be several minutes before the application actually runs.
2. Containers

A container is an environment for running an application that is isolated from the rest of the operating system. It isolates and restricts application processes using Linux namespaces and cgroups. Namespaces provide isolation: they let the application process see only the world inside its own namespace. Cgroups limit the host resources allocated to the process. To the host, however, these "isolated" processes are not much different from any other process: a container is just a special process running on the host, and multiple containers share the same host operating system kernel.
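You can observe both views at once; a sketch with an alpine container (names are arbitrary):

    $ docker run -d --name iso alpine sleep 3600
    # Inside the container's PID namespace, sleep is PID 1:
    $ docker exec iso ps
    # On the host it is just an ordinary process with a normal PID:
    $ ps aux | grep 'sleep 3600'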

Advantages of containers

  • Easy to deploy: you only need to provide an image to deploy in many places.
  • Low resource usage: a container occupies only the resources it needs and does not hold on to unused ones. A virtual machine, being a complete operating system, inevitably occupies resources regardless of use; moreover, multiple containers can share resources, while each VM holds its resources exclusively.
  • Version control: track, query, and record version information (the application's change history), roll back to earlier versions, and so on.
  • Fast startup: the application in a container is a process directly on the underlying system, not a process inside a VM, so starting a container is like starting a process on the machine rather than booting an operating system, which is much faster.
3. Container or VM?

Containers and virtual machines each have their own strengths and weaknesses; they are complementary technologies rather than opposites, and the choice should depend on the actual scenario. For example, testing Windows compatibility from a Linux system calls for a virtual machine, while most Web applications are a good fit for Docker.

IV. Summary

Docker is a very popular technology today. Most microservices and Web applications are a good fit for it, and many companies have already moved their applications to Docker without a significant drop in performance.

Author: Small. Link: juejin.cn/post/701889… Source: Juejin. Copyright belongs to the author. For commercial reprints, please contact the author for authorization; for non-commercial reprints, please credit the source.