This article aims to give readers a rough picture of the whole Docker system and some understanding of how Docker works. To dig deeper into Docker, dig deeper into Linux.
1. Docker
1.1 What is Docker
Docker is an open-source, Linux-based container engine that provides a unified API through which isolated applications access the host kernel. It tries to solve the age-old developer problem of "it works on my machine".
Front-end developers can think of an image as an npm package and the repository as the npm registry; that makes it easier to understand.
1.2 Why Use Docker
Docker is similar to a slimmed-down virtual machine. A virtual machine takes a long time to start, and virtualized hardware never performs as well as the physical machine. A typical example is mobile development, where starting an emulated system takes a long time.
We often start a virtual machine just to isolate a single application, yet creating the virtual machine occupies a complete set of system resources (a guest OS), which is overkill and costly.
Docker, by contrast, emerged as the Linux kernel gained new features: it essentially only isolates applications and shares the host's kernel.
The following figure shows the comparison between virtual machine and Docker architecture:
The following figure shows a feature comparison of containers and VMs:
This is why Docker can start in seconds: it skips kernel initialization and uses the running host kernel. It also has drawbacks; for example, Docker has no good equivalent of virtual machine live migration.
Docker can be used to quickly build and configure application environments, simplify operations, guarantee a consistent runtime environment ("build once, run anywhere"), provide application-level isolation, and scale elastically and rapidly.
1.3 Basic Concepts of Docker
1.3.1 Image
An image is a special file system that provides the programs, libraries, resources, and configuration files required by the container at runtime, as well as configuration parameters prepared for the runtime (such as anonymous volumes, environment variables, and users). An image contains no dynamic data, and its contents do not change after the build.
Built on a union file system, an image provides a read-only template for running applications. A single image can provide one function, or multiple images can be layered to build services with multiple functions.
1.3.2 Container
Images merely define what is needed to run an isolated application; containers are the processes that run those images. Inside a container, a complete file system, network, process space, and so on are provided. It is fully isolated from the external environment and will not be disturbed by other applications.
Persistent data must be read and written through a **volume** or the host's storage. Data written inside a running container is lost after the container is restarted or removed, because each `docker run` creates a brand-new container from the image.
1.3.3 Repository
A Docker repository is a centralized place for storing image files. Once an image is built, it can easily run on the current host; but to use the image on other servers, we need a centralized service to store and distribute it. The Docker Registry is such a service. The terms Repository and Registry are sometimes confused and not strictly distinguished. The concept of a Docker repository is similar to Git's, and a registry server can be understood as a hosting service like GitHub. In fact, a Docker Registry can contain multiple repositories; each repository can contain multiple tags, and each tag corresponds to one image. So an image repository is the place where Docker centrally stores image files, much like the code repositories we are used to.
Typically, a repository contains images of different versions of the same software, and tags are used to identify those versions. We specify which version we want with the format `<repository>:<tag>`. If no tag is given, `latest` is used as the default.
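To make the default-tag rule concrete, here is a tiny shell sketch (illustrative only, not Docker's actual parsing code):

```shell
# Split an image reference into repository and tag; when no tag is
# given, "latest" is assumed (hypothetical helper for illustration).
ref="nginx"
case "$ref" in
  *:*) repo="${ref%%:*}"; tag="${ref#*:}" ;;
  *)   repo="$ref";       tag="latest"    ;;
esac
echo "$repo:$tag"   # prints nginx:latest
```

So `docker pull nginx` and `docker pull nginx:latest` refer to the same image.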
Repositories come in two forms:

- Public repositories
- Private repositories
1.3.4 Docker client
Docker client is a general name for anything that sends requests to a given Docker Engine and performs the corresponding container-management operations. It can be the Docker command-line tool or any client that follows the Docker API. The community currently maintains a rich set of Docker clients, covering C# (with Windows support), Java, Go, Ruby, JavaScript, and other common languages, and even a web UI client written with Angular, enough to meet the needs of most users.
1.3.5 Docker Engine
Docker Engine is Docker's core background process. It is responsible for responding to requests from the Docker client and translating them into the system calls that complete container-management operations. The process starts an API server in the background that receives requests sent by the Docker client; each received request is dispatched through a router inside Docker Engine to the specific function that executes it.
2. Docker in Practice
2.1 Installing Docker
All examples in this article run on CentOS 7.
First, remove all older versions of Docker.
```shell
sudo yum remove docker \
                docker-client \
                docker-client-latest \
                docker-common \
                docker-latest \
                docker-latest-logrotate \
                docker-logrotate \
                docker-engine
```
If it’s a new environment, you can skip this step.
Because of network conditions in China, installing docker-ce from the official repository will most likely fail, so we use a domestic mirror to speed up the installation. Here we use Aliyun's mirror.
```shell
# Step 1: Install the necessary system tools
sudo yum install -y yum-utils device-mapper-persistent-data lvm2
# Step 2: Add the repository information
sudo yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Step 3: Update the cache and install docker-ce
sudo yum makecache fast
sudo yum -y install docker-ce
# Step 4: Start the Docker service
sudo service docker start
```
After the installation is complete, run `docker version` to check whether it succeeded.
```shell
➜ ~ docker version
Client: Docker Engine - Community
 Version:           19.03.3
 API version:       1.40
 Go version:        go1.12.10
 Git commit:        a872fc2f86
 Built:             Tue Oct  8 00:58:10 2019
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.3
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.12.10
  Git commit:       a872fc2f86
  Built:            Tue Oct  8 00:56:46 2019
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.2.10
  GitCommit:        b34a5c8af56e510852c35414db4c1f4fa6172339
 runc:
  Version:          1.0.0-rc8+dev
  GitCommit:        3e425f80a8c931f88e6d94a8c831b9d5aa481657
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683
```
2.2 Pulling an Image
Now let's pull an Nginx image and deploy an Nginx application.
```shell
➜ ~ docker pull nginx
Using default tag: latest
latest: Pulling from library/nginx
68ced04f60ab: Pull complete
28252775b295: Pull complete
a616aa3b0bf2: Pull complete
Digest: sha256:2539d4344dd18e1df02be842ffc435f8e1f699cfc55516e2cf2cb16b7a9aea0b
Status: Downloaded newer image for nginx:latest
docker.io/library/nginx:latest
```
After the pull is complete, use `docker image ls` to view the list of local images.
```shell
➜ ~ docker image ls
REPOSITORY    TAG      IMAGE ID       CREATED        SIZE
nginx         latest   6678c7c2e56c   13 hours ago   127MB
```
Re-running the same command docker pull nginx updates the local image.
2.3 Running a Docker Container
Create a shell script file and write the following:
```shell
#!/bin/sh
# --restart : restart policy when the container stops
#     no         : do not restart when the container exits
#     on-failure : restart when the container exits with a non-zero status
#     always     : always restart when the container exits
# -d        : run in the background; without -d the container stops
#             when you exit the command line
# -p        : bind a host port to a container port
# --expose  : expose a container port (overrides the image's exposed ports)
# -v        : map a host directory into the container
# --name    : container name, usable to manage the container;
#             the links feature uses this name
# nginx:latest : the image used to initialize the container
docker run \
  --restart=always \
  -d \
  -p 8080:80 \
  --expose=80 \
  -v /wwwroot:/usr/share/nginx/html \
  --name=testdocker \
  nginx:latest
```
Keep in mind that a Docker container's network is isolated from the host's. Unless the container's network mode is set to share the host's network, the container cannot be reached directly.
Now run the script, open a browser, and visit http://ip:8080 to see the application served by the Nginx image.
More orders, please refer to: docs.docker.com/engine/refe…
2.3.1 Command Parameters (abridged)
```
Usage: docker run [OPTIONS] IMAGE [COMMAND] [ARG...]

-d, --detach=false         Run the container in the background; default is false
-i, --interactive=false    Keep STDIN open for console interaction
-t, --tty=false            Allocate a pseudo-TTY for terminal login; default is false
-u, --user=""              User to run the container as
-a, --attach=[]            Attach to the container (cannot be combined with -d)
-w, --workdir=""           Working directory inside the container
-c, --cpu-shares=0         CPU share weight, used when CPU is contended
-e, --env=[]               Set environment variables usable inside the container
-m, --memory=""            Memory limit for the container
-P, --publish-all=false    Publish all exposed ports to random host ports
-p, --publish=[]           Publish a container port to the host
-h, --hostname=""          Container hostname
-v, --volume=[]            Mount a storage volume into a container directory
--volumes-from=[]          Mount volumes from other containers
--cap-add=[]               Add capabilities; list at http://linux.die.net/man/7/capabilities
--cap-drop=[]              Drop capabilities; list at http://linux.die.net/man/7/capabilities
--cidfile=""               Write the container ID to a file after start
                           (a typical monitoring-system usage)
--cpuset=""                CPUs in which to allow execution
--device=[]                Add a host device to the container
--dns=[]                   DNS servers for the container
--dns-search=[]            DNS search domains, written to /etc/resolv.conf
--entrypoint=""            Overwrite the image's default ENTRYPOINT
--env-file=[]              Read environment variables from a file, one per line
--expose=[]                Expose a container port (overrides the image's exposed ports)
--lxc-conf=[]              Container configuration; only valid with --exec-driver=lxc
--name=""                  Container name, usable for subsequent management;
                           the links feature uses this name
--net="bridge"             Container network settings:
                             bridge                default bridge network
                             host                  share the host's network stack
                             container:NAME_or_ID  share another container's network
                                                   resources such as IP and ports
                             none                  use no configured network
--privileged=false         Give the container all capabilities
--restart="no"             Restart policy:
                             no           do not restart when the container exits
                             on-failure   restart when the container exits with a
                                          non-zero status
                             always       always restart when the container exits
--rm=false                 Remove the container on exit (cannot be combined with -d)
--sig-proxy=true           Proxy received signals to the process, except
                           SIGCHLD, SIGSTOP and SIGKILL
```
2.4 Entering a Container
We can use `docker exec -it <container id> /bin/bash` to enter a running container.
To exit the container, there are two ways:

- Type `exit` on the command line
- Use the shortcut `Ctrl+P` then `Ctrl+Q`

Either way, you exit the container while it keeps running in the background.
2.5 Customizing an Image: the Dockerfile
A Dockerfile has four parts: base image information, maintainer information, image-building instructions, and container startup instructions.
Here is a simple Dockerfile that launches a Node development environment.
```dockerfile
# 1. Set the base image
FROM node:12.0
# Set the working directory for subsequent RUN, CMD, ENTRYPOINT instructions
WORKDIR /workspace
# RUN executes in the directory set by the last WORKDIR
RUN npm install --registry=https://registry.npm.taobao.org
# Expose ports 8080, 8001 and 8800
# (ports can also be exposed at docker run time)
EXPOSE 8080
EXPOSE 8001
EXPOSE 8800
# CMD is the default startup command; it is overridden by arguments to
# docker run (e.g. docker run -it <image> /bin/bash). ENTRYPOINT, by
# contrast, is not overridden at all.
CMD ["npm", "run", "dev-server"]
```
Save the file and run `docker build -t nodeapp:v1.0 .` (image names must be lowercase). Note the trailing `.`, which denotes the current directory as the build context.
After the build completes, use `docker image ls` to check whether the image was built.
At this point, some readers may wonder: does `npm install` have to run every time?
In fact, if your Node application's packages do not change and the image is built specifically for this application, consider adding `node_modules` to the image with the `ADD` command. (This is rarely done in practice, since a mounted host directory would shadow the directory inside the image; it is mentioned here only to demonstrate adding files to an image.)
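A minimal sketch of that idea (the file names are illustrative, and `COPY` is generally preferred over `ADD` for plain files):

```dockerfile
FROM node:12.0
WORKDIR /workspace
# Bake the pre-installed dependencies into the image so the container
# does not need to run npm install (illustrative only; a bind mount
# over /workspace would shadow these files)
COPY package.json package-lock.json ./
COPY node_modules ./node_modules
CMD ["npm", "run", "dev-server"]
```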
For more instruction sets, see:
Chinese version: yeasy. Gitbooks. IO/docker_prac…
Official English: docs.docker.com/engine/refe…
2.6 Multi-container Startup: docker-compose
Docker-compose requires a separate installation.
Imagine a scenario: to start a front-end project, you need Nginx to serve the front end and a database to record data, and the whole application must work as one unit. That is where docker-compose comes in.
docker-compose has two core concepts:

- Service: an application container; a service can actually run multiple instances of the same image.
- Project: a complete business unit consisting of a set of associated application containers.
Go back to the directory where we created the Dockerfile, write a docker-compose.yml file, and configure multiple containers.
```yaml
version: '3'
services:
  web:
    build: .
    ports:
      - "8080:80"
    volumes:
      - /wwwroot:/usr/share/nginx/html
  redis:
    image: "redis:alpine"
```

(Note: the `services` key requires compose file format version 2 or later, so `version: '3'` is used here.)
Then run `docker-compose up` in the same directory to start all the containers.
Visit http://ip:8080 to see the same page as before.
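As a hedged extension of the file above (same two services, added `depends_on` option), you can make `web` start after `redis`; note that `depends_on` only orders startup, it does not wait for redis to be ready:

```yaml
version: '3'
services:
  web:
    build: .
    ports:
      - "8080:80"
    depends_on:
      - redis    # start redis before web
  redis:
    image: "redis:alpine"
```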
More reference: docs.docker.com/compose/
2.7 Networking
Because of this isolation, the outside network cannot reach a Docker container on the host directly, so we need to bind host ports to container ports.
Section 2.3 already showed how to bind ports to expose a container. Here we look at container-to-container communication.
```shell
# Create a docker network
$ docker network create -d bridge my-net

# Create two containers joined to the my-net network
$ docker run -it --rm --name busybox1 --network my-net busybox sh
$ docker run -it --rm --name busybox2 --network my-net busybox sh

# Next we enter busybox1 (busybox ships sh, not bash)
$ docker exec -it busybox1 sh

# Ping the other container by name to see its IP address
/ # ping busybox2
PING busybox2 (172.19.0.3): 56 data bytes
64 bytes from 172.19.0.3: seq=0 ttl=64 time=0.072 ms
64 bytes from 172.19.0.3: seq=1 ttl=64 time=0.118 ms
```
3. Expand your knowledge
3.1 Docker principle
Docker is written in the Go language and uses a series of features provided by the Linux kernel to achieve its function.
A system that can execute Docker has two main parts:
- Core components of Linux
- Docker-related components
The core Linux module functions used by Docker include the following:
- Cgroups – allocate and limit hardware resources
- Namespaces – separate the execution spaces of different containers
- AUFS (chroot) – create the file systems for different containers
- SELinux – secure the container's network
- Netlink – let processes in different containers communicate
- Netfilter – packet-filtering firewall based on container ports
- AppArmor – protect the container's networking and execution security
- Linux Bridge – let containers, on the same host or on different hosts, communicate
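As a small illustration of namespaces (assuming a Linux host; these are standard procfs paths), every process's namespace memberships are visible under /proc, and Docker creates a fresh set of them for each container:

```shell
# List the namespaces the current process belongs to; a container
# gets its own pid, net, mnt, uts, ipc (and more) entries here.
ls /proc/self/ns
```

Comparing this listing inside and outside a container shows different namespace IDs, which is the isolation boundary Docker relies on.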
3.2 How Docker Runs on macOS and Windows
They run Linux in a virtual machine, run the Docker Engine inside that Linux, and run the Docker client on the host machine.
3.3 Why Node Should Not Be Started with CMD ["node"]
Front-end developers, pay attention.
Using CMD ["node", "app.js"] as the default startup command makes Node run as PID 1, and "Node.js was not designed to run as PID 1, which leads to unexpected behaviour when running inside of Docker." The image below is from github.com/nodejs/dock… .
This problem involves a Linux mechanism. In short, the Linux process with PID 1 is the system's init daemon: it adopts all orphaned processes and sends them shutdown signals at the appropriate time.
In such a container, however, the PID 1 process is node, and Node does not reap orphaned processes. So if your application behaves like a crawler, leaving finished processes parented to PID 1, zombie processes slowly accumulate until the container goes BOOM.
Solutions:

1. Start via `/bin/bash`, so bash is PID 1 and node runs as its child.
2. Append `--init` to `docker run`; Docker then injects an init process as PID 1 that reaps all orphaned processes.
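A minimal Dockerfile sketch of the first fix (the file name app.js is illustrative):

```dockerfile
FROM node:12.0
WORKDIR /workspace
COPY . .
# bash becomes PID 1 and launches node as a child process,
# so node no longer carries PID 1's duties
CMD ["/bin/bash", "-c", "node app.js"]
```

The second fix needs no Dockerfile change at all: `docker run --init …` achieves the same effect at run time.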
For details, please refer to this case :juejin.cn/post/684490…
4. Reference
Juejin. Cn/post / 684490…
zhuanlan.zhihu.com/p/31654581
Hujb2000. Gitbooks. IO/docker – flow…
Docs.docker.com/install/lin…
Yq.aliyun.com/articles/11…
www.jianshu.com/p/ea4a00c6c…
The docker from entry to practice “legacy.gitbook.com/book/yeasy/…
K8S deployment solution :github.com/opsnull/fol…