1 What is Docker

Docker is an open-source application container engine, written in Go and released under the Apache 2.0 license.

Docker allows developers to package their applications and dependencies into a lightweight, portable container that can then be distributed to any popular Linux machine; it can also run in virtualized environments.

Containers are completely sandboxed, have no interfaces with each other (much like iPhone apps), and, most importantly, incur very low performance overhead.

As a software containerization platform, Docker lets developers build applications, package them together with their dependencies into a container, and then easily distribute and run them on any platform.

Core ideas: containerization, standardization, isolation

Core concepts: image, container, repository

1) Image – a copy of the program

Definition

A Docker image can be regarded as a special file system. Besides the programs, libraries, resources, and configuration files required by the container at runtime, an image also contains configuration parameters prepared for the runtime (such as anonymous volumes, environment variables, and users).
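To make this concrete, here is a minimal, illustrative Dockerfile (the base image and values are assumptions, not from any particular project) that bakes exactly these kinds of runtime parameters into an image at build time:

```dockerfile
# Base layer (assumed small base image)
FROM alpine:3.19

# Environment variable recorded in the image's runtime configuration
ENV APP_MODE=production

# Anonymous volume: a fresh volume is created at /data for each container
VOLUME /data

# User the container's main process will run as
USER nobody

# Default command executed when a container starts from this image
CMD ["sh", "-c", "echo mode=$APP_MODE"]
```

Everything above is fixed into the image when it is built; containers created from it inherit these settings.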

Role

Similar to a virtual machine snapshot, an image is used to create new containers.

Features:

An image contains no dynamic data, and its contents do not change after it is built.

2) Container – a place to run a program

An image is static and each of its layers is read-only, while a container is dynamic and runs the application we specify.

3) Repository – the place where images are stored

A repository stores images in much the same way a Git repository stores code.

The process of running a program with Docker can be understood simply as: pull an image from a repository to the local machine, then run that image as a container with a single command.
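The pull-then-run flow above can be sketched as a toy model in Python. All names here are illustrative stand-ins, not Docker's actual API; the point is only the relationship between registry, local image store, and containers:

```python
# Toy model of the Docker workflow: a registry holds images, `pull`
# copies one to the local store, and `run` creates a container
# (a running instance) from the locally stored image.

REGISTRY = {"hello-world:latest": "image-layer-data"}  # remote repository
local_images = {}                                      # local image store
containers = []                                        # running containers

def pull(name):
    """Fetch an image from the registry into the local store."""
    local_images[name] = REGISTRY[name]

def run(name):
    """Create and start a container from a locally stored image."""
    if name not in local_images:
        pull(name)  # like `docker run`, pull automatically if missing
    container = {"image": name, "status": "running"}
    containers.append(container)
    return container

c = run("hello-world:latest")
print(c["status"])  # → running
```

Note that the image is copied once into the local store, while any number of containers can be started from it afterwards.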

2 Docker architecture and internal components

LXC: Linux container technology. Containers share the host kernel and host resources; namespaces and cgroups are used to isolate processes and limit resource usage.

Cgroups (Control Groups): a mechanism provided by the Linux kernel to limit the resources available to one or more processes, such as CPU and memory usage.

Namespace: a mechanism provided by the Linux kernel to isolate resources between processes. A process can belong to multiple namespaces. The Linux kernel provides six namespaces: UTS, IPC, PID, Network, Mount, and User.
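On Linux, the namespaces a process belongs to are visible as links under /proc/<pid>/ns. A short Python sketch (guarded so it degrades gracefully on non-Linux systems) lists them:

```python
import os

# The six classic namespace types named above; newer kernels also
# expose cgroup and time namespaces.
CLASSIC_NAMESPACES = ["uts", "ipc", "pid", "net", "mnt", "user"]

def process_namespaces(pid="self"):
    """Return the namespace entries the kernel exposes for a process,
    or an empty list on systems without /proc (e.g. non-Linux)."""
    path = f"/proc/{pid}/ns"
    return sorted(os.listdir(path)) if os.path.isdir(path) else []

print(CLASSIC_NAMESPACES)
print(process_namespaces())
```

Two containers end up in different PID, network, and mount namespaces, which is why each sees its own process tree, network stack, and file system tree.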

AUFS (Advanced Multi-Layered Unification Filesystem): a multi-layer union file system and a type of UnionFS, in which each branch can be marked readonly, readwrite, or whiteout-able. Typically only the topmost AUFS branch is writable; all other branches are read-only.

UnionFS (union file system): a file system that mounts directories from different locations into a single virtual file system, forming a layered model. Each member directory is called a branch of the virtual file system.

3 What are the advantages of Docker

Lightweight resource use:

Instead of virtualizing an entire operating system, containers isolate only at the process level and reuse the host kernel.

Portable:

All dependencies of the application are packaged inside the container so that it can run on any Docker-enabled host system.

Predictable:

The host doesn’t care what runs inside the container, and the container doesn’t care what platform the host runs on, whether standalone or in the cloud. Because these interfaces are standardized, the interactions between them are predictable.

Simplified workflows:

Docker allows developers to package their applications and dependencies into a portable container and distribute it to any popular Linux machine. Docker changes the approach to virtualization, letting developers put their work directly into Docker for management. Convenience and speed have been Docker’s biggest advantages: tasks that used to take days or even weeks can be completed in seconds with Docker containers.

Avoiding choice paralysis:

If you suffer from choice paralysis, Docker helps you package up the whole tangle. A Docker image contains the runtime environment and its configuration, so Docker simplifies deploying multiple application instances. Web applications, backend applications, database applications, big-data applications such as Hadoop clusters, message queues, and more can all be packaged and deployed as images.

Savings:

With the advent of the cloud computing era, developers no longer need to buy expensive hardware to get results. Docker has changed the mindset that high performance inevitably means high cost. Combining Docker with the cloud makes cloud resources more fully utilized: it not only solves the hardware-management problem but also changes the way virtualization is done.

4 Differences between VMs and containers

Take KVM as an example for comparison with Docker:

  • Startup time

    • Docker starts in seconds; KVM starts in minutes.
  • Lightweight

    • Container image sizes are usually measured in MB, while VM image sizes are measured in GB.

    • Containers consume fewer resources and deploy faster than virtual machines.

  • Performance

    • Containers share the host kernel (system-level virtualization), consume fewer resources, and have no hypervisor-layer overhead, so container performance is close to that of physical machines.

    • VMs require a hypervisor layer to virtualize devices and provide a complete guest OS, so virtualization overhead is high and VM performance is lower than container performance.

  • Security

    • Because containers share the host kernel and are isolated only at the process level, their isolation and stability are weaker than those of virtual machines. Containers have a degree of access to the host kernel, which introduces security risks.
  • Requirements

    • KVM requires hardware CPU virtualization support to implement full virtualization.

    • Containers share the host kernel and can run on mainstream Linux distributions regardless of CPU virtualization support.

5 What are Docker’s application scenarios

  • Application packaging and deployment automation

    • Build a standardized runtime environment;

    • Currently, most setups deploy the runtime environment on physical machines or virtual machines; the problem is that environments become messy and migrating them as a whole is difficult, whereas containers work out of the box.

  • Automated testing and continuous integration/deployment

    • Automated image builds and a clean REST API integrate well into continuous integration/continuous deployment environments.
  • Deployment and elastic scaling

    • Because containers operate at the application level, their resource footprint is small and elastic scaling deploys faster.
  • Microservices

    • Docker’s container isolation fits the microservices concept naturally: business modules run in their own containers, and the reusability of containers greatly improves the extensibility of those modules.
