Preface

I had been discussing Web technology with friends and had long wanted to learn more about Web operations, so I turned to my friend Lao Hu, who has a lot of hands-on experience in Web operations and back-end development; he also provided a lot of help with this article. This article introduces the basic concepts and application areas of Docker, and then walks through the actual deployment of a Web project to show how Docker is used.

As a front-end engineer, why learn Docker? Let me first introduce Docker:

Docker is an open-source application container engine written in Go. It lets us package an application together with its dependencies into a lightweight, portable container and then run it on any mainstream Linux machine, with virtualization. Containers are fully sandboxed, have no interfaces to one another, and carry very little performance overhead.

Recall the traditional way of deploying a Web application: upload the application to the server by hand and install the dependencies and environment manually. A more advanced setup uses Jenkins to automate deployment, automated testing and so on, which already solves most deployment problems. But if our servers change, or if we need to deploy to many servers, the traditional approach becomes cumbersome. You might ask: does that really happen? The answer is yes. Anyone who has built B-end (to-B) or SaaS systems knows how tedious it is: for security and privatization reasons, customers often require developers to configure and deploy an independent Web application inside the customer's own environment. With hundreds or thousands of customers, deploying each one by hand is obviously inefficient, and it cannot guarantee a consistent, stable environment, because once the environment or a package used by our Web system is updated, the application may well stop working. Docker container technology can solve exactly this problem.

Moreover, the cloud computing services that have become popular in recent years demand standardization and rapid delivery above all, and Docker is a very good fit for such requirements.

Today most companies use Docker in software development and deployment to automate delivery and improve efficiency and security. As front-end engineers, we also need to master some Docker so we can cooperate better with back-end and operations colleagues and help drive this process.

What you will gain

  • Basic application scenarios and implementation architecture of Docker
  • Basic use of Docker
  • Deploy a Web application using Docker

Main text

Before diving into the main text, let's first look at Docker's application scenarios, so we can better understand why we should use it.

Docker

Docker’s three basic concepts are as follows:

  • Image: Docker Image, which is a complete root file system;
  • Container: The relationship between an image and a Container is like that between classes and instances in object-oriented programming: the image is the static definition, and a container is a running instance of that image. Containers can be created, started, stopped, deleted, paused, and so on (see the sketch after this list).
  • Repository: Think of it as a code control center where images are stored.
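To make the image/container relationship concrete, here is a minimal sketch: one image can back any number of independent containers (the container names web-a and web-b are just illustrative).

# pull one image, then start two independent containers from it
docker pull nginx:stable-alpine
docker run -d --name web-a nginx:stable-alpine
docker run -d --name web-b nginx:stable-alpine
# both containers share the same read-only image layers
docker ps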

Docker adopts the client-server (C/S) architecture pattern and uses a remote API to manage and create Docker containers. Docker containers are created from Docker images; the relationship between containers and images is similar to that between objects and classes in object-oriented programming. To make this easier to understand, I drew a Docker architecture diagram, as follows:

1. Docker basics

1.1 Host Virtualization (VM) and OS Virtualization (Container)

As the comparison shows, the essential difference between the two virtualization technologies is this: host virtualization runs a full guest operating system on top of the host operating system, whereas operating-system virtualization manages containers as processes, and those containers share the operating system kernel with the host.

|                         | Host virtualization                                      | Operating-system virtualization                                   |
| ----------------------- | -------------------------------------------------------- | ----------------------------------------------------------------- |
| Isolation / environment | Strong isolation, independent of the underlying host OS  | Can only run similar operating systems and use similar libraries   |
| Network                 | Low network transmission efficiency, slow startup         | High transmission efficiency, fast startup, fast response          |
| Footprint               | Adds a full guest operating system's footprint            | Small footprint                                                     |
| Security                | Guest systems are independent of the host system          | Riskier, but manageable (via the daemon)                            |

1.2 Kernel technologies behind containerization

  • namespace: namespaces provide each container with the basis for resource isolation, isolating the following attributes:
    • UTS: Each NameSpace has its own host or domain name. You can treat each NameSpace as an independent host.
    • IPC: Each container still uses the process interaction method in the Linux kernel to achieve interprocess communication
    • Mount: The file system of each container is independent
    • Net: The network of each container is isolated
    • User: The user and group IDs of each container are isolated, and each container has its own root user
    • PID: Each container has its own process tree; the container itself is just a group of processes on the physical machine, so processes inside the container are also visible as processes on the host
  • control group: control groups (cgroups) provide resource limits for containers (see the sketch after this list)
    • blkio – Sets input/output limits for block devices such as disks, SSDs, and USB drives
    • cpu – Uses the scheduler to control the CPU access of tasks in the cgroup
    • cpuacct – Automatically generates reports on the CPU used by tasks in the cgroup
    • cpuset – Allocates separate CPUs and memory nodes to tasks in the cgroup
    • devices – Allows or denies access to devices by tasks in the cgroup
    • freezer – Suspends or resumes tasks in the cgroup
    • memory – Sets the memory limit for tasks in the cgroup and automatically generates reports on the memory they use
    • net_cls – Tags network packets with a class identifier (classid) so the Linux traffic controller (tc) can identify packets generated by a specific cgroup
    • ns – The namespace subsystem
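These kernel features are exactly what docker run's flags map onto. A minimal sketch, assuming the nginx:stable-alpine image used later in this article; the container name limited and the limit values are arbitrary:

# cgroup-backed resource limits: half a CPU core and 256 MB of memory
docker run -d --name limited --cpus 0.5 --memory 256m nginx:stable-alpine
# the PID namespace gives the container its own process tree
docker top limited
# one-shot snapshot of the CPU/memory usage enforced by the cgroups
docker stats --no-stream limited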

2. Run your first Docker program

2.1 Installing Docker

Following the official Docker installation documentation, I strongly suggest not doing this on Windows (I have not tried Docker development on Windows; the official site does support it, but I cannot vouch for it). OSX or Linux is recommended; on domestic servers CentOS 7 is a common choice, so this section describes the installation on CentOS 7.

# 1. Clean up older docker installations
yum remove docker \
                  docker-client \
                  docker-client-latest \
                  docker-common \
                  docker-latest \
                  docker-latest-logrotate \
                  docker-logrotate \
                  docker-engine
# 2. Install yum-utils (which provides yum-config-manager) and the dependencies required by Docker
yum install -y yum-utils \
    device-mapper-persistent-data \
    lvm2
# 3. Add yum source
yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo
# 4. List the corresponding installation version
 yum list docker-ce --showduplicates | sort -r
# 5. Installation suggestions Select the version according to the actual situation, do not pursue the latest version
yum install docker-ce-<VERSION_STRING> docker-ce-cli-<VERSION_STRING> containerd.io
# 6. Set boot to start Docker, and start Docker
systemctl enable docker && systemctl start docker
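As an optional sanity check after installation (hello-world is the tiny test image Docker publishes for exactly this purpose):

# confirm that both the client and the daemon respond
docker version
# pull and run a minimal test container, removing it when it exits
docker run --rm hello-world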

2.2 Run the first Docker program

# Check the Docker service status
systemctl status docker
# Run the first application; nginx is the container front-end engineers touch most often
# The nginx stable version here is 1.16.1
docker run -p 80:80 nginx:stable-alpine
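If you ran the command above in the foreground, stop it with Ctrl+C; to check that the container actually serves traffic, you can run it detached instead, roughly like this (the container name first-nginx is just illustrative):

# run nginx in the background with a name so it is easy to clean up
docker run -d -p 80:80 --name first-nginx nginx:stable-alpine
# the default nginx welcome page should answer on port 80
curl -I http://localhost
# stop and remove the test container
docker stop first-nginx && docker rm first-nginx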

3. Basic Docker usage

3.1 Introduction to the docker command

All docker commands can be explored from the docker command line and queried with docker --help:

docker --help

Usage:	docker [OPTIONS] COMMAND

A self-sufficient runtime for containers

Options:
      --config string      Location of client config files (default "/Users/mac/.docker")
  -c, --context string     Name of the context to use to connect to the daemon (overrides DOCKER_HOST env var and default context set with "docker context use")
  -D, --debug              Enable debug mode
  -H, --host list          Daemon socket(s) to connect to
  -l, --log-level string   Set the logging level ("debug"|"info"|"warn"|"error"|"fatal") (default "info")
      --tls                Use TLS; implied by --tlsverify
      --tlscacert string   Trust certs signed only by this CA (default "/Users/mac/.docker/ca.pem")
      --tlscert string     Path to TLS certificate file (default "/Users/mac/.docker/cert.pem")
      --tlskey string      Path to TLS key file (default "/Users/mac/.docker/key.pem")
      --tlsverify          Use TLS and verify the remote
  -v, --version            Print version information and quit

Management Commands:
  builder     Manage builds
  config      Manage Docker configs
  container   Manage containers
  context     Manage contexts
  image       Manage images
  network     Manage networks
  node        Manage Swarm nodes
  plugin      Manage plugins
  secret      Manage Docker secrets
  service     Manage services
  stack       Manage Docker stacks
  swarm       Manage Swarm
  system      Manage Docker
  trust       Manage trust on Docker images
  volume      Manage volumes
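Note the Management Commands section: most of the short commands used in this article also have a longer docker <object> <verb> form, and the two are equivalent, for example:

# legacy shorthand and the management-command form do the same thing
docker ps
docker container ls
docker images
docker image ls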

3.2 Docker images

3.2.1 Common Usage

An image is a read-only template with instructions for creating a Docker container. It is roughly equivalent to a class in object-oriented programming, and a container is the corresponding instance. The common commands are as follows:

# Download image
+ docker pull ubuntu

Using default tag: latest
latest: Pulling from library/ubuntu
423ae2b273f4: Pull complete
de83a2304fa1: Pull complete
f9a83bce3af0: Pull complete
b6b53be908de: Pull complete
Digest: sha256:04d48df82c938587820d7b6006f5071dbbffceb7ca01d2814f81857c631d44df
Status: Downloaded newer image for ubuntu:latest
docker.io/library/ubuntu:latest
# View the image list
+ docker images

REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
ubuntu              latest              72300a873c2c        2 weeks ago         64.2MB
# Export the image as a file to facilitate the use of docker image on non-networked machines
+ docker save  ubuntu -o ubuntu.tar
# Delete the image: the tag is removed first, then each layer is deleted if no other image references it
+ docker rmi ubuntu
Untagged: ubuntu:latest
Deleted: sha256:72300a873c2ca11c70d0c8642177ce76ff69ae04d61a5813ef58d40ff66e3e7c
Deleted: sha256:d3991ad41f89923dac46b632e2b9869067e94fcdffa3ef56cd2d35b26dd9bce7
Deleted: sha256:2e533c5c9cc8936671e2012d79fc6ec6a3c8ed432aa81164289056c71ed5f539
Deleted: sha256:282c79e973cf51d330b99d2a90e6d25863388f66b1433ae5163ded929ea7e64b
Deleted: sha256:cc4590d6a7187ce8879dd8ea931ffaa18bc52a1c1df702c9d538b2f0c927709d
# Import the image from a file (useful on machines without Internet access)
+ docker load -i ubuntu.tar
cc4590d6a718: Loading layer [=====================>]  65.58MB/65.58MB
8c98131d2d1d: Loading layer [=====================>]  991.2kB/991.2kB
03c9b9f537a4: Loading layer [=====================>]  15.87kB/15.87kB
1852b2300972: Loading layer [=====================>]  3.072kB/3.072kB
Loaded image: ubuntu:latest
# Build your own image (more on this in the next section)
docker build -t $image:$tag $DockerfilePath
# Create a new tag for the image, usually before pushing it to another repository
docker tag ubuntu $image:$tag
# Push the image to remote Registry
docker push $image:$tag

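Putting tag and push together, here is a sketch of publishing an image to a private registry (registry.example.com and the repository path are placeholders; log in first if the registry requires authentication):

# authenticate against the registry (prompts for credentials)
docker login registry.example.com
# name the local image under the registry, then push it
docker tag ubuntu:latest registry.example.com/mygroup/ubuntu:latest
docker push registry.example.com/mygroup/ubuntu:latest
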
3.2.2 Customizing an image

Copy-on-write: a Docker image is made of layers. When the file system changes, the affected file is first copied from a read-only layer into the read-write layer; once the changes in that layer are committed, a new layer is stacked on top of the existing ones.
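You can see this layering directly with docker history; for example, on the nginx:stable-alpine image pulled earlier, every instruction that changed the file system shows up as its own layer:

# list the layers an image is built from, newest first
docker history nginx:stable-alpine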

To see what building nginx yourself looks like, here are the steps for installing nginx from source on a physical machine (installing from source maximizes stability and lets you use new features):

# 1. Download the source package and dependencies
yum install pcre-devel zlib-devel openssl-devel gcc make
wget http://nginx.org/download/nginx-1.16.1.tar.gz -P /usr/local/source/
# 2. Set the module that builds nginx
./configure \
        --prefix=/usr/local/nginx \
        --conf-path=/usr/local/nginx/nginx.conf \
        --pid-path=/usr/local/nginx/nginx.pid \
        --with-http_ssl_module \
        --with-pcre \
        --with-http_gzip_static_module
# 3. Compile & install
make && make install
# 4. Start nginx
./nginx

This is actually done in Dockerfile as well.

1. Create a file named Dockerfile

FROM centos:centos7.2.1511
MAINTAINER [email protected]
ADD http://nginx.org/download/nginx-1.16.1.tar.gz /usr/local/source/
RUN ["bash"."-c"."CD /usr/local/source &&\ tar -xf nginx-1.16.1.tar.gz --strip-components 1 && \ yum update -y > /dev/null 2>&1 && \ yum  install -y -q pcre-devel zlib-devel openssl-devel gcc make && \ ./configure --prefix=/usr/local/nginx --with-http_ssl_module --with-pcre --with- http_gzip_static_module && \ make && make install && \ ln -s /usr/local/nginx/sbin/nginx /usr/local/bin/nginx && \ rm -rf /usr/local/source"]
CMD ["nginx"."-g"."daemon off;"]

2. Run the build in the current directory; the build succeeds:

+ docker build -t mynginx:20200311 .
Step 1/4 : FROM centos:centos7.2.1511
 ---> 9aec5c5fe4ba
Step 2/4 : ADD http://nginx.org/download/nginx-1.16.1.tar.gz /usr/local/source/
Downloading [==================================================>]  1.033MB/1.033MB
 ---> ac3b840c5563
Step 3/4 : RUN ["bash", "-c", "cd /usr/local/source && tar -xf nginx-1.16.1.tar.gz --strip-components 1 && yum update -y > /dev/null 2>&1 && yum install -y -q pcre-devel zlib-devel openssl-devel gcc make && ./configure --prefix=/usr/local/nginx --with-http_ssl_module --with-pcre --with-http_gzip_static_module && make && make install && ln -s /usr/local/nginx/sbin/nginx /usr/local/bin/nginx && rm -rf /usr/local/source"]
 ---> 22efc447e0c2
Step 4/4 : CMD ["nginx", "-g", "daemon off;"]
 ---> 8f74bded71e9
Successfully built 8f74bded71e9
Successfully tagged mynginx:20200311

3. Run the container: docker run -d -p 8000:80 --name mynginx-container mynginx:20200311
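A quick check that the container answers, assuming the -p 8000:80 mapping above and that you run this on the server itself:

# the custom-built nginx should now respond on host port 8000
curl -I http://localhost:8000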

Note: real-world nginx image builds are considerably more involved than this; for reference, you can look at the official nginx Dockerfile.

See Dockerfile Reference for more instructions

3.3 Docker containers

As mentioned above, a container is a running instance of an image; the same image can run as different container instances with different commands. The following are common commands for basic container operations.

+ docker run --help
Usage:	docker run [OPTIONS] IMAGE [COMMAND] [ARG...]
Run a command in a new container
# -d runs the container in the background and prints the container ID
-d, --detach  					Run container in background and print container ID
-e, --env list          Set environment variables
--env-file list         Read in a file of environment variables
--rm                    Automatically remove the container when it exits
-v, --volume list       Bind mount a volume
-w, --workdir string    Working directory inside the container
--restart string        Restart policy to apply when a container exits (default "no")
# View running containers
+ docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                NAMES
a5f7f9710db8        mynginx:20200311    "nginx -g 'daemon of…"   59 seconds ago      Up 58 seconds       0.0.0.0:80->80/tcp   mynginx-container
# Open standard input and allocate a pseudo-terminal to get a shell inside the container
docker exec -it  mynginx-container bash
# View the container log
docker logs -f  mynginx-container
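Here is a sketch that combines several of the flags listed above; the environment variable, host path, and container name are all illustrative, and /usr/share/nginx/html is the content directory of the official nginx image:

# detached container with an env var, a read-only bind mount, a restart policy and a port mapping
docker run -d --name static-site \
  -e TZ=Asia/Shanghai \
  -v /data/www:/usr/share/nginx/html:ro \
  --restart unless-stopped \
  -p 8080:80 \
  nginx:stable-alpine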

4. A practical case

1. Preparation

  • A front-end project
  • A machine with Docker installed
  • The images needed for the build, found on [Docker Hub](https://hub.docker.com/): [node](https://hub.docker.com/_/node?tab=tags), [nginx](https://hub.docker.com/_/nginx)
# clone the antd-admin.git project
git clone https://github.com/zuiidea/antd-admin.git
# With only Docker installed, you can build projects for any language in any environment, removing the dependency on the host environment
docker run --network=host --rm -v "$(cd $(dirname .); pwd):/app" -w /app node:10-alpine3.9 sh -c "yarn && yarn build"
  • Prepare the nginx configuration (the app.conf used below) with the server rules
server {
    listen 80;
    server_name _;
    access_log /var/log/nginx/host.access.log main;

    location / {
        add_header Access-Control-Allow-Origin *;
        add_header Access-Control-Allow-Headers X-Requested-With;
        add_header Access-Control-Allow-Methods GET,POST;
        root /app;
        index index.html index.htm;
    }
}
  • Write your own Dockerfile
FROM nginx:stable-alpine
ENV LANG en_US.UTF-8
COPY dist /app
COPY app.conf /etc/nginx/conf.d/default.conf
WORKDIR /app
  • Wrap the whole flow in a build.sh script (typical usage is shown after the script)
#!/bin/bash
current_dir=$(cd $(dirname .); pwd)

function compile(){
  # build the front-end bundle inside a disposable node container
  docker run --network=host --rm -v "$current_dir:/app" -w /app node:10-alpine3.9 sh -c "yarn && yarn build"
}

function package(){
  # make sure the dist directory exists before building the image
  if [ ! -d "$current_dir/dist" ]; then
    compile
  fi
  docker build -t myapp:`date -u +"%Y%m%d"` $current_dir
}

function clean(){
  rm -rf $current_dir/dist
}

case "$1" in
  compile)
    compile
    ;;
  package)
    package
    ;;
  clean)
    clean
    ;;
  *)
    echo "USAGE: $0 package | compile | clean"
esac
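Typical usage of the script above, run from the project root (the image tag is the current UTC date, e.g. myapp:20200311):

# build the front-end bundle inside a node container
bash build.sh compile
# build the Docker image, running compile first if dist/ is missing
bash build.sh package
# remove the dist directory
bash build.sh clean
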
  • Run the container: docker run -d -p 80:80 myapp:20200311

That completes the basic configuration; you can now try deploying a Web application of your own with Docker.

This article only covers the basic configuration of Docker. When time allows, my friends and I will continue to summarize and organize material on Docker networking, Docker volumes, the Docker daemon and other topics, and use a Node.js deployment case as a hands-on example of bringing automated Docker deployment into a real project.

If you want the complete source code of more projects, or want to learn more about H5 games, webpack, node, gulp, CSS3, JavaScript, nodeJS, canvas data visualization and other front-end knowledge and practice, you are welcome to join our technical group via the public account "Interesting Talk Front End" to learn and discuss together and explore the boundaries of the front end.

More recommendations

  • Implementing a CMS full stack project from 0 to 1 based on nodeJS (Part 1)
  • Implement a CMS full stack project from 0 to 1 based on nodeJS (middle) (source included)
  • CMS full stack project Vue and React (part 2)
  • Develop a component library based on VUE from zero to one
  • Build a front-end team component system from 0 to 1 (Advanced Advanced Prerequisite)
  • Hand-write 8 common custom hooks in 10 minutes
  • Master Redux: Developing a Task Management platform (1)
  • Javascript Design Patterns front-end Engineers Need to Know in 15 minutes (with detailed mind maps and source code)
  • A picture shows you how to play vue-Cli3 quickly
  • “Front End Combat Summary” using postMessage to achieve pluggable cross-domain chatbot