With the rise of containerization, Docker has become increasingly common in front-end work. Traditional front-end deployment requires us to build the project into a set of static files, upload them to a server, and configure Nginx; with containerized deployment, every step is a command, and the whole process can be collected into a single script. This article starts from the basics of Docker and walks through its common commands and operations.


Introduction to Docker

Docker is an open source engine that makes it easy to create a lightweight, portable, self-contained container for any application.

The word "docker" means dockworker, and dockworkers handle shipping containers. The great success of the shipping container lies in the standardization of cargo and the complete transport system built around it: on a ship of hundreds of thousands of tons, fully loaded containers of every kind sit side by side without interfering with each other. Containerization is therefore both standardized and space-efficient.

From Docker’s logo we can see that the idea comes from shipping containers. Each application is like a separate container, and each has its own environment: a Python application needs a Python runtime on the server, a Node.js application needs a Node.js runtime, and different environments may conflict with each other. Docker isolates these environments for us.

Some readers will think: isn’t that exactly what virtual machines do? Yes, virtual machines isolate environments very well; we can run macOS or Ubuntu virtual machines on Windows, or a Windows virtual machine on macOS. However, traditional virtualization emulates a full set of hardware and runs a complete operating system on it, with the application processes on top of that system, so one computer can run only a handful of virtual machines.

But Docker uses container technology that is lighter and faster than virtual machines. The application processes in the container run directly on the host kernel, there is no kernel of its own in the container, and there is no hardware virtualization. As a result, containers are much lighter than traditional virtual machines. The following chart compares the differences:

Comparison summary:

Feature            Container                          Virtual machine
Startup time       Seconds                            Minutes
Disk usage         Generally MB                       Generally GB
System overhead    0 ~ 5%                             5 ~ 15%
Performance        Close to native                    Weaker than native
System capacity    Thousands of containers per host   Usually dozens

Advantages of Docker

Docker has the following advantages:

  • More efficient use of system resources
  • Faster startup time
  • Consistent operating environment
  • Continuous delivery and deployment
  • Easier migration
  • Easier maintenance and extension

Docker is usually used in the following scenarios:

  • Automated packaging and publishing of Web applications;
  • Automated testing and continuous integration, release;
  • Deploying and scaling databases or other backend applications in service environments;
  • Build from scratch or extend existing OpenShift or Cloud Foundry platforms to build your own PaaS environment.

Basic concepts

There are three basic concepts in Docker:

  • Image
  • Container
  • Repository

Once we understand these basic concepts, we understand the entire life cycle of Docker.

First, the image. A Docker image is a special file system: besides the programs, libraries, resources, and configuration files the container needs at runtime, it also contains configuration parameters prepared for runtime (anonymous volumes, environment variables, users, and so on). An image contains no dynamic data, and its contents do not change after it is built.

If you have ever installed an operating system, you can think of a Docker image as something like an OS image (an ISO file): it is an ordinary file, and from one image we can install the same system onto many computers (the containers). Every installation starts out identical, but each can be customized afterwards.

Unlike a system image, a Docker image is not packaged into a single file the way an ISO is; it is designed as a layered storage architecture, composed of multiple layers of files rather than one file.

An image is built layer by layer, each layer serving as the foundation for the next. Once a layer is built it never changes again; any change in a later layer happens only in that layer. For example, deleting a file from an earlier layer does not actually remove it; the file is merely marked as deleted in the current layer.

Second, the container. From a programming point of view, image and container are like class and instance: one or more containers can be started from an image. An image is a static definition, while a container is a running entity of that image. Containers can be created, started, stopped, deleted, paused, and so on.

As mentioned earlier, images use layered storage, and so do containers. Every container runs on top of an image, with a storage layer of its own created above it. This read-write layer prepared for the container runtime is called the container storage layer.

The container storage layer has the same life cycle as the container: when the container is deleted, its storage layer dies with it, and any information stored there is lost.

Therefore a container should not write data to its storage layer; the storage layer should stay stateless. All file writes should go to data volumes or bind-mounted host directories, whose reads and writes bypass the container storage layer.

Finally, the repository. After building an image we can run it locally, but to distribute it to other users on the network we need a centralized service to store and distribute images. A repository (hosted on a registry such as Docker Hub) plays this role, similar to GitHub.

A repository is a collection of related images distinguished by tags. For example, ubuntu is a repository name containing images with tags such as 16.04 and 18.04. When pulling an image from a repository we can specify the version in the <repository>:<tag> format, such as ubuntu:18.04; if the tag is omitted, latest is used as the default.
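For example, using the docker pull command that we will meet shortly:

$ docker pull ubuntu:18.04   # pull the image tagged 18.04
$ docker pull ubuntu         # no tag given, equivalent to ubuntu:latest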

Images

As mentioned above, the image is one of Docker’s three basic concepts. Running a container requires the corresponding image locally; if it is missing, Docker downloads it from a remote repository. So let’s look at images first.

Searching for images

We can search for images on Docker Hub:

docker search ubuntu

Search results:

The search list contains the following fields:

  • NAME: the name of the image repository
  • DESCRIPTION: the image description
  • STARS: like stars on GitHub
  • OFFICIAL: whether the image is an official Docker image
  • AUTOMATED: whether the image is built automatically

Pulling images

To get the image, we can use the docker pull command, which has the following format:

docker pull <repository>:<tag>

Again, take ubuntu as an example:

$ docker pull ubuntu:16.04

16.04: Pulling from library/ubuntu
58690f9b18fc: Pull complete
b51569e7c507: Pull complete
da8ef40b9eca: Pull complete
fb15d46c38dc: Pull complete
Digest: sha256:0f71fa8d4d2d4292c3c617fda2b36f6dabe5c8b6e34c3dc5b0d17d4e704bd39c
Status: Downloaded newer image for ubuntu:16.04
docker.io/library/ubuntu:16.04

The last line, docker.io, shows that the image was pulled from the official registry.

The download output illustrates the layered storage we described above: the image is made up of multiple layers, and each layer is downloaded separately rather than as a single file. If a layer has already been downloaded for another image, the output shows Already exists instead. During the download each layer is identified by the first 12 characters of its ID, and after the download the image’s sha256 digest is printed.

Docker registries are divided into the official one and third-party ones; official images are pulled from Docker Hub. To pull from a third-party registry, prefix the repository name with the registry’s service address:

docker pull <registry address>[:<port>]/<repository>:<tag>
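For example, pulling from a hypothetical private registry (the host, port, and repository below are made up for illustration):

$ docker pull registry.example.com:5000/myteam/ubuntu:18.04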

Listing images

With the following command, we can list the images that have been downloaded locally:

$ docker image ls

The following list appears when you run the command:

The list contains the repository name, tag, image ID, creation time, and size. Here we see two mongo images with different tags.

By default, docker image ls lists every image, which becomes inconvenient when there are many local images. To list only some of them, besides piping through the Linux grep command, we can pass a repository name as an argument to the ls command (see the filter sketch after the listing):

$ docker image ls mongo
REPOSITORY   TAG      IMAGE ID       CREATED        SIZE
mongo        latest   dfda7a2cf273   2 months ago   693MB
mongo        4.0      e305b5d51c0a   2 months ago   430MB
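Equivalently, a sketch of the grep route mentioned above, plus Docker’s own --filter flag (available on recent versions):

$ docker image ls | grep mongo
$ docker image ls --filter reference=mongo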

Removing images

To delete a local image, use the image rm command:

$ docker image rm [options] <image1> [<image2> ...]

or its shorthand, the rmi command:

$ docker rmi [options] <image1> [<image2> ...]

<image> can be the image’s short ID, long ID, name, or digest. The IDs listed by docker image ls are already short IDs, and we can shorten them further: the first three characters are enough, as long as they are unambiguous. For example, to delete mongo:4.0:

$ docker rmi e30
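We could just as well delete it by name, assuming the listing above:

$ docker rmi mongo:4.0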

Building images

Besides using official images, we can build our own on top of existing ones, such as node or nginx. Images are built from a Dockerfile, a text file containing the instructions needed to build the image.

Create a new Dockerfile in a blank directory:

mkdir mynginx
cd mynginx/
touch Dockerfile

We write the following to the Dockerfile:

FROM nginx
RUN echo '<h1>Hello, This is My Nginx</h1>' > /usr/share/nginx/html/index.html

This Dockerfile is very simple, with just two instructions, FROM and RUN; we will cover Dockerfile instructions in detail in a later article. We build the image with the build command, which has the following format:

docker build [options] <context path | URL | ->

Therefore, we execute the command in the directory where the Dockerfile resides:

$ docker build -t mynginx:v3 .

When we run the command, we can see that the image is built layer by layer, following the steps in the Dockerfile:

Once the build succeeds, listing all images shows the mynginx image we just built. Notice the `.` at the end of the build command: it denotes the current directory, and omitting it produces an error. This is the context path. What is the context path for? To understand it, we first need to understand Docker’s architecture.

Docker is a typical client/server (C/S) application, split into the Docker client (where we usually type docker commands) and the Docker server (the Docker daemon). The client talks to the server through a REST API: every instruction issued by the client is converted into a REST API call and sent to the server, which processes the request and returns a response.

So although it looks as if we run Docker operations on our own machine, they are all carried out by the Docker server: building images, creating containers, and running containers are the server’s work, while the Docker client only sends instructions.
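We can see this split with the docker version command, which reports the client and the server (engine) separately; abridged output below, and version numbers will vary by installation:

$ docker version
Client:
 Version:           20.10.21
 API version:       1.41
 ...
Server: Docker Engine - Community
 Engine:
  Version:          20.10.21
  ...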

Understanding Docker’s architecture will help you understand the working principle of Docker’s image construction. Its process is roughly as follows:

  1. Run the build command
  2. The Docker client will package all the files in the context path specified after the build command into a tar package and send it to the Docker server.
  3. The Docker server receives the tar package from the client, unpacks it, and builds the image layer by layer according to the instructions in the Dockerfile.

So the context path is essentially the directory, from the server’s point of view, that the Dockerfile instructions operate on. For example, we often need to copy code into the image, so we write:

COPY ./package.json /app/
Copy the code

The package.json copied here is not necessarily a file in the directory where docker build was run, nor one next to the Dockerfile; it is the package.json inside the context directory passed to the docker build command.
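Because the client uploads the whole context to the server, it pays to keep the context small. Files can be excluded with a .dockerignore file at the root of the context; a minimal sketch with typical front-end entries:

# .dockerignore
node_modules
dist
.git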

Containers

Having introduced images, we come to the third core Docker concept: containers. Containers are run-time instances of images, and we can start one or more containers from an image.

Container management includes creating, starting, stopping, entering, importing, exporting, and deleting containers. Let’s take a look at the specific commands and effects of each operation.

Creating and starting containers

The command to create and start a container is docker run. It can be followed by a long list of options, but the basic syntax is:

$ docker run [options] <image> [command] [args...]

It can take many options; some common ones are listed below (a combined example follows the list):

  • -d: run the container in the background
  • -t: allocate a pseudo-terminal; usually used together with -i
  • -i: run the container in interactive mode; usually used together with -t
  • -P: publish all exposed container ports to random host ports
  • -p: map a specific host port to a container port
  • --name: assign a name to the container
  • -e: set an environment variable
  • --dns: specify the DNS server the container uses
  • -m: set the container’s maximum memory usage
  • --net="bridge": specify the container’s network mode; the four types bridge/host/none/container: are supported
  • --link: link to another container
  • -v: bind mount a volume
  • --rm: remove the container automatically when it exits
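As a sketch tying several of these options together (the container name, host port, and environment variable are illustrative):

$ docker run -d --name myweb -p 8080:80 -e TZ=Asia/Shanghai nginx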

We create a Hello World container:

$ docker run hello-world

This container exits as soon as it has printed its message, so we cannot interact with it. Adding the -it option (short for -i and -t) makes Docker allocate a terminal for the container:

$ docker run -it ubuntu:18.04 /bin/bash
root@fdb133227c9a:/# pwd
root@fdb133227c9a:/# ls
root@fdb133227c9a:/# exit

We can now work inside the container and leave the terminal with the exit command or Ctrl+D. If we check the running containers after exiting, this one no longer appears.

We usually want the container to run in the background, so we add -d:

$ docker run -itd ubuntu:18.04 /bin/bash
ad4d11b6d3b6a2a37fc702345a09fa0a5671f5b3943def7963994535e8600f7b

This time no shell is attached; instead the command prints a long string of letters and digits, the container’s unique ID. With the ps command we can check the container’s status and see it quietly running in the background:
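A sketch of that output (the name in the last column is auto-generated by Docker and illustrative):

$ docker ps
CONTAINER ID   IMAGE          COMMAND       CREATED         STATUS         PORTS     NAMES
ad4d11b6d3b6   ubuntu:18.04   "/bin/bash"   2 minutes ago   Up 2 minutes             nice_wilson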

When using the run command to create a container, Docker does the following in the background:

  • Checks whether the specified image exists locally, and downloads it from the registry if not
  • Creates and starts a container from the image
  • Allocates a file system and mounts a read-write layer on top of the read-only image layers
  • Bridges a virtual interface into the container from the bridge interface configured on the host
  • Assigns the container an IP address from the address pool
  • Executes the user-specified application
  • Terminates the container once the application exits

Stopping containers

We can stop a container with the stop command; a container also terminates on its own when its application exits or errors out. Find the container’s short ID with the ps command, then stop it:

$ docker stop ad4d11b6d3b6

Terminated containers no longer appear in docker ps; add the -a option (for "all") to see them, with STATUS now Exited:

$ docker ps -a
CONTAINER ID   IMAGE          COMMAND       CREATED       STATUS                     PORTS
ad4d11b6d3b6   ubuntu:16.04   "/bin/bash"   2 hours ago   Exited (0) 2 minutes ago

A terminated container is not gone for good; we can start it again with the docker restart command.
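For example, reusing the container ID from above (docker restart echoes the ID on success):

$ docker restart ad4d11b6d3b6
ad4d11b6d3b6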

Entering a container

Sometimes we need to enter a container to perform operations, such as entering an Nginx container to do a graceful reload. Both docker attach and docker exec can enter a container, but exec is recommended.

Let’s first look at using the attach command:

$ docker attach ad4d11b6d3b6
root@ad4d11b6d3b6:/# exit

Exiting an attached terminal stops the container; exiting an exec session does not.

With only the -i flag, no pseudo-terminal is allocated, so there is no familiar Linux prompt, but command output is still shown. With both -i and -t, we get the familiar Linux prompt:

$ docker exec -i ad4d11b6d3b6 bash
ls
bin
boot
dev
etc
home
lib
pwd
/

$ docker exec -it ad4d11b6d3b6 bash
root@ad4d11b6d3b6:/# exit

Note that the container we entered needs to be in a running state. If not, an error will be reported:

Error response from daemon: Container ad4d11b6d3b6 is not running

Viewing container logs

We often need to monitor a container as it runs: view its logs, check for errors, and so on. The logs command fetches a container’s logs.

$ docker logs ad4d11b6d3b6

It also supports the following parameters:

  • -f: follow the log output
  • --since: show all logs since a given start time
  • -t: show timestamps
  • --tail: show only the latest N lines of the log

docker logs prints everything since the container started; for a long-running container that is a lot of output. Add --tail to show only the most recent lines:

# list the 10 most recent logs
$ docker logs --tail=10 ad4d11b6d3b6
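A sketch combining several of these options: follow new output with timestamps, starting from the last 20 lines:

$ docker logs -f -t --tail=20 ad4d11b6d3b6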

Inspecting containers

For an existing container, inspect shows its low-level details: container ID, creation time, running state, startup parameters, directory mounts, network configuration, and so on. The same command also works on images. Its format is:

$ docker inspect [options] <container|image> [<container|image> ...]

Inspect supports the following options:

  • -f: format the output using a Go template
  • -s: display total file sizes
  • --type: return JSON for the specified type

When run, basic container information is displayed in JSON format:

Such a large block of text makes it hard to find the information we need; besides filtering with grep, we can use the -f argument:

# Get the container name
$ docker inspect -f '{{.Name}}' <container ID>
# Get the container's mount information
$ docker inspect -f '{{.Mounts}}' <container ID>
# Get the container's network settings
$ docker inspect -f '{{.NetworkSettings}}' <container ID>
# Get the IP address of the container
$ docker inspect -f '{{.NetworkSettings.IPAddress}}' <container ID>

Removing containers

If a container is no longer used, we can use the rm command to delete it:

$ docker rm ad4d11b6d3b6

To remove a running container, add the -f argument:

$ docker rm -f ad4d11b6d3b6

Data management

As mentioned above, containers should stay stateless: they can be deleted at any time without keeping records. In practice, though, we often run containers that must retain data, such as mysql and mongodb, whose data needs to persist beyond the container; or we need to share data between containers. This is container data management, and there are two main approaches:

  • Data volumes
  • Bind mounts (mounting a host directory)

Data volumes

A data volume is a special directory that can be used by one or more containers. It bypasses the union file system (UFS) and provides several useful properties:

  • Data volumes can be shared and reused between containers
  • Changes to data volumes take effect immediately
  • Data volume updates do not affect mirroring
  • The data volume will always exist by default, even if the container is deleted

First we create a data volume:

$ docker volume create my-vol
my-vol

We can list all our local data volumes by ls:

$ docker volume ls
DRIVER    VOLUME NAME
local     my-vol

The inspect command can also view the details of our data volume:

$ docker volume inspect my-vol
[
    {
        "CreatedAt": "",
        "Driver": "local",
        "Labels": {},
        "Mountpoint": "/data/programs/docker/volumes/my-vol/_data",
        "Name": "my-vol",
        "Options": {},
        "Scope": "local"
    }
]

When starting a container, use --mount to mount a data volume onto a directory in the container (a single container can have multiple mounts):

$ docker run -d -P --name web --mount source=my-vol,target=/usr/share/nginx/html nginx
# via the -v abbreviation
$ docker run -d -P --name web -v my-vol:/usr/share/nginx/html nginx

We use the inspect command above to view the container’s mount information:

$ docker inspect -f "{{.Mounts}}" web
[{volume my-vol /data/programs/docker/volumes/my-vol/_data /usr/share/nginx/html local z true }]

Data volumes are designed to persist data, and their life cycle is independent of the container: deleting the container does not delete its volumes, and no garbage collector cleans up volumes that no container references. We can use docker rm -v to remove a container’s volumes along with it, or delete a data volume manually:

$ docker volume rm my-vol
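The other route, as a sketch: removing a container together with its volumes (note that -v removes the container’s anonymous volumes; a named volume such as my-vol still needs the explicit command above):

$ docker rm -v web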

Ownerless data volumes can take up a lot of space; to clean them up, use the following command (with caution!):

$ docker volume prune

Mounting host directories

Notice that the directories backing the data volumes above all live under Docker’s installation path, which is inconvenient to maintain. We can instead mount a directory of our own choosing directly:

$ docker run -d -P --name web --mount type=bind,source=/home/nginx,target=/usr/share/nginx/html nginx
# via the -v abbreviation
$ docker run -d -P --name web -v /home/nginx:/usr/share/nginx/html nginx

The path of the mounted local directory must be an absolute path, not a relative path.

If the local path does not exist, the -v form creates it automatically, while the --mount form reports an error instead.

The mounted host directory is read-write by default; appending :ro (or readonly with --mount) makes it read-only inside the container:

$ docker run -d -P --name web -v /home/nginx:/usr/share/nginx/html:ro nginx

With the mount read-only, any attempt to modify or create a file under /usr/share/nginx/html inside the container reports an error.
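A sketch of what that looks like (the file name is illustrative):

$ docker exec -it web touch /usr/share/nginx/html/new.html
touch: cannot touch '/usr/share/nginx/html/new.html': Read-only file system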

With the host directory mounted this way, we can simply place our files in /home/nginx on the host and the Nginx container will serve them directly.
