Today I will continue learning Docker. The learning material is Kuangshen's (狂神) tutorial on Bilibili.

1. Principle of image loading

Docker uses UnionFS (a union file system). When you pull an image you will see several files being downloaded at once; those are the image's layers. UnionFS is a layered, lightweight, high-performance file system. It is similar to Git in that every modification is committed and recorded as a layer. It allows multiple different directories to be mounted into the same virtual file system. This is why images can share files and inherit from one another.

1.1 Why is the Linux system downloaded from Docker so small?

Because it is a stripped-down version: a lot of unnecessary things are removed, leaving only the most basic tools. It doesn't even ship its own kernel; it uses the host's kernel.

1.2 Layering Principle

Each image is divided into several layers. Every image starts from a base layer, and each new piece of the environment adds another layer on top. Say an image uses Linux, so there is a Linux layer; then the image needs Python, so a Python layer is added, and so on. If another image later needs Python, there is no need to download a new Python package as long as the version stays the same: the existing layer is reused.
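The layer reuse described above can be seen directly with docker history. A small sketch, assuming the docker daemon is available (the image name python:3.12-slim is only an example, not from the tutorial):

```shell
# Pull an image and list its layers; layers already present locally
# (e.g. a base layer shared with another image) are not downloaded again.
docker pull python:3.12-slim
docker history python:3.12-slim
```

Each line of the docker history output corresponds to one layer and the instruction that created it.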

1.3 Container Running

The image file is read-only and immutable. So why can a container instance change? Because running a container adds another layer, the container layer, on top of the image layers. All of our operations happen in this layer; the image file itself is not affected.

1.4 Commit

Commit is similar to Git: you can merge your modified container (image layers plus container layer) into a new image on your own host (note that it is not uploaded to a Registry). Later it can be copied to others or pushed to a Registry.
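As a sketch of that workflow, assuming docker is available (the container and image names here are placeholders, not from the tutorial):

```shell
# Merge the container's current state (image layers + container layer) into a new local image
docker commit -m "installed my tools" -a "author" my-container my-image:1.0
# The new image exists only on this host until it is pushed to a Registry
docker images
docker push my-image:1.0   # optional: upload to a Registry you are logged in to
```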

2. Container data volume

If data lives only inside a container, it is lost as soon as the container is deleted. Hence the need for data persistence: we need to store data somewhere outside the container and let containers share it. This technique is called a data volume. It mounts (or maps) a directory inside the container to a directory on the host. It is a synchronization mechanism: when the contents of the mounted directory change in the container, the change is synchronized to the host directory. Functions of data volumes:

  1. Persistence
  2. Data synchronization
  3. Data sharing between containers

2.1 Data Volume Usage

As an argument to docker run:

```shell
$ docker run -v <host directory>:<container directory> -it ubuntu /bin/bash
```

A data volume is a two-way synchronization: changing data in the container changes the corresponding host directory, and vice versa. Even deleting data inside the container changes the host directory accordingly. However, if the container itself is deleted, the data on the host is not deleted but retained. This is what data persistence means.
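The behavior above can be verified with a short experiment, assuming the docker daemon is available (the paths and names are illustrative):

```shell
# Start a container with ~/Desktop/test mounted at /home
docker run -d --name sync-test -v ~/Desktop/test:/home ubuntu sleep infinity
# Create a file inside the container...
docker exec sync-test touch /home/hello.txt
ls ~/Desktop/test            # ...and it appears on the host
# Delete the container entirely
docker rm -f sync-test
ls ~/Desktop/test            # hello.txt is still there: persistence
```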

2.2 Exercise: mysql Configuration

Before using an image, it is a good idea to consult its official documentation on Docker Hub to learn how to configure it. For example, mysql can be configured as follows:

```shell
$ docker run -d -p 3310:3306 \
    -v ~/Desktop/test/mysql/conf:/etc/mysql/conf.d \
    -v ~/Desktop/test/mysql/data:/var/lib/mysql \
    -e MYSQL_ROOT_PASSWORD=123456 \
    --name="mysql-test-volume" mysql
```
  • -d runs the container in the background
  • -p maps ports: host port 3310 to the container's mysql port 3306
  • -v mounts a volume; it can be repeated to synchronize multiple directories
  • -e sets configuration via environment variables; check the official parameter names on the hub. Here it sets the mysql root password
  • --name gives the container instance a name

Note that --name is not the name of a mysql database; it names the container running the mysql service. No database has been created yet.
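To actually use the service, you can enter the container. A sketch, assuming the container started by the command above is running:

```shell
# Open a mysql client inside the container
docker exec -it mysql-test-volume mysql -uroot -p123456
# Or connect from the host through the mapped port (requires a local mysql client)
mysql -h 127.0.0.1 -P 3310 -uroot -p123456
```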

2.3 Named mount and Anonymous Mount

If the -v option does not specify a host path and only gives the directory inside the container, the mount is anonymous or named. The difference is whether a volume name is given.

```shell
# Regular bind mount (host path before the colon)
$ docker run -v /home/test:/etc/nginx nginx
# Anonymous mount (no colon; only the container path)
$ docker run -v /etc/nginx nginx
# Named mount (a volume name before the colon instead of a host path)
$ docker run -v juming:/etc/nginx nginx
```

Note the difference between a named mount and a regular host path: a host path starts with /, while a volume name is just a name without /.

So where do named and anonymous mounts actually live? In Docker's working directory. More precisely, under /var/lib/docker/volumes/<volume name>/_data.

Named mounts are generally used more.
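Both kinds of volumes can be inspected, assuming docker is available (juming is the volume name from the example above):

```shell
docker volume ls               # named volumes show their name; anonymous ones show a long hash
docker volume inspect juming   # the "Mountpoint" field shows the .../volumes/juming/_data path
```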

2.4 ro and rw

A permission can be appended after the container path, as in docker run -v juming:/etc/nginx:ro nginx or docker run -v juming:/etc/nginx:rw nginx. ro means read-only and rw read-write; the default is rw. With ro, the contents of that container directory cannot be changed from inside the container; changes can only be made on the host and synchronized into the container.

2.5 Data Volume Container

As mentioned earlier, data volumes let multiple containers share data. For this we use the --volumes-from parameter.

```shell
# Start by creating a container normally
$ docker run -v ~/Desktop/test:/home --name="container01" -it ubuntu /bin/bash
# The second container then inherits the volumes of the first
$ docker run --volumes-from container01 --name="container02" -it ubuntu /bin/bash
# The third container can inherit from either container01 or container02
$ docker run --volumes-from container02 --name="container03" -it ubuntu /bin/bash
```

In the end, all of these containers are mounted to the same host directory, all of the data is shared, and no matter where the data changes, all of the other containers and hosts see the changes. Deleting one of the containers does not delete the shared data. Even if all containers are deleted, the data will not be deleted because it has been persisted locally.

--volumes-from container01 has the same effect as -v ~/Desktop/test:/home; you just don't have to write the path, it is inherited.

3. Dockerfile

The basics were covered in the previous article. The main Dockerfile instructions:

```dockerfile
FROM ubuntu    # Base image; everything is built on top of this
MAINTAINER     # Maintainer info: who wrote the image
RUN            # Commands to run while building the image, i.e. how the image is built
ADD            # Files to add to the image; archives are decompressed automatically
               # (e.g. start FROM ubuntu and ADD a Python tarball)
WORKDIR        # The image's working directory: the initial directory after entering
               # the container; you can set it explicitly
VOLUME         # Set the container's volumes
EXPOSE         # Declare the exposed ports
CMD            # Command to run when the container starts. Only the last CMD takes effect,
               # and it is replaced by any arguments passed to docker run.
               # Think of it as the default command.
ENTRYPOINT     # Also a start-up command, and again only the last one takes effect,
               # but arguments passed to docker run are appended to it.
               # With ENTRYPOINT ["ls", "-a"], running docker run <image> -l
               # executes ls -al; CMD cannot do this (its command would be replaced by -l)
ONBUILD        # Executed when another Dockerfile builds an image FROM this one
COPY           # Similar to ADD, but copies files into the image as-is
ENV            # Set environment variables at build time
```

Note: 1. For both CMD and ENTRYPOINT, only the last line takes effect. To execute more than one command, either chain them with && or write a script and run it from CMD/ENTRYPOINT. 2. Generally, put the commands that must always run in ENTRYPOINT and the default, overridable command in CMD.
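A minimal Dockerfile sketch of the ENTRYPOINT/CMD split described above (the contents are illustrative, not from the tutorial):

```dockerfile
FROM ubuntu
# The command that must always run goes in ENTRYPOINT
ENTRYPOINT ["ls"]
# CMD supplies overridable default arguments
CMD ["-a"]
```

With this image, docker run <image> executes ls -a, while docker run <image> -al executes ls -al: the docker run arguments replace CMD but are appended to ENTRYPOINT.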

The command to build the image is:

```shell
# The final "." is the build context (the directory sent to the docker daemon)
$ docker build -f <Dockerfile path> -t <name:tag> .
```
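An end-to-end sketch, assuming the docker daemon is available (the path and image name are illustrative):

```shell
# Write a tiny Dockerfile, build it, and run the result
mkdir -p /tmp/docker-demo && cd /tmp/docker-demo
cat > Dockerfile <<'EOF'
FROM ubuntu
CMD ["echo", "built from a Dockerfile"]
EOF
docker build -f ./Dockerfile -t demo:latest .
docker run demo:latest
```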