Since its release in 2013, Docker has been widely touted as a potential game-changer for the software industry.

However, many people are not clear about what Docker is, what problems it is supposed to solve, and where the benefits lie. This article explains Docker in detail, with easy-to-understand examples that show how to use it in your daily development.

1. The environment configuration problem

One of the biggest hassles of software development is environment configuration. Users' computers all differ, so how do you know your software will run on those machines?

Users must get two things right: the operating system settings and the installation of various libraries and components. Only when both are correct can the software run. To install a Python application, for example, the computer must have a Python interpreter, all of the application's dependencies, and possibly some correctly configured environment variables.

If some older modules are incompatible with the current environment, that's a problem. Developers often say, "It works on my machine," implying that it probably won't work on other machines.

Environment configuration is such a hassle that switching to a new machine means starting all over again, which is time-consuming. Many people have wondered whether the problem can be solved at the root: can software be installed together with its environment? That is, installing the software would also install an exact copy of its original environment.

2. Virtual machines

A virtual machine is one way to ship software together with its environment. It can run one operating system inside another, such as Linux inside Windows. The application is unaware of this, because the virtual machine looks exactly like a real system, while to the underlying host the virtual machine is just an ordinary file that can be deleted when no longer needed, with no impact on the rest of the system.

Although virtual machines let users restore the original software environment, this approach has several drawbacks.

(1) Heavy resource usage

A virtual machine takes up memory and hard disk space exclusively. While it is running, other programs cannot use those resources. Even if the application inside the virtual machine actually uses only 1 MB of memory, the virtual machine still needs several hundred MB of memory just to run.

(2) Many redundant steps

A VM is a complete operating system. Some system-level operations, such as user login, cannot be skipped.

(3) Slow start

Starting a virtual machine takes as long as booting an operating system; it may be a few minutes before the application can actually run.

3. Linux containers

Because of these shortcomings of virtual machines, Linux has developed another virtualization technology: Linux Containers, or LXC.

Rather than emulating a complete operating system, the Linux container isolates processes. In other words, there is a protective layer over the normal process. For the process inside the container, its access to various resources is virtual, thus achieving isolation from the underlying system.

Because containers are process-level, they have many advantages over virtual machines.

(1) Fast start

An application inside a container is simply a process on the underlying system, not a process inside a virtual machine. So starting a container is like starting a process on the machine rather than booting an operating system, which is much faster.

(2) Low resource usage

A container occupies only the resources it needs and takes nothing it does not use. A virtual machine, being a complete operating system, inevitably takes up resources whether they are used or not. In addition, multiple containers can share resources, whereas each virtual machine has exclusive resources.

(3) Small size

A container contains only the components used, whereas a virtual machine is a package of the entire operating system, so a container file is much smaller than a virtual machine file.

In short, containers are a bit like lightweight virtual machines that provide a virtualized environment at a much lower cost.

4. What is Docker?

Docker is a wrapper around Linux containers that provides an easy-to-use interface for working with them. It is currently the most popular Linux container solution.

Docker packages the application and its dependencies in a single file. Running this file generates a virtual container. Programs run in this virtual container as if they were running on a real physical machine. With Docker you don’t have to worry about the environment.

Overall, Docker's interface is quite simple: users can easily create and use containers and put their own applications into them. Containers can also be versioned, copied, shared, and modified, just like ordinary code.

5. The purpose of Docker

The main uses of Docker currently fall into three categories.

(1) Provide a disposable environment. For example, testing other people’s software locally, and providing unit testing and build environments for continuous integration.

(2) Provide flexible cloud services. Because Docker containers can be started and stopped at any time, they are well suited to dynamic scaling up and down.

(3) Build a microservice architecture. With multiple containers, a single machine can run multiple services, so a microservice architecture can be simulated locally, as sketched below.
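
As a rough sketch of that third use case (purely hypothetical: the placeholder image names and the -d and --name flags are illustrative and not taken from this article), two services could run side by side on one machine like this:

# Hypothetical sketch: run two services as separate containers on a single machine.
# -d runs each container in the background; --name gives it a recognizable name.
$ docker container run -d --name web [webImageName]
$ docker container run -d --name database [databaseImageName]
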

6. Docker installation

Docker is an open source commercial product available in two versions: Community Edition (CE) and Enterprise Edition (EE). The Enterprise Edition includes some paid services that individual developers generally don't use. The following introduction covers the Community Edition.

For details about how to install Docker CE, see the official documentation.

  • Mac
  • Windows
  • Ubuntu
  • Debian
  • CentOS
  • Fedora
  • Other Linux distributions

After the installation is complete, run the following command to verify that the installation is successful.

$ docker version
# or
$ docker info

Docker requires the user to have sudo privileges. To avoid typing sudo every time, you can add the user to the docker user group (official documentation).


$ sudo usermod -aG docker $USER

Docker uses a client-server architecture. A Docker service (daemon) must be running on the host before docker commands will work. If the service is not started, you can start it with the following command (official documentation).

$ sudo systemctl start docker

7. Image file

Docker packages the application and its dependencies inside an image file. Only from this file can the Docker container be generated. The image file can be thought of as a template for the container. Docker generates instances of containers from the image file. The same image file can generate multiple container instances running at the same time.
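
As a hypothetical sketch of that last point (the ubuntu image and the -d, --name and sleep arguments are illustrative choices, not taken from this article), the same image can back several running containers at once:

# Hypothetical sketch: start two containers from the same image, then list them.
# -d runs each container in the background; sleep simply keeps the process alive.
$ docker container run -d --name instance1 ubuntu sleep 300
$ docker container run -d --name instance2 ubuntu sleep 300
$ docker container ls
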

An image is a binary file. In practice, an image is usually built by inheriting from another image and adding some customization. For example, you can add an Apache server to an Ubuntu image to create your own image.

# List all image files on this machine
$ docker image ls

# Delete an image file
$ docker image rm [imageName]

Image files are universal, and can be copied from one machine to another. In general, to save time, we should try to use image files made by others rather than making them ourselves. Even if you want to customize it, it should be based on someone else’s image file, not from scratch.

To make sharing easier, image files can be uploaded to an online repository once they are made. Docker Hub is the most important and most commonly used image repository. It is also possible to sell your own image files.
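
You can also search that repository from the command line before deciding which image to build on; a hypothetical example (any keyword would do):

# Hypothetical example: search Docker Hub for existing images matching a keyword
$ docker search node
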

8. Example: hello world

Next, let’s get a feel for Docker through the simplest image file “Hello World”.

First, run the following command to grab the image file from the repository to local.


$ docker image pull library/hello-world

In the above code, docker image pull is the command to grab the image file. library/hello-world is the location of the image file in the repository, where library is the group the image file belongs to, and hello-world is the name of the image file.

Since the image file provided by Docker is stored in the library group, it is the default group and can be omitted. Therefore, the command above can be written as follows.


$ docker image pull hello-world

After successfully fetching, you can see the image file on the machine.


$ docker image ls

Now, run the image file.


$ docker container run hello-world

The docker container run command generates a running container instance from the image file.

Note that the docker container run command fetches image files automatically: if the image is not found locally, it is pulled from the repository. Therefore, the docker image pull command above is not a required step.

If it runs successfully, you should read the following output on the screen.

$ docker container run hello-world

Hello from Docker!
This message shows that your installation appears to be working correctly.

...

After printing this message, hello-world stops running and the container terminates automatically.

Some containers do not terminate automatically because they provide services. For example, if you install and run an Ubuntu image, you can experience the Ubuntu system from the command line.


$ docker container run -it ubuntu bash

For containers that do not terminate automatically, you must terminate them manually with the docker container kill command.


$ docker container kill [containerID]

9. Container files

The image file generates a container instance, and that instance is itself a file, called a container file. In other words, once a container is generated, two files exist at the same time: the image file and the container file. Closing the container does not delete the container file; the container just stops running.

# List the containers currently running on this machine
$ docker container ls

# List all containers, including those that have stopped running
$ docker container ls --all

The output of the above command includes the container ID. This ID is needed in many places, for example by the docker container kill command used in the previous section to terminate a running container.

Container files that are terminated still occupy disk space. You can run the docker container rm command to delete them.


$ docker container rm [containerID]

After running the above command, the docker container ls --all command will show that the deleted container file has disappeared.

10. Dockerfile file

Now that you know how to use an image file, the next question is: how do you generate one? If you want to distribute your own software, you have to make your own image file.

This requires a Dockerfile. It is a text file used to configure the image; Docker generates the binary image file from it.

Here is an example of how to write a Dockerfile file.

11. Example: Make your own Docker container

Here I will use the koa-demos project as an example to show how to write a Dockerfile that lets users run the Koa framework in a Docker container.

To prepare, download the source code.


$ git clone https://github.com/ruanyf/koa-demos.git
$ cd koa-demos

11.1 Writing a Dockerfile File

First, in the root directory of the project, create a new text file named .dockerignore and write the following.


.git
node_modules
npm-debug.log

The code above says that these three paths should be excluded and not packaged into the image file. If you have no paths to exclude, you don't need to create this file.

Then, in the root directory of the project, create a new text file Dockerfile and write the following.

FROM node:8.4
COPY . /app
WORKDIR /app
RUN npm install --registry=https://registry.npm.taobao.org
EXPOSE 3000

The above code consists of five lines with the following meanings.

  • FROM node:8.4: The image inherits from the official node image; the part after the colon is the tag 8.4, that is, version 8.4 of Node.
  • COPY . /app: Copies all files in the current directory (except the paths excluded by .dockerignore) into the /app directory of the image.
  • WORKDIR /app: Sets the working directory for subsequent instructions to /app.
  • RUN npm install: Runs the npm install command in the /app directory to install the dependencies. Note that after installation, all the dependencies are packaged into the image file.
  • EXPOSE 3000: Exposes container port 3000 so that external connections to this port are allowed.

11.2 Creating an Image File

Once you have the Dockerfile file, you can create an image file using the docker image build command.

$ docker image build -t koa-demo .
# or
$ docker image build -t koa-demo:0.0.1 .

In the code above, the -t argument specifies the name of the image file; a tag can be specified after a colon. If no tag is given, the default is latest. The final dot is the path where the Dockerfile is located; in the example above that is the current directory, hence the dot.

If it runs successfully, you can see the newly generated image file koa-demo.


$ docker image ls

11.3 Generating a Container

The docker container run command generates containers from the image file.

$ docker container run -p 8000:3000 -it koa-demo /bin/bash
# or
$ docker container run -p 8000:3000 -it koa-demo:0.0.1 /bin/bash

The meanings of the parameters in the preceding command are as follows:

  • -p parameter: Maps container port 3000 to port 8000 on the host machine.
  • -it parameter: Maps the container's Shell to the current Shell, so the commands you type in your local terminal are passed into the container.
  • koa-demo:0.0.1: The name of the image file (if the image has a tag, it must be provided as well; the default is the latest tag).
  • /bin/bash: The first command executed inside the container after it starts. Here it starts Bash to ensure that the user can use the Shell.

If all is well, running the above command returns a command line prompt.


root@66d80f4aaf1e:/app#

This means that you are already inside the container, and the prompt returned is the Shell prompt inside the container. Execute the following command.


root@66d80f4aaf1e:/app# node demos/01.js

At this point, the Koa framework is up and running. Open a browser on your machine and visit http://127.0.0.1:8000. The page shows "Not Found" because this demo does not define any routes.

In this example, the Node process runs in the virtual environment of the Docker container, so the file system and network interfaces the process sees are virtual, isolated from the host's file system and network interfaces. That is why the port mapping between the container and the host machine has to be defined.
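
To see the port mapping from the host side, you could also send an HTTP request from another terminal on the host machine (a hypothetical check: curl is assumed to be installed, and the exact response depends on the demo):

# Hypothetical check from a host terminal (not inside the container):
# the app listens on the container's port 3000, but the host reaches it through port 8000.
$ curl -i http://127.0.0.1:8000
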

Now, at the container's command line, press Ctrl + C to stop the Node process, then press Ctrl + D (or type exit) to leave the container. Alternatively, you can terminate the container with docker container kill.

$ docker container kill [containerID]

The container does not disappear after it is stopped. Delete the container file with the following command.

$ docker container rm [containerID]

You can also use the --rm parameter of the docker container run command to automatically delete the container file when the container stops running.


$ docker container run --rm -p 8000:3000 -it koa-demo /bin/bash

11.4 CMD command

In the example in the previous section, after the container is started, you need to manually enter the command node demos/01.js. We can write this command in the Dockerfile so that when the container is started, the command is already executed without having to type it manually.

FROM node:8.4
COPY . /app
WORKDIR /app
RUN npm install --registry=https://registry.npm.taobao.org
EXPOSE 3000
CMD node demos/01.js

CMD node demos/01.js is automatically executed when the container is started.

You may ask, what is the difference between RUN and CMD? Simply put, the RUN command is executed during the construction phase of the image file, and the execution results are packaged into the image file. CMD is executed after the container is started. Also, a Dockerfile can contain multiple RUN commands, but only one CMD command.

Note that once CMD is specified, you can no longer append a command to docker container run (such as the /bin/bash used earlier); otherwise it would override CMD. Now you can start the container with the following command.

$ docker container run --rm -p 8000:3000 -it koa-demo:0.0.1
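
As a hypothetical illustration of the override behavior described above (assuming the koa-demo:0.0.1 image was built from the Dockerfile that contains the CMD line), appending a command makes Docker run it instead of the CMD instruction:

# Hypothetical illustration: the appended command overrides CMD,
# so node demos/01.js is not executed here; node --version runs instead.
$ docker container run --rm koa-demo:0.0.1 node --version
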

11.5 Publishing the Image File

After the container runs successfully, the validity of the image file is confirmed. At this point, we can consider sharing the image file online for others to use.

First, go to hub.docker.com or cloud.docker.com and sign up for an account. Then, log in with the following command.


$ docker login

Next, tag the local image with your username and version.

$ docker image tag [imageName] [username]/[repository]:[tag]
# for example
$ docker image tag koa-demos:0.0.1 ruanyf/koa-demos:0.0.1

Instead of tagging it, you can also just rebuild the image file with your username included in the name.


$ docker image build -t [username]/[repository]:[tag] .

Finally, publish the image file.


$ docker image push [username]/[repository]:[tag]

Once published, log in to hub.docker.com and you can see the published image file.

12. Other useful commands

The above covers the main ways to use Docker. To finish, here are a few other commands that are also useful.

(1) docker container start

The docker container run command creates a new container each time it is run. Running the same command twice produces two identical container files. If you want to reuse containers, use the docker container start command, which starts container files that have been generated and stopped running.


$ docker container start [containerID]

(2) docker container stop

The docker container kill command introduced earlier terminates a container by sending a SIGKILL signal to the main process inside it. The docker container stop command also terminates a container, but it sends a SIGTERM signal to the main process first, and only sends SIGKILL after a grace period.


$ docker container stop [containerID]

The difference between the two signals is that the application, after receiving the SIGTERM signal, can do the cleanup itself, but it can also ignore the signal. If a SIGKILL signal is received, an immediate termination is forced, and all operations in progress are lost.
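
As a minimal sketch of that difference (an illustrative shell script, not part of the article or the koa-demos project), a process can trap SIGTERM and clean up before exiting, whereas SIGKILL cannot be trapped:

#!/bin/sh
# Minimal sketch: a long-running process that cleans up when it receives SIGTERM.
# "docker container stop" gives this trap a chance to run;
# "docker container kill" sends SIGKILL, which terminates the process immediately.
cleanup() {
  echo "Received SIGTERM, cleaning up..."
  exit 0
}
trap cleanup TERM
while true; do
  sleep 1
done
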

(3) docker container logs

The docker container logs command displays the output of a Docker container, that is, the standard output of the Shell inside the container. If the container was started by docker run without the -it parameter, use this command to see the output.


$ docker container logs [containerID]

(4) docker container exec

The docker container exec command is used to enter a running Docker container. If the docker run command did not start the container with the -it parameter, use this command to get inside it. Once inside the container, you can execute commands in the container's Shell.


$ docker container exec -it [containerID] /bin/bash

(5) docker container cp

The docker container cp command copies files from a running Docker container to the local machine. Here is how to copy a file to the current directory.


$ docker container cp [containerID]:[/path/to/file] .

(End)