Work hard by day and fall asleep at night, as if there were no connection between working and dreaming. Writing code and then deploying the application can likewise seem like two separate worlds. But is it really so? This article opens up Docker by way of Inception, helping you make a substantial transition from "dreaming" to "building dreams". In the original "dreaming" phase (manual configuration and deployment), everything was so haphazard and uncontrollable that you couldn't even remember every step you took; in the "dream-building" stage (with Docker's help), you can carry out any configuration and deployment task in an automated, highly repeatable, and traceable way. We hope that after reading this article, you too will become an excellent "dream architect"!

Preparation

A few words up front

Many readers have told us that "finish it over a cup of tea" is an empty promise: how could anyone really finish reading over one cup of tea? In truth, the way of "drinking tea" varies from person to person, and every reader has a different rhythm. You can skim the illustrations in a few minutes, follow along with us step by step and practice, or even stop to ponder certain points. The latter takes more time, but we believe the investment is well worth it.

Second, let's confirm whether you are the intended audience for this article:

  1. If you're already a DevOps guru who handles thousands of containers a day, sorry to interrupt: this article might be too easy for you;
  2. If you're already familiar with Docker and want more hands-on experience, this article will help you review and consolidate the key points;
  3. If you've only heard of Docker and barely know how to use it, this article is tailor-made for you! A friendly warning, though: Docker has a bit of a learning curve, and you'll need to invest real time to truly master it. Read this article carefully and you will certainly make considerable progress.

Finally, each section follows the structure of hands-on practice plus "Recall and Reflect". The Recall and Reflect part gathers high-quality resources that the author has spent a great deal of time collecting and integrating, combined with first-hand experience using containers; we believe it will further deepen your understanding. If you're in a hurry, feel free to skip it.

PS: Unlike typical Docker tutorials, this article will not solemnly walk through Docker's background, concepts, and advantages (you've probably heard them all before); instead, we get to know Docker directly through practice. At the end we will still present the classic Docker architecture diagram, and combined with the hands-on experience you'll have accumulated by then, we believe it will feel crystal clear.

Prerequisites

Before you read on, we hope you meet the following conditions:

  • Basic command line experience
  • Knowledge of computer networks, especially the concept of ports in the application layer
  • Ideally, you have also experienced the agony of deploying environments and projects by hand 😭

What we will achieve

Now suppose you have at hand a "dream list" project written in React, as shown in the GIF below:

In this article we will walk you step by step through containerizing this application with Docker, using an Nginx server to serve the built static pages.

You will learn

This article will not cover…

Of course, as an introductory tutorial, this article will not cover the following advanced content:

  • Docker's networking mechanics
  • Data sharing with volumes and bind mounts
  • Docker Compose
  • Multi-stage builds
  • The Docker Machine tool
  • Container orchestration technologies such as Kubernetes and Docker Swarm

We will publish tutorials on these advanced topics soon. Stay tuned!

Installing Docker

For each platform, we recommend installing Docker as follows (all methods repeatedly tested by us).

Windows

Windows 7/8 and Windows 10 call for different installation methods. Note that on Windows 10 you are advised to enable Hyper-V virtualization.

macOS

You can download the DMG file from the official download link (if the download is slow, paste the link into Xunlei). Once installed, click the Docker app icon to open it.

Linux

For the major Linux distributions (Ubuntu, CentOS, etc.), we recommend the official convenience script for a quick and painless installation:

curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh

We also recommend granting Docker permissions to your non-root user, so you don't have to type sudo every time you use docker:

sudo usermod -aG docker $USER

This takes effect after you log out and back in (or reboot). Then configure the Docker systemd service to start on boot:

sudo systemctl enable docker

Configuring a registry mirror

The default image registry, Docker Hub, is hosted overseas, and pull speeds from mainland China can be painfully slow. We recommend configuring a registry mirror for acceleration by referring to this article.

Images and containers: the architect's blueprint and the dream

Image and Container are the two most fundamental and important concepts in Docker. The former is the dream architect's blueprint; from it, fully predictable dreams (the latter) can be generated.

Tip

If you find this analogy hard to follow, think of object-oriented programming: an image is like a class, and a container is like an instance of that class.

A quick test: where dreams begin

Having touched on the basic concepts of images and containers, let's pause the theory and run a series of small experiments to get a quick feel for Docker.

Experiment 1: Hello World!

Following time-honored convention, let's run Docker's Hello World with the following command:

docker run hello-world

The output is as follows:

Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
1b930d010525: Pull complete
Digest: sha256:fb158b7ad66f4d58aa66c4455858230cd2eab4cdf29b13e5c3628a6bfc2e9f05
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
...

Doesn't it just print a string and exit? Behind the scenes, Docker did the following for us:

  1. Check whether the image hello-world:latest exists locally (more on the latest tag later); if not, go to step 2, otherwise skip to step 3
  2. The image is not found locally (Unable to find XXX locally), so it is downloaded from Docker Hub
  3. Create a new container from the local hello-world:latest image and run the program inside it
  4. When the program finishes, the container exits and control returns to the user
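The caching behavior in steps 1–3 can be mimicked in plain shell. The sketch below involves no real Docker; ensure_image and the temp-directory "cache" are made up purely for illustration, to show why a second run of the same image skips the download:

```shell
# A toy model of Docker's image cache: a directory stands in for local storage.
cache_dir=$(mktemp -d)

ensure_image() {
  img="$1"
  if [ -e "$cache_dir/$img" ]; then
    echo "found $img locally"
  else
    echo "Unable to find image '$img' locally"
    touch "$cache_dir/$img"   # stand-in for pulling from Docker Hub
    echo "pulled $img"
  fi
}

ensure_image hello-world:latest   # first run: cache miss, then "pull"
ensure_image hello-world:latest   # second run: cache hit, no download
```

This is exactly why running docker run hello-world a second time starts instantly, with no Unable to find … message.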

Experiment 2: Run an Nginx server

Too easy? Let's try something a bit more advanced: running an Nginx server. Run the following command:

docker run -p 8080:80 nginx

After you run it, the terminal appears to hang with no output, but rest assured, your computer hasn't crashed. Open a browser and go to localhost:8080:

A single docker run command installed and deployed an Nginx server. And yes, you can try visiting routes that don't exist, such as localhost:8080/what, and you'll get a proper 404 page. Looking back at the Docker container's output, we can now see content (the server log):

Let's summarize what Docker just did:

  1. Check whether the image nginx:latest exists locally (about the latest tag, more on that later); if not, go to step 2, otherwise skip to step 3
  2. The image is not found locally (Unable to find XXX locally), so it is downloaded from Docker Hub
  3. Create a new container from the local nginx:latest image, establishing a mapping between port 8080 on the host and port 80 in the container via the -p (--publish) parameter, and then run the program
  4. The Nginx server program keeps running, so the container does not exit

Tip

The format of a port mapping rule is <host port>:<container port>. The Nginx container exposes port 80 by default. With the mapping rule 8080:80, you can access localhost:8080 from the host (outside the container), or even from other devices on the same LAN via the host's Intranet IP, as demonstrated at the end of this article.
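The rule is just a string that gets split at the colon. A hypothetical sketch of that split, using nothing but shell parameter expansion (no Docker involved):

```shell
# Split a -p mapping rule into its two halves (illustration only).
rule="8080:80"
host_port="${rule%%:*}"       # text before the first colon -> host side
container_port="${rule##*:}"  # text after the last colon  -> container side
echo "host $host_port -> container $container_port"
```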

Experiment 3: Run Nginx in the background

It looks cool, but for a process like the Nginx server we'd rather leave it running in the background. Press Ctrl + C to exit the current container, then run the following command:

docker run -p 8080:80 --name my-nginx -d nginx

Note that unlike before, we:

  • Added the --name option to name the container my-nginx
  • Added the -d (--detach) option, meaning "run in the background"

Warning

Container names must be unique; creation will fail if a container with the same name already exists (even one that is no longer running). If that happens, first delete the containers you no longer need (how to do so is explained below).

Docker prints a long 64-character container ID and hands control of the terminal back to us. Visit localhost:8080 and you'll see the familiar Welcome to nginx! page: the server really is running in the background.

So how do we manage this server? Like the familiar UNIX ps command, the docker ps command lets us view the state of current containers:

docker ps

The output looks like this:

Tip

Because the output of docker ps is quite wide, if the result looks jumbled you can widen the terminal (command line) window, as shown below:

From this table, we can clearly see some information about the Nginx server container running in the background:

  • The container ID is 0bddac16b8d8 (it will differ on your machine)
  • The image used is nginx
  • The command run is nginx -g 'daemon of… (this is the nginx image's default startup command; don't worry about it for now)
  • It was created 45 seconds ago
  • The current status is Up 44 seconds
  • The port mapping is 0.0.0.0:8080->80/tcp, meaning all requests to 0.0.0.0:8080 on the host are forwarded to TCP port 80 inside the container
  • The name is the my-nginx we just specified

To stop this background server, run the docker stop command with either the container name or its ID:

docker stop my-nginx
# docker stop 0bddac16b8d8

Note

If you specify a container by ID, be sure to use the actual ID on your machine. You may also type only the first few characters of the ID, as long as they are unambiguous, for example 0bd.
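Why does a short prefix like 0bd work? Docker accepts any prefix that matches exactly one container ID. A rough stand-in for that matching logic (the ID list here is invented, and real Docker performs this internally):

```shell
ids="0bddac16b8d8 94279dbf5d93 0b12cdef0123"   # pretend container IDs

match() {
  hits=$(for id in $ids; do case "$id" in "$1"*) echo "$id" ;; esac; done)
  n=$(echo "$hits" | grep -c .)
  if [ "$n" -eq 1 ]; then echo "match: $hits"; else echo "ambiguous or none ($n hits)"; fi
}

match 0bd   # unique prefix: resolves to one ID
match 0b    # shared prefix: would be rejected as ambiguous
```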

Experiment 4: Interactive mode

Having had our fill of the Nginx server, let's look at another way to use Docker containers: interactively. Enter an Ubuntu image by running the following command:

docker run -it --name dreamland ubuntu
Copy the code

You can see that we added the -it option, which combines -i (--interactive) and -t (--tty, allocate a pseudo-terminal). The command output is as follows:

Unable to find image 'ubuntu:latest' locally
latest: Pulling from library/ubuntu
2746a4a261c9: Pull complete
4c1d20cdee96: Pull complete
0d3160e1d0de: Pull complete
c8e37668deea: Pull complete
Digest: sha256:9207fc49baba2e62841d610598cb2d3107ada610acd4f47252faf73ed4026480
Status: Downloaded newer image for ubuntu:latest
root@94279dbf5d93:/#

Wait, how did we end up in a new command line? That's right: you are now inside the "dream" of this Ubuntu image, where you can "walk around" and run commands:

root@94279dbf5d93:/# whoami
root
root@94279dbf5d93:/# ls
bin   dev  home  lib64  mnt  proc  run   srv  tmp  var
boot  etc  lib   media  opt  root  sbin  sys  usr

For example, after running the whoami and ls commands above, you can be almost certain you are now inside the "dream" (the container). Open a new terminal (command line) and run docker ps; you will see the Ubuntu container running:

Return to the previous terminal and press Ctrl + D (or type the exit command) to leave the container. Then run docker ps again in the other terminal to confirm the container has stopped.

Destroying containers: the sound of dreams shattering

Even dream architects fail sometimes. The containers we just created were only for preliminary exploration and won't be used again. Since containers live directly on our local disk, cleaning them up promptly also relieves disk pressure. We can view all containers (including stopped ones) with the following command:

docker ps -a

The -a (--all) option displays all containers; without it, only running containers are shown. The output looks like this (terminal widened here for readability):

Tip

As you may have noticed, in Experiments 1 and 2 we didn't specify container names, so Docker generated rather amusing default names for us (such as hardcore_nash): a random adjective plus the surname of a famous scientist or programmer. With any luck, you might see torvalds, the father of Linux.

Similar to the shell's rm command, the docker rm command destroys containers. For example, to delete the dreamland container we created earlier:

docker rm dreamland
# Or specify the container ID (remember to replace it with your own)
# docker rm 94279dbf5d93

But what if we want to destroy all containers? Typing docker rm over and over would be tedious. The following command deletes every container in one go:

docker rm $(docker ps -aq)

docker ps -aq outputs the IDs of all containers; these are then passed as arguments to the docker rm command, which deletes each container by ID.
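If command substitution is new to you, here is what the shell does with $( ): it runs the inner command first and splices its output into the outer command line. A sketch using a fake stand-in function instead of real docker ps -aq:

```shell
fake_ps() { printf '%s\n' 0bddac16b8d8 94279dbf5d93; }  # stand-in for `docker ps -aq`

# `echo` reveals the command line the shell would actually execute:
echo docker rm $(fake_ps)
# -> docker rm 0bddac16b8d8 94279dbf5d93
```

Each ID on its own line becomes a separate argument, so docker rm receives all of them at once.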

Danger!

Before running this, always check whether any container holds something valuable (especially business data): once a container is deleted it cannot be recovered (and no, we are not talking about hard-drive recovery here)!

Recall and Reflect

About Port Mapping

If you still don't fully grasp the concept of "port mapping", think of the 8080:80 rule through the "portal" analogy (the picture below is the Portal 2 game cover):

If the container is a "dream" and the host environment is "reality", then with a port mapping in place, requests to port 8080 on the host are "teleported" to port 80 inside the container. Isn't that amazing?

Container lifecycle: a map of the dream

After working through the four little experiments above, you should have a fairly intuitive feel for Docker containers. It's time to bring out this maddeningly detailed standard lifecycle diagram of the Docker container (source: docker-saigon.github.io/post/docker…):

At first glance this diagram is visually stunning, perhaps even overwhelming. That's okay; let's break down the four categories of elements in it:

  1. Container states (colored circles): Created, Running, Paused, Stopped, and Deleted
  2. Docker commands (the words starting with docker on the arrows): including docker run, docker create, docker stop, and so on
  3. Events (rectangular boxes): including create, start, die, stop, and OOM (out of memory), among others
  4. A conditional judgment that decides, based on the restart policy, whether the container should be restarted

Granted, the diagram is still hard to digest, but remember the four little experiments we did? They actually traced two paths through it (also the most common paths in daily use), and we'll look at each in turn.

First path: natural exit

As shown above:

  • Through the docker run command, we create and start a container, which goes directly into the Running state
  • When the program finishes (e.g., after printing Hello World, or after we terminate it with Ctrl + C), the container dies
  • Because we set no restart policy, the container goes straight to the Stopped state
  • Finally, the docker rm command destroys the container, which enters the Deleted state

Second path: forced stop

  • Again through the docker run command, we create and start a container, which goes directly into the Running state
  • Then the docker stop command kills the program in the container (die) and stops the container (stop), leaving it in the Stopped state
  • Finally, the docker rm command destroys the container, which enters the Deleted state

Tip

Sharp-eyed readers may notice that docker kill and docker stop look very similar, with a subtle difference: the kill command sends SIGKILL (or another specified signal) to the main process in the container, while the stop command first sends SIGTERM and only then SIGKILL, allowing a graceful shutdown.
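The difference matters because a process can trap SIGTERM and clean up, while SIGKILL cannot be trapped at all. A plain-shell sketch of this (no Docker here; a background bash loop plays the role of the container's main process):

```shell
# The "container process": installs a SIGTERM handler, then loops forever.
bash -c 'trap "echo cleaning up; exit 0" TERM; while :; do sleep 0.1; done' &
pid=$!
sleep 0.5           # give the child time to install its trap
kill -TERM "$pid"   # what `docker stop` sends first
wait "$pid"         # the child prints "cleaning up" and exits with status 0
```

Had we sent SIGKILL instead (what docker kill sends by default), the trap would never run and no cleanup would happen.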

A shortcut: deleting a running container

There is one shortcut not drawn in the lifecycle diagram: going from Running (or Paused) straight to Deleted. It is achieved by adding the -f (--force) option to the docker rm command:

# Assuming Dreamland is still running
docker rm -f dreamland

Similarly, we can delete all containers in one go, regardless of their state:

docker rm -f $(docker ps -aq)

Exploring freely

Feel free to explore the routes we haven't taken, such as restarting a previously stopped container (docker start) or pausing a running one (docker pause). Fortunately, the docker help command serves as a compass for exploration. For example, to learn how to use the start command:

$ docker help start

Usage:  docker start [OPTIONS] CONTAINER [CONTAINER...]

Start one or more stopped containers

Options:
  -a, --attach                  Attach STDOUT/STDERR and forward signals
      --checkpoint string       Restore from this checkpoint
      --checkpoint-dir string   Use a custom checkpoint storage directory
      --detach-keys string      Override the key sequence for
                                detaching a container
  -i, --interactive             Attach container's STDIN

Having read this far, you now know how to create and manage containers from existing images. In the next section we'll walk you through creating your own Docker image, the first step toward becoming a true "dream architect"!

Containerizing your first application: the dream-building journey begins

In the previous section we experimented with images that others had prepared for us (hello-world, nginx, and ubuntu), all available in the Docker Hub image registry. In this section we begin the dream-building journey proper: learning how to containerize your own application.

As stated at the beginning, we will containerize a full-stack dream-list application. Run the following commands to get the code and enter the project directory:

git clone -b start-point https://github.com/tuture-dev/docker-dream.git
cd docker-dream

In this step, we will containerize the React front-end application and use Nginx to serve the front-end pages.

What is containerization

Containerization consists of three stages:

  • Writing code: we have already provided the finished code
  • Building an image: the core of this section, covered in detail below
  • Creating and running a container: running our application as a container

Building the image

There are two main ways to build Docker images:

  1. Manual: create and run a container from an existing image, enter it and make changes, then run the docker commit command to create a new image from the modified container
  2. Automatic: write a Dockerfile specifying how the image should be built, then create it with the docker build command

Due to space constraints, this article covers only the second method, which is by far the more widely used.

Some preparatory work

First, let's build the front-end project client into static pages. Make sure Node and npm are installed on your machine (click here to download, or use nvm), then enter the client directory, install all the dependencies, and build the project:

cd client
npm install
npm run build

After a short while you should see a client/build directory, which holds the front-end static pages we are about to serve.

Create the Nginx configuration file client/config/nginx.conf:

server {
    listen 80;
    root /www;
    index index.html;
    sendfile on;
    sendfile_max_chunk 1M;
    tcp_nopush on;
    gzip_static on;

    location / {
        try_files $uri $uri/ /index.html;
    }
}

If you're not familiar with Nginx configuration, don't worry: just copy and paste it. Roughly, it means: listen on port 80, serve files from the web root /www with index.html as the index page, and if a requested path matches no file, fall back to serving index.html.
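The try_files line is what makes single-page apps work: try the exact file, then the directory, then fall back to /index.html. Below is a hypothetical plain-shell imitation of that lookup order; try_files here is our own toy function, not Nginx itself:

```shell
root=$(mktemp -d)                       # stands in for the /www web root
echo '<h1>dream list</h1>' > "$root/index.html"

try_files() {
  path="$1"
  if [ -f "$root$path" ]; then                                      # $uri
    echo "200 $path"
  elif [ -d "$root$path" ] && [ -f "$root$path/index.html" ]; then  # $uri/
    echo "200 $path/index.html"
  else                                                              # fallback
    echo "200 /index.html (fallback)"
  fi
}

try_files /index.html   # a real file: served directly
try_files /what         # unknown route: falls back to index.html
```

This fallback is why localhost:8080/what still returns the React app rather than a 404 once the front end handles its own routing.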

Create Dockerfile

Then comes the most important code of this step: the Dockerfile! Create client/Dockerfile with the following content:

FROM nginx:1.13

# Delete the default Nginx configuration
RUN rm /etc/nginx/conf.d/default.conf

# Add our custom Nginx configuration
COPY config/nginx.conf /etc/nginx/conf.d/

# Copy the built front-end static files into the /www directory of the container
COPY build /www

You can see that we use three instructions in this Dockerfile:

  • FROM specifies the base image; here we start the build from the nginx:1.13 image
  • RUN runs an arbitrary command inside the image being built (provided the command exists, of course)
  • COPY copies files from the directory containing the Dockerfile into the specified path in the image

It’s time to build our image by running the following command:

# If you are already in the client directory
# (note the dot at the end, which means the current directory)
docker build -t dream-client .

# If you are in the project root directory
docker build -t dream-client client

Here we tag the image via -t (--tag) as dream-client, and the final argument specifies the build context directory (the current directory . or client).

After running the above command, you will find:

Sending build context to Docker daemon  66.6MB

And that number keeps climbing, like a scene from a hacker sci-fi movie; it should finally stop at around 290MB. Docker then runs a series of steps (four of them), and the image builds successfully.

Why is the build context so big? Because we included node_modules, which is "heavier than a black hole"! (That famous meme inevitably comes to mind.)

Ignoring unwanted files with .dockerignore

Docker provides a mechanism similar to .gitignore that lets us ignore specific files or directories when building an image. Create the file client/.dockerignore (note the dot before dockerignore):

node_modules

Quite simply, we just ignore the dreaded node_modules. Run the build command again:

docker build -t dream-client .

Excellent! This time the build context was only 1.386MB, and the build was noticeably faster!

Run the container

Finally, the last step of containerization: creating and running our container! Run the dream-client image we just built with the following command:

docker run -p 8080:80 --name client -d dream-client

As before, we set the port mapping rule to 8080:80, name the container client, and run it in the background with -d. Then visit localhost:8080:

Success! The three dreams set at the beginning have all been fulfilled!

Tip

We can even access the dream list from another device on the Intranet. On Linux or macOS, run the ifconfig command in a terminal to find your Intranet IP; on Windows, run ipconfig. It usually starts with 10, 172.16-172.31, or 192.168. For example, if my Intranet IP is 192.168.0.2, then other devices on the same LAN (usually the same Wi-Fi), such as your phone, can visit 192.168.0.2:8080.

Recall and Reflect

About image tags

In practice you may have noticed that when pulling and building images, Docker always applies the tag latest ("newest") for us. Just as software is versioned, images can be "versioned" by tagging.

Note

Although latest literally means "newest", it is only an ordinary default tag: it neither guarantees that the image really is the newest nor updates automatically. See this article for more discussion.

In fact, you can specify a tag explicitly when pulling or building images, and doing so is generally considered good practice:

docker pull nginx:1.13
docker build -t dream-client:1.0.0 .

You can also tag an existing image:

# Give the default latest image a newest tag
docker tag dream-client dream-client:newest

# You can even change the image name and tag at the same time
docker tag dream-client:1.0.0 dream-client2:latest

As you can see, a tag doesn't have to be a version number; it can be any string (hopefully a meaningful one, or after a while you won't remember what the image with that tag is for).
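Under the hood, an image reference is simply repository[:tag], with latest filled in when the tag is omitted. A simplified sketch of that parsing (split_ref is invented for illustration, and it ignores registry hosts that contain a port, like localhost:5000/app):

```shell
split_ref() {
  ref="$1"
  case "$ref" in
    *:*) echo "repo=${ref%%:*} tag=${ref##*:}" ;;   # explicit tag given
    *)   echo "repo=$ref tag=latest" ;;             # default tag applied
  esac
}

split_ref dream-client:1.0.0   # -> repo=dream-client tag=1.0.0
split_ref dream-client         # -> repo=dream-client tag=latest
```

This is why docker pull nginx and docker pull nginx:latest fetch the same image.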

About Dockerfile

Dockerfile is merely the default file name; we could choose another, such as myDockerfile, and point to it with -f (--file) when building the image:

docker build -f myDockerfile -t dream-client .

Here are two classic usage scenarios:

  1. In web development, create Dockerfile.dev for building development images and Dockerfile.prod for building production images;
  2. When training AI models, create Dockerfile.cpu for building CPU-training images and Dockerfile.gpu for building GPU-training images.

Images vs. containers, revisited

After this containerization practice, you should have a fresh understanding of the relationship between images and containers. Take a look at the diagram below:

In the earlier experiments (marked with green arrows), we:

  1. Pulled an image from the Docker image registry to the local machine via docker pull
  2. Created and ran a container from that image via docker run
  3. Operated on the container via commands such as docker stop, driving it through various state transitions

In this containerization section (marked with red arrows), we:

  1. Built an image from a Dockerfile via docker build
  2. Tagged an image via docker tag, obtaining a new image
  3. Saw (as the manual build method) that docker commit can convert an existing container into an image

A panoramic view of the Docker architecture

Time to pull out the classic Docker architecture diagram:

As you can see, Docker follows the classic client-server architecture, and its core components include:

  • The server, also known as the Docker daemon, started on Linux via the dockerd command
  • The REST API exposed by the server, which provides the interface for communicating with and operating the daemon
  • The client, i.e. the docker command-line program we have been using all along

That's the end of this Docker quick-start tutorial; we hope you now have a solid first understanding of Docker's concepts and usage. Later we will publish advanced Docker content (such as networks, volumes, Docker Compose, and more) and walk you, hand in hand, through deploying a full-stack application (front end, back end, and database) to a cloud host (or any machine you can log in to). Stay tuned!

Want more hands-on, practical tutorials? Come visit the Tuture community.