0x00 An Opening Diagram

0x01 Containerization Technology

1.1 Historical Evolution

Here is a brief rundown of how containerization technology evolved; if it feels too long, feel free to skip straight to the hands-on sections.

1.1.1 Physical Machine Era

In the physical machine era, once a program was developed it had to be deployed to a server. Deploying on a single machine was manageable for a small project, but deploying a clustered architecture was difficult.

The limitations of the physical machine era are as follows:

  • Deployment is very slow: each host needs an operating system, the runtime environment, and all the application-specific configuration installed
  • Servers are very expensive; the two HP servers on our project reportedly cost nearly 100,000 yuan
  • Resources are wasted: adding a whole server just to scale an application horizontally is often overkill
  • Scaling and migration are difficult: code repository migration, database migration, and so on all require a lot of configuration work
  • Constrained by hardware

1.1.2 Virtualization Era

The virtualization era has the following characteristics:

  • Multiple deployments per physical host
  • Resource pooling
  • Resource isolation
  • Easy to scale
  • An operating system (OS) must be installed on each VM

Every virtual machine must have an operating system installed before anything else can be done on it.

A hypervisor can run like an ordinary user application on the host OS; VMware Workstation, for example. This kind of virtualization product provides virtual hardware on which you can install, say, a Linux virtual machine:

The virtual machines in the picture all run independently, each with its own operating system, but they all depend on my physical machine; when the physical machine fails, they all die with it.

1.1.3 Containerization Era

Virtualization isolates physical resources; a container, by analogy, isolates the application.

Application scenarios of containerization technology:

  • Standardized environment migration: development packages the environment and hands it to operations (O&M), and O&M scales out with exactly the same environment
  • Uniform parameter configuration: runtime parameters can be set at packaging time
  • Automated deployment: restoring an image deploys the application automatically, with no manual intervention. On our project, GitLab's CI/CD detects code commits and uses Docker to build the image; fully automated.
  • Application cluster monitoring: provides monitoring so the cluster's running state can be seen in real time
  • A communication bridge between development and operations: the standardized deployment model reduces problems caused by needless environment inconsistencies. The world is much cleaner, and programmers can concentrate on development!

0x02 Introduction to Docker

2.1 Introducing Docker

Docker is an open-source application container engine, developed in the Go language and released under the Apache 2.0 license.

Docker lets developers package an application and its dependencies into a lightweight, portable container, which can then be shipped to any popular Linux machine, physical or virtualized.

Containers are fully sandboxed, have no interfaces with one another and, more importantly, have very low performance overhead.

  • An open-source application container engine, developed in Go
  • Containers are fully sandboxed with minimal overhead
  • A byword for containerization
  • Provides certain virtualization capabilities
  • Standardizes application packaging

2.2 Installing Docker on CentOS 7

Since more than 90% of servers run Linux, we install Docker on a CentOS virtual machine. These installation commands can be kept as a reference manual.

  1. Install the base packages
yum install -y yum-utils device-mapper-persistent-data lvm2

device-mapper-persistent-data and lvm2 are storage-driver packages used for data storage.

yum-utils is a utility collection that makes installation easier.

  2. Set the installation source
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

yum-config-manager is a tool that simplifies editing yum installation sources.

--add-repo adds the given repository as an installation source.

  3. Cache package metadata locally in advance to speed up package lookups
yum makecache fast
  4. Install Docker Community Edition
yum -y install docker-ce
  5. Start Docker
service docker start

Those are the Docker installation steps; summarized together:

yum install -y yum-utils device-mapper-persistent-data lvm2

yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

yum makecache fast

yum -y install docker-ce

service docker start
  6. Verify the Docker installation

docker --version

2.3 Pulling a Docker Image

Use docker pull to pull the hello-world image:

docker pull hello-world

Use docker images to view local images:

2.4 Starting a Container from an Image

Command format: docker run <image name or ID><:tag>

docker run hello-world

2.5 Configuring an Image Accelerator

Downloading images from Docker's central registry is unbearably slow, but we can speed it up with an image accelerator.

Alibaba Cloud's container image acceleration service is convenient to use.

Log in to Alibaba Cloud, search for "Container Registry", and find the image accelerator:
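Once you have your accelerator address, it is typically enabled by adding a registry-mirrors entry to /etc/docker/daemon.json and restarting Docker. A minimal sketch; the mirror URL below is a placeholder for the personal address Alibaba Cloud assigns you:

```json
{
  "registry-mirrors": ["https://<your-id>.mirror.aliyuncs.com"]
}
```

After saving the file, run systemctl daemon-reload and then systemctl restart docker so the daemon picks up the mirror.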

2.6 Docker Installation Directory

The Docker installation directory is /var/lib/docker.

cd /var/lib/docker && ll

Inside are directories such as image, containers, and volumes, which hold image and container data. This brings us to Docker's key concepts.

2.7 Basic Concepts

This diagram covers the important concepts: Docker containers, images, and repositories.

The Docker client starts a container with the docker run command. The Docker server's daemon checks whether the image exists locally; if not, it is pulled from the Docker repository to the local machine (docker pull), and then docker run is executed to create the container.

Docker Daemon manages images and containers.

2.7.1 Image

An image is a read-only file that provides the complete software and hardware resources needed to run the program; it is the "carrier" of the application.

2.7.2 Container

A container is an instance of an image, created by Docker; containers are isolated from one another.

2.7.3 Repository

A repository is a centralized place for storing images.

2.7.4 Data Volume

A data volume is a set of files and folders on the host that can be shared and reused between containers.

2.7.5 Network

The Docker bridge is a virtual network device on the host, not a real one, and it is not addressable from the external network. This means external hosts cannot reach a container directly via its container IP.

0x03 Rapidly Deploying Tomcat with Docker

  1. Pull the image

docker pull tomcat:8.5-jdk8-openjdk

:8.5-jdk8-openjdk is the image tag; if no tag is given, the latest tag is pulled by default.

  2. Run the container

docker run tomcat:8.5-jdk8-openjdk

The Tomcat container is now started, but we cannot reach it via the host IP address; the container port must be mapped to a host port first.

Stop the container with docker stop 3854be1d5f93, remove it with docker rm 3854be1d5f93, then start a new one with a port mapping:

docker run -p 8090:8080 -d tomcat:8.5-jdk8-openjdk

Now just visit the page again.

Rapid deployment commands:

# Pull an image
docker pull <image name><:tag>
# Stop a container
docker stop <container id>
# Delete a container
docker rm [-f] <container id>
# Delete an image
docker rmi [-f] <image name><:tag>
# Run a container
docker run -p 8000:8080 -d <image name><:tag>

0x04 The Internal Structure of a Docker Container

Once a Docker container is created, we can go inside the container and execute relevant commands to view its internal structure.

Take the Tomcat container created earlier as an example:

View the container ID

docker ps -a

[root@basic ~]# docker ps -a
CONTAINER ID   IMAGE                     COMMAND            CREATED        STATUS                    PORTS                    NAMES
dd1a6b408cdf   hello-world               "/hello"           14 hours ago   Exited (0) 14 hours ago                            priceless_kapitsa
3854be1d5f93   tomcat:8.5-jdk8-openjdk   "catalina.sh run"  25 hours ago   Up 24 hours               0.0.0.0:8090->8080/tcp   awesome_solomon

The Tomcat container's ID is 3854be1d5f93.

Entering the container

docker exec [-it] <container id> <command>

  • exec executes the command inside the given container
  • -it runs the command interactively with a terminal attached

So, to get inside the Tomcat container, do this:

docker exec -it 3854be1d5f93 /bin/bash

This enters the Tomcat container interactively and opens a bash terminal:

[root@basic ~]# docker exec -it 3854be1d5f93 /bin/bash

The shell starts in the container's /usr/local/tomcat directory, where we can execute commands.

Executing script commands

A small Linux OS is built into the Tomcat container. To see its Linux version, run cat /proc/version:

root@3854be1d5f93:/usr/local/tomcat# cat /proc/version
Linux version 3.10.0-957.el7.x86_64 ([email protected]) (gcc version 4.8.5 20150623 (Red Hat 4.8.5-36) (GCC)) #1 SMP Thu Nov 8 23:39:32 UTC 2018

Tomcat needs a Java environment to run, so let's check the Java version inside the container:

root@3854be1d5f93:/usr/local/tomcat# java -version
openjdk version "1.8.0_275"
OpenJDK Runtime Environment (build 1.8.0_275-b01)
OpenJDK 64-Bit Server VM (build 25.275-b01, mixed mode)

0x05 Container Life Cycle

Docker container life cycle mainly includes the following states:

  • stopped
  • running
  • paused
  • deleted

The state of the container is very useful in our practical work. When the container has a problem, we first look at its state to help us locate the problem.

Each life cycle state has a corresponding docker command:

  • docker create creates a new container without starting it
  • docker run creates a new container and runs it
  • docker start/stop/restart starts, stops, or restarts a container
  • docker kill kills a running container. docker stop is a graceful exit: it first sends a signal so the application inside the container can do its cleanup before exiting; docker kill makes the application exit immediately.
  • docker rm deletes a container
  • docker pause/unpause suspends or resumes all processes in a container
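The commands above can be strung together into a quick walkthrough of the life cycle. A sketch, assuming Docker is installed and the container name demo (an illustrative name, not from the article) is free:

```shell
docker create --name demo tomcat:8.5-jdk8-openjdk  # state: created (not started)
docker start demo                                  # created -> running
docker pause demo                                  # running -> paused
docker unpause demo                                # paused -> running
docker stop demo                                   # running -> stopped (graceful)
docker rm demo                                     # stopped -> deleted
```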

0x06 Dockerfile

The Dockerfile is the key file for building an image; it is the image's description file:

  • A Dockerfile is a text document containing the instructions that assemble an image
  • Docker generates the image automatically by reading the Dockerfile's instructions step by step

The standard command for building an image:

docker build -t <builder>/<image name><:tag> <Dockerfile directory>

For example, to build a simple Tomcat-based web image serving just one page, we need to prepare an HTML page and a Dockerfile that describes the image.

Create a myweb directory containing a test.html file:


The Dockerfile:

FROM xblzer/tomcat:8.5
MAINTAINER xblzer
WORKDIR /usr/local/tomcat/webapps
ADD myweb ./myweb

TIP: myweb and the Dockerfile should be in the same directory.

Also, the xblzer/tomcat:8.5 base image is one I built earlier. You can use the official tomcat image instead, but you may then get a 404 when accessing page resources. This is because the webapps directory inside that image is empty; the content lives in webapps.dist, and you need to copy the contents of webapps.dist into webapps.

Basic instructions in a Dockerfile:


FROM

  • FROM <image name><:tag> builds on the given base image
  • FROM scratch depends on no base image at all


MAINTAINER / LABEL

  • Adds descriptive metadata to the image
  • Example:

MAINTAINER xblzer
LABEL version="1.0"
LABEL description="maintainer xblzer"


WORKDIR

  • Sets the working directory inside the container; you need to know the internal structure of the FROM base image
  • Use absolute paths whenever possible

In the web image we are building, for example, the working directory is /usr/local/tomcat/webapps.


ADD / COPY

  • ADD and COPY both copy files into the image
  • ADD can additionally fetch remote files, similar to wget, though this is rarely used


ENV

  • Sets environment variables
  • Example:

ENV JAVA_HOME=/usr/local/java
RUN ${JAVA_HOME}/bin/java -jar test.jar
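Putting the instructions above together, here is a minimal sketch of a complete Dockerfile; the base image, label values, and paths echo the article's examples but are illustrative only:

```dockerfile
# Build on the Tomcat base image
FROM xblzer/tomcat:8.5
# Image metadata
LABEL maintainer="xblzer" version="1.0"
# Environment variable available to later instructions and at runtime
ENV DEPLOY_DIR=/usr/local/tomcat/webapps
# Working directory inside the container (absolute path)
WORKDIR ${DEPLOY_DIR}
# Copy the local myweb directory into the image
COPY myweb ./myweb
```

Built with docker build -t <name>:<tag> ., each instruction produces one layer, which is exactly the layering behavior described later in 0x07.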

Build the image from Dockerfile

Execute in the directory where the Dockerfile resides:

docker build -t xblzer/myweb:1.0 .

Note the trailing ".": it tells docker build to use the current directory as the build context.

[root@basic mydockerfile]# docker build -t xblzer/myweb:1.0 .
Sending build context to Docker daemon  3.584kB
Step 1/4 : FROM xblzer/tomcat:8.5
 ---> ad4eef1cdffc
Step 2/4 : MAINTAINER xblzer
 ---> Running in 00ff37cb7a66
Removing intermediate container 00ff37cb7a66
 ---> 6675a0a2b8be
Step 3/4 : WORKDIR /usr/local/tomcat/webapps
 ---> Running in c753825a9dc3
Removing intermediate container c753825a9dc3
 ---> cd5999c1d8ff
Step 4/4 : ADD myweb ./myweb
 ---> 9262ba119d14
Successfully built 9262ba119d14
Successfully tagged xblzer/myweb:1.0

Once built, check with docker images:

[root@basic mydockerfile]# docker images
REPOSITORY      TAG                IMAGE ID       CREATED          SIZE
xblzer/myweb    1.0                9262ba119d14   2 minutes ago
xblzer/tomcat   8.5                ad4eef1cdffc   17 minutes ago   537MB
tomcat          8.5-jdk8-openjdk   5a5e790eb3eb   4 days ago       537MB
hello-world     latest             bf756fb1ae65   10 months ago    13.3kB

The image we built, xblzer/myweb:1.0, now appears in the image list.

With the image in hand, start a container the usual Docker way:

docker run -d -p 8000:8080 xblzer/myweb:1.0

Then visit:

The container running from the image we built works fine!

0x07 Image Layers

Go back to the steps used to build myweb:

Dockerfile file:

FROM xblzer/tomcat:8.5
MAINTAINER xblzer
WORKDIR /usr/local/tomcat/webapps
ADD myweb ./myweb

Steps at build time:

Step 1/4 : FROM xblzer/tomcat:8.5
 ---> ad4eef1cdffc
Step 2/4 : MAINTAINER xblzer
 ---> Running in 3bfecae6049d
Removing intermediate container 3bfecae6049d
 ---> 927a6dcf9639
Step 3/4 : WORKDIR /usr/local/tomcat/webapps
 ---> Running in e98ccfd0488c
Removing intermediate container e98ccfd0488c
 ---> a1dcd9b4885e
Step 4/4 : ADD myweb ./myweb
 ---> 16c0cb847216
Successfully built 16c0cb847216
Successfully tagged xblzer/myweb:1.0

As you can see, each step produces a temporary image. A temporary image is a bit like a save point in a game: if that step is still useful next time, the build can reuse it.

These temporary images, one per step, are the image's layers.

So let’s verify that.

Create a Dockerfile:

FROM centos
RUN ["echo", "aaa"]
RUN ["echo", "bbb"]
RUN ["echo", "ccc"]
RUN ["echo", "ddd"]


docker build -t xblzer/test-layer:1.0 .

Modify the Dockerfile and build a version 1.1 image:

FROM centos
RUN ["echo", "aaa"]
RUN ["echo", "not bbb!!"]
RUN ["echo", "not ccc!!!"]
RUN ["echo", "ddd"]


docker build -t xblzer/test-layer:1.1 .

As you can see, steps 1 and 2 reuse the previously cached temporary images.

Let's use docker history to compare the two images' histories:

The first two layers of version 1.1 are identical to the first two layers of version 1.0.

0x08 Communication between Docker containers

Docker makes application deployment far more convenient, and in many cases each application gets its own container; an application and its database, for example, can each be deployed with Docker.

In that case, how does the application's container reach the database's container? This is the problem of communication between containers.

Using a container's virtual IP certainly works. To look up a container's IP address:

docker inspect <container id>

In a real production environment, however, this is rarely done: a container might be rebuilt or misoperated and its internal IP can change, breaking the connection.

Docker's solution is to give each container a name (docker run --name) and use container names for communication between containers.

Communication between Docker containers:

  • link: one-way access
  • bridge: two-way access

Let’s create two containers to experiment with communication between containers.

8.1 Creating a Container

Create a container named web:

docker run -d --name web tomcat:8.5-jdk8-openjdk

Create another container named db:

docker run -d -it --name db centos /bin/bash

Then use docker inspect <container id> to view each container's virtual IP.

Db container:

Web container:

Ping the db container from the web container using its virtual IP:

Similarly, the db container can ping the web container's virtual IP:

8.2 One-Way Communication Between Containers with link

As mentioned above, we generally communicate by container name rather than virtual IP; the container to link to must be specified when the container is created.

Create the web container with --link so it can reach the db container:

docker run -d --name web --link db tomcat:8.5-jdk8-openjdk

Then enter the web container and ping db by name; it connects directly.

8.3 Two-Way Communication with a Bridge

--link gives one-way communication between containers.

For example, since I did not link the db container to the web container, pinging web from inside db fails:

Sometimes we want two containers to communicate with each other. What do we do?

Linking the two containers to each other would work, but with many containers it quickly becomes cumbersome.

Docker provides a bridge mode: all containers bound to the same bridge can communicate with each other.

8.3.1 Creating a Bridge and Binding a Container

1. Create a network bridge

docker network create -d bridge my-bridge

2. View the bridge

docker network ls

3. Bind the container to the bridge

Bind both the web and db containers to the newly created my-bridge:

# Execute in order
docker network connect my-bridge web
docker network connect my-bridge db

Now the db container can ping the web container:

Add another container and bind it to my-bridge as well:

docker run -d -it --name myapp centos /bin/bash
docker network connect my-bridge myapp

Enter the myapp container and ping the web and db containers respectively:

All can be interconnected.

8.3.2 How the Bridge Works

Whenever a bridge is created, a virtual network interface that acts as a gateway is created on the host. This gateway forms an internal network: any container bound to the bridge can reach the others through it.

The virtual interface is, after all, virtual, and its IP addresses are too; to talk to the outside world, addresses must be translated to the host's physical interface.

Every packet sent from inside a container passes through the virtual interface and is translated onto the physical interface for external communication.

Likewise, data coming back from the Internet arrives at the physical interface first, is translated back to the virtual interface, and the virtual interface then distributes it to the right container.

0x09 Sharing Data Between Docker Containers

9.1 Why Share Data?

On my current project we do continuous deployment: after every code commit, Docker builds an image and starts a container.

Imagine if the logs, pictures, files and so on disappeared every time the container restarted; that would be unworkable. So we need to map a host directory to a corresponding path inside the Docker container.

Another scenario is that multiple containers need to access some common static pages. You can put the shared pages in one fixed place and have each container mount that directory.

9.2 Mounting a Host Directory with the -v Parameter

Command format:

docker run --name <container name> -v <host path>:<container path> <image name>

Again, take the Tomcat container as an example.

Map the container's /usr/local/tomcat/webapps directory to the host's /usr/webapps directory, so that visiting a Tomcat page serves the host's files under /usr/webapps:

docker run -d -p 8001:8080 --name app1 -v /usr/webapps:/usr/local/tomcat/webapps xblzer/tomcat:8.5

Create app-web directory in /usr/webapps and create test.html:

[root@basic webapps]# mkdir app-web
[root@basic webapps]# cd app-web/
[root@basic app-web]# vim test.html

HTML content:


Now visit the page:


Then edit test.html, adding a line:

<h1>222 added</h1>

Without restarting the docker container, visit again:

Impressive, right? Updated files are served without restarting the Docker container!

9.3 Sharing Container Mount Points with --volumes-from

Another way to mount a directory is to create a shared container.

Creating a shared container

docker create --name commonpage -v /usr/webapps:/usr/local/tomcat/webapps xblzer/tomcat:8.5 /bin/true

/bin/true is a placeholder and has no practical meaning.

Shared container mount points

docker run -d -p 8002:8080 --volumes-from commonpage --name app2 xblzer/tomcat:8.5

The shared container's job is to define the mount points; running other containers with --volumes-from <shared container name> gives them exactly the same mounts as the shared container.

The advantage: if there are many containers and the mount directory changes, you do not have to change each container's -v mount point; you only change the shared container's mount directory.

Create another container, app3:

docker run -d -p 8003:8080 --volumes-from commonpage --name app3 xblzer/tomcat:8.5

Now both app2 and app3 serve the test.html page.

0x10 Docker Compose

Docker Compose is Docker's official container orchestration tool. Container orchestration means organizing an application's (micro)services at the network level so they run according to plan.

  • Docker Compose is a single-host multi-container deployment tool; it works on one host only
  • The containers to deploy are defined in a YML file
  • On Linux, Docker Compose must be installed separately

Installation method:

sudo curl -L "https://github.com/docker/compose/releases/download/1.27.4/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
docker-compose --version
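The YML file mentioned above defines the containers to deploy. A minimal sketch of a docker-compose.yml; the service names, images, ports, and password here are illustrative assumptions, not from the article:

```yaml
version: "3"
services:
  web:
    image: tomcat:8.5-jdk8-openjdk
    ports:
      - "8090:8080"   # host:container, equivalent to docker run -p
    depends_on:
      - db            # start db before web
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example
```

Running docker-compose up -d in the directory containing this file starts both containers on the one host.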

That said, Kubernetes is the most popular container orchestration platform, both in production adoption and in the cloud-native ecosystem.

So I won't say much more here; I'm digging a hole for myself and saving Kubernetes for a follow-up article.

First published on my public account; follows, reads, and corrections are welcome. GitHub: github.com/xblzer/Java…