Thanks & Reference

This article is a relatively shallow introduction: it does not cover Docker's distribution or clustering features, so it is recommended mainly for front-end developers rather than back-end ones. All commands in this article target Ubuntu 16.04, so take care when copying and pasting. (Readers who already know Docker 🎺 can go straight to the resources below.)

This article mainly refers to the following information

  • Docker Docs: the official Docker documentation
  • Docker introduction study notes
  • Docker: From Beginner to Practice
  • DockerInfo
  • Docker Learning Roadmap (with a practice tutorial based on Alibaba Cloud Container Service)

Command summary

A summary of common commands, for quick reference.


#Build an image
docker build -t [image name] .

#List images
docker image ls
#Remove an image
docker rmi [id]
#Delete all images
docker image rm $(docker image ls -a -q)

#Docker container list
docker container ls
docker container ls --all
#Stop all containers
docker container stop $(docker container ls -aq)
#Remove the container
docker rm [id]
#Delete all containers
docker container rm $(docker container ls -a -q)
#Stop the container
docker container stop [id]
#Start the stopped container
docker container start [id]
#Forces the specified container to close
docker container kill [id]
#Restart the container
docker container restart [id]
#Enter a container
docker exec -it [container id] bash

#Run the container and map the external port 4000 to port 80 of the container
docker run -p 4000:80 hello
#Specify the container name --name
docker run --name [name] -p 4000:80 [image]
#Daemon running container (running in background, no need to open a terminal)
docker run -d -p 4000:80 hello
#Maps ports of the local host to ports of the container randomly
docker run -d -P [image]
#Map all addresses
docker run -d -p [host port]:[container port] [image]
#Map a specified address and port
docker run -d -p [ip]:[host port]:[container port] [image]
#Map a random port of a specified address
docker run -d -p [ip]::[container port] [image]
#View the ports mapped by a container
docker port [container name | container id] [container port]

#Tag an image
docker tag [image name] [user name]/[repository]:[tag]
#Push the image to Docker Hub
docker push [user name]/[repository]:[tag]
#Pull an image from Docker Hub
docker pull [repository]:[tag]
#Run an image from the repository
docker run -p [host port]:[container port] [user name]/[repository]:[tag]
#Create a data volume
docker volume create [volume name]
#List all data volumes
docker volume ls
#Inspect a data volume
docker volume inspect [volume name]
#Delete a data volume
docker volume rm [volume name]
#Remove unused (dangling) data volumes
docker volume prune

#View the network list
docker network ls

Basic concepts of Docker

Docker virtualizes at the operating-system level, while virtual machines virtualize at the hardware level.

Images

A Docker image is an executable package that contains everything needed to run an application: the code, a runtime, environment variables, libraries, and configuration files.

How images are composed

Images are built layer by layer, each layer serving as the foundation for the next. Once built, a layer never changes; later modifications happen only in the current layer. Deleting a file from a previous layer, for example, is not a true deletion: the file is merely marked as deleted in the later layer and still exists inside the image.
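To make this concrete, here is a minimal sketch (the base image and file name are illustrative): deleting a file in a later layer does not shrink the image, because the earlier layer still contains it.

```dockerfile
FROM ubuntu:16.04
# Layer 1: add a 100 MB file to the image
RUN dd if=/dev/zero of=/bigfile bs=1M count=100
# Layer 2: only marks /bigfile as deleted; the final image stays
# roughly 100 MB larger than ubuntu:16.04, since layer 1 is unchanged
RUN rm /bigfile
```

Running `docker history` on the resulting image would show both layers and their sizes.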

The layered nature of images makes them easy to extend and reuse, as with the many base images available on Docker Hub.

commit

As noted above, images are layered. Let's take a closer look at what images are made of using the commit command.

Run `docker run --name webserver -d -p 4880:80 nginx`, then use `docker exec` to enter the webserver container and make some changes. The `docker commit` command saves the changes in the container's storage layer as a new image, which consists of the original image plus our modified storage layer.

Containers

A Docker container is a running instance of an image; use `docker ps` to list running containers. Containers also use multi-layer storage: the image forms the read-only base, with a writable container layer running on top of it.

Docker installation

Uninstall the older version of Docker


sudo apt-get remove docker docker-engine docker.io containerd runc

Installation


#Update apt
sudo apt-get update

sudo apt-get install \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg-agent \
    software-properties-common

#Add the official GPG key
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

#Add the repository to the APT source
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"

#Update apt
sudo apt-get update

#Install Docker
sudo apt-get install docker-ce docker-ce-cli containerd.io

Verify the installation


#Check the Docker version
docker version


#Run the hello-world image to verify the installation
docker run hello-world

Containers

In the past, to write a Python application you needed the Python runtime installed on your machine, and you had to configure that environment not only on your development machine but on the production machine as well. With Docker, the Python runtime can be packaged into the image together with the application, so there is no need to install the environment repeatedly on different machines, and the application is guaranteed to behave the same everywhere. These portable images are defined by a Dockerfile.

Dockerfile

A Dockerfile defines the environment inside the container. The container is isolated from the rest of the system, so the container's ports need to be mapped to the outside. The build of the application defined by a Dockerfile behaves exactly the same wherever it runs.

Example


#Create an empty folder and create a Dockerfile file in the folder
mkdir learn-docker
cd learn-docker
touch Dockerfile
touch app.js

Dockerfile


#Write the following to the Dockerfile
vim Dockerfile

#Use Node as the parent image
FROM node
#Set the working directory of the container to /app(current directory, if /app does not exist, WORKDIR will create /app folder)
WORKDIR /app
#Copy everything in the current folder to the /app of the container
COPY . /app
#Install the node package
RUN npm install 
#The container exposes port 80
EXPOSE 80
#The environment variable
ENV NAME World
#Run app.js when the container starts
CMD ["node", "app.js"]

app.js


const express = require('express')
const app = express()

app.get('/', function (req, res) {
  res.send('hello world')
})

app.listen(80, '0.0.0.0')

Note that we never installed Node or any npm packages on the host system; they are installed only while building and running the image. It may not look like we set up a Node development environment, but we did.

Building the application

Use the docker build command to build the image. (The –tag option will name the image)

#Build the hellodocker image
docker build --tag=hellodocker .

#After the build completes, list the images
docker image ls

Running the application


#Map port 3999 of the host to port 80 of the container
docker run -p 3999:80 hellodocker

#View the running containers
docker container ls

#Curl test; returns hello world
curl 0.0.0.0:3999

Dockerfile instructions in detail

🌟 FROM

The FROM directive specifies an image's base image. `FROM scratch` specifies an empty base image.

🌟 RUN

Each directive in a Dockerfile creates a new image layer, so a RUN directive should not be written like lines of a shell script; chain related commands into a single RUN instead.


FROM debian:stretch

#Wrong: this creates seven extra image layers
RUN apt-get update
RUN apt-get install -y gcc libc6-dev make wget
RUN wget -O redis.tar.gz "http://download.redis.io/releases/redis-5.0.3.tar.gz"
RUN mkdir -p /usr/src/redis
RUN tar -xzf redis.tar.gz -C /usr/src/redis --strip-components=1
RUN make -C /usr/src/redis
RUN make -C /usr/src/redis install

#Correct: use && to chain the commands into a single layer
RUN buildDeps='gcc libc6-dev make wget' \
    && apt-get update \
    && apt-get install -y $buildDeps \
    && wget -O redis.tar.gz "http://download.redis.io/releases/redis-5.0.3.tar.gz" \
    && mkdir -p /usr/src/redis \
    && tar -xzf redis.tar.gz -C /usr/src/redis --strip-components=1 \
    && make -C /usr/src/redis \
    && make -C /usr/src/redis install \
    #Clean up afterwards to avoid image bloat
    && rm -rf /var/lib/apt/lists/* \
    && rm redis.tar.gz \
    && rm -r /usr/src/redis \
    && apt-get purge -y --auto-remove $buildDeps

COPY

The COPY directive copies files from the build context into the image.

The source path is relative to the build context directory. The target path can be either an absolute path inside the container or a path relative to the working directory set by WORKDIR.

COPY [source path] [target path]
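A few hedged examples of the two path forms (the file names are illustrative):

```dockerfile
# Source paths are relative to the build context
COPY package.json /app/
# Directories are copied recursively
COPY src/ /app/src/
WORKDIR /app
# The target path may also be relative to the WORKDIR
COPY config.json ./
```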

CMD

CMD specifies the start command for the container master process.

#Using node
CMD ["node", "app.js"]

#Using pm2
# http://pm2.keymetrics.io/docs/usage/docker-pm2-nodejs/#docker-integration 
RUN npm install pm2 -g
CMD ["pm2-runtime", "app.js"]

VOLUME

The VOLUME command can specify a directory as an anonymous VOLUME. Any writes to the directory are not recorded to the container’s storage layer.

For databases, the database files should be stored in data volumes.



VOLUME /data

ENV

ENV sets environment variables, which can be used both by subsequent directives in the Dockerfile and by the application code.

# Dockerfile
#The environment variable
ENV NAME World
// app.js
const express = require('express')
const app = express()

app.get('/', function (req, res) {
  // Use environment variables
  res.send(`hello world ${process.env.NAME}`)
})

app.listen(80, '0.0.0.0')
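Stripped of the web server, the lookup itself can be sketched in two lines; the fallback value mirrors the `ENV NAME World` default (the fallback is my addition, not part of the original app):

```javascript
// Read NAME as set by ENV (or overridden with `docker run -e NAME=...`),
// falling back to 'World' when the variable is not set.
const name = process.env.NAME || 'World'
const greeting = `hello world ${name}`
console.log(greeting)
```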

EXPOSE

The EXPOSE directive declares a port, but declaring is not the same as mapping with `docker run -p <host port>:<container port>`: EXPOSE is only a declaration and does not map the port automatically.

WORKDIR

WORKDIR specifies the current working directory. A Dockerfile is not a shell script; remember that.


#This is the wrong example
RUN cd /app
RUN echo "hello" > world.txt

The file /app/world.txt will not be created, because the two RUN lines execute in different environments: the `cd /app` in the first layer does not affect the current directory of the second layer.


WORKDIR /app
RUN echo "hello" > world.txt
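The difference can be reproduced locally without Docker: each RUN line behaves like a separate `sh -c` invocation, so a `cd` in one invocation does not affect the next (a sketch of the analogy, not an exact equivalent):

```shell
# Work in a scratch directory
tmpdir=$(mktemp -d)
cd "$tmpdir"

# Like "RUN cd /app": the cd dies when this shell exits
sh -c 'cd /'
# Like "RUN echo ...": runs in the original directory, not /
sh -c 'echo hello > world.txt'

# world.txt was created in the scratch directory, not under /
ls world.txt
```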

Sharing your image

What is Docker Hub?

Docker Hub is similar to GitHub: a public container image registry officially maintained by Docker. First, sign up for and log in to Docker Hub.

Creating a repository

Tag images


#Log in
docker login

#Tag images
#Docker tag [image name] [user name]/[repository]:[tag]
docker tag hellodocker zhangyue9467/learn-docker:test

Publishing the image


docker push zhangyue9467/learn-docker:test

The Docker Hub repository now contains our published image.

Pull and run the image from Docker Hub

With Docker, nothing needs to be installed on other machines to run the application; just pull the image remotely.


docker run -p 3998:80 zhangyue9467/learn-docker:test

Data volume

What is a data volume?

A data volume is a special directory that can be used by one or more containers. The data in a data volume can be shared and reused between containers. Changes to data volumes take effect immediately.

Creating a Data Volume


#Create a data volume named vol
docker volume create vol

#View information about a data volume
docker volume inspect vol 

Mountpoint indicates where the data volume is stored on the host. We create a file in the directory given by the Mountpoint field.

Starting a container with a data volume attached

Use –mount to mount data volumes when the container is started. Multiple data volumes can be mounted when the container is started.


#Start a container named web
#Mount the vol data volume at /webapp in the container

docker run -d -P \
    --name web \
    --mount source=vol,target=/webapp \
    hello

The contents of the vol data volume are now available under /webapp in the web container.

Mounting a host directory as a data volume

The host directory must be an absolute path. With `--mount`, if the host directory does not exist, Docker reports an error instead of creating it. By default, Docker gives the container read and write access to the host directory.


#Start a container named web2
#Bind-mount the host directory /var/www/vol at /webapp in the container

docker run -d -P \
    --name web2 \
    --mount type=bind,source=/var/www/vol,target=/webapp \
    hello

Mounting a host file as a data volume


#Bind-mount /root/.bash_history into the container

docker run -d -P \
    --name web3 \
    --mount type=bind,source=/root/.bash_history,target=/root/.bash_history \
    hello

The host's command-line history can now be read from inside the container.

Networking

Accessing a container from outside


#Map random host ports to the container's exposed ports
docker run -d -P [image]

#Map on all host addresses
# e.g. docker run -d -p 5000:5000 web
docker run -d -p [host port]:[container port] [image]
#Map a specified address and port
# e.g. docker run -d -p 127.0.0.1:5000:5000 web
docker run -d -p [ip]:[host port]:[container port] [image]
#Map a random port of a specified address
# e.g. docker run -d -p 127.0.0.1::5000 web
docker run -d -p [ip]::[container port] [image]

Viewing a container's port mappings


#View the ports mapped by a container
docker port [container name | container id] [container port]

Each container has its own network and IP address, which can be found under "NetworkSettings" in the output of the `docker inspect` command.


#View the IP address and network information of a container
docker inspect [container id]

Container communication

Use a custom Docker network for container-to-container communication. For multiple containers, Docker Compose can handle this; by default, all containers in a Compose project join the same network.


#Create a network
docker network create -d bridge mynet

#Attach containers to the mynet network
docker run -d -p 5000:8888 --name busybox1 --network mynet hello
docker run -d -p 5001:8889 --name busybox2 --network mynet hello2

#Inside the busybox1 container, test connectivity with curl or ping

#By busybox2's IP address
curl 172.19.0.3:8889
#Or by container name
ping busybox2

Docker Compose

What is Compose?

A Dockerfile makes it easy to define a single container, but in day-to-day work a project may require several containers (front end, back end, database). Compose lets the user define a group of associated containers as a project in a docker-compose.yml template file.

Two concepts in Compose:

  • Service: an application container; a service can run multiple instances of the same image.
  • Project: a complete business unit made up of a group of associated application containers, defined in docker-compose.yml.

Installing Compose

sudo curl -L https://github.com/docker/compose/releases/download/1.17.1/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose

Verify the Docker Compose installation


#Check the version

docker-compose --version

Compose commands

For more commands, see the official Compose CLI reference.

💡 Before introducing the Compose commands, it is worth clarifying the concepts of a service and a container; the two are easy to confuse.

In Compose, services are defined in docker-compose.yml. Below, a service named web is defined; the web service starts a container named "[project name]_web".

# docker-compose.yml

version: '3'
services:
  web:
    build: .
    ports:
     - "5000:3000"

build

Run the build command in the project's root directory to build the images


#Build the service images
docker-compose build

ps

Run the ps command at the root of the project to list all the containers in the project


docker-compose ps

up

The up command builds images, creates containers, starts services, and so on; a whole project can be started directly with up.


#Start the container in the foreground
docker-compose up

#Start the project in the background (detached; the console is not tied up)
docker-compose up -d

port

View the ports mapped by the service on the host

version: '3'
services:
  web:
    build: .
    ports:
     - "5000:3000"

#Example
#docker-compose port [service] [container port]
#Prints 0.0.0.0:5000
docker-compose port web 3000

Compose template file

For more directives, see the official Compose file reference.


version: '3'
services:
  #The web service
  web:
    #Name of the container
    container_name: hello_compose
    #Location of the Dockerfile
    build: .
    #Expose a port to linked services
    expose:
      - "3000"
    #Publish ports as [host port]:[container port]
    ports:
      - "5000:3000"
    #Data volume mount paths
    # https://forums.docker.com/t/making-volumes-with-docker-compose/45657
    volumes:
      - [host path]:[container path]
  #The db service
  db:
    #Use an image from Docker Hub
    image: "redis:alpine"

In practice

Deploying a front-end application with Docker

  • Preview address
  • Source address

Create a new Jenkins task that pulls the GitHub project into an empty folder on the cloud server.

Next, define a custom image with a Dockerfile. Use the FROM directive to make nginx the parent image, and use COPY to copy everything in the build context into the container's /var/www/hello_docker/ directory, the static file directory configured in our nginx configuration. Then COPY the nginx configuration file into /etc/nginx/conf.d/; the configuration files in the conf.d folder are merged into the main nginx configuration. Finally, restart the nginx service with RUN.

After defining the image with the Dockerfile, build it with the build command. Because the deployment needs to be automated, starting the image directly may cause errors (an image with the same name may already exist), so we use a shell script to decide whether the previous image must be deleted first or the container can be started directly. Finally, the run command starts the container.

# Dockerfile

FROM nginx
COPY ./* /var/www/hello_docker/
COPY ./nginx/hello_docker.conf /etc/nginx/conf.d/
RUN service nginx restart
# nginx.conf
server {
    listen 8888;
    server_name localhost;

    root /var/www/hello_docker;
    index index.html;
    expires      7d;
}
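The cleanup script mentioned above is not shown in the original; a minimal sketch might look like the following (the image name, container name, and port are assumptions). It removes any stale container and image with the same name before rebuilding:

```shell
# Hypothetical redeploy helper for the Jenkins job: remove the previous
# container and image with the same name, then rebuild and restart.
redeploy() {
  image="$1"; container="$2"; port="$3"
  # Remove the old container if one exists
  if [ -n "$(docker ps -aq --filter "name=${container}")" ]; then
    docker rm -f "$container"
  fi
  # Remove the old image if one exists
  if [ -n "$(docker images -q "$image")" ]; then
    docker rmi "$image"
  fi
  # Rebuild the image and start a fresh container
  docker build -t "$image" . &&
    docker run -d --name "$container" -p "${port}:8888" "$image"
}

# Invoked on the server after Jenkins pulls the code, e.g.:
# redeploy hello_docker hello_docker 8888
```

The container port 8888 matches the `listen 8888` in the nginx configuration above.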

After the container is built, the mapped port cannot be accessed directly from outside; an nginx proxy must be configured on the ☁️ cloud server to reach the container.

Deploying a Node service with Docker

  • Preview address
  • Source address

The front-end deployment is the same as in the previous project (skipped here). Use a Dockerfile to define the back-end service image: make node the parent image with FROM, install pm2 globally with RUN, and start the back-end service with pm2 via CMD.


FROM node

WORKDIR /server

COPY . /server
    
EXPOSE 8888

RUN npm install pm2 -g

CMD ["pm2-runtime", "app.js"]

Deploying Mongo with Docker

We deploy the Mongo database directly with docker-compose.

Pay attention to where mongo stores its data: storing the data directly in the container is not recommended. Instead, use volumes to mount the database's storage directory in the container to a directory on the host.

version: '3.1'
services:
  mongo:
    #Use the mongo image from Docker Hub
    image: mongo
    #Mount the config file and data directory from the host
    volumes:
      - '/etc/mongod.conf:/etc/mongod.conf'
      - '/var/lib/mongodb:/var/lib/mongodb'
    ports:
      - '37017:27017'