Docker

Preface

As development and operations environments grow increasingly complex, we need more diverse virtual servers and application services: ones that scale better, perform better, and are easier to monitor and manage. Container applications and container operations emerged to meet this need.

The environment you need to prepare (choose one of the three):

  • Linux (CentOS 7 or later, Debian 8 or later, Ubuntu 16.04 LTS or later)
  • Windows 64-bit Pro/Enterprise/Education (Build 15063 or later)
  • macOS Sierra (10.12) or later

If you have one of these environments, you can follow along quickly.

[What you will gain]:

  1. Understand and install Docker container technology
  2. Deploy MySQL, Nginx, and Tomcat services in seconds
  3. Set up your own Git server
  4. Use container technology to publish a Node.js + MongoDB + Koa + Vue application

Concepts

1. What is containerization?

Containerization is a software development approach that packages an application or service, its dependencies, and its configuration (abstracted as a deployment manifest) together as a container image.

A software container acts as a standard unit of software deployment and can contain different code and dependencies. By containerizing software this way, developers and IT professionals can deploy it to different environments with little or no modification.

Containerized applications run on container hosts, which in turn run on the OS (Linux or Windows). As a result, the footprint of a container is much smaller than that of a virtual machine (VM) image.

[Characteristics of containerization]:

  • Consistent runtime environment
  • Scalability
  • Easier portability
  • Isolation

2. Understanding Docker

Docker is an application container engine written in Go, an application deployment technology based on containers and a sandbox mechanism. It is well suited to automated testing, packaging, continuous integration, and application release. Cloud computing providers including Aliyun and Amazon have adopted Docker to build serverless platforms. Docker can deploy not only projects but also databases, Nginx services, and environments for Node.js, PHP, and other programming languages.

PS: Docker's open-source project has since been renamed Moby (the Docker product keeps its name)

Three important concepts in Docker:

Image:

A layered (read-only) file system, built from a Dockerfile. Images are independent, extensible, and efficient.

Container:

Created and managed by the Docker daemon: file system + system resources + network configuration + log management. A container is the runtime environment of a Docker image, so the concept is easy to understand.

Registry:

Stores Docker images remotely and provides version control and change management, facilitating continuous integration and rapid deployment.

The relationship between containers and images is similar to that between objects and classes in object-oriented programming.

Docker uses a client-server (C/S) architecture and manages and creates Docker containers through remote APIs.

Docker containers are created from Docker images.

3. Docker vs virtual machines

[Differences]:

  1. A container is an application-layer abstraction that packages code and dependencies together. Multiple containers can run on the same machine, sharing the operating system kernel, each running as an isolated process in user space. Containers take up less space than VMs (container images are typically tens of megabytes), can handle more applications, and require fewer VMs and operating systems.
  2. A virtual machine (VM) is an abstraction of physical hardware that turns one server into many. A hypervisor allows multiple VMs to run on a single machine. Each VM contains a complete copy of an operating system, the application, and its necessary binaries and libraries, occupying tens of gigabytes. VMs can also be slow to start.

[Conclusion]:

Feature                   | Container                                         | Virtual machine
Startup speed             | Seconds                                           | Minutes
Disk usage                | Generally MB                                      | Generally GB
Performance               | Near native                                       | Weaker than native
Capacity per machine      | A single machine supports thousands of containers | Usually dozens of VMs
Environment customization | Convenient (command line)                         | Requires logging in to the VM

[Similarities]:

  1. File isolation/sharing (sandbox)
  2. Resource isolation
  3. Network isolation
  4. Support for multiple hosting environments (extensible)
  5. Snapshots/images (version control and change management)

[Differences]:

  1. Different resource management/dependencies/release (VMs occupy more system resources)
  2. Different application environments
  3. Docker uses copy-on-write
  4. Different logging methods (Docker collects logs, while with virtual machines you must look at logs inside the virtual system)
  5. Different interaction modes (Docker is mostly shell-based, virtual machines mostly GUI-based)

4. How Docker works (key point)

Docker is a container-based deployment technology. Its main job is to deploy applications by running containers, and containers run from images.

Simply put, you package your project and its dependencies (base images) into a project image with startup instructions, then create a container on the server and run the image inside it to deploy the project.

The server is the container's host machine; the Docker container and the host are isolated from each other.

Docker is based on technologies such as LXC (Linux Containers).

General process:

Docker process:

What happens during docker run?

  1. Docker pulls the image automatically. If the image already exists locally, there is no need to pull it from the Internet
  2. A new container is created
  3. A file system is allocated, with a read-write layer attached to it. Any changes to the container are recorded in this read-write layer. You can save the changes as a new image or discard them; if not saved, all changes are gone the next time you run the image
  4. A bridge network interface is allocated, creating a network interface that lets the container communicate with the local host
  5. An available IP address is picked from the pool and attached to the container. In other words, the container is not reachable via localhost by default
  6. The program you specified runs
  7. Application output is captured and provided, including input, output, and error messages

Docker’s value:

From the application architecture perspective: it unifies complex build environments.

From the application deployment perspective: it solves dependency conflicts and cumbersome builds, and combined with automation tools (such as Jenkins) improves efficiency.

From the cluster management perspective: it standardizes service scheduling, service discovery, and load balancing.

Application

1. Docker installation and configuration (the lazy way)

  1. Installation tutorial address:

    www.runoob.com/docker/maco…

  2. Configure the address of mirror acceleration:

    www.runoob.com/docker/dock…

2. Docker commands

1. Check the Docker version

docker --version

2. Run your first Docker app

# Use the docker run command
docker run hello-world

# Download the ubuntu image and print "from ubuntu"
# -i: run the container in interactive mode, usually used together with -t
# -t: allocate a pseudo terminal for the container, usually used together with -i
docker run -it ubuntu echo "from ubuntu"

3. View the running status of the container

Use the docker ps command to view running containers; add the -a flag to view all containers, including those that have stopped.

4. Important commands

run creates a new container and runs a command

exec runs a command inside a running container

-it allocates an interactive pseudo terminal

-d keeps the container running in the background

--name specifies the container name

-p maps a container port to a host port

-v mounts a host directory into the container

-e sets environment variables

# Run an image
docker run -it -d --name test ubuntu
# Enter the container
docker exec -it test /bin/bash
# Map the local Downloads directory to /home in the Ubuntu container
docker run -v ~/Downloads/:/home -itd --name test1 ubuntu

5. Container management

  • Start: start
  • Stop: stop
  • Restart: restart
  • Remove a stopped container: rm

docker stop test

3. Common application scenarios

Docker provides lightweight virtualization, allowing for the creation of a larger number of containers on the same machine than virtual machines.

Common application scenarios:

  1. Rapid deployment
  2. Isolation of application
  3. Improve development efficiency
  4. Version control

1. Rapid deployment

Let's try deploying MySQL:

docker run -d --name mysql-test -e MYSQL_ROOT_PASSWORD=123456 mysql

In the same way, let's deploy Nginx:

Note: if no service port is mapped, the contents of index.html will not be reachable via the container's port 80. You need to add the -p parameter to map the port!

docker run -d --name web -p 8000:80 -v ${your_dir}:/usr/share/nginx/html nginx

2. Isolate applications

We can run two MySQL instances and two Nginx instances, mapping each to a different port:

Map mysql-test1 to port 8001 and mysql-test2 to port 8002.

docker run -d --name mysql-test1 -p 8001:3306 -e MYSQL_ROOT_PASSWORD=123456 mysql
docker run -d --name mysql-test2 -p 8002:3306 -e MYSQL_ROOT_PASSWORD=123456 mysql

Map Web1 to port 8100 and Web2 to port 8200.

docker run -d --name web1 -p 8100:80 -v ${your_dir}:/usr/share/nginx/html nginx
docker run -d --name web2 -p 8200:80 -v ${your_dir}:/usr/share/nginx/html nginx

3. Improve development efficiency

  1. Consistent runtime environment

    Because Docker ensures a consistent execution environment, application migration is easy. Docker runs on many platforms (physical machines, virtual machines, public clouds, private clouds, even laptops) with consistent results, so users can migrate an application from one platform to another without worrying that a change of environment will break it.

  2. Faster startup time

    Traditional virtual machine technology often takes minutes to start application services, while Docker containers run directly on the host kernel and do not need to boot a complete operating system, so they can start in seconds or even milliseconds. This greatly saves development, testing, and deployment time.

  3. More efficient reuse of system resources

    Docker has a higher utilization rate of system resources because the container does not need to carry out hardware virtualization and run the complete operating system. Whether it’s application execution speed, memory consumption, or file storage speed, it’s more efficient than traditional virtual machine technology. As a result, a host with the same configuration can often run a larger number of applications than with virtual machine technology.

  4. Registry/image mechanism

    With a registry, a Docker application can easily run on any virtual machine/server/host that runs the Docker daemon. The uniform environment makes deployment very simple.

4. Version control

A Docker container can also act like a Git repository, allowing you to commit your changes to a Docker image and manage them across different versions. Take a look at this example:

Having created a MySQL container, we can now take a snapshot and tag it with the commit command.

We'll talk more about this later, in the section on common Docker commands.

4. Create a Docker image

A Dockerfile is a script composed of commands and parameters. Running docker build executes the script to build an image automatically, which is mainly useful for continuous integration.

Generally, a Dockerfile consists of four parts:

  • Base image information
  • Maintainer information
  • Image build instructions
  • Instructions to execute when the container starts

1. Dockerfile example:

FROM node:10

LABEL maintainer="[email protected]"

# Create the app directory
WORKDIR /app

# Copy package.json and package-lock.json (npm@5+) or yarn.lock
# into the working directory (relative path)
COPY ["package.json", "*.lock", "./"]

# Copy the app source code
# Note: destination paths are relative to the working directory
COPY src ./src

# With a .dockerignore file, the two COPY steps above can be merged into:
# COPY . .

# Install app dependencies with Yarn
# For a production build, add the --prod flag
RUN yarn --prod --registry=https://registry.npm.taobao.org

# Expose port 3000 (map it with -p 4000:3000)
EXPOSE 3000

CMD ["node", "src/index.js"]
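As the comments above note, with a .dockerignore file the two COPY steps can be merged into a single `COPY . .`. A minimal sketch of such a file (these entries are typical examples, not taken from the original project):

```
node_modules
npm-debug.log
.git
.DS_Store
```

Anything listed here is excluded from the build context, which also keeps builds faster.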

2. Create a Node application locally

When Node.js meets Docker: here is how Docker is used on the front end.

A simple Koa application:

const Koa = require('koa');
const app = new Koa();

// response
app.use(ctx => {
  ctx.body = 'Hello Koa!!';
});

app.listen(3000);

3. Package the image

Use docker build to package:

docker build -t ${your_name}/${image_name}:${tag} .

Here your_name is your user name in the remote registry (or the registry address), image_name is the image name, and tag is the version tag used for version control. The trailing . refers to the current directory (the build context). For example:

docker build -t tiedan/node-demo:1.0 .

4. Use the image

Run it with docker run:

# Run
docker run -d --name nodedemo -p 4000:3000 tiedan/node-demo:1.0
# Check
docker ps

5. Docker-compose

With Docker Compose, users can define a multi-container application in a configuration file, then install all of the app's dependencies and complete the build with a single command. Docker Compose addresses how orchestration is managed between containers.

Compose has two important concepts:

  • Service: an application container, which may in practice run several container instances of the same image.
  • Project: a complete business unit composed of a set of associated application containers, defined in the docker-compose.yml file.

Docker Compose is a standalone product of Docker, so it is necessary to install Docker Compose separately after installing Docker.

1. Install

Docker Desktop for Windows and Mac and Docker Toolbox already include Compose and the other Docker tools. For Linux, you can download the Compose binary from GitHub.

# Download
sudo curl -L https://github.com/docker/compose/releases/download/1.20.0/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
# Make it executable
chmod +x /usr/local/bin/docker-compose
# Check the version
docker-compose --version

2. Use

Rewriting the MySQL deployment as docker-compose.yml:

version: '3'
services:
  mysql:
    image: mysql
    container_name: test-mysql
    ports:
    - "8001:3306"
    environment:
    - MYSQL_ROOT_PASSWORD=123456

In the current directory of this file, use docker-compose up -d to execute.
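One caveat: with the configuration above, MySQL's data lives inside the container and is lost when the container is removed. A hedged sketch that adds a named volume for persistence (the volume name mysql-data is an example of my own; /var/lib/mysql is MySQL's default data directory):

```yaml
version: '3'
services:
  mysql:
    image: mysql
    container_name: test-mysql
    ports:
      - "8001:3306"
    environment:
      - MYSQL_ROOT_PASSWORD=123456
    volumes:
      - mysql-data:/var/lib/mysql

volumes:
  mysql-data:
```

With this, docker-compose down (without -v) leaves the data intact for the next docker-compose up.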

Life-cycle management commands:

  • Create: run / up
  • Start/stop/remove/restart: start / stop / rm / restart
  • View/logs: ps / logs

3. Application

Set up a local Mongo + Mongo-Express service.

docker-compose.mongo.yml configuration file:
version: '3.1'
services:
  mongo:
    image: mongo
    restart: always
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: 123456

  mongo-express:
    image: mongo-express
    restart: always
    ports:
      - "8081:8081"
    environment:
      ME_CONFIG_MONGODB_ADMINUSERNAME: root
      ME_CONFIG_MONGODB_ADMINPASSWORD: 123456

Run it:

# Use -f to specify the compose file
docker-compose -f docker-compose.mongo.yml up -d

4. Set up a Git server

Project address: github.com/sameersbn/d…

docker-compose.git.yml file:

# Run the file: docker-compose -f docker-compose.git.yml up -d
version: '2.3'

services:
  redis:
    restart: always
    image: redis:5.0.9
    command:
    - --loglevel warning
    volumes:
    - redis-data:/var/lib/redis:Z

  postgresql:
    restart: always
    image: sameersbn/postgresql:12-20200524
    volumes:
    - postgresql-data:/var/lib/postgresql:Z
    environment:
    - DB_USER=gitlab
    - DB_PASS=password
    - DB_NAME=gitlabhq_production
    - DB_EXTENSION=pg_trgm,btree_gist

  gitlab:
    restart: always
    image: sameersbn/gitlab:13.11.2
    depends_on:
    - redis
    - postgresql
    ports:
    - "10080:80"
    - "10022:22"
    volumes:
    - gitlab-data:/home/git/data:Z
    healthcheck:
      test: ["CMD", "/usr/local/sbin/healthcheck"]
      interval: 5m
      timeout: 10s
      retries: 3
      start_period: 5m
    environment:
    - DEBUG=false

    - DB_ADAPTER=postgresql
    - DB_HOST=postgresql
    - DB_PORT=5432
    - DB_USER=gitlab
    - DB_PASS=password
    - DB_NAME=gitlabhq_production

    - REDIS_HOST=redis
    - REDIS_PORT=6379

    - TZ=Asia/Kolkata
    - GITLAB_TIMEZONE=Kolkata

    - GITLAB_HTTPS=false
    - SSL_SELF_SIGNED=false

    - GITLAB_HOST=localhost
    - GITLAB_PORT=10080
    - GITLAB_SSH_PORT=10022
    - GITLAB_RELATIVE_URL_ROOT=
    - GITLAB_SECRETS_DB_KEY_BASE=long-and-random-alphanumeric-string
    - GITLAB_SECRETS_SECRET_KEY_BASE=long-and-random-alphanumeric-string
    - GITLAB_SECRETS_OTP_KEY_BASE=long-and-random-alphanumeric-string

    - GITLAB_ROOT_PASSWORD=12345678
    - GITLAB_ROOT_EMAIL=atiedan666666@163.com

    - GITLAB_NOTIFY_ON_BROKEN_BUILDS=true
    - GITLAB_NOTIFY_PUSHER=false

    - [email protected]
    - [email protected]
    - [email protected]

    - GITLAB_BACKUP_SCHEDULE=daily
    - GITLAB_BACKUP_TIME=01:00

    - SMTP_ENABLED=false
    - SMTP_DOMAIN=www.example.com
    - SMTP_HOST=smtp.gmail.com
    - SMTP_PORT=587
    - [email protected]
    - SMTP_PASS=password
    - SMTP_STARTTLS=true
    - SMTP_AUTHENTICATION=login

    - IMAP_ENABLED=false
    - IMAP_HOST=imap.gmail.com
    - IMAP_PORT=993
    - [email protected]
    - IMAP_PASS=password
    - IMAP_SSL=true
    - IMAP_STARTTLS=false

    - OAUTH_ENABLED=false
    - OAUTH_AUTO_SIGN_IN_WITH_PROVIDER=
    - OAUTH_ALLOW_SSO=
    - OAUTH_BLOCK_AUTO_CREATED_USERS=true
    - OAUTH_AUTO_LINK_LDAP_USER=false
    - OAUTH_AUTO_LINK_SAML_USER=false
    - OAUTH_EXTERNAL_PROVIDERS=

    - OAUTH_CAS3_LABEL=cas3
    - OAUTH_CAS3_SERVER=
    - OAUTH_CAS3_DISABLE_SSL_VERIFICATION=false
    - OAUTH_CAS3_LOGIN_URL=/cas/login
    - OAUTH_CAS3_VALIDATE_URL=/cas/p3/serviceValidate
    - OAUTH_CAS3_LOGOUT_URL=/cas/logout

    - OAUTH_GOOGLE_API_KEY=
    - OAUTH_GOOGLE_APP_SECRET=
    - OAUTH_GOOGLE_RESTRICT_DOMAIN=

    - OAUTH_FACEBOOK_API_KEY=
    - OAUTH_FACEBOOK_APP_SECRET=

    - OAUTH_TWITTER_API_KEY=
    - OAUTH_TWITTER_APP_SECRET=

    - OAUTH_GITHUB_API_KEY=
    - OAUTH_GITHUB_APP_SECRET=
    - OAUTH_GITHUB_URL=
    - OAUTH_GITHUB_VERIFY_SSL=

    - OAUTH_GITLAB_API_KEY=
    - OAUTH_GITLAB_APP_SECRET=

    - OAUTH_BITBUCKET_API_KEY=
    - OAUTH_BITBUCKET_APP_SECRET=
    - OAUTH_BITBUCKET_URL=

    - OAUTH_SAML_ASSERTION_CONSUMER_SERVICE_URL=
    - OAUTH_SAML_IDP_CERT_FINGERPRINT=
    - OAUTH_SAML_IDP_SSO_TARGET_URL=
    - OAUTH_SAML_ISSUER=
    - OAUTH_SAML_LABEL="Our SAML Provider"
    - OAUTH_SAML_NAME_IDENTIFIER_FORMAT=urn:oasis:names:tc:SAML:2.0:nameid-format:transient
    - OAUTH_SAML_GROUPS_ATTRIBUTE=
    - OAUTH_SAML_EXTERNAL_GROUPS=
    - OAUTH_SAML_ATTRIBUTE_STATEMENTS_EMAIL=
    - OAUTH_SAML_ATTRIBUTE_STATEMENTS_NAME=
    - OAUTH_SAML_ATTRIBUTE_STATEMENTS_USERNAME=
    - OAUTH_SAML_ATTRIBUTE_STATEMENTS_FIRST_NAME=
    - OAUTH_SAML_ATTRIBUTE_STATEMENTS_LAST_NAME=

    - OAUTH_CROWD_SERVER_URL=
    - OAUTH_CROWD_APP_NAME=
    - OAUTH_CROWD_APP_PASSWORD=

    - OAUTH_AUTH0_CLIENT_ID=
    - OAUTH_AUTH0_CLIENT_SECRET=
    - OAUTH_AUTH0_DOMAIN=
    - OAUTH_AUTH0_SCOPE=

    - OAUTH_AZURE_API_KEY=
    - OAUTH_AZURE_API_SECRET=
    - OAUTH_AZURE_TENANT_ID=

volumes:
  redis-data:
  postgresql-data:
  gitlab-data:

5. A real-world case

How docker-compose is used on the front end:

Node.js + MongoDB + Koa + Vue

docker-compose.yml

version: '3'
services:
  web:
    image: web:1.0
    ports:
    - "8080:80"

  server:
    image: server:1.0
    ports:
    - "3000:3000"
    depends_on:
    - mongodb
    links:
    - mongodb:db

  mongodb:
    image: mongo
    restart: always
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: 123456

depends_on determines the order in which the containers start: the server container is created only after mongodb has been started.
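Note that depends_on only controls start order; it does not wait until MongoDB is actually ready to accept connections. If readiness matters, one option (supported in Compose file format 2.1+ and in the newer Compose specification, but not in classic version '3') is a healthcheck-based condition. A hedged sketch, where the probe command is an example and may need adjusting for your MongoDB version and authentication settings:

```yaml
services:
  server:
    image: server:1.0
    depends_on:
      mongodb:
        condition: service_healthy

  mongodb:
    image: mongo
    healthcheck:
      # Example probe; adjust for your MongoDB version / auth settings
      test: ["CMD", "mongo", "--eval", "db.adminCommand('ping')"]
      interval: 10s
      timeout: 5s
      retries: 5
```

With this, Compose starts the server container only after the mongodb healthcheck reports healthy, not merely after the container process starts.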