Docker advanced

Docker Compose

Introduction

Docker Compose is one of Docker's official orchestration projects and is responsible for quickly deploying distributed applications on a cluster. The project is written in Python and works by calling the API provided by Docker.

A Dockerfile lets users manage a single application container; Compose lets users define a group of associated application containers (called a project) in a single template (YAML format), for example a web service container plus a back-end database service container.

It defines and runs multiple containers so that they can be managed easily and efficiently.

  • Define and run multiple containers
  • YAML file Configuration file
  • A single instruction
  • Compose is available for all environments
  • Three steps
    1. Dockerfile
    2. docker-compose.yml
    3. Start the project: docker-compose up

The official description

Compose is a tool for defining and running multi-container Docker applications. With Compose, you can use the YAML file to configure your application’s services. Then, use a command to create and start all services from the configuration.

Compose is suitable for all environments: production, staging, development, testing, and CI workflows.

Using Compose basically consists of three steps:

  1. Use a Dockerfile to define the application's environment so that it can be reproduced anywhere.
  2. Define the services that make up the application in docker-compose.yml so that they can run together in an isolated environment.
  3. Run docker compose up and the entire application starts up.

Why not run multiple programs in a single container?

  1. Transparency. The infrastructure and services in a container group (process management, resource monitoring, and so on) stay consistent across the containers. This is designed for the user's convenience.
  2. Decoupled dependencies between pieces of software: each container can be rebuilt and released independently.
  3. Ease of use. Users do not have to run their own process manager, nor worry about propagating the exit status of each application.
  4. Efficiency. Because the infrastructure takes on more responsibility, the containers themselves can stay lightweight.

Purpose: Batch container orchestration

Compose is an official Docker open source project that requires installation.

Docker Desktop comes with Compose.

A docker-compose.yml file is structured as follows:

version: "3.9"  # optional since v1.27.0
services:
  web:
    build: .
    ports:
      - "5000:5000"
    volumes:
      - .:/code
      - logvolume01:/var/log
    links:
      - redis
  redis:
    image: redis
volumes:
  logvolume01: {}

docker-compose up can start even 100 services with a single command.

Compose important concept:

  • Service: an application container, which can actually run multiple instances of the same image (web, redis, mysql, ...).
  • Project: a complete business unit made up of a group of associated application containers, for example a blog (web, mysql, wordpress, ...).

Installation

Docs.docker.com/compose/ins…

Getting started

Docs.docker.com/compose/get…

  1. Step 1: Create app app.py

    mkdir composetest
    cd composetest
    vim app.py

    app.py

    import time
    
    import redis
    from flask import Flask
    
    app = Flask(__name__)
    cache = redis.Redis(host='redis', port=6379)
    
    def get_hit_count():
        retries = 5
        while True:
            try:
                return cache.incr('hits')
            except redis.exceptions.ConnectionError as exc:
                if retries == 0:
                    raise exc
                retries -= 1
                time.sleep(0.5)
    
    @app.route('/')
    def hello():
        count = get_hit_count()
        return 'Hello World! I have been seen {} times.\n'.format(count)
    vim requirements.txt

    requirements.txt

    flask
    redis
  2. Create a Dockerfile and package the application as an image

    vim Dockerfile

    Dockerfile

    FROM python:3.7-alpine
    WORKDIR /code
    ENV FLASK_APP=app.py
    ENV FLASK_RUN_HOST=0.0.0.0
    RUN sed -i 's/dl-cdn.alpinelinux.org/mirrors.aliyun.com/g' /etc/apk/repositories
    RUN apk add --no-cache gcc musl-dev linux-headers
    COPY requirements.txt requirements.txt
    RUN pip install -r requirements.txt
    EXPOSE 5000
    COPY . .
    CMD ["flask", "run"]

    The official explanation

    This tells Docker to:

    • Build an image starting with the Python 3.7 image.
    • Set the working directory to /code.
    • Set environment variables used by the flask command.
    • Install gcc and other dependencies
    • Copy requirements.txt and install the Python dependencies.
    • Add metadata to the image to describe that the container is listening on port 5000
    • Copy the current directory . in the project to the workdir . in the image.
    • Set the default command for the container to flask run.
  3. Step 3: Create the docker-compose.yml file (define the whole application and the environment it needs: web, redis)

    vim docker-compose.yml

    docker-compose.yml

    version: "3.9"
    services:
      web:
        build: .
        ports:
          - "5000:5000"
      redis:
        image: "redis:alpine"
  4. Step 4: Start the Compose project (docker-compose up)

    docker-compose up
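
    A quick way to check that the project came up (a sketch; it assumes the "5000:5000" port mapping above and that port 5000 is free on the host):

    docker-compose ps        # list the project's containers
    curl localhost:5000      # should print "Hello World! I have been seen 1 times."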

Process

  1. Create a network (everything in the project is on the same network and containers can reach each other by service name)

  2. Parse docker-compose.yml

  3. Start the service

    Creating composetest_web_1 … done
    Creating composetest_redis_1 … done

    The container name composetest_web_1 breaks down as:

    1. The project (directory) name: composetest
    2. The service name: web
    3. The replica number: 1

Stop

docker-compose down
docker-compose stop

docker-compose stop only stops the containers; docker-compose down also removes the containers and the project's network.

YAML rules

Docs.docker.com/compose/com…

# Three layers

version: ""          # 1. version
services:            # 2. services
  web:               # service 1: web
    # per-service configuration
    image:
    build:
    networks:
    depends_on:      # dependencies are started first
      - redis
    ...
  redis:             # service 2: redis
  ...                # service 3, ...
# 3. other global rules
volumes:
networks:
configs:

The open source project

WordPress blog

Docs.docker.com/samples/wor…

version: "3.9"
    
services:
  db:
    platform: linux/x86_64
    image: mysql:5.7
    volumes:
      - db_data:/var/lib/mysql
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: somewordpress
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: wordpress
    
  wordpress:
    depends_on:
      - db
    image: wordpress:latest
    ports:
      - "8000:80"
    restart: always
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: wordpress
      WORDPRESS_DB_NAME: wordpress
volumes:
  db_data: {}

Run docker-compose up (or docker-compose up -d) with this YAML file and open localhost:8000 to see the WordPress setup page.

In practice

  1. Write the microservice and package it as a jar
  2. Build the image with a Dockerfile
  3. Orchestrate the project with docker-compose.yml
  4. Put the three files in the same directory on the server and run docker-compose up (a minimal sketch follows)
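
A minimal sketch of those files, using hypothetical names (demo.jar as the packaged microservice, openjdk:8-jre-alpine as the base image, a mysql dependency added for illustration):

# Dockerfile for the packaged jar
cat > Dockerfile <<'EOF'
FROM openjdk:8-jre-alpine
COPY demo.jar /app/demo.jar
EXPOSE 8080
CMD ["java", "-jar", "/app/demo.jar"]
EOF

# docker-compose.yml that builds the image and adds a database
cat > docker-compose.yml <<'EOF'
version: "3.9"
services:
  app:
    build: .
    ports:
      - "8080:8080"
    depends_on:
      - db
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example
EOF

# With demo.jar, Dockerfile and docker-compose.yml in the same directory:
docker-compose up -d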

Docker Swarm

  1. Prepare four 1-core, 2 GB servers to build the cluster: 1 manager and 3 workers.
  2. Use a CentOS 7 (or later) server image.

Install Docker on all four servers

  1. Yum Install the GCC environment

    yum -y install gcc
    yum -y install gcc-c++
  2. Uninstall the previous version

    yum remove docker \
    									docker-client \
    									docker-client-latest \
    									docker-common \
    									docker-latest \
    									docker-latest-logrotate \
    									docker-logrotate \
    									docker-engine
  3. Install required software packages

    yum install -y yum-utils
  4. Set up the image repository

    yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
  5. Update yum Package index

    yum makecache fast
  6. Install the Docker CE

    yum install -y docker-ce docker-ce-cli containerd.io
  7. Start the Docker

    systemctl start docker
  8. The test command

    docker version
  9. Configure a registry mirror (accelerator)

    sudo mkdir -p /etc/docker
    
    sudo tee /etc/docker/daemon.json <<-'EOF'
    {
    	"registry-mirrors":["https://qiyb9988.mirror.aliyuncs.com"]
    }
    EOF
    
    sudo systemctl daemon-reload
    
    sudo systemctl restart docker

The official documentation

Docs.docker.com/engine/swar…

Raft protocol (consensus protocol)

With only two managers: if one manager node dies, the other cannot form a majority and the swarm becomes unavailable.

The Raft protocol keeps the cluster usable only while a majority of the manager nodes are alive (high availability). That means more than one manager node is needed, and in practice a cluster should have at least three managers.
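
The quorum arithmetic behind this is straightforward:

majority            = floor(N / 2) + 1     # managers needed for the cluster to stay writable
tolerated failures  = floor((N - 1) / 2)

1 manager  -> tolerates 0 failures
3 managers -> tolerates 1 failure
5 managers -> tolerates 2 failures
7 managers -> tolerates 3 failures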

Set up the cluster

  1. Initialize a Swarm on the first server

    docker swarm init --advertise-addr <current intranet IP>

    If you use Docker Desktop for Mac or Docker Desktop for Windows to test a single-node swarm, just run docker swarm init with no arguments.

Run docker info to see the current state of the swarm:

$ docker info

Containers: 2
 Running: 0
 Paused: 0
 Stopped: 2
...snip...
Swarm: active
 NodeID: dxn1zf6l61qsb1josjja83ngz
 Is Manager: true
 Managers: 1
 Nodes: 1
...snip...

Run the docker node ls command to view information about the node:

$ docker node ls

ID                           HOSTNAME  STATUS  AVAILABILITY  MANAGER STATUS
dxn1zf6l61qsb1josjja83ngz *  manager1  Ready   Active        Leader

The * next to the node ID indicates that you are currently connected to the node.

Docker Engine swarm mode automatically names each node after the machine's hostname.

  2. Join nodes

    Once you have created a swarm with a manager node, you can add the worker node.

    #Add a node
    docker swarm join
    #The access token
    docker swarm join-token manager
    docker swarm join-token worker
    $ docker swarm join \
      --token SWMTKN-1-49nj1cmql0jkz5s954yi3oex3nedyz0fb0xx14ie39trti4wxv-8vxv8rssmk743ojnwacrr2e7c \
      192.168.99.100:2377
    
    This node joined a swarm as a worker.
    $ docker swarm join-token worker
    
    To add a worker to this swarm, run the following command:
    
        docker swarm join \
        --token SWMTKN-1-49nj1cmql0jkz5s954yi3oex3nedyz0fb0xx14ie39trti4wxv-8vxv8rssmk743ojnwacrr2e7c \
        192.168.99.100:2377

    Open a terminal, SSH to the machine where the manager node is running, and run the docker node ls command to view the working node:

    ID                           HOSTNAME  STATUS  AVAILABILITY  MANAGER STATUS
    03g1y59jwfg7cf99w4lt0f662    worker2   Ready   Active
    9j68exjopxe7wfl6yuxml7a7j    worker1   Ready   Active
    dxn1zf6l61qsb1josjja83ngz *  manager1  Ready   Active        Leader

    The MANAGER STATUS column identifies the manager nodes in the swarm. The empty value in this column for worker1 and worker2 identifies them as worker nodes.

    Swarm management commands such as docker node ls only work on manager nodes.

Deploy the service

Deploy a service to the swarm.

After the cluster is created, services can be deployed to it. Run the following command on a manager node:

$ docker service create --replicas 1 --name helloworld alpine ping docker.com

9uk4639qpg7npwf3fn2aasksr
  • docker service create creates a service.
  • --name names the service helloworld.
  • --replicas specifies a desired state of 1 running instance.
  • The arguments alpine ping docker.com define the service as an Alpine Linux container that runs the command ping docker.com.

Note: docker run only starts a container; docker service adds what a plain container lacks, namely scaling the number of replicas and rolling updates.

Run docker service ls to see the list of running services:

$ docker service ls

ID            NAME        SCALE  IMAGE   COMMAND
9uk4639qpg7n  helloworld  1/1    alpine  ping docker.com

Check out the service on Swarm

Run docker service inspect --pretty <SERVICE-ID> to display the details of a service in an easily readable format. View the details of the helloworld service:

[manager1]$ docker service inspect --pretty helloworld

ID:		9uk4639qpg7npwf3fn2aasksr
Name:		helloworld
Service Mode:	REPLICATED
 Replicas:		1
Placement:
UpdateConfig:
 Parallelism:	1
ContainerSpec:
 Image:		alpine
 Args:	ping docker.com
Resources:
Endpoint Mode:  vip

To return the service details in JSON format, run the same command without the --pretty flag.

[manager1]$ docker service inspect helloworld
[
{
    "ID": "9uk4639qpg7npwf3fn2aasksr",
    "Version": {
        "Index": 418
    },
    "CreatedAt": "2016-06-16T21:57:11.622222327Z",
    "UpdatedAt": "2016-06-16T21:57:11.622222327Z",
    "Spec": {
        "Name": "helloworld",
        "TaskTemplate": {
            "ContainerSpec": {
                "Image": "alpine",
                "Args": [
                    "ping",
                    "docker.com"
                ]
            },
            "Resources": {
                "Limits": {},
                "Reservations": {}
            },
            "RestartPolicy": {
                "Condition": "any",
                "MaxAttempts": 0
            },
            "Placement": {}
        },
        "Mode": {
            "Replicated": {
                "Replicas": 1
            }
        },
        "UpdateConfig": {
            "Parallelism": 1
        },
        "EndpointSpec": {
            "Mode": "vip"
        }
    },
    "Endpoint": {
        "Spec": {}
    }
}
]

Run docker service ps <SERVICE-ID> to see which nodes are running the service:

[manager1]$ docker service ps helloworld

NAME             IMAGE   NODE     DESIRED STATE  CURRENT STATE     ERROR PORTS
helloworld.1...  alpine  worker2  Running        Running 3 minutes

In this case, an instance of the HelloWorld service is running on the Worker2 node. You may also see services running on the Manager node. By default, the management nodes in the cluster can perform tasks as the worker nodes.

Swarm also displays the required state and current state of the service task to see if the task is running according to the service definition.

Run docker ps on the node where the task is running to see details about the task's container.

If HelloWorld is running on a node other than the manager node, you must connect SSH to that node.

[worker2]$ docker ps

CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
e609dde94e47        alpine:latest       "ping docker.com"   3 minutes ago       Up 3 minutes     

Dynamically scale the service

docker service update --replicas 3 helloworld

This creates three helloworld replicas, distributed across the servers in the cluster; the service is reachable no matter which node is accessed.

docker service update --replicas 1 helloworld

This reduces the helloworld replicas in the cluster back to 1. A service can be scaled dynamically to multiple replicas to achieve high availability.

docker service scale helloworld=5
docker service scale helloworld=2

The scale command has the same effect as update --replicas.

Run docker service ps <SERVICE-ID> to see the updated task list:

$ docker service ps helloworld

NAME                                    IMAGE   NODE      DESIRED STATE  CURRENT STATE
helloworld.1.8p1vev3fq5zm0mi8g0as41w35  alpine  worker2   Running        Running 7 minutes
helloworld.2.c7a7tcdq5s0uk3qr88mf8xco6  alpine  worker1   Running        Running 24 seconds
helloworld.3.6crl09vdcalvtfehfh69ogfb1  alpine  worker1   Running        Running 24 seconds
helloworld.4.auky6trawmdlcne8ad8phb0f1  alpine  manager1  Running        Running 24 seconds
helloworld.5.ba19kca06l18zujfwxyc5lkyn  alpine  worker2   Running        Running 24 seconds

Swarm created four new tasks to scale up to a total of five instances running Alpine Linux. The tasks are distributed among the three nodes of the cluster. One runs on Manager1.

Run docker ps to see the containers running on the node you are connected to. The following example shows the task running on manager1:

$ docker ps

CONTAINER ID   IMAGE           COMMAND             CREATED              STATUS              PORTS  NAMES
528d68040f95   alpine:latest   "ping docker.com"   About a minute ago   Up About a minute          helloworld.4.auky6trawmdlcne8ad8phb0f1

Remove the service

docker service rm helloworld

Run docker service inspect <SERVICE-ID> to verify that the swarm manager removed the service. The CLI returns a message saying that the service could not be found:

$ docker service inspect helloworld
[]
Error: no such service: helloworld

Even though the service no longer exists, the task containers take a few seconds to clean up. You can run docker ps on the nodes to verify when the task containers have been removed.

$ docker ps

CONTAINER ID   IMAGE           COMMAND             CREATED          STATUS              PORTS  NAMES
db1651f50347   alpine:latest   "ping docker.com"   44 minutes ago   Up 46 seconds              helloworld.5.9lkmos2beppihw95vdwxy1j3w
43bf6e532a92   alpine:latest   "ping docker.com"   44 minutes ago   Up 46 seconds              helloworld.3.a71i8rp6fua79ad43ycocl4t2
5a0fb65d8fa7   alpine:latest   "ping docker.com"   44 minutes ago   Up 45 seconds              helloworld.2.2jpgensh7d935qdc857pxulfr
afb0ba67076f   alpine:latest   "ping docker.com"   44 minutes ago   Up 46 seconds              helloworld.4.1c47o7tluz7drve4vkm2m5olx
688172d3bfaa   alpine:latest   "ping docker.com"   45 minutes ago   Up About a minute          helloworld.1.74nbhb3fhud8jfrhigd7s29we

$ docker ps

CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS

Apply rolling updates to a service

Deploy a service based on the redis:3.0.6 image, then upgrade it to the redis:3.0.7 image using a rolling update.

Deploy redis:3.0.6 to the swarm and configure it with a 10-second update delay:

$ docker service create \
  --replicas 3 \
  --name redis \
  --update-delay 10s \
  redis:3.0.6

0u6a4s31ybk7yw2wyvtikmu50

Configure the rolling update policy at service deployment time.

--update-delay sets the delay between updates to a service task or set of tasks. The time T can be expressed as a combination of seconds (Ts), minutes (Tm) and hours (Th), so 10m30s is a delay of 10 minutes 30 seconds.

By default, the scheduler updates one task at a time. You can pass the --update-parallelism flag to configure the maximum number of service tasks the scheduler updates simultaneously.

By default, when an updated task returns to the RUNNING state, the scheduler moves on to the next task, until all tasks are updated. If a task returns FAILED at any time during the update, the scheduler pauses the update. You can control this behavior with the --update-failure-action flag of docker service create or docker service update.
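
For reference, the whole update policy can be set up front; the flags below exist on both docker service create and docker service update (a sketch reusing the redis example):

docker service create \
  --replicas 3 \
  --name redis \
  --update-delay 10s \
  --update-parallelism 2 \
  --update-failure-action pause \
  redis:3.0.6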

Check the Redis service:

$ docker service inspect --pretty redis

ID:             0u6a4s31ybk7yw2wyvtikmu50
Name:           redis
Service Mode:   Replicated
 Replicas:      3
Placement:
 Strategy:      Spread
UpdateConfig:
 Parallelism:   1
 Delay:         10s
ContainerSpec:
 Image:         redis:3.0.6
Resources:
Endpoint Mode:  vip

You can now update the container image for Redis. Swarm Manager applies updates to nodes according to the UpdateConfig policy:

$ docker service update --image redis:3.0.7 redis
redis

By default, the scheduler applies rolling updates, as shown below:

  • Stop the first task.
  • Schedule updates for stopped tasks.
  • Start the container for the update task.
  • If the update to the task returns RUNNING, wait for the specified delay before starting the next task.
  • If at any point during the update, the task returns a failure, pause the update.

Run docker service inspect --pretty redis to see the new image in the desired state:

$ docker service inspect --pretty redis

ID:             0u6a4s31ybk7yw2wyvtikmu50
Name:           redis
Service Mode:   Replicated
 Replicas:      3
Placement:
 Strategy:      Spread
UpdateConfig:
 Parallelism:   1
 Delay:         10s
ContainerSpec:
 Image:         redis:3.0.7
Resources:
Endpoint Mode:  vip

The docker service inspect output also shows whether the update was paused due to a failure:

$ docker service inspect --pretty redis

ID:             0u6a4s31ybk7yw2wyvtikmu50
Name:           redis
...snip...
Update status:
 State:      paused
 Started:    11 seconds ago
 Message:    update paused due to failure or early termination of task 9p7ith557h8ndf0ui9s0q951b
...snip...

To restart a paused update, run docker service update <SERVICE-ID>. For example:

docker service update redis

You may also need to pass flags to docker service update to reconfigure the service and avoid repeating the failure.

Run docker service ps <SERVICE-ID> to watch the rolling update:

$ docker service ps redis

NAME                                   IMAGE        NODE       DESIRED STATE  CURRENT STATE            ERROR
redis.1.dos1zffgeofhagnve8w864fco      redis:3.0.7  worker1    Running        Running 37 seconds
 \_ redis.1.88rdo6pa52ki8oqx6dogf04fh  redis:3.0.6  worker2    Shutdown       Shutdown 56 seconds ago
redis.2.9l3i4j85517skba5o7tn5m8g0      redis:3.0.7  worker2    Running        Running About a minute
 \_ redis.2.66k185wilg8ele7ntu8f6nj6i  redis:3.0.6  worker1    Shutdown       Shutdown 2 minutes ago
redis.3.egiuiqpzrdbxks3wxgn8qib1g      redis:3.0.7  worker1    Running        Running 48 seconds
 \_ redis.3.ctzktfddb2tepkr45qcmqln04  redis:3.0.6  mmanager1  Shutdown       Shutdown 2 minutes ago

Before the swarm has updated all of the tasks, you can see that some are running redis:3.0.6 while others are running redis:3.0.7. The output above shows the state once the rolling update is complete.

Maintain nodes on the cluster

In the previous section, all nodes were in a running, available state. Swarm Manager can assign tasks to any available node.

Sometimes you need to perform maintenance on a server. For that, set the node's availability to drain, which drains its swarm tasks off the node: Drain availability prevents the node from receiving new tasks from the swarm manager.

It also means the manager stops the tasks running on that node and launches replacement replica tasks on other Active nodes.

Important: Setting a node to DRAIN does not remove standalone containers from the node, such as those created using Docker Run, Docker Compose Up, or the Docker Engine API. Node states (including DRAIN) only affect the node’s ability to schedule swarm service workloads.

  1. Verify that all nodes are active and available.

    $ docker node ls
    
    ID                           HOSTNAME  STATUS  AVAILABILITY  MANAGER STATUS
    1bcef6utixb0l0ca7gxuivsj0    worker2   Ready   Active
    38ciaotwjuritcdtn9npbnkuz    worker1   Ready   Active
    e216jshn25ckzbvmwlnh5jr3g *  manager1  Ready   Active        Leader
  2. Check how the swarm manager has assigned the redis tasks to the different nodes by running docker service ps redis:

    $ docker service ps redis

    NAME                               IMAGE        NODE      DESIRED STATE  CURRENT STATE
    redis.1.7q92v0nr1hcgts2amcjyqg3pq  redis:3.0.6  manager1  Running        Running 26 seconds
    redis.2.7h2l8h3q3wqy5f66hlv9ddmi6  redis:3.0.6  worker1   Running        Running 26 seconds
    redis.3.9bg7cezvedmkgg6c8yzvbhwsd  redis:3.0.6  worker2   Running        Running 26 seconds
  3. Run docker node update --availability drain <NODE-ID> to drain the node that has a task assigned to it:

    docker node update --availability drain worker1
    
    worker1
  4. Check the node to check its availability:

    $ docker node inspect --pretty worker1

    ID:            38ciaotwjuritcdtn9npbnkuz
    Hostname:      worker1
    Status:
     State:        Ready
     Availability: Drain
    ...snip...
  5. Run docker service ps redis to see how the swarm manager updated the task assignments for the redis service:

    $ docker service ps redis

    NAME                                   IMAGE        NODE      DESIRED STATE  CURRENT STATE           ERROR
    redis.1.7q92v0nr1hcgts2amcjyqg3pq      redis:3.0.6  manager1  Running        Running 4 minutes
    redis.2.b4hovzed7id8irg1to42egue8      redis:3.0.6  worker2   Running        Running About a minute
     \_ redis.2.7h2l8h3q3wqy5f66hlv9ddmi6  redis:3.0.6  worker1   Shutdown       Shutdown 2 minutes ago
    redis.3.9bg7cezvedmkgg6c8yzvbhwsd      redis:3.0.6  worker2   Running        Running 4 minutes

    The swarm manager maintains the desired state by ending the task on the node with Drain availability and creating a new task on a node with Active availability.

  6. Run docker node update --availability active <NODE-ID> to return the drained node to the active state:

    $ docker node update --availability active worker1

    worker1
  7. Check the node to see the updated status:

    $ docker node inspect --pretty worker1

    ID:            38ciaotwjuritcdtn9npbnkuz
    Hostname:      worker1
    Status:
     State:        Ready
     Availability: Active
    ...snip...

    When a node is set back to active availability, it can receive new tasks:

    • during a service update, to scale up
    • during a rolling update
    • when you set another node to Drain availability
    • when a task fails on another active node

Use the swarm mode routing mesh

Docker Engine swarm mode makes it easy to publish ports for services so that they are available to resources outside the swarm. All nodes participate in an ingress routing mesh. The routing mesh lets every node in the swarm accept connections on the published port of any service running in the swarm, even if no task for that service runs on the node. The routing mesh routes all incoming requests on published ports to an active container on an available node.

To use the ingress routing mesh, you need to open the following ports between the swarm nodes before enabling swarm mode:

  • Port 7946 (TCP/UDP) for container network discovery
  • Port 4789 (UDP) for the container ingress network

In addition to this, you must also open the published ports between the Swarm nodes and any external resources (such as external load balancers) that need access to the ports.

You can also bypass the routing mesh for a given service.

Publish a port for a service

When you create a service, use the --publish flag to publish a port: target specifies the port inside the container, and published specifies the port to bind on the routing mesh. If you omit the published port, a random high-numbered port is bound for each service task; you then need to inspect the task to determine the port.

$ docker service create \
  --name <SERVICE-NAME> \
  --publish published=<PUBLISHED-PORT>,target=<CONTAINER-PORT> \
  <IMAGE>

Note: The older form of this syntax is a colon-separated string where the published port is the first and the destination port is the second, for example -p 8080:80. The new syntax is preferred because it is easier to read and allows for greater flexibility.
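
For comparison, the two commands below should be equivalent ways of publishing container port 80 as port 8080:

docker service create --name my-web -p 8080:80 nginx                          # short (colon) syntax
docker service create --name my-web --publish published=8080,target=80 nginx  # long syntax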

<PUBLISHED-PORT> is the port on which the swarm makes the service available. If it is omitted, a random high-numbered port is bound. <CONTAINER-PORT> is the port on which the container listens; this parameter is required.

For example, the following command publishes port 80 of the nginx container as port 8080 on every node in the swarm:

$ docker service create \
  --name my-web \
  --publish published=8080,target=80 \
  --replicas 2 \
  nginx

When you access port 8080 on any node, Docker routes the request to an active container. On the swarm node itself port 8080 may not actually be bound, but the routing mesh knows how to route the traffic and prevents any port conflicts.

The routing mesh listens on the published port on every IP address assigned to a node. For externally routable IP addresses, the port is reachable from outside the host. For all other IP addresses, it is only reachable from within the host.

You can publish ports for existing services using the following command:

$ docker service update \
  --publish-add published=<PUBLISHED-PORT>,target=<CONTAINER-PORT> \
  <SERVICE>

You can use docker service inspect to view the service's published ports. For example:

$ docker service inspect --format="{{json .Endpoint.Spec.Ports}}" my-web

[{"Protocol":"tcp","TargetPort":80,"PublishedPort":8080}]

The output shows the <CONTAINER-PORT> (labelled TargetPort) of the container and the <PUBLISHED-PORT> (labelled PublishedPort) on which nodes listen for requests to the service.

Publish a port for TCP only or UDP only

By default, a published port is a TCP port. You can publish a port for UDP only, or for both TCP and UDP. If you publish both TCP and UDP and omit the protocol specifier, the port is published as a TCP port. If you use the long syntax (recommended), set the protocol key to either tcp or udp.

TCP only

Long syntax:

$ docker service create --name dns-cache \
  --publish published=53,target=53 \
  dns-cache

Short syntax:

$ docker service create --name dns-cache \
  -p 53:53 \
  dns-cache

TCP and UDP

Long syntax:

$ docker service create --name dns-cache \
  --publish published=53,target=53 \
  --publish published=53,target=53,protocol=udp \
  dns-cache

Short syntax:

$ docker service create --name dns-cache \
  -p 53:53 \
  -p 53:53/udp \
  dns-cache

UDP only

Long syntax:

$ docker service create --name dns-cache \
  --publish published=53,target=53,protocol=udp \
  dns-cache

Short syntax:

$ docker service create --name dns-cache \
  -p 53:53/udp \
  dns-cache

Bypassing the routing mesh

You can bypass the routing mesh so that when you access a bound port on a given node, you always reach the service instance running on that node. This is called host mode. A few things to keep in mind:

  • If the node you access is not running a service task, the service does not listen on that port. It may be that nothing is listening, or that a completely different application is listening.
  • If you expect to run multiple service tasks on each node (for example, 10 replicas on 5 nodes), you cannot specify a static target port. Either let Docker assign a random high-numbered port (by omitting the published port), or use a global service instead of a replicated one, or use placement constraints to ensure that only one instance of the service runs on a given node (see the sketch after this list).
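
For example, one way to pin a single host-mode instance to one node is a placement constraint (a sketch; worker1 is a hypothetical hostname):

docker service create --name web \
  --publish published=8080,target=80,mode=host \
  --constraint node.hostname==worker1 \
  --replicas 1 \
  nginx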

To bypass the routing mesh, you must use the long --publish syntax and set mode to host. If you omit the mode key or set it to ingress, the routing mesh is used. The following command creates a global service using host mode and bypassing the routing mesh.

$ docker service create --name dns-cache \
  --publish published=53,target=53,protocol=udp,mode=host \
  --mode global \
  dns-cache

Configure an external load balancer

Swarm services can work with an external load balancer, either in combination with the routing mesh or without using the routing mesh at all.

  • Using the routing mesh

    External load balancers can be configured to route requests to the Swarm service. For example, HAProxy can be configured to balance requests to nginx services published to port 8080.

In this case, port 8080 must be opened between the load balancer and the nodes in the cluster. Swarm nodes can reside on private networks accessible to proxy servers but not publicly accessible.

You can configure the load balancer to balance requests between every node in the swarm, even nodes with no scheduled task. For example, you could have the following HAProxy configuration in /etc/haproxy/haproxy.cfg:

global
        log /dev/log local0
        log /dev/log local1 notice
...snip...

# Configure HAProxy to listen on port 80
frontend http_front
   bind *:80
   stats uri /haproxy?stats
   default_backend http_back

# Configure HAProxy to route requests to swarm nodes on port 8080
backend http_back
   balance roundrobin
   server node1 192.168.99.100:8080 check
   server node2 192.168.99.101:8080 check
   server node3 192.168.99.102:8080 check

When you access the HAProxy load balancer on port 80, it forwards the request to a node in the swarm. The swarm routing mesh then routes the request to an active task. If, for any reason, the swarm scheduler dispatches the task to a different node, you do not need to reconfigure the load balancer.

You can configure any type of load balancer to route requests to swarm nodes. To learn more about HAProxy, see HAProxy Documentation.

Without the routing mesh

To use an external load balancer without the routing mesh, set --endpoint-mode to dnsrr instead of the default vip. In this case there is no single virtual IP; instead, Docker sets up DNS entries for the service so that a DNS query for the service name returns a list of IP addresses, and the client connects directly to one of them. You are responsible for providing the list of IP addresses and ports to your load balancer. See Configure Service Discovery.
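
A minimal sketch of DNSRR mode (it assumes an existing overlay network named my-overlay):

docker service create \
  --name my-web \
  --replicas 3 \
  --endpoint-mode dnsrr \
  --network my-overlay \
  nginx

# From another container attached to my-overlay, a DNS lookup of "my-web"
# returns the IP addresses of the individual tasks instead of a single VIP.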

Concepts summary

  • swarm

    Cluster management and orchestration. Docker can initialize a swarm cluster that other nodes can join (managers, workers).

  • Node

    A Docker host participating in the cluster; multiple nodes form the swarm (managers, workers).

  • Service

    Tasks that can run on either manager or worker nodes. This is the core concept: the thing users actually access.

  • Task

    A container plus the command to run inside it; the concrete unit of work.

Docker Stack

Docker Stack learning notes refer to the address: www.cnblogs.com/hhhhuanzi/p…

Introduction

Docker Stack is designed for deploying and managing multiple services at scale, providing features such as desired state, rolling upgrades, ease of use, scaling and health checks, all wrapped in a declarative model.

  • Docker Stack deployment application lifecycle:Initial Deployment > Health Check > Capacity Expansion > Update > Rollback.
  • Deployment can be accomplished using a single declarative file, that is, onlydocker-stack.ymlFile, usingdocker stack deployCommand to complete the deployment.
  • The stack file is essentially the Docker compose file, with the only requirement that version be 3.0 or higher.
  • Stack is fully integrated into Docker, unlike Compose, which has to be installed separately.

Docker Compose is suitable for development and testing, while Docker Stack is suitable for large-scale scenarios and production environments.

Docker-stack.yml file details

Pull the sample code from GitHub and analyze the docker-stack.yml file

git clone https://github.com/dockersamples/atsea-sample-shop-app.git

You can see that there are five services, three networks, four secrets, and three port mappings;

services:
  reverse_proxy:
  database:
  appserver:
  visualizer:
  payment_gateway:
networks:
  front-tier:
  back-tier:
  payment:
secrets:
  postgres_password:
  staging_token:
  revprox_key:
  revprox_cert:

Networks

When Docker deploys from a stack file, the first thing it does is check for and create the networks declared under the networks: key. Overlay networks are created by default, with the control plane encrypted. To encrypt the data plane as well, specify encrypted: 'yes' under driver_opts in the stack file. Data-plane encryption adds overhead, but generally no more than 10%.

networks:
  front-tier:
  back-tier:
  payment:
    driver: overlay
    driver_opts:
      encrypted: 'yes'

All three networks are created before the secrets and the services.

Secrets

The stack file defines four secrets, all of which are external, which means they must already exist before the stack can be deployed.

secrets:
  postgres_password:
    external: true
  staging_token:
    external: true
  revprox_key:
    external: true
  revprox_cert:
    external: true

Services

There are five services, and we analyze them one by one

  1. Reverse_proxy service

    reverse_proxy:
        image: dockersamples/atseasampleshopapp_reverse_proxy
        ports:
          - "80:80"
          - "443:443"
        secrets:
          - source: revprox_cert
            target: revprox_cert
          - source: revprox_key
            target: revprox_key
        networks:
          - front-tier
    • image: mandatory; the Docker image used to create the service replicas
    • ports: port 80 of the swarm nodes is mapped to port 80 of the replicas, and port 443 to port 443
    • secrets: the two secrets are mounted into the service replicas as regular files, named after the target value and placed under /run/secrets
    • networks: all replicas are attached to the front-tier network; if the network does not exist, Docker creates it as an overlay network
  2. The database service

    database:
        image: dockersamples/atsea_db
        environment:
          POSTGRES_USER: gordonuser
          POSTGRES_DB_PASSWORD_FILE: /run/secrets/postgres_password
          POSTGRES_DB: atsea
        networks:
          - back-tier
        secrets:
          - postgres_password
        deploy:
          placement:
            constraints:
              - 'node.role == worker'

    The following items are added:

    • environment: environment variable that defines the database user, password location, and database name
    • deploy: Deployment constraints. The service only runs on Worker nodes in the Swarm cluster

    Swarm currently allows the following deployment constraints:

    • The node ID:node.id == 85v90bioyy4s2fst4fa5vrlvf
    • Node name:node.hostname == huanzi-002
    • Node role: node.role != manager
    • Node engine label: engine.labels.operatingsystem == centos 7.5
    • Node custom tags:node.labels.zone == test01

    Both the == and != operators are supported.

  3. Appserver service

    appserver:
        image: dockersamples/atsea_app
        networks:
          - front-tier
          - back-tier
          - payment
        deploy:
          replicas: 2
          update_config:
            parallelism: 2
            failure_action: rollback
          placement:
            constraints:
              - 'node.role == worker'
          restart_policy:
            condition: on-failure
            delay: 5s
            max_attempts: 3
            window: 120s
        secrets:
          - postgres_password
    • deploy.replicas: the number of service replicas to deploy
    • deploy.update_config: what to do during a rolling upgrade; here two replicas are updated at a time (parallelism: 2) and the service is rolled back if the upgrade fails (failure_action: rollback)
    • failure_action defaults to pause, which stops the upgrade of further replicas after a failure; continue is also supported
    • restart_policy: the restart policy for replicas that exit abnormally; here a replica that exits with a non-zero status (condition: on-failure) is restarted after a 5-second delay, at most 3 times, with a 120-second window to decide whether a restart succeeded
  4. Visualizer service

    visualizer:
        image: dockersamples/visualizer:stable
        ports:
          - "8001:8080"
        stop_grace_period: 1m30s
        volumes:
          - "/var/run/docker.sock:/var/run/docker.sock"
        deploy:
          update_config:
            failure_action: rollback
          placement:
            constraints:
              - 'node.role == manager'
    • stop_grace_period: sets the graceful stop period (when Docker stops a container it sends SIGTERM to the process with PID 1 inside the container, which by default then has 10 seconds to clean up; here the period is extended to 1m30s)
    • volumes: mounts a pre-created volume or host directory into the service replicas. In this example /var/run/docker.sock is Docker's IPC socket, through which the Docker daemon exposes its API to other processes; a container with access to this file can reach all API endpoints and can query and manage the Docker daemon. Do not do this in production environments
  5. Payment_gateway service

    payment_gateway:
        image: dockersamples/atseasampleshopapp_payment_gateway
        secrets:
          - source: staging_token
            target: payment_token
        networks:
          - payment
        deploy:
          update_config:
            failure_action: rollback
          placement:
            constraints:
              - 'node.role == worker'
              - 'node.labels.pcidss == yes'

    node.labels: custom node labels. docker node update can add labels to specific nodes of the swarm cluster, so the node.labels constraint pins this service to the nodes that carry the label.

Deploy the docker stack

Preparation

  • A custom label (required by the payment_gateway service)
  • The four secrets, created in advance

Create the custom label for the worker node huanzi-002, operating from the manager node:

[root@huanzi-001 atsea-sample-shop-app]# docker node ls
ID                            HOSTNAME     STATUS   AVAILABILITY   MANAGER STATUS   ENGINE VERSION
8bet9fg0tnoqlfp0ebrrqdapn *   huanzi-001   Ready    Active         Leader           19.03.5
85v90bioyy4s2fst4fa5vrlvf     huanzi-002   Ready    Active                          19.03.5
8hxs2p5iblj19xg9uqpu8ar8g     huanzi-003   Ready    Active                          19.03.5
[root@huanzi-001 atsea-sample-shop-app]# docker node update --label-add pcidss=yes huanzi-002
huanzi-002
[root@huanzi-001 atsea-sample-shop-app]# docker node inspect huanzi-002
[
    {
        "ID": "85v90bioyy4s2fst4fa5vrlvf",
        "Version": {
            "Index": 726
        },
        "CreatedAt": "2020-02-02T08:11:34.982719258Z",
        "UpdatedAt": "2020-02-06T10:22:25.44331302Z",
        "Spec": {
            "Labels": {
                "pcidss": "yes"
<...>

You can see that the custom label has been created successfully.

Next, create the secrets. First generate a TLS key and certificate:

[root@huanzi-001 daemon]# openssl req -newkey rsa:4096 -nodes -sha256 -keyout damain.key -x509 -days 365 -out domain.crt
Generating a 4096 bit RSA private key
....................................++
...++
writing new private key to 'damain.key'
-----
<...>
Country Name (2 letter code) [XX]:
State or Province Name (full name) []:
Locality Name (eg, city) [Default City]:
Organization Name (eg, company) [Default Company Ltd]:
Organizational Unit Name (eg, section) []:
Common Name (eg, your name or your server's hostname) []:
Email Address []:
[root@huanzi-001 daemon]# ls
atsea-sample-shop-app  damain.key  domain.crt

Create the revprox_cert, revprox_key, and postgres_password secrets, which are based on the generated key and certificate:

[root@huanzi-001 daemon]# docker secret create revprox_cert domain.crt 
lue5qk6ophxrr6aspyhnkhvsv
[root@huanzi-001 daemon]# docker secret create revprox_key damain.key 
glvfk78kn6665lmkci7tslrw6
[root@huanzi-001 daemon]# docker secret create postgres_password damain.key 
pxdfs28hb2897xuu7f3bub7ex

Create the staging_token secret, which does not need the generated key:

[root@huanzi-001 daemon]# echo staging | docker secret create staging_token -
cyqfn9jocvnxd2vr57gn5pioj
[root@huanzi-001 daemon]# docker secret ls
ID                          NAME                DRIVER   CREATED              UPDATED
pxdfs28hb2897xuu7f3bub7ex   postgres_password            15 minutes ago       15 minutes ago
lue5qk6ophxrr6aspyhnkhvsv   revprox_cert                 16 minutes ago       16 minutes ago
glvfk78kn6665lmkci7tslrw6   revprox_key                  16 minutes ago       16 minutes ago
cyqfn9jocvnxd2vr57gn5pioj   staging_token                About a minute ago   About a minute ago

Now the custom label and all the secrets have been created.

Start the deployment

Deploy with docker stack deploy -c <stack-file> <stack-name>:


[root@huanzi-001 atsea-sample-shop-app]# docker stack deploy -c docker-stack.yml huanzi-stack
Creating network huanzi-stack_front-tier
Creating network huanzi-stack_back-tier
Creating network huanzi-stack_default
Creating network huanzi-stack_payment
Creating service huanzi-stack_payment_gateway
Creating service huanzi-stack_reverse_proxy
Creating service huanzi-stack_database
Creating service huanzi-stack_appserver
Creating service huanzi-stack_visualizer

As you can see, four networks are created first, followed by the services. Let's verify the networks:

[root@huanzi-001 atsea-sample-shop-app]# docker network ls
NETWORK ID          NAME                      DRIVER              SCOPE
34306420befb        bridge                    bridge              local
ac57c15024c7        docker_gwbridge           bridge              local
e863472805b3        host                      host                local
ojt9cxg2qsxe        huanzi-net                overlay             swarm
o74roe621idx        huanzi-stack_back-tier    overlay             swarm
k55m237m11ct        huanzi-stack_default      overlay             swarm
idpvc5xg2g2t        huanzi-stack_front-tier   overlay             swarm
uvphcut0a825        huanzi-stack_payment      overlay             swarm
7d6iv5ilwbcn        ingress                   overlay             swarm
d302c895b455        lovehuanzi                bridge              local
eefd134326c4        none                      null                local

There are 4 networks with the huanzi-stack prefix. huanzi-stack_default is added because the visualizer service does not specify a network, so Docker creates a default network for it to use.

Verify the service again

root@huanzi-001 atsea-sample-shop-app]# docker stack ls
NAME                SERVICES            ORCHESTRATOR
huanzi-stack        5                   Swarm
[root@huanzi-001 atsea-sample-shop-app]# docker stack ps huanzi-stack 
ID                  NAME                             IMAGE                                                     NODE                DESIRED STATE       CURRENT STATE             ERROR               PORTS
ex55yaz21mra        huanzi-stack_appserver.1         dockersamples/atsea_app:latest                            huanzi-003          Running             Preparing 2 minutes ago                       
jshmzquzxi8p        huanzi-stack_database.1          dockersamples/atsea_db:latest                             huanzi-002          Running             Preparing 2 minutes ago                       
k7mi1419ahwd        huanzi-stack_reverse_proxy.1     dockersamples/atseasampleshopapp_reverse_proxy:latest     huanzi-003          Running             Preparing 2 minutes ago                       
09ocoutjfc70        huanzi-stack_payment_gateway.1   dockersamples/atseasampleshopapp_payment_gateway:latest   huanzi-002          Running             Preparing 2 minutes ago                       
y6lftn8g95b8        huanzi-stack_visualizer.1        dockersamples/visualizer:stable                           huanzi-001          Running             Preparing 2 minutes ago                       
5twm1k4uj5ps        huanzi-stack_appserver.2         dockersamples/atsea_app:latest                            huanzi-002          Running             Preparing 2 minutes ago    

You can see that the stack file meets the requirements:

  • reverse_proxy: 1 replica
  • database: 1 replica, on a worker node
  • appserver: 2 replicas, on worker nodes
  • visualizer: 1 replica, on the manager node
  • payment_gateway: 1 replica, on a worker node carrying the custom label pcidss == yes (i.e. huanzi-002)

Management of the Stack

  1. Scale up

    There are two ways to increase the number of appserver replicas from 2 to 10:

    • run docker service scale appserver=10
    • modify the docker-stack.yml file directly, then redeploy with docker stack deploy

    All changes should be declared in the stack file and then deployed with docker stack deploy.

    Modify the docker-stack.yml file

    appserver:
        image: dockersamples/atsea_app
        networks:
          - front-tier
          - back-tier
          - payment
        deploy:
          replicas: 10

    redeploy

    [root@huanzi-001 atsea-sample-shop-app]# docker stack deploy -c docker-stack.yml huanzi-stack 
    Updating service huanzi-stack_reverse_proxy (id: i2yn8l50ofnmbx0a55mum1dw0)
    Updating service huanzi-stack_database (id: ubrtixblmj685pnc97wql42cm)
    Updating service huanzi-stack_appserver (id: yy447jdp1eiwb03ljdsqtyg1g)
    Updating service huanzi-stack_visualizer (id: rhzzxov0jh1y38rxcj6bwe89y)
    Updating service huanzi-stack_payment_gateway (id: niobpxv5vr1njoo37vnje8zic)

    View the redeployed stack

    docker stack ps huanzi-stack 

    Capacity expansion is complete.
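
    One way to confirm is to check that the REPLICAS column for the appserver service now reads 10/10:

    docker service ls
    docker service ps huanzi-stack_appserver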

  2. Delete

    Remove the stack with docker stack rm <stack-name>:

    [root@huanzi-001 atsea-sample-shop-app]# docker stack rm huanzi-stack 
    Removing service huanzi-stack_appserver
    Removing service huanzi-stack_database
    Removing service huanzi-stack_payment_gateway
    Removing service huanzi-stack_reverse_proxy
    Removing service huanzi-stack_visualizer
    Removing network huanzi-stack_front-tier
    Removing network huanzi-stack_default
    Removing network huanzi-stack_back-tier
    Removing network huanzi-stack_payment

    docker stack rm removes the services and networks, but secrets and volumes are not deleted:

    [root@huanzi-001 atsea-sample-shop-app]# docker secret ls
    ID                          NAME                DRIVER   CREATED          UPDATED
    pxdfs28hb2897xuu7f3bub7ex   postgres_password            53 minutes ago   53 minutes ago
    lue5qk6ophxrr6aspyhnkhvsv   revprox_cert                 55 minutes ago   55 minutes ago
    glvfk78kn6665lmkci7tslrw6   revprox_key                  54 minutes ago   54 minutes ago
    cyqfn9jocvnxd2vr57gn5pioj   staging_token                40 minutes ago   40 minutes ago

    Typically, each environment gets its own stack file, e.g. dev, test, and prod.
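
    For example, with one stack file per environment (hypothetical file and stack names), each environment becomes its own stack:

    docker stack deploy -c docker-stack-dev.yml  myapp-dev
    docker stack deploy -c docker-stack-test.yml myapp-test
    docker stack deploy -c docker-stack-prod.yml myapp-prod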

Docker Secret

Reference: www.cnblogs.com/shenjianpin…

What is Docker Secret

  1. Scenario

    Some services need a password; for example, the mysql service needs a root password to be set:

    version: '3'
    services:
      web:
        image: wordpress
        ports:
          - "8080:80"
        volumes:
          - ./www:/var/www/html
        environment:
          WORDPRESS_DB_NAME: wordpress
          WORDPRESS_DB_HOST: mysql
          WORDPRESS_DB_PASSWORD: root
        networks:
          - my-network
        depends_on:
          - mysql
        deploy:
          mode: replicated
          replicas: 3
          restart_policy:
            condition: on-failure
            delay: 5s
            max_attempts: 3
          update_config:
            parallelism: 1
            delay: 10s
      mysql:
        image: mysql
        environment:
          MYSQL_ROOT_PASSWORD: root
          MYSQL_DATABASE: wordpress
        volumes:
          - mysql-data:/var/lib/mysql
        networks:
          - my-network
        deploy:
          mode: global
          placement:
            constraints:
              - node.role == manager
    volumes:
      mysql-data:
    networks:
      my-network:
        driver: overlay

    As you can see, the passwords of both services appear in plain text in docker-compose.yml, which is not very secure. So what is Docker secret, and can it solve this problem?

  2. Docker Secret

    We know that the manager nodes keep their state consistent through the Raft database, a distributed store that is itself encrypted, so it can be used to hold sensitive information such as account names and passwords. A service is then granted access to the secret, so the password never appears in plain text.

    In short, secret management in swarm comes down to the following:

    • Secret exists in Raft Database of Swarm Manager node
    • A secret can be assigned to a service, which then sees the secret
    • Inside the container, a secret looks like a file but is actually held in memory (a tmpfs mount)

Docker Secret creation and use

Create

Secret can be created in two ways:

  • File-based creation
  • Command line based creation
  1. File-based creation

    Start by creating a file to store your password

    [root@centos-7 ~]# vim mysql-password
    root

    Then create secret

    [root@centos-7 ~]# docker secret create mysql-pass mysql-password 
    texcct9ojqcz6n40woe97dd7k

    The secret is named mysql-pass and is created from the file mysql-password; the password is now stored in the Raft database of the swarm manager nodes. To be safe, delete the file, since the swarm already holds the password.

    [root@centos-7 ~]# rm -f mysql-password 

    Now take a look at the Secret list:

    [root@centos-7 ~]# docker secret ls
    ID                          NAME                DRIVER              CREATED             UPDATED
    texcct9ojqcz6n40woe97dd7k   mysql-pass                              4 minutes ago       4 minutes ago

    It already exists.

  2. Command line based creation

    [root@centos-7 ~]# echo "root" | docker secret create mysql-pass2 -hrtmn5yr3r3k66o39ba91r2e4[root@centos-7 ~]# docker secret lsID                          NAME                DRIVER              CREATED             UPDATEDtexcct9ojqcz6n40woe97dd7k   mysql-pass                              6 minutes ago       6 minutes agohrtmn5yr3r3k66o39ba91r2e4   mysql-pass2                             5 seconds ago       5 seconds ago
    Copy the code

    This way the secret is created directly from the command line.

Other operations

  1. inspect

    Show some details about Secret

    [root@centos-7 ~]# docker secret inspect mysql-pass2
    [
        {
            "ID": "hrtmn5yr3r3k66o39ba91r2e4",
            "Version": {
                "Index": 4061
            },
            "CreatedAt": "2020-02-07T08:39:25.630341396Z",
            "UpdatedAt": "2020-02-07T08:39:25.630341396Z",
            "Spec": {
                "Name": "mysql-pass2",
                "Labels": {}
            }
        }
    ]
  2. rm

    Delete a secret

    [root@centos-7 ~]# docker secret rm  mysql-pass2
    mysql-pass2
    [root@centos-7 ~]# docker secret ls
    ID                          NAME                DRIVER              CREATED             UPDATED
    texcct9ojqcz6n40woe97dd7k   mysql-pass                              12 minutes ago      12 minutes ago

Using a secret with a single service

  1. View secret in the container

    We have created a secret. How do we start a service and grant the secret to it so that the service can see it? Check whether the service-creation command has a relevant option:

    [root@centos-7 ~]# docker service create --help

    Usage:  docker service create [OPTIONS] IMAGE [COMMAND] [ARG...]

    Create a new service

    Options:
          --config config      Specify configurations to expose to the service
      ...
          --secret secret      Specify secrets to expose to the service
      ...

    There is indeed a command that exposes secret to the service when it is created.

  2. Create a service

    [root@centos-7 ~]# docker service create --name demo --secret mysql-pass busybox sh -c "while true; do sleep 3600; done"
    zwgk5w0rpf17hn77axz6cn8di
    overall progress: 1 out of 1 tasks 
    1/1: running   
    verify: Service converged 

    Check which node the service is running on:

    [root@centos-7 ~]# docker service ls
    ID             NAME   MODE         REPLICAS   IMAGE            PORTS
    zwgk5w0rpf17   demo   replicated   1/1        busybox:latest
    [root@centos-7 ~]# docker service ps demo
    ID             NAME     IMAGE            NODE                    DESIRED STATE   CURRENT STATE            ERROR   PORTS
    yvr9lwvg8oca   demo.1   busybox:latest   localhost.localdomain   Running         Running 51 seconds ago

    As you can see, the service is running on the localhost.localdomain node. Let's go inside the container and see whether the secret is visible:

    [root@localhost ~]# docker ps
    CONTAINER ID   IMAGE            COMMAND                    CREATED         STATUS         PORTS   NAMES
    36573adf21f6   busybox:latest   "sh -c 'while true; ..."   4 minutes ago   Up 4 minutes           demo.1.yvr9lwvg8ocatym20hdfublhd
    [root@localhost ~]# docker exec -it 36573adf21f6 /bin/sh
    / # ls
    bin   dev   etc   home  proc  root  run   sys   tmp   usr   var
    / # cd /run/secrets
    /run/secrets # ls
    mysql-pass
    /run/secrets # cat mysql-pass
    root
    /run/secrets #

    And you can see that it does work.

Using secrets in a stack

A docker-compose.yml (stack) file is used to deploy the stack. How do you reference secrets inside it?

version: '3'

services:

  web:
    image: wordpress
    ports:
      - "8080:80"
    secrets:
      - my-pw
    environment:
      WORDPRESS_DB_HOST: mysql
      WORDPRESS_DB_PASSWORD_FILE: /run/secrets/my-pw
    networks:
      - my-network
    depends_on:
      - mysql
    deploy:
      mode: replicated
      replicas: 3
      restart_policy:
        condition: on-failure
        delay: 5s
        max_attempts: 3
      update_config:
        parallelism: 1
        delay: 10s

  mysql:
    image: mysql
    secrets:
      - my-pw
    environment:
      MYSQL_ROOT_PASSWORD_FILE: /run/secrets/my-pw
      MYSQL_DATABASE: wordpress
    volumes:
      - mysql-data:/var/lib/mysql
    networks:
      - my-network
    deploy:
      mode: global
      placement:
        constraints:
          - node.role == manager

volumes:
  mysql-data:

networks:
  my-network:
    driver: overlay

secrets:
  my-pw:
    external: true

We point the services at the secret by setting WORDPRESS_DB_PASSWORD_FILE and MYSQL_ROOT_PASSWORD_FILE in the environment to the file under /run/secrets. Obviously the my-pw secret must be created before this stack file can be used; the stack can then be deployed with the docker stack deploy command.

Docker Config

The official document: docs.docker.com/engine/refe…
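
docker config works much like docker secret, but for non-sensitive configuration: configs are not encrypted at rest and, by default, are mounted into the container at /<config-name>. A minimal sketch (the config name app-config and its content are hypothetical):

echo "log_level=debug" | docker config create app-config -
docker service create --name demo --config app-config busybox sh -c "cat /app-config && sleep 3600"
docker config ls
docker config rm app-config   # only succeeds after the service using it is removed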