Original: https://blog.csdn.net/forezp/article/details/80171723

Part Three: Services

Prerequisites

  • Install Docker 1.13 or later.
  • Install Docker Compose.
  • Read Part 1 and Part 2.
  • Make sure you have pushed the friendlyhello image you created in Part 2 to a public registry.
  • Make sure the image works as a deployed container. Run docker run -p 80:80 username/repo:tag, substituting your own username, repo, and tag, then visit http://localhost/. (A quick verification sketch follows this list.)
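A minimal sketch of that verification step, assuming your image was pushed as username/repo:tag (substitute your own details; the container name friendlyhello-test below is just an arbitrary label for this check):

# Start the container in the background, mapping host port 80 to the container's port 80
docker run -d -p 80:80 --name friendlyhello-test username/repo:tag

# The app should respond on http://localhost/
curl -4 http://localhost

# Stop and remove the test container when done
docker stop friendlyhello-test
docker rm friendlyhello-test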

Introduction

In Part 3, we scale our application and enable load balancing. To do this, we must go one level up in the hierarchy of a distributed application: the service.

  • Stack
  • Services (you are here)
  • Containers (covered in Part 2)

About services

In a distributed application, the different pieces of the application are called “services.” For example, if you imagine a video-sharing site, it might include a service for storing application data in a database, a service for transcoding videos in the background after users upload them, a service for the front end, and so on.

Services are really just “containers in production.” A service runs only one image, but it codifies the way that image runs – which ports it should use, how many replicas of the container should run so the service has the capacity it needs, and so on. Scaling a service changes the number of container instances running that piece of software, assigning more computing resources to the service in the process.

Fortunately, with the Docker platform, defining, running, and scaling services is very easy – just write a docker-compose.yml file.

Your first docker-compose.yml file

A docker-compose.yml file is a YAML file that defines how Docker containers should behave in production.

docker-compose.yml

Save this file as docker-compose.yml wherever you want to use it. Make sure you have pushed the image you created in Part 2 to a registry, and update this .yml file by replacing username/repo:tag with your image details.

version: "3"
services:
  web:
    # replace username/repo:tag with your name and image details
    image: username/repo:tag
    deploy:
      replicas: 5
      resources:
        limits:
          cpus: "0.1"
          memory: 50M
      restart_policy:
        condition: on-failure
    ports:
      - "80:80"
    networks:
      - webnet
networks:
  webnet:


This docker-compose.yml file tells Docker to do the following (a quick way to validate the file is sketched after the list):

  • Pull the image we uploaded in Part 2 from the registry.
  • Run five instances of that image as a service called web, limiting each instance to use, at most, 10% of the CPU (across all cores) and 50MB of RAM.
  • Immediately restart a container if it fails.
  • Map port 80 on the host to the web service's port 80.
  • Instruct web's containers to share port 80 via a load-balanced network called webnet. (Internally, the containers themselves publish to web's port 80 at an ephemeral port.)
  • Define the webnet network with its default settings (a load-balanced overlay network).
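Before deploying, one quick way to catch typos in the file (not part of the original walkthrough; it assumes the standalone docker-compose tool from the prerequisites is installed) is to let Compose validate it:

# Validate docker-compose.yml and print the fully resolved configuration;
# syntax or schema errors are reported instead
docker-compose config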

Run your load-balanced application

Before we can use the docker stack deploy command, we first run:

docker swarm init


Now let’s run it. You need to give your application a name; here, it is set to getstartedlab:

docker stack deploy -c docker-compose.yml getstartedlab


Our single service stack is now running five container instances of our deployed image on one host.

Get the service ID for the one service in our application:

docker service ls


Look for output for the web service, prefixed with your application name. If you named it the same as in this example, the name is getstartedlab_web. The service ID is listed as well, along with the number of replicas, the image name, and the exposed ports.
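If you have more than one stack running, you can narrow the listing to this one service. A small sketch, relying on the --filter option of docker service ls (the service name below assumes you used getstartedlab as the app name):

# List only services whose name matches getstartedlab_web
docker service ls --filter name=getstartedlab_web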

Individual containers that run within a service are called tasks. Tasks are given unique IDs that numerically increment, up to the number of replicas you defined in docker-compose.yml. List the tasks of your service:

docker service ps getstartedlab_web


Tasks also show up if you simply list all the containers on your system, although that list is not filtered by service:

docker container ls -q

You can run curl -4 http://localhost several times in a row, or open the URL in your browser and refresh it a few times.

Either way, the container ID changes, demonstrating the load balancing: with each request, one of the five tasks is chosen, in a round-robin fashion, to respond. The container IDs match the output of the previous command (docker container ls -q).
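To watch the round-robin behaviour from the command line, a simple loop works well. This is a sketch rather than part of the original text; it assumes the friendlyhello app prints its container hostname in the response, as in Part 2:

# Send ten requests; the reported hostname (container ID) should rotate
# through the five tasks
for i in $(seq 10); do
  curl -4 -s http://localhost | grep -i hostname
done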

Scale your application

You can scale the application by changing the replicas value in docker-compose.yml, saving the change, and re-running the docker stack deploy command:

docker stack deploy -c docker-compose.yml getstartedlab


Docker performs an in-place update without tearing down the stack or killing any containers first.

Now re-run docker container ls -q to see the deployed instances reconfigured. If you increased the number of replicas, more tasks are started, which in turn start more containers.
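As an aside (not covered in the original walkthrough), a running service can also be rescaled directly with docker service scale, without editing the Compose file; note that the next docker stack deploy will reset the count to whatever replicas says in docker-compose.yml. The value 8 below is just an example:

# Scale the web service to 8 replicas on the fly
docker service scale getstartedlab_web=8

# Confirm the new replica count and watch the extra tasks start
docker service ls
docker service ps getstartedlab_web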

Shut down the app and the swarm

  • Take the application down with the docker stack rm command:
docker stack rm getstartedlab
  • Take down the swarm. (A short clean-up check is sketched after these steps.)
docker swarm leave --force

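An optional clean-up check, not part of the original steps: run the first command after docker stack rm (but before leaving the swarm), and the second after docker swarm leave --force. It may take a few seconds for the containers to finish shutting down.

# After removing the stack: no stacks should be listed
docker stack ls

# After leaving the swarm: no containers from the app should remain
docker container ls -q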

It is that easy to stand up and scale an application with Docker. You’ve taken a big step towards learning how to run containers in production. Next, you’ll learn how to run this application as a real swarm on a cluster of Docker machines.

Recap

To recap: while typing docker run is simple enough, the true implementation of a container in production is running it as a service. A service codifies a container’s behavior in a Compose file, and that file can be used to scale, limit, and redeploy our application. Changes to the service can be applied while it runs, using the same command that launched it: docker stack deploy.

Some commands to explore at this stage are as follows:

docker stack ls                                            # List stacks or apps
docker stack deploy -c <composefile> <appname>  # Run the specified Compose file
docker service ls                 # List running services associated with an app
docker service ps <service>                  # List tasks associated with an app
docker inspect <task or container>                   # Inspect task or container
docker container ls -q                                      # List container IDs
docker stack rm <appname>                             # Tear down an application
docker swarm leave --force      # Take down a single node swarm from the manager

