
What is cgroup?

Cgroups, short for control groups, is a Linux kernel feature that limits, accounts for, and isolates the resource usage (CPU, memory, disk I/O, etc.) of a group of processes.

What are Docker resource limits?

By default, a Docker container has no resource limits and can use as many resources as the host can allocate to it. If container resources are not restricted, containers affect each other: a container that consumes excessive hardware resources can swallow up everything, leaving other containers with no resources to run on and bringing their services down. Docker provides options to limit memory, CPU, and disk IO, restricting how much of each hardware resource a container may use. We can set these limits when creating a container with docker create or docker run.

Docker controls the resource quotas used by containers through cgroups, covering CPU, memory, and disk, which handles the common cases of resource quota and usage control.

Limit the CPU usage of Docker

By default, all containers can use the CPU resources of the host equally and without limitation.

The options for setting CPU resources are as follows

  • -c or --cpu-shares: sets the proportion of CPU time each container can use when multiple containers compete for the CPU. This proportion is called the share weight; CPU time is divided among containers in proportion to their weights. Docker's default weight per container is 1024; if the option is omitted or set to 0, the default is used.

For example, suppose two containers are running on the system: the first with a weight of 1024 and the second with a weight of 512. If the second container starts no processes, its 512 share goes unused, while the first container, with many running processes, can take over container 2's idle CPU resources; this is shared CPU. If container 2 later starts its own processes, it reclaims its 512 share, and CPU time is again divided according to the normal 1024:512 weights. In other words, if container 2 does not need CPU, container 1 may use container 2's CPU time; once container 2 needs it, the CPU is split proportionally, and container 1 drops from using the whole host CPU to using 2/3 of it. This is CPU sharing, and it shows that CPU is a compressible resource.
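The weight split described above can be sketched with a little shell arithmetic. This is only an illustrative calculation, not a Docker command; the weights are the hypothetical 1024:512 pair from the example.

```shell
#!/bin/sh
# Hypothetical weights for two containers competing for the same CPU.
WEIGHT_A=1024
WEIGHT_B=512

TOTAL=$((WEIGHT_A + WEIGHT_B))

# Percentage of CPU each container gets when BOTH are busy.
SHARE_A=$((100 * WEIGHT_A / TOTAL))   # about 2/3
SHARE_B=$((100 * WEIGHT_B / TOTAL))   # about 1/3
echo "under contention: A=${SHARE_A}% B=${SHARE_B}%"

# If B is idle, A is not capped by its weight and may use 100%.
echo "B idle: A can use 100%"
```

Running it prints a 66%/33% split under contention, which matches the 2/3 figure in the text.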

  • --cpus: limits the number of CPU cores a container can use. Since Docker 1.13, the --cpus argument limits how many CPU cores a container may use. It lets us set the container's CPU usage more precisely and is the easier to understand and more commonly used option.
  • --cpuset-cpus: restricts a container to run on specific CPU cores. For example, if the host has four CPU cores identified as 0-3 and I want a container to run only on the cores identified as 0 and 3, I can specify that with cpuset.

Unlike the memory quota, the CPU share set through -c is not an absolute amount of CPU resources but a relative weight. The amount of CPU a container can get depends on the ratio of its share to the total shares of all running containers. In other words, CPU shares set the container's priority for CPU usage.

ContainerA has a CPU share of 1024, twice that of containerB.
When both containers need CPU, containerA gets twice as much CPU time as containerB.
Note that this weight-based allocation only happens when CPU resources are scarce.
If containerA is idle, containerB can use all available CPUs to make full use of them.

docker run --name "cont_A" -c 1024 ubuntu
docker run --name "cont_B" -c 512 ubuntu

The container can use up to two CPUs on the host; fractional values such as 1.5 can also be specified.
docker run -it --rm --cpus=2 centos /bin/bash

# Processes in the container can run on CPU 1 and CPU 3.
docker run -it --cpuset-cpus="1,3" ubuntu:14.04 /bin/bash

# Processes in the container can run on CPU 0, CPU 1, and CPU 2.
docker run -it --cpuset-cpus="0-2" ubuntu:14.04 /bin/bash

Using -c or --cpu-shares imposes a relative limit on CPU resources. We can also impose absolute limits on CPU resources.

Absolute limit of CPU resources

Linux uses the Completely Fair Scheduler (CFS) to schedule CPU usage by various processes. The default CFS scheduling period is 100ms.

We can set the scheduling cycle for each container process and the maximum amount of CPU time each container can use during that cycle.

  • --cpu-period: sets the scheduling period of each container process
  • --cpu-quota: sets the CPU time the container can use in each period

Such as:

docker run -it --cpu-period=50000 --cpu-quota=25000 centos /bin/bash

This sets the CFS scheduling period to 50000 and the container's CPU quota to 25000 per period, meaning the container gets 50% of one CPU's running time every 50ms.

docker run -it --cpu-period=10000 --cpu-quota=20000 centos /bin/bash sets the container's CPU quota to twice the CFS period. This is easy to interpret: it simply allocates two CPUs to the container, which can use both CPUs at 100% in each period.

The CFS period ranges from 1ms to 1s, so the cpu-period value ranges from 1000 to 1000000.

The value of cpu-quota must be greater than or equal to 1000. As you can see, both options are expressed in microseconds (µs).
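Putting period and quota together: the number of CPUs a container can effectively use is quota / period. Here is that relationship as a quick sketch; the values are the hypothetical 25000/50000 pair from the earlier example, and this is plain arithmetic, not a Docker command.

```shell
#!/bin/sh
# --cpu-period and --cpu-quota are both expressed in microseconds.
PERIOD=50000   # 50 ms scheduling period
QUOTA=25000    # 25 ms of CPU time allowed per period

# Effective number of CPUs = quota / period (may be fractional).
EFFECTIVE=$(awk -v q="$QUOTA" -v p="$PERIOD" 'BEGIN { printf "%.2f", q / p }')
echo "effective CPUs: $EFFECTIVE"
```

A quota of 25000 in a 50000 period comes out to 0.50, the same effect as --cpus=0.5; a quota of twice the period would come out to 2.00, matching the two-CPU example above.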

How to correctly understand “absolute”?

--cpu-quota sets an upper bound on how much CPU the container can use per scheduling period. It does not mean the container will necessarily use that much CPU time.

Start a container, bind it to CPU 1, and set both --cpu-quota and --cpu-period to 50000. This means each scheduling period is 50000 µs and the container can use at most 50000 µs of CPU time per period.

docker run -d --name mongo1 --cpuset-cpus 1 --cpu-quota=50000 --cpu-period=50000 docker.io/mongo

Running docker stats mongo1 mongo2 shows that the containers are not actually using the full 50000 µs of CPU time per period.

Use the docker stop mongo2 command to stop the second container, then restart it with -c 2048:

docker run -d --name mongo2 --cpuset-cpus 1 --cpu-quota=50000 --cpu-period=50000 -c 2048 docker.io/mongo

Using the docker stats mongo1 mongo2 command, you can see that the first container's CPU usage is around 33% and the second's is around 66%. Because the second container's share value is 2048 and the first container's default share value is 1024, the second container gets twice as much CPU time per period as the first.
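The 33%/66% split follows directly from the weights: both containers are pinned to CPU 1 with a full quota, so they compete and split that one CPU by share value. A hypothetical back-of-the-envelope check:

```shell
#!/bin/sh
# Both containers share CPU 1 under a 50000/50000 (100%) quota,
# so they divide that single CPU by their share weights.
W1=1024   # mongo1, default weight
W2=2048   # mongo2, started with -c 2048

PCT1=$((100 * W1 / (W1 + W2)))
PCT2=$((100 * W2 / (W1 + W2)))
echo "mongo1=${PCT1}% mongo2=${PCT2}%"
```

The computed 33%/66% matches what docker stats reports in the text above.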

Conclusion

  • CPU share control: -c or --cpu-shares
  • CPU core control: --cpuset-cpus, --cpus
  • CPU period control: --cpu-period, --cpu-quota

Limit Docker memory usage

Similar to the operating system, the memory available to containers consists of two parts: physical memory and Swap.

Docker controls the amount of container memory used using two sets of parameters.

  • -m or --memory: sets the memory usage limit, for example 100MB or 2GB.
  • --memory-swap: sets the limit on memory plus swap usage.

By default, both parameters are -1, meaning there is no limit on the container's memory or swap usage. If you specify -m without --memory-swap when starting the container, --memory-swap defaults to twice the value of -m.
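The relationship between the two flags can be sketched as simple arithmetic (hypothetical values, sizes in MB; this is an illustration, not a Docker command):

```shell
#!/bin/sh
# -m 200M --memory-swap=300M  =>  swap = 300 - 200 = 100 MB
MEMORY=200
MEMORY_SWAP=300
SWAP=$((MEMORY_SWAP - MEMORY))
echo "explicit: mem=${MEMORY}MB swap=${SWAP}MB"

# -m 200M with --memory-swap unset => --memory-swap defaults to 2 * -m,
# so the container also gets MEMORY worth of swap.
DEFAULT_MEMORY_SWAP=$((2 * MEMORY))
DEFAULT_SWAP=$((DEFAULT_MEMORY_SWAP - MEMORY))
echo "default: mem=${MEMORY}MB swap=${DEFAULT_SWAP}MB"
```

These two cases correspond exactly to the two docker run examples that follow.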

# Allow the container to use up to 200MB of memory and 100MB of swap.
docker run -m 200M --memory-swap=300M ubuntu

# The container can use up to 200MB of memory and 200MB of swap.
docker run -it -m 200M ubuntu

Docker container limits disk IO

Block IO is another resource whose use by a container can be limited. Block IO refers to disk reads and writes. Docker controls a container's disk read/write bandwidth by setting a weight and by limiting bps and iops.

Note: Currently Block IO quotas are only valid for Direct IO (which does not use file caching).

How do I limit Block IO?

By default, all containers can read and write the disk equally. You can change a container's block IO priority with the --blkio-weight parameter. Like --cpu-shares, --blkio-weight sets a relative weight, which defaults to 500. In the following example, container_A gets twice the disk read/write bandwidth of container_B.

docker run -it --name container_A --blkio-weight 600 ubuntu
docker run -it --name container_B --blkio-weight 300 ubuntu

How do I limit BPS and IOPS?

bps is bytes per second, the amount of data read or written per second.

iops is I/O operations per second, the number of read/write operations per second.
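The two metrics are related through the I/O block size: bps = iops × block size. A hypothetical illustration of the conversion (the 30 MB/s figure anticipates the write-limit test below, and the 4 KB block size is an assumption for the example):

```shell
#!/bin/sh
# With 4 KB blocks, how many IOPS does 30 MB/s correspond to?
BPS=$((30 * 1024 * 1024))   # 30 MB/s expressed in bytes per second
BLOCK=4096                  # assumed size of one I/O, in bytes
IOPS=$((BPS / BLOCK))
echo "${IOPS} IOPS at 4 KB per I/O"
```

This is why limiting bps and limiting iops are two different knobs: the same bps cap allows very different iops depending on how large each I/O is.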

The following parameters can be used to control the BPS and IOPS of containers:

  • --device-read-bps: limits the read bps of a device.
  • --device-write-bps: limits the write bps of a device.
  • --device-read-iops: limits the read iops of a device.
  • --device-write-iops: limits the write iops of a device.

Tests that limit writing BPS

Limit the container's write rate to /dev/sda to 30 MB/s.

docker run -it --device-write-bps /dev/sda:30MB centos:latest

dd tests the speed of writing to disk inside the container. Since the container's file system is on the host's /dev/sda, writing a file inside the container is equivalent to writing to the host's /dev/sda. In addition, oflag=direct specifies that the file is written with direct IO, so that --device-write-bps takes effect.

time dd if=/dev/zero of=test.out bs=1M count=800 oflag=direct

The parameters are described as follows:

  • if=file: input file name; defaults to standard input
  • of=file: output file name; defaults to standard output
  • ibs=bytes: number of bytes read at a time (the input block size)
  • obs=bytes: number of bytes written at a time (the output block size)
  • bs=bytes: sets both the read and write block size, replacing ibs and obs
  • count=blocks: copy only this many blocks, each of the size specified by ibs
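With the 30 MB/s cap set earlier, the dd run (800 blocks of 1 MB) should take roughly 800 / 30 ≈ 27 seconds of wall time. Here is that estimate as a hypothetical calculation, not a measurement:

```shell
#!/bin/sh
# dd writes 800 x 1 MB = 800 MB; the device cap is 30 MB/s.
TOTAL_MB=800
LIMIT_MB_PER_S=30
EXPECTED_S=$(awk -v t="$TOTAL_MB" -v l="$LIMIT_MB_PER_S" \
  'BEGIN { printf "%.1f", t / l }')
echo "expected duration: ~${EXPECTED_S}s"
```

If the time reported by dd is close to this figure rather than the unthrottled disk speed, the --device-write-bps limit is in effect.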

Use the GPU in Docker

For Docker, GPU resources work differently from CPU, memory, and disk IO. To use a GPU, Docker itself needs GPU support. Before Docker 19, you had to install nvidia-docker1 or nvidia-docker2 separately to start GPU containers. Since Docker 19, you only need to add the --gpus parameter (--gpus all uses all GPUs; --gpus 2 uses two GPUs; you can also specify particular cards, e.g. --gpus '"device=1,2"') and Docker can access the NVIDIA graphics cards without installing nvidia-docker separately.

Check whether the --gpus parameter is available

docker run --help | grep -i gpus

Check whether the NVIDIA card is accessible

Run the image provided on the NVIDIA official website and execute the nvidia-smi command to check whether the NVIDIA card can be accessed.

docker run --gpus all nvidia/cuda:9.0-base nvidia-smi

Use GPUs in Docker containers

# Use all GPUs
docker run --gpus all nvidia/cuda:9.0-base nvidia-smi

# Use two GPUs
docker run --gpus 2 nvidia/cuda:9.0-base nvidia-smi

# Specify which GPUs to use
docker run --gpus '"device=2"' nvidia/cuda:9.0-base nvidia-smi
docker run --gpus '"device=1,2"' nvidia/cuda:9.0-base nvidia-smi
docker run --gpus '"device=UUID-ABCDEF,1"' nvidia/cuda:9.0-base nvidia-smi

Conclusion

This article has explored Docker resource limits. In daily development, set reasonable resource limits for containers to prevent hardware resource exhaustion from causing process failures on Linux. It also explained how to assign GPUs to Docker containers.