1. Limit the resources of the container

By default, containers have no resource constraints and can use as much of a given resource as the host's kernel scheduler allows. Docker provides ways to control how much memory or CPU a container can use, via runtime configuration flags of the docker run command. This article details when such limits should be set and what effects they have.

Many of these features require that your kernel supports the corresponding Linux capabilities. To check for support, use the docker info command. If a capability is disabled in your kernel, you may see a warning at the end of the output such as: WARNING: No swap limit support. Consult your operating system's documentation for how to enable it.

[root@along ~]# docker info
Containers: 0
 Running: 0
 Paused: 0
 Stopped: 0
Images: 43
Server Version: 17.03.2-ce
Storage Driver: overlay
 Backing Filesystem: xfs
 Supports d_type: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 4ab9917febca54791c5f071a9d1f404867857fcc
runc version: 54296cf40ad8143b62dbcaa1d90e520a2136ddfe
init version: 949e6fa
Security Options:
 seccomp
  Profile:
Kernel Version: 3.10.0-514.el7.x86_64
Operating System: CentOS Linux 7 (Core)
OSType: linux
Architecture: x86_64
CPUs: 1
Total Memory: 976.5 MiB
Name: along
ID: KGWA:GDGT:PTK7:PAX4:A3JZ:LV6N:U5QD:UQCY:AGZV:P32E:V73T:JJHR
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Username: alongedu
Registry: https://index.docker.io/v1/
Experimental: false
Insecure Registries:
 docker2:80
 127.0.0.0/8
Registry Mirrors:
 https://registry.docker-cn.com
Live Restore Enabled: false

2. Memory

2.1 Risks of insufficient memory

It is important not to let running containers consume too much of the host's memory. On Linux hosts, if the kernel detects that there is not enough memory to perform important system functions, it throws an OOME, or Out Of Memory Exception, and starts killing processes to free up memory. Any process may be killed, including Docker and other important applications. This can effectively bring down the entire system if the wrong process is killed.

Docker tries to mitigate these risks by adjusting the OOM priority of the Docker daemon so that it is less likely to be killed than other processes on the system. The OOM priority of containers is not adjusted, which makes a single container more likely to be killed than the Docker daemon or other system processes. You should not attempt to circumvent these safeguards by manually setting an extremely negative --oom-score-adj on the daemon or a container, or by setting --oom-kill-disable on a container.

For more information about OOM management for the Linux kernel, see Out of Memory Management.

You can reduce the risk of system instability caused by OOME by:

  • Before putting your application into production, perform tests to understand the memory requirements of your application.
  • Make sure your application runs only on hosts with sufficient resources.
  • Limit the amount of memory a container can use, as described below.
  • Be careful when configuring swap on Docker hosts. Swap is slower and lower-performing than memory, but it provides a buffer against the system running out of memory.
  • Consider turning containers into services and using service-level constraints and node labels to ensure that applications run only on hosts with sufficient memory.

2.2 Limiting the container's memory settings

Docker can enforce hard memory limits, which allow a container to use no more than a given amount of user or system memory, or soft limits, which allow the container to use as much memory as it needs unless certain conditions are met, such as the kernel detecting low memory or contention on the host. Some of these options have different effects when used alone or in combination.

Most of these options take a positive integer followed by a suffix of b, k, m, or g, for bytes, kilobytes, megabytes, or gigabytes.

Option

Description

-m or --memory=

The maximum amount of memory a container can use. If this option is set, the minimum allowed value is 4m (4 megabytes).

--memory-swap*

The amount of memory this container is allowed to swap to disk.

--memory-swappiness

By default, the host kernel can swap out a percentage of the anonymous pages used by a container. You can set --memory-swappiness to a value between 0 and 100 to tune this percentage.

--memory-reservation

Allows you to specify a soft limit smaller than the hard limit set with --memory, which is activated when Docker detects contention or low memory on the host. If you use --memory-reservation, it must be set lower than --memory in order to take effect. Because it is a soft limit, there is no guarantee that the container will not exceed it.

--kernel-memory

The maximum amount of kernel memory a container can use. The minimum allowed value is 4m. Because kernel memory cannot be swapped out, a container starved of kernel memory can block host resources, which can have side effects on the host and on other containers.

--oom-kill-disable

By default, the kernel kills processes in a container if an out-of-memory (OOM) error occurs. To change this behavior, use the --oom-kill-disable option. Only disable the OOM killer on containers where the -m/--memory option is also set; if -m is not set, the host can run out of memory, and the kernel may need to kill host system processes to free memory.
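As a sketch of how these flags combine (the nginx image and the 512m/256m values are illustrative, not from this article), the following shell snippet builds a docker run command line that sets a hard limit, a lower soft reservation, and disables swap by making --memory-swap equal to the hard limit:

```shell
# Build (but do not execute) a docker run command from memory flags.
# Image name and values are hypothetical, for illustration only.
limit_cmd() {
  mem=$1
  reservation=$2
  # --memory-swap equal to -m disables swap for the container.
  printf 'docker run -d -m %s --memory-reservation %s --memory-swap %s nginx' \
    "$mem" "$reservation" "$mem"
}

limit_cmd 512m 256m
```

Setting --memory-swap equal to -m is the same technique described under "--memory-swap settings" in section 2.2.1.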

2.2.1 --memory-swap settings

(1) Introduction

--memory-swap is a modifier flag that is meaningful only when --memory is also set. Using swap allows the container to write excess memory demands to disk once the container has exhausted all of the RAM available to it. For applications that frequently swap memory to disk, performance suffers.

(2) Its setting can produce different effects depending on the values:

  • If --memory-swap is set to a positive integer, then both --memory and --memory-swap must be set. --memory-swap indicates the total amount of memory plus swap that can be used, and --memory controls the amount of non-swap (physical) memory. So if --memory="300m" and --memory-swap="1g", the container can use 300m of memory and 700m (1g - 300m) of swap.
  • If --memory-swap is set to 0, the setting is ignored and the value is treated as unset.
  • If --memory-swap is set to the same value as --memory, and --memory is set to a positive integer, the container has no access to swap. See "Preventing containers from using swap" below.
  • If --memory-swap is not set and --memory is set, the container can use twice as much swap as --memory, provided the host has swap configured. For example, if --memory="300m" and --memory-swap is not set, the container can use 300m of memory and 600m of swap.
  • If --memory-swap is explicitly set to -1, the container is allowed to use unlimited swap, up to the amount available on the host system.
  • Inside the container, tools such as free report the host's swap, not the swap actually available to the container. Do not rely on free or similar tools to determine whether swap is available.
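The arithmetic in the first bullet above can be sketched in shell (the values are illustrative; 1g is treated as 1024m here):

```shell
# With both flags set, available swap is --memory-swap minus --memory.
MEMORY_MB=256          # --memory=256m
MEMORY_SWAP_MB=1024    # --memory-swap=1g
SWAP_MB=$((MEMORY_SWAP_MB - MEMORY_MB))
echo "container may use ${MEMORY_MB}m of RAM and ${SWAP_MB}m of swap"
# prints: container may use 256m of RAM and 768m of swap
```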

(3) Preventing containers from using swap

If --memory and --memory-swap are set to the same value, the container is prevented from using any swap. This is because --memory-swap is the total amount of memory plus swap that can be used, while --memory is only the amount of physical memory available.

2.2.2 --memory-swappiness settings

  • A value of 0 turns off swapping of anonymous pages.
  • A value of 100 makes all anonymous pages swappable.
  • By default, if –memory-swappiness is not set, the value is inherited from the host.

2.2.3 --kernel-memory settings

(1) Introduction

The kernel memory limit is expressed as the total memory allocated to the container. Consider the following options:

  • Unlimited memory, unlimited kernel memory: this is the default setting.
  • Unlimited memory, limited kernel memory: this is appropriate when the amount of memory required by all cgroups is greater than the amount of memory that actually exists on the host. You can configure the kernel memory limit so that it never exceeds what is available on the host; containers that need more memory then have to wait for it.
  • Limited memory, unlimited kernel memory: Limited overall memory, but unlimited kernel memory.
  • Limited memory, limited kernel memory: Limiting user and kernel memory is very useful for debugging memory-related problems. If a container uses an unexpected amount of memory of any type, it is out of memory without affecting other containers or hosts. In this setting, if the kernel memory limit is below the user memory limit, the container will experience an OOM error because the kernel is out of memory. If the kernel memory limit is higher than the user memory limit, the kernel limit does not cause the container to encounter OOM.

When you enable any kernel memory limits, the host tracks "high-water mark" statistics on a per-process basis, so you can track which processes (in this case, containers) are using excess memory. You can see these per-process statistics on the host in /proc/<PID>/status.
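As a sketch, the high-water mark can be read from /proc like this (here inspecting the current shell's own PID as a stand-in; for a container you would substitute the PID of its main process):

```shell
# Read a process's peak resident set size ("high-water mark", VmHWM)
# from /proc/<PID>/status. Using our own PID as a stand-in example.
PID=$$
STATUS="/proc/$PID/status"
if [ -r "$STATUS" ]; then
  grep VmHWM "$STATUS"    # a line like: VmHWM:      1404 kB
else
  echo "no /proc filesystem on this host"
fi
```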

3. CPU

  • By default, each container has unlimited access to the host CPU cycle.
  • You can set various constraints to limit the CPU cycles for a given container to access the host.
  • Most users use and configure the default CFS scheduler.
  • In Docker 1.13 and later, you can also configure the real-time scheduler.

3.1 Configuring the default CFS scheduler

CFS is the Linux kernel CPU scheduler for normal Linux processes. Several runtime flags let you configure how much access to the host's CPU resources a container has. When you use these settings, Docker modifies the container's cgroup settings on the host.

Option

Description

--cpus=<value>

Specifies how much of the available CPU resources a container can use. For example, if the host has two CPUs and you set --cpus="1.5", the container is guaranteed at most one and a half of the CPUs. This is equivalent to setting --cpu-period="100000" and --cpu-quota="150000". Available in Docker 1.13 and later.

--cpu-period=<value>

Specifies the CPU CFS scheduler period, which is used together with --cpu-quota. The default is 100000 microseconds (100 milliseconds). Most users do not change this from the default. If you use Docker 1.13 or later, use --cpus instead.

--cpu-quota=<value>

Imposes a CPU CFS quota on the container: the number of microseconds per --cpu-period that the container is allowed before being throttled, acting as an effective ceiling. If you use Docker 1.13 or later, use --cpus instead.

--cpuset-cpus

Limits the specific CPUs or cores a container can use. A comma-separated list or hyphen-separated range of CPUs, if the host has more than one CPU. The first CPU is numbered 0. Valid values might be 0-3 (to use the first, second, third, and fourth CPUs) or 1,3 (to use the second and fourth CPUs).

--cpu-shares

Set this flag to a value greater or less than the default of 1024 to increase or decrease the container's weight, giving it access to a greater or lesser proportion of the host's CPU cycles. This is only enforced when CPU cycles are constrained; when plenty of CPU cycles are available, all containers use as much CPU as they need, so this is a soft limit. --cpu-shares does not prevent containers from being scheduled in swarm mode. It prioritizes container CPU resources for the available CPU cycles, but does not guarantee or reserve any specific CPU access.
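The --cpus equivalence noted above is simple arithmetic: the quota is the desired CPU count multiplied by the period. A quick sketch:

```shell
# --cpus is shorthand for a CFS quota: quota = cpus x period.
# With the default 100000-microsecond period, --cpus=1.5 maps to
# --cpu-quota=150000.
PERIOD_US=100000
CPUS="1.5"
QUOTA_US=$(awk -v c="$CPUS" -v p="$PERIOD_US" 'BEGIN { printf "%d", c * p }')
echo "--cpu-period=$PERIOD_US --cpu-quota=$QUOTA_US  # equivalent to --cpus=$CPUS"
```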

4. Operation demonstration

4.1 Preparations

(1) First, query the host's resources:

[root@docker ~]# lscpu    # CPU resources
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                4
On-line CPU(s) list:   0-3
Thread(s) per core:    1
Core(s) per socket:    1
Socket(s):             4
NUMA node(s):          1
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 60
Model name:            Intel(R) Xeon(R) CPU E3-1231 v3 @ 3.40GHz
Stepping:              3
CPU MHz:               3395.854
BogoMIPS:              6792.17
Hypervisor vendor:     VMware
Virtualization type:   full
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              8192K
NUMA node0 CPU(s):     0-3
[root@docker ~]# free -h    # memory and swap resources
              total        used        free      shared  buff/cache   available
Mem:           7.8G        193M        7.2G        8.6M        438M        7.3G
Swap:          2.0G        400K        2.0G

(2) Pull an image for stress testing from Docker Hub

[root@docker ~]# docker pull lorel/docker-stress-ng

(3) Usage of the stress-testing image

[root@docker ~]# docker run --name stress --rm lorel/docker-stress-ng:latest stress --help

Use --help to see how the stress-testing image is used.

Example:

stress-ng --cpu 8 --io 4 --vm 2 --vm-bytes 128M --fork 4 --timeout 10s

Syntax:

  • -c N, --cpu N: start N child processes stressing the CPU
  • --vm N: start N child processes stressing memory
  • --vm-bytes 128M: how much memory each child process uses (default: 256M)
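The flags above multiply: total memory pressure is roughly the number of --vm workers times --vm-bytes. A small sketch of that arithmetic, using stress-ng's 256M default:

```shell
# Total memory pressure ~= number of --vm workers x --vm-bytes.
VM_WORKERS=2
VM_BYTES_MB=256   # stress-ng default when --vm-bytes is not given
TOTAL_MB=$((VM_WORKERS * VM_BYTES_MB))
echo "expected allocation: ${TOTAL_MB}M"
# prints: expected allocation: 512M
```

This is why the memory test in section 4.2 below pushes against a 256m container limit: two workers try to allocate about 512M in total.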

4.2 Testing memory limits

(1) Start the container with a maximum memory limit

[root@docker ~]# docker run --name stress --rm -m 256m lorel/docker-stress-ng:latest stress --vm 2
stress-ng: info: [1] defaulting to a 86400 second run per stressor
stress-ng: info: [1] dispatching hogs: 2 vm

[root@docker ~]# docker stats stress
CONTAINER ID   NAME     CPU %   MEM USAGE / LIMIT   MEM %    NET I/O     BLOCK I/O         PIDS
e1fdb0520bad   stress   8.22%   254MiB / 256MiB     99.22%   648B / 0B   46.9MB / 3.63GB   5

Note:

  • -m 256m limits the container's maximum memory usage to 256MB;
  • --vm 2 starts two stressor processes, which together try to use 256×2 = 512M of memory;
  • the memory actually used by the container cannot exceed 256M.

4.3 Testing CPU Limits

(1) Limit the container to at most two CPUs

[root@docker ~]# docker run --name stress --rm --cpus 2 lorel/docker-stress-ng:latest stress --cpu 8
stress-ng: info: [1] defaulting to a 86400 second run per stressor
stress-ng: info: [1] dispatching hogs: 8 cpu

[root@docker ~]# docker stats stress
CONTAINER ID   NAME     CPU %     MEM USAGE / LIMIT     MEM %   NET I/O   BLOCK I/O   PIDS
ca86c0de6431   stress   199.85%   15.81MiB / 7.781GiB   0.20%   0B / 0B   0B / 0B     9

(2) Run without limiting the number of CPUs

[root@docker ~]# docker run --name stress --rm lorel/docker-stress-ng:latest stress --cpu 8
stress-ng: info: [1] defaulting to a 86400 second run per stressor
stress-ng: info: [1] dispatching hogs: 8 cpu

[root@docker ~]# docker stats stress
CONTAINER ID   NAME     CPU %     MEM USAGE / LIMIT     MEM %   NET I/O   BLOCK I/O   PIDS
167afeac7c97   stress   399.44%   15.81MiB / 7.781GiB   0.20%   0B / 0B   0B / 0B     9
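The CPU % figures line up with the limits: docker stats reports roughly 100% per fully busy CPU, so on this 4-CPU host (see section 4.1) the expected ceilings can be sketched as:

```shell
# docker stats reports ~100% per fully used CPU. With --cpus 2 the eight
# hogs are capped near 200%; with no limit on a 4-CPU host they can
# approach 400%.
LIMITED_CPUS=2
HOST_CPUS=4
echo "with --cpus $LIMITED_CPUS: ~$((LIMITED_CPUS * 100))% ceiling"
echo "without a limit: ~$((HOST_CPUS * 100))% ceiling"
```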
