docker run


  1. Usage: docker run [OPTIONS] IMAGE [COMMAND] [ARG...]


-a

  1. -a, --attach=[] Attach to STDIN, STDOUT or STDERR

If -a is not specified when the run command is executed, docker attaches the standard streams by default. You can use -a to specify exactly which standard streams to attach.

  1. $ docker run -a stdin -a stdout -i -t ubuntu:14.04 /bin/bash
  2. (Attach standard input and output only)


--add-host

  1. --add-host=[] Add a custom host-to-IP mapping (host:ip)

Add a host:ip entry to the container’s hosts file:

  1. $ docker run -it --add-host db:192.168.1.1 ubuntu:14.04 /bin/bash
  2. root@70887853379d:/# cat /etc/hosts
  3. 172.17.0.2 70887853379d
  4. 127.0.0.1 localhost
  5. ::1 localhost ip6-localhost ip6-loopback
  6. fe00::0 ip6-localnet
  7. ff00::0 ip6-mcastprefix
  8. ff02::1 ip6-allnodes
  9. ff02::2 ip6-allrouters
  10. 192.168.1.1 db


--blkio-weight

  1. --blkio-weight=0 Block IO (relative weight), between 10 and 1000

Compared with the quota controls for CPU and memory, Docker’s control of disk I/O is relatively immature, and most of the options must be applied to a specific host device. The related parameters are:

  • --device-read-bps: limits the read speed (bytes per second) on a device. The unit can be KB, MB, or GB.
  • --device-read-iops: limits the read speed of the specified device by the number of read I/Os per second.
  • --device-write-bps: limits the write speed (bytes per second) on a device. The unit can be KB, MB, or GB.
  • --device-write-iops: limits the write speed of the specified device by the number of write I/Os per second.
  • --blkio-weight: sets the container’s default disk I/O weight. The value ranges from 10 to 1000.
  • --blkio-weight-device: sets the I/O weight for a specific device, in the format DEVICE_NAME:WEIGHT. For details about these quota control parameters, see the blkio section of the Red Hat documentation.

Example of disk I/O quota control with --blkio-weight. For --blkio-weight to take effect, the I/O scheduler must be CFQ. You can check the current scheduler as follows:

  1. root@ubuntu:~# cat /sys/block/sda/queue/scheduler
  2. noop [deadline] cfq

Use the following command to create two containers with different -blkio-weight values:

  1. $ docker run -ti --rm --blkio-weight 100 ubuntu:stress
  2. $ docker run -ti --rm --blkio-weight 1000 ubuntu:stress

Test this by executing the following dd command simultaneously in both containers:

  1. time dd if=/dev/zero of=test.out bs=1M count=1024 oflag=direct

--device-write-bps: create a container using the following command, then run the dd command above to verify the write speed limit.

  1. $ docker run -tid --name disk1 --device-write-bps /dev/sda:1mb ubuntu:stress
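As a rough sanity check of what this limit implies (plain shell arithmetic, no docker required; the numbers mirror the dd test above):

```shell
# dd writes bs=1M count=1024 => 1024 MiB in total.
# With --device-write-bps /dev/sda:1mb the device is capped at ~1 MiB/s,
# so the direct write should take on the order of 1024 seconds.
size_mib=1024
limit_mib_per_s=1
echo "expected duration: ~$(( size_mib / limit_mib_per_s )) seconds"
```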

Container space limit: when Docker uses devicemapper as the storage driver, the default maximum size of each container and image is 10GB. The dm.basesize daemon startup parameter can change this value, but note that changing it requires restarting the Docker daemon service and causes all local images and containers on the host to be removed. There is no such limitation with other storage drivers such as AUFS or overlay.

--cidfile

  1. --cidfile= Write the container ID to the file

Save the container ID, in the long UUID format, to cid_file:

  1. $ docker run -it --cidfile=cid_file ubuntu:14.04 /bin/bash
  2. #cat cid_file
  3. 5fcf835f2688844d1370e6775247c35c9d36d47061c4fc73e328f9ebf920b402
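The 64-character ID stored in cid_file is the long form; docker ps displays only the first 12 characters. A small shell sketch, using the ID from the output above:

```shell
# Write the long container ID to cid_file, then derive the short ID
# that docker ps would display (first 12 characters).
echo '5fcf835f2688844d1370e6775247c35c9d36d47061c4fc73e328f9ebf920b402' > cid_file
short_id=$(cut -c1-12 cid_file)
echo "$short_id"   # prints 5fcf835f2688
```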


--cpu-shares

  1. --cpu-shares=0 CPU shares (relative weight)
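Since shares are purely relative weights, the ratio between two containers’ CPU allocations can be checked with plain shell arithmetic (no docker required; a 512-share container competing with a default 1024-share container gets half as much CPU):

```shell
# --cpu-shares is a relative weight (default 1024).
# Under contention, a 512-share container gets half the CPU time
# of a 1024-share container.
default_shares=1024
c3_shares=512
echo "$(( c3_shares * 100 / default_shares ))%"   # prints 50%
```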

By default (when -c or --cpu-shares is 0), a running container is assigned 1024 CPU shares. You can change this value to give containers different relative shares of CPU cycles. For example, suppose we start containers C0, C1, and C2 with -c or --cpu-shares= 0 (i.e. 1024 shares) and container C3 with -c/--cpu-shares=512. Under contention, C0, C1, and C2 can each use 100% of a CPU share (1024), but C3 can only use 50% (512). If the host OS schedules in time slices and each CPU slice is 100 microseconds, then C0, C1, and C2 each use the full 100 microseconds, while C3 can only use 50 microseconds.

--cpu-period, --cpu-quota

  1. --cpu-period=0 Limit CPU CFS (Completely Fair Scheduler) period
  2. --cpu-quota=0 Limit CPU CFS (Completely Fair Scheduler) quota
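The ceiling these two options enforce is simply quota divided by period; a quick illustration in plain shell (the values are illustrative):

```shell
# --cpu-period=100000 --cpu-quota=50000 caps the container at 50% of one CPU;
# a quota of 200000 over the same period allows up to 200% (two full cores).
period=100000
echo "$(( 50000  * 100 / period ))%"    # prints 50%
echo "$(( 200000 * 100 / period ))%"    # prints 200%
```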

--cpu-period and --cpu-quota are used together. This configuration is called Ceiling Enforcement Tunable Parameters, while the --cpu-shares configuration is called Relative Shares Tunable Parameters. --cpu-period specifies the scheduling period after which CPU usage is redistributed, while --cpu-quota specifies how much CPU time the container may use within that period. Unlike --cpu-shares, this configuration specifies an absolute value with no elasticity: the container can never use more CPU resources than the configured quota. For example, with --cpu-period=100000 --cpu-quota=50000, container A can use at most 50% of one CPU. With --cpu-quota=200000, A can use up to 200% of CPU resources (two full cores). What are the application scenarios? With --cpu-shares alone, a service B with excessive resource usage can degrade service A; with --cpu-period and --cpu-quota, service A’s share can be controlled absolutely, so that no matter what happens to B, A is unaffected. The unit of cpu-period and cpu-quota is microseconds (μs). The minimum value of cpu-period is 1000 μs, the maximum is 1 second (10^6 μs), and the default is 0.1 second (100000 μs). The default value of cpu-quota is -1, meaning no CPU quota is enforced.

--cpuset-cpus, --cpuset-mems

  1. --cpuset-cpus= CPUs in which to allow execution (0-3, 0,1)
  2. --cpuset-mems= Memory nodes (MEMs) in which to allow execution (0-3, 0,1)

For servers with multi-core CPUs, Docker can also control which CPU cores and memory nodes the container runs on, using the --cpuset-cpus and --cpuset-mems parameters. This is especially useful for servers with NUMA topologies (multiple CPUs, multiple memory nodes) to optimize containers that require high-performance computing. If the server has only one memory node, the --cpuset-mems configuration has little effect. For example, docker run -tid --name cpu1 --cpuset-cpus 0-2 ubuntu creates a container that can only use cores 0, 1, and 2. The resulting cgroup cpuset configuration is as follows:

  1. # cat /sys/fs/cgroup/cpuset/docker/<full container ID>/cpuset.cpus
  2. 0-2

You can also check the CPU binding of a process in the container with docker exec <container ID> taskset -c -p 1.

-d, --detach

  1. -d, --detach=false Run container in background and print container ID

Detached mode: if you append -d=true or -d to docker run, the container runs in the background. In this case, all I/O can be exchanged only through network resources or shared volumes, because the container no longer listens to the terminal window where you executed docker run. However, you can reattach to the container with docker attach. Note that if you use -d to put the container into background mode, the --rm parameter does not work.

--device=

  1. --device=[] Add a host device to the container


--disable-content-trust

  1. --disable-content-trust=true Skip image verification

Skip image signature verification.

--dns

  1. --dns=[] Set custom DNS servers

Custom DNS.

  1. $ docker run -it --dns=8.8.8.8 --rm ubuntu:14.04 /bin/bash
  2. root@b7a6f0e63e65:/# cat /etc/resolv.conf
  3. nameserver 8.8.8.8


--dns-opt

  1. --dns-opt=[] Set DNS options


--dns-search

  1. --dns-search=[] Set custom DNS search domains


-e, --env

  1. -e, --env=[] Set environment variables

Sets environment variables in the container.

--entrypoint

  1. --entrypoint= Overwrite the default ENTRYPOINT of the image

Literally the point of entry, and it functions exactly as the name suggests. An ENTRYPOINT lets you configure a container that runs as an executable: it makes the container behave like an executable program. Example 1: construct an image using ENTRYPOINT:

  1. ENTRYPOINT ["/bin/echo"]

A container built from this image then behaves like the /bin/echo program. For example, if I built the image and named it imageecho, I can use it like this:

  1. docker run -it imageecho "This is a test"

The output is “This is a test”: a container from the imageecho image behaves like an echo program, and the argument you pass (“This is a test”) is appended to the ENTRYPOINT, giving /bin/echo “This is a test”. Example 2:

  1. ENTRYPOINT ["/bin/cat"]

You can run the constructed image (let’s say it’s called st) like this:

  1. docker run -it st /etc/fstab

This is equivalent to the command /bin/cat /etc/fstab: running it prints the contents of /etc/fstab.

--env-file

  1. --env-file=[] Read in a file of environment variables

Read environment variables from a file.
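The file is a plain list of VAR=value lines. A minimal sketch (the file name app.env and the variables in it are hypothetical):

```shell
# Hypothetical env file: one VAR=value per line.
cat > app.env <<'EOF'
DB_HOST=10.0.0.5
DB_PORT=5432
EOF
cat app.env
```

The container would then pick these up with something like $ docker run --env-file=app.env -it ubuntu:14.04 /bin/bash, after which DB_HOST and DB_PORT are visible via env inside the container.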

--expose

  1. --expose=[] Expose a port or a range of ports

Tells Docker which ports the container exposes, for use when interconnecting containers.

  1. $ docker run -it --expose=22 --rm ubuntu:14.04 /bin/bash


--group-add

  1. --group-add=[] Add additional groups to join


-h, --hostname

  1. -h, --hostname= Container host name

Set the container host name.

  1. $ docker run -it --hostname=web --rm ubuntu:14.04 /bin/bash
  2. Once in the container
  3. root@web:/#


-i, --interactive

  1. -i, --interactive=false Keep STDIN open even if not attached

Keep standard input open; often used with -t to request a console for interaction.

--ipc

  1. --ipc= IPC namespace to use

The IPC (POSIX/SysV IPC) namespace provides named shared memory, semaphores, and message queues that are isolated between namespaces. Shared memory can speed up inter-process data exchange and is typically used by databases and high-performance applications (C/OpenMPI, C++ using Boost libraries) or financial services. If you need to deploy these types of applications across multiple containers, the containers should share memory directly via a common IPC namespace.

--kernel-memory

  1. --kernel-memory= Kernel memory limit

Kernel memory is never swapped out. In general, changing this setting is not recommended; refer to the official Docker documentation for details.

-l, --label

  1. -l, --label=[] Set meta data on a container
  2. --label-file=[] Read in a line delimited file of labels


--link

  1. --link=[] Add link to another container

Used to connect two containers. Example: connect two containers. Start container 1 (web):

  1. $ docker run --name web -d -p 22 -p 80 -it webserver:v1

Start container 2 (ap1), linked to web under the alias apache:

  1. $ docker run --name ap1 --link=web:apache -d -p 22 -p 80 -it webserver:v1


--log-driver

  1. --log-driver= Logging driver for container
  2. --log-opt=[] Log driver options

Docker added a rotate function to the json-file (default) log driver. We can configure rotation with the max-size and max-file --log-opt options. For example, start an nginx container with the json-file log driver, with a limit of 1k per log file and rotation across 5 files:

  1. docker run -d --log-driver=json-file --log-opt max-size=1k --log-opt max-file=5 --name webserver -p 9988:80 nginx
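With these options, the total disk space the json-file driver can consume is bounded by max-size × max-file. A quick check in plain shell (assuming the 1k suffix means 1024 bytes):

```shell
# max-size=1k, max-file=5 => at most ~5 KiB of log data per container.
max_size_bytes=$(( 1 * 1024 ))   # "1k"
max_file=5
echo "$(( max_size_bytes * max_file )) bytes total"   # prints 5120 bytes total
```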

With rotation, we no longer have to worry about one container’s logs inflating, filling the host’s disk, and dragging down other containers.

--mac-address

  1. --mac-address= Container MAC address

Set the container’s MAC address (e.g. 92:d0:c6:0a:29:33).

-m, --memory

  1. -m, --memory= Memory limit

Set the maximum amount of memory the container may use. The default unit is bytes; you can append a unit character such as K, M, or G. By default, the container can use all the free memory on the host. To set the maximum memory size of the container, run:

  1. docker run -tid --name mem1 --memory 128m ubuntu:14.04 /bin/bash

By default, Docker also allocates swap of the same size as the --memory limit, which means the container created by the above command can actually use up to 256MB of memory, not 128MB. To customize the swap size, use the --memory-swap parameter. In the cgroups configuration, the container’s memory limit is 128MB (128 x 1024 x 1024 = 134217728 bytes), and the combined memory-plus-swap limit is 256MB (256 x 1024 x 1024 = 268435456 bytes).

  1. # cat /sys/fs/cgroup/memory/docker/<full container ID>/memory.limit_in_bytes
  2. 134217728
  3. # cat /sys/fs/cgroup/memory/docker/<full container ID>/memory.memsw.limit_in_bytes
  4. 268435456
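The cgroup values above can be reproduced with plain shell arithmetic (no docker needed):

```shell
# --memory 128m => 128 MiB limit; docker adds an equal amount of swap,
# so the memory+swap limit is 256 MiB.
mem_bytes=$(( 128 * 1024 * 1024 ))
memsw_bytes=$(( 2 * mem_bytes ))
echo "$mem_bytes"     # prints 134217728
echo "$memsw_bytes"   # prints 268435456
```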

If you see the warning “WARNING: Your kernel does not support swap limit capabilities, memory limited without swap.”, it is because cgroup swap accounting is not enabled on the host by default. You can enable it by modifying the GRUB boot parameters; refer to the official Docker documentation.

--memory-reservation

  1. --memory-reservation= Memory soft limit

Enables elastic memory sharing: the container may use as much memory as possible while the host has spare resources, but is forced to shrink its memory to the size specified by --memory-reservation when contention or low memory is detected. According to the official documentation, without this option some containers may occupy a large amount of memory for a long time, resulting in performance loss.

--memory-swap

  1. --memory-swap= Total memory (memory + swap), '-1' to disable swap

This is the sum of the memory size and the swap size. If set to -1, the swap size is unlimited. The default unit is bytes; you can append a unit character such as K, M, or G. If --memory-swap is not set, it defaults to twice the value of --memory.

--memory-swappiness

  1. --memory-swappiness=-1 Tuning container memory swappiness (0 to 100)

Controls the tendency of processes to swap physical memory out to the swap partition. The default coefficient is 60; the smaller the coefficient, the more the container prefers physical memory. The value ranges from 0 to 100: at 100 the container uses swap as much as possible, and at 0 the container’s swap function is disabled (unlike on the host, where setting swappiness to 0 does not guarantee that swap will not be used).

--name

  1. --name= Assign a name to the container

Give the container a name.

  1. $ docker run -it --name=web ubuntu:14.04 /bin/bash


--net

  1. --net=default Set the Network mode for the container

The following are the common network mode settings:

  • none — disables networking in the container. With network mode set to none, the container cannot reach any external hosts: it has only a loopback interface and no route to the outside network.
  • bridge — connects the container via a veth pair. This is the default: Docker creates a docker0 network interface on the host and a pair of veth interfaces for the container. One veth interface is bridged on the host, and the other lives in the container’s namespace as its network adapter. Docker automatically assigns an IP to the container and bridges traffic between the container and the outside world.
  • host — the container uses the host’s network stack. With network mode set to host, the container shares the host’s network stack entirely: all of the host’s network interfaces are open to the container, and the container’s hostname is the host’s. In this mode, the container’s exposed ports and links to other containers become invalid.
  • container — with network mode set to container, this container completely reuses the network stack of another container, specified as --net=container:<name|id>. For example, if a Redis container is bound to the local address 127.0.0.1 and another container wants to reuse its network stack:
  1. $ docker run -d --name redis example/redis --bind 127.0.0.1
  2. # use the redis container's network stack to access 127.0.0.1
  3. $ sudo docker run --rm -ti --net container:redis example/redis-cli -h 127.0.0.1


--oom-kill-disable

  1. --oom-kill-disable=false Disable OOM Killer


-P, --publish-all

  1. -P, --publish-all=false Publish all exposed ports to random ports

Map all exposed ports to the outside.

-p, --publish

  1. -p, --publish=[] Publish a container's port(s) to the host

If the host port is not specified, a random host port is assigned.

  1. $ docker run -d -p 10022:22 -p 10080:80 -it webserver:v1

Use docker run to start the container we created: -d makes the container run in the background, and multiple -p options map multiple ports, here mapping the container’s port 22 to local port 10022 and port 80 to 10080.

--pid

  1. --pid= PID namespace to use

Set the PID namespace mode of the container:

  1. host: use the host's PID namespace inside the container.
  2. Note: host mode gives the container full access to the host's PIDs and is therefore considered insecure.


--privileged

  1. --privileged=false Give extended privileges to this container

By default, containers cannot access any host devices, but with --privileged a container can access them. When the operator executes docker run --privileged, Docker grants the container access to all devices on the host.

  1. $ docker run -it --rm --privileged ubuntu:14.04 /bin/bash


--read-only

  1. --read-only=false Mount the container's root filesystem as read only

When enabled, the container’s root filesystem is mounted read-only.

  1. $ docker run -it --rm --read-only ubuntu:14.04 /bin/bash
  2. root@d793e24f0af1:/# touch a
  3. touch: cannot touch 'a': Read-only file system

--restart

  1. --restart= Restart policy to apply when a container exits

The supported policies are:

  • no, the default policy: do not restart the container when it exits
  • on-failure: restart the container only when it exits abnormally (non-zero exit status)
  • on-failure:3: restart the container up to 3 times if it exits abnormally
  • always: always restart the container when it exits; when the operating system or the Docker service restarts, the container starts with the system
  • unless-stopped: always restart the container when it exits, but do not restart containers that were already stopped when the Docker daemon starts

Example:

  1. $ docker run -it --restart=always ubuntu:14.04 /bin/bash


--rm

  1. --rm=false Automatically remove the container when it exits

When the container exits, all information about it is cleaned up.

--security-opt

  1. --security-opt=[] Security Options

Security options.

--sig-proxy

  1. --sig-proxy=true|false
  2. Proxy received signals to the process (non-TTY mode only). SIGCHLD, SIGSTOP, and SIGKILL are not proxied. The default is true.


--stop-signal

  1. --stop-signal=SIGTERM Signal to stop a container, SIGTERM by default


-t, --tty

  1. -t, --tty=false Allocate a pseudo-TTY

Allocate a pseudo-terminal; usually used together with -i.

-u, --user

  1. -u, --user= Username or UID (format: <name|uid>[:<group|gid>])
  2. Sets the username or UID used and optionally the groupname or GID for the specified command.
  3. The following examples are all valid:
  4. --user [user | user:group | uid | uid:gid | user:gid | uid:group ]
  5. Without this argument the command will be run as root in the container.


--ulimit

  1. --ulimit=[] Ulimit options
  2. --default-ulimit, a startup parameter of the Docker daemon, specifies the default ulimit configuration for containers.
  3. --ulimit, a docker run parameter, overrides the default ulimit value specified by the Docker daemon. If it is not specified, the daemon's default-ulimit is inherited.

Example:

  1. $ docker run -d --ulimit nofile=20480:40960 ubuntu:14.04 /bin/bash
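In nofile=20480:40960 the first number is the soft limit and the second is the hard limit. Inside the container, ulimit -Sn reports the soft value and ulimit -Hn the hard value; you can inspect your own shell’s limits the same way (the numbers depend on the host, so none are shown here):

```shell
# Soft and hard open-file limits of the current shell.
ulimit -Sn
ulimit -Hn
```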


-v, --volume

  1. -v, --volume=[] Bind mount a volume

You can add a volume to the container using the docker run command with the -v argument. 1. Add the data volume /data1; the directory is created automatically:

  1. $ docker run -it --name web -v /data1 ubuntu:14.04 /bin/bash
  2. root@fac11d44de3e:/# df -h
  3. /dev/disk/by-uuid/...  29G  2.6G  25G  10%  /data1
  4. root@fac11d44de3e:/# cd /data1

2. Add a host directory to the container. Mount the host's /data_web as the container's /data directory:

  1. $ docker run -it --name web -v /data_web:/data ubuntu:14.04 /bin/bash


--volumes-from

  1. --volumes-from=[] Mount volumes from the specified container(s)

Mount volumes from other containers. 1. Create a dbdata container containing a /data volume:

  1. $ docker run -v /data --name dbdata ubuntu:14.04 /bin/bash

2. Create a webserver1 container and mount the data volumes from dbdata:

  1. $ docker run -it --volumes-from dbdata --name webserver1 ubuntu:14.04 /bin/bash


-w, --workdir

  1. -w, --workdir= Working directory inside the container

Sets the working directory for the container.

  1. $ docker run -it --workdir="/data" ubuntu:14.04 /bin/bash
  2. root@7868da4d2846:/data#

To be continued……