Introduction to the Dockerfile

Docker builds an image automatically by reading the instructions in a Dockerfile: a text file that contains all the commands needed to assemble the image, following a specific format and instruction set. A Docker image consists of read-only layers, each representing a Dockerfile instruction. The layers are stacked, and each one is an incremental change from the one before it. Example:

FROM ubuntu:18.04
COPY . /app
RUN make /app
CMD python /app/app

  • FROM creates a layer from the ubuntu:18.04 base image
  • COPY adds files from the Docker client's current directory to the image
  • RUN builds the application with make
  • CMD specifies the command to run inside the container

When you start a container from an image, a writable layer is added on top of the underlying layers. All changes made to the running container (for example, writing new files, modifying existing files, and deleting files) are written to this writable container layer.

Recommendations

1. Create ephemeral containers

The image defined by your Dockerfile should produce containers that are as ephemeral as possible. Ephemeral means the container can be stopped, destroyed, rebuilt, and replaced with a minimum of setup and configuration.

2. Understand the build context

When building an image, we generally use:

docker build -t name:tag .

When you issue a docker build command, the current working directory is used as the build context. By default, the Dockerfile is assumed to be located there, but you can specify a different file with the -f option. Regardless of where the Dockerfile actually lives, the recursive contents of all files and directories in the current directory are sent to the Docker daemon as the build context.

Example: create a directory for the build context and cd into it. Write the string "hello" into a text file named hello, and create a Dockerfile that runs cat on it. Build the image from within the build context (.):

mkdir myproject && cd myproject
echo "hello" > hello
echo -e "FROM busybox\nCOPY /hello /\nRUN cat /hello" > Dockerfile
docker build -t helloapp:v1 .

Move the Dockerfile and hello into separate directories and build a second version of the image (without relying on the cache from the last build). Use -f to point to the Dockerfile and specify the build context directory:

mkdir -p dockerfiles context
mv Dockerfile dockerfiles && mv hello context
docker build --no-cache -t helloapp:v2 -f dockerfiles/Dockerfile context

Inadvertently including files that are not necessary for the build results in a larger build context and a larger image, which in turn increases build time, the time to push to and pull from a repository, and the runtime size of containers. You can see how big the build context is when building the image.

Exclude files with .dockerignore

To exclude files that are not relevant to the build (without restructuring your source repository), use a .dockerignore file. It supports exclusion patterns similar to .gitignore files.
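As a sketch (the entries below are hypothetical and should be adapted to your project), a .dockerignore placed at the root of the build context might look like this:

```
# Version control metadata
.git
.gitignore

# Local build output and logs the image does not need
*.log
tmp/
node_modules/

# The Dockerfile itself is not needed inside the image
Dockerfile
.dockerignore
```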

Use multi-stage builds

Multi-stage builds allow you to drastically reduce the size of the final image without struggling to reduce the number of intermediate layers and files. Because an image is built during the final stage of the build process, you can minimize image layers by leveraging the build cache. For example, if your build contains several layers, you can order them from less frequently changed (to ensure the build cache is reusable) to more frequently changed:

  • Install tools you need to build your application
  • Install or update library dependencies
  • Generate your application

Example: a Dockerfile for a Go application could look like this:

FROM golang:1.11-alpine AS build

# Install tools required for project
# Run `docker build --no-cache .` to update dependencies
RUN apk add --no-cache git
RUN go get github.com/golang/dep/cmd/dep

# List project dependencies with Gopkg.toml and Gopkg.lock
# These layers are only re-built when Gopkg files are updated
COPY Gopkg.lock Gopkg.toml /go/src/project/
WORKDIR /go/src/project/
# Install library dependencies
RUN dep ensure -vendor-only

# Copy the entire project and build it
# This layer is rebuilt when a file changes in the project directory
COPY . /go/src/project/
RUN go build -o /bin/project

# This results in a single layer image
FROM scratch
COPY --from=build /bin/project /bin/project
ENTRYPOINT ["/bin/project"]
CMD ["--help"]

Don’t install unnecessary packages

To reduce complexity, dependencies, file sizes, and build times, avoid installing extra or unnecessary packages just because they might be "nice to have". For example, you do not need to include a text editor in a database image.

Decouple the application

Each container should have only one concern. Decoupling applications into multiple containers makes it easier to scale horizontally and reuse containers. For instance, a web application stack might consist of three separate containers, each with its own unique image, to manage the web application, database, and in-memory cache in a decoupled manner.

Limiting each container to one process is a good rule of thumb, but it is not a hard and fast rule. For example, containers can be spawned with an init process, and some programs spawn additional processes of their own accord: Celery can spawn multiple worker processes, and Apache can create one process per request.

Use your best judgment to keep containers as clean and modular as possible. If containers depend on each other, you can use the Docker container network to ensure that those containers can communicate.

Reduce the number of layers

In older versions of Docker, it was important to minimize the number of layers in your image to ensure it was performant. The following features were added to reduce this limitation:

  • Only the RUN, COPY, and ADD instructions create layers; other instructions create temporary intermediate images and do not increase the size of the build.
  • Where possible, use multi-stage builds and copy only the artifacts you need into the final image. This lets you include tools and debug information in intermediate build stages without increasing the size of the final image.
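As an illustration of the first point (a sketch with a hypothetical package), splitting a package installation across several RUN instructions creates one layer per instruction, while chaining the commands with && produces a single layer:

```dockerfile
# Three RUN instructions -> three layers, and the apt cache removed in the
# last step still occupies space in the layer created by the install step:
#   RUN apt-get update
#   RUN apt-get install -y curl
#   RUN rm -rf /var/lib/apt/lists/*

# One RUN instruction -> one layer, and the apt cache never lands in any layer:
RUN apt-get update \
    && apt-get install -y curl \
    && rm -rf /var/lib/apt/lists/*
```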

Sort multi-line arguments

Whenever possible, ease later changes by sorting multi-line arguments alphanumerically. This helps avoid duplication of packages and makes the list much easier to update. It also makes PRs easier to read and review. Adding a space before the backslash (\) helps as well.

Example:

RUN apt-get update && apt-get install -y \
  bzr \
  cvs \
  git \
  mercurial \
  subversion
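The duplicate-avoidance benefit is easy to see outside Docker too. A small POSIX shell sketch (the package names are hypothetical): once a list is sorted, accidental duplicates become adjacent and trivial to spot with uniq -d.

```shell
#!/bin/sh
# A package list where "git" was accidentally added twice.
packages="bzr
cvs
git
mercurial
git
subversion"

# Sorting makes the duplicates adjacent; `uniq -d` then reports them.
printf '%s\n' "$packages" | sort | uniq -d    # prints "git"
```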

Leverage the build cache

When building an image, Docker steps through the instructions in your Dockerfile, executing each in the order specified. As each instruction is examined, Docker looks for an existing image in its cache that it can reuse, rather than creating a new (duplicate) image.

If you do not want to use the cache at all, pass the --no-cache=true option to the docker build command. However, if you do let Docker use its cache, it is important to understand when it will, and will not, find a matching image. The basic rules Docker follows are outlined below:

  • Starting with a parent image that is already in the cache, the next instruction is compared against all child images derived from that base image to see if one of them was built using the exact same instruction. If not, the cache is invalidated.
  • In most cases, simply comparing the instruction in the Dockerfile with one of the child images is sufficient. However, certain instructions require more examination and explanation.
  • For the ADD and COPY instructions, the contents of the files in the image are examined and a checksum is calculated for each file. The last-modified and last-accessed times of the files are not considered in these checksums. During the cache lookup, the checksum is compared against the checksums in the existing images. If anything has changed in the files, such as their contents or metadata, the cache is invalidated.
  • Aside from the ADD and COPY commands, cache checking does not look at the files in the container to determine a cache match. For example, when processing a RUN apt-get -y update command, the files updated in the container are not examined to determine a cache hit; just the command string itself is used to find a match.

Once the cache is invalid, the Dockerfile command generates a new image without using the cache.
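The content-checksum rule for ADD and COPY can be mimicked in plain shell (a sketch; sha256sum stands in for Docker's internal hashing): touching a file changes its timestamps but not its checksum, so the cache would still hit, while editing the content changes the checksum and would invalidate the cache.

```shell
#!/bin/sh
set -e
tmp=$(mktemp -d)
echo "hello" > "$tmp/hello"
before=$(sha256sum "$tmp/hello" | cut -d' ' -f1)

# Updating only the timestamps leaves the checksum (and the cache) unchanged.
touch "$tmp/hello"
after_touch=$(sha256sum "$tmp/hello" | cut -d' ' -f1)
[ "$before" = "$after_touch" ] && echo "touch: cache hit"

# Changing the content produces a new checksum, invalidating the cache.
echo "changed" > "$tmp/hello"
after_edit=$(sha256sum "$tmp/hello" | cut -d' ' -f1)
[ "$before" != "$after_edit" ] && echo "edit: cache invalidated"

rm -rf "$tmp"
```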

Dockerfile instructions

FROM

FROM <image> [AS <name>]

or

FROM <image>[:<tag>] [AS <name>]

or

FROM <image>[@<digest>] [AS <name>]

  • The FROM directive initializes the new build phase and sets up the base image for subsequent directives. Therefore, a valid Dockerfile must begin with the FROM directive.
  • FROM can appear multiple times within a single Dockerfile to create multiple images, or to use one build stage as a dependency of another. Simply make a note of the last image ID output before each new FROM instruction. Each FROM instruction clears any state created by previous instructions.
  • Optionally, a name can be given to a new build stage by adding AS name to the FROM instruction. The name can then be used in subsequent FROM and COPY --from=<name|index> instructions to refer to the image built in this stage.
  • The tag and digest values are optional. If you omit them, the builder assumes the latest tag by default, and returns an error if it cannot find that value.

Use the official image as the base image whenever possible.

Example:

FROM golang:1.10.3 as builder
WORKDIR /app/
RUN mkdir -p src/github.com \ 
    && mkdir -p src/golang.org \
    && mkdir -p src/gopkg.in \
    && mkdir -p src/qiniupkg.com \
    && mkdir -p src/google.golang.org \
    && mkdir -p src/go4.org

Understand how ARG and FROM interact: FROM instructions support variables that are declared by any ARG instruction occurring before the first FROM. Example:

ARG  CODE_VERSION=latest
FROM base:${CODE_VERSION}
CMD  /code/run-app

FROM extras:${CODE_VERSION}
CMD  /code/run-extras

LABEL

You can add labels to your image to help organize images by project, record licensing information, or for many other reasons. For each label, add a line beginning with LABEL followed by one or more key-value pairs. The following examples show the different acceptable formats:

# Set one or more individual labels
LABEL com.example.version="0.1-beta"
LABEL vendor1="ACME Incorporated"
LABEL vendor2=ZENITH\ Incorporated
LABEL com.example.release-date="2015-02-12"
LABEL com.example.version.is-production=""

The above can also be written as follows:

# Set multiple labels at once, using line-continuation characters to break long lines
LABEL vendor=ACME\ Incorporated \
      com.example.is-beta= \
      com.example.is-production="" \
      com.example.version="0.0.1-beta" \
      com.example.release-date="2015-02-12"

RUN

  • RUN <command> (shell form)
  • RUN ["executable", "param1", "param2"] (exec form)

The RUN instruction executes any command in a new layer on top of the current image and commits the result. The committed image is used for the next step in the Dockerfile.

Split long or complex RUN statements across multiple lines separated by backslashes to make your Dockerfile more readable, understandable, and maintainable.

Always combine RUN apt-get update with apt-get install in the same RUN statement, so that the package lists are never stale when the packages are installed.

Example:

RUN apt-get update && apt-get install -y \
    aufs-tools \
    automake \
    build-essential \
    curl \
    dpkg-sig \
    libcap-dev \
    libsqlite3-dev \
    mercurial \
    reprepro \
    ruby1.9.1 \
    ruby1.9.1-dev \
    s3cmd=1.1.* \
 && rm -rf /var/lib/apt/lists/*

The form above is recommended; the final rm -rf /var/lib/apt/lists/* clears the apt cache in the same layer in which the packages were installed.

CMD

CMD ["executable","param1","param2"]

CMD ["param1","param2"]

CMD command param1 param2

There can only be one CMD directive in a Dockerfile. If there are more than one CMD directive, only the last one takes effect.

The main purpose of CMD is to provide default values for an executing container. These defaults can include an executable, or they can omit the executable, in which case you must specify an ENTRYPOINT instruction as well.

If CMD is used to provide default arguments for the ENTRYPOINT instruction, both the CMD and ENTRYPOINT instructions should be specified in the JSON array format. The exec form is parsed as a JSON array, which means you must use double quotes (") around words, not single quotes (').

The CMD instruction should be used to run the software contained in your image, together with any arguments, in the form CMD ["executable", "param1", "param2"...]. So if the image is for a service such as Apache, you would run something like CMD ["apache2", "-DFOREGROUND"]. Indeed, this form of the instruction is recommended for any service-based image.

In most other cases, CMD should be given an interactive shell, such as bash, python, or perl. For example: CMD ["perl", "-de0"], CMD ["python"], or CMD ["php", "-a"]. Using the form CMD ["param", "param"] in conjunction with ENTRYPOINT is rare; avoid it unless you and your expected users are already quite familiar with how ENTRYPOINT works.

Example:

FROM busybox as final
COPY --from=builder /app/src /opt/app/src
EXPOSE 8080
WORKDIR  /opt/app/
CMD ["./server"]

EXPOSE

EXPOSE <port> [<port>/<protocol>...]

The EXPOSE instruction informs Docker that the container listens on the specified network ports at runtime. You can specify whether the port listens on TCP or UDP; the default is TCP if no protocol is specified. The EXPOSE instruction does not actually publish the port. It functions as a type of documentation between the person who builds the image and the person who runs the container, about which ports are intended to be published. To actually publish ports when running the container, use the -p flag on docker run to publish and map one or more ports, or the -P flag to publish all exposed ports and map them to high-order ports. By default, EXPOSE assumes TCP. You can also specify UDP:

EXPOSE 80/udp

To expose on both TCP and UDP, include two lines:

EXPOSE 80/tcp
EXPOSE 80/udp

In this case, if you use -P with docker run, the port is exposed once for TCP and once for UDP. Remember that -P uses an ephemeral high-order host port on the host, so the TCP and UDP ports will not be the same.

Regardless of the EXPOSE settings, you can override them at runtime with the -p flag. For example: docker run -p 80:80/tcp -p 80:80/udp ...

To set up port redirection on the host system, see using the -p flag. The docker network command supports creating networks for communication among containers without the need to expose or publish specific ports, because containers connected to the same network can communicate with each other over any port.

The EXPOSE instruction indicates the ports on which your container listens for connections. Consequently, you should use the common, traditional port for your application. For example, an image containing the Apache web server would use EXPOSE 80, an image containing MongoDB would use EXPOSE 27017, and so on.

For external access, your users can execute docker run with a flag indicating how to map the specified port to a port of their choosing. For container linking, Docker provides environment variables for the path from the recipient container back to the source.

Example:

FROM busybox as final
COPY --from=builder /app/src /opt/app/src
EXPOSE 8080
WORKDIR  /opt/app/
CMD ["./server"]

After the containers are started, docker ps shows the exposed and published ports:

44af1f00f971   server_number1:1.0   "./server"               3 hours ago   Up 14 seconds   8080/tcp                             server_number1_1
b372192ef894   server_number2:1.0   "./server"               3 hours ago   Up 15 seconds   8080/tcp                             server_number2_1
9ffa14236ab1   nginx:latest         "/docker-entrypoint.…"   3 hours ago   Up 21 seconds   0.0.0.0:80->80/tcp                   server_nginx_1
7b73c43a97d3   redis:3.2            "docker-entrypoint.s…"   3 hours ago   Up 21 seconds   0.0.0.0:6379->6379/tcp               server_redis_1
8df25db59837   mysql:5.7            "docker-entrypoint.s…"   4 hours ago   Up 21 seconds   33060/tcp, 0.0.0.0:33306->3306/tcp   server_mysql_1

ENV

ENV <key> <value>
ENV <key>=<value> ...

The ENV instruction sets an environment variable to a value. The value will be in the environment for all subsequent instructions in the build stage, and can be replaced inline in many instructions as well.

The ENV instruction has two forms. The first, ENV <key> <value>, sets a single variable to a value; the entire string after the first space is treated as the <value>, including whitespace characters. The value is interpreted for other environment variables, so quote characters are removed if they are not escaped.

The second form, ENV <key>=<value> ..., allows multiple variables to be set at one time. Notice that this form uses the equals sign (=) in the syntax, while the first form does not. As with command-line parsing, quotes and backslashes can be used to include spaces within values.

Example:

FROM golang:1.10.3 as builder
RUN yum install -y gcc \
    && yum install -y gcc-c++ kernel-devel make
ENV GOPATH /go
ENV PATH $PATH:$GOPATH/bin

To make new software easier to run, you can use ENV to update the PATH environment variable for the software your container installs. For example, ENV PATH /usr/local/nginx/bin:$PATH ensures that CMD ["nginx"] just works.

The ENV directive is also useful for providing necessary environment variables specific to the service you want to accommodate, such as PGDATA for Postgres. Finally, ENV can also be used to set common version numbers to make it easier to maintain version iterations, as shown in the following example:

ENV PG_MAJOR 9.3
ENV PG_VERSION 9.3.4
RUN curl -SL http://example.com/postgres-$PG_VERSION.tar.xz | tar -xJC /usr/src/postgress && …
ENV PATH /usr/local/postgres-$PG_MAJOR/bin:$PATH

Similar to having constant variables in a program (as opposed to hard-coded values), this approach lets you change a single ENV instruction to automatically bump the version of the software in your container.

Each ENV line creates a new intermediate layer, just like a RUN command. This means that even if you unset the environment variable in a future layer, it still persists in this layer and its value can be dumped. You can test this by creating a Dockerfile like the following and then building it.

FROM alpine
ENV ADMIN_USER="mark"
RUN echo $ADMIN_USER > ./mark
RUN unset ADMIN_USER

$ docker run --rm test sh -c 'echo $ADMIN_USER'
mark

To prevent this, and really unset the environment variable, use a RUN command with shell commands to set, use, and unset the variable all in a single layer. You can separate the commands with ; or &&. If you use the second method and one of the commands fails, the docker build also fails, which is usually what you want. Using \ as a line continuation character for Linux Dockerfiles improves readability. You can also put all of the commands into a shell script and have the RUN command run that shell script.

FROM alpine
RUN export ADMIN_USER="mark" \
    && echo $ADMIN_USER > ./mark \
    && unset ADMIN_USER
CMD sh

$ docker run --rm test sh -c 'echo $ADMIN_USER'
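The difference between ; and && is easy to verify in any POSIX shell, independently of Docker (a small sketch): with ; the chain keeps going after a failure, while with && it stops and propagates the failure, which is exactly what makes a RUN instruction abort the build at the first error.

```shell
#!/bin/sh
# With ';' the second command runs even though the first one failed.
sh -c 'false ; echo ran-anyway'      # prints "ran-anyway"

# With '&&' the second command is skipped and the failure propagates.
if sh -c 'false && echo never-printed'; then
    echo "chain succeeded"
else
    echo "chain failed"              # this branch is taken
fi
```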

ADD or COPY

ADD

  • ADD [--chown=<user>:<group>] <src>... <dest>
  • ADD [--chown=<user>:<group>] ["<src>",... "<dest>"]

The --chown feature is only supported on Dockerfiles used to build Linux containers; it does not work on Windows containers.

The ADD instruction copies new files, directories, or remote file URLs from <src> and adds them to the filesystem of the image at the path <dest>. Each <src> may contain wildcards, and matching is done using Go's filepath.Match rules. For example:

ADD hom* /mydir/        # adds all files starting with "hom"
ADD hom?.txt /mydir/    # ? is replaced with any single character, e.g., "home.txt"

<dest> is an absolute path, or a path relative to WORKDIR, into which the source is copied inside the destination container.

ADD test relativeDir/          # adds "test" to `WORKDIR`/relativeDir/
ADD test /absoluteDir/         # adds "test" to /absoluteDir/

When adding files or directories whose paths contain special characters (such as [ and ]), you need to escape those paths following the Golang rules to prevent them from being treated as a matching pattern. For example, to add a file named arr[0].txt, use the following:

ADD arr[[]0].txt /mydir/    # copy a file named "arr[0].txt" to /mydir/

All new files and directories are created with a UID and GID of 0, unless the optional --chown flag specifies a given username, groupname, or UID/GID combination to request specific ownership of the added content. The format of the --chown flag allows either username and groupname strings or direct integer UIDs and GIDs in any combination. Providing a username without a groupname, or a UID without a GID, will use the same numeric UID as the GID. If a username or groupname is provided, the container's root filesystem /etc/passwd and /etc/group files are used to perform the translation from name to integer UID or GID respectively. The following examples show valid definitions for the --chown flag:

ADD --chown=55:mygroup files* /somedir/
ADD --chown=bin files* /somedir/
ADD --chown=1 files* /somedir/
ADD --chown=10:11 files* /somedir/

If the container root filesystem does not contain either /etc/passwd or /etc/group files, and a username or groupname is used in the --chown flag, the build fails on the ADD operation. Using numeric IDs requires no lookup and does not depend on the container root filesystem content.

In the case of a remote file URL, the destination will have permissions of 600. If the remote file being retrieved has an HTTP Last-Modified header, the timestamp from that header is used to set the mtime on the destination file. However, like any other file processed during an ADD, the mtime is not included in the determination of whether the file has changed and the cache should be updated.

ADD follows the following rules:

  • The <src> path must be inside the context of the build; you cannot ADD ../something /something, because the first step of a docker build is to send the context directory (and subdirectories) to the Docker daemon.
  • If <src> is a URL and <dest> does not end with a trailing slash, the file is downloaded from the URL and copied to <dest>.
  • If <src> is a URL and <dest> does end with a trailing slash, the filename is inferred from the URL and the file is downloaded to <dest>/<filename>. For example, ADD http://example.com/foobar / would create the file /foobar. The URL must have a nontrivial path so that an appropriate filename can be discovered in this case (http://example.com will not work).
  • If <src> is a directory, the entire contents of the directory are copied, including filesystem metadata. The directory itself is not copied, only its contents.
  • If <src> is a local tar archive in a recognized compression format (identity, gzip, bzip2, or xz), it is unpacked as a directory. Resources from remote URLs are not decompressed. When a directory is copied or unpacked, it behaves the same as tar -x, and the result is the union of:
    1. whatever existed at the destination path, and
    2. the contents of the source tree, with conflicts resolved in favor of "2." on a file-by-file basis.
  • If <src> is any other kind of file, it is copied individually along with its metadata. In this case, if <dest> ends with a trailing slash /, it is considered a directory and the contents of <src> are written at <dest>/base(<src>).
  • If multiple <src> resources are specified, either directly or due to the use of a wildcard, then <dest> must be a directory, and it must end with a slash /.
  • If <dest> does not end with a trailing slash, it is considered a regular file and the contents of <src> are written at <dest>.
  • If <dest> does not exist, it is created along with all missing directories in its path.

COPY

  • COPY [--chown=<user>:<group>] <src>... <dest>
  • COPY [--chown=<user>:<group>] ["<src>",... "<dest>"]

The --chown feature is only supported on Dockerfiles used to build Linux containers; it does not work on Windows containers.

The COPY instruction copies new files or directories from <src> and adds them to the filesystem of the container at the path <dest>.

Multiple <src> resources may be specified, but the paths of files and directories are interpreted as relative to the source of the build context.

Each <src> may contain wildcards, and matching is done using Go's filepath.Match rules. For example:

COPY hom* /mydir/        # adds all files starting with "hom"
COPY hom?.txt /mydir/    # ? is replaced with any single character, e.g., "home.txt"

<dest> is an absolute path, or a path relative to WORKDIR, into which the source is copied inside the destination container.

COPY test relativeDir/   # adds "test" to `WORKDIR`/relativeDir/
COPY test /absoluteDir/  # adds "test" to /absoluteDir/

When copying files or directories whose paths contain special characters (such as [ and ]), you need to escape those paths following the Golang rules to prevent them from being treated as a matching pattern. For example, to copy a file named arr[0].txt, use the following:

COPY arr[[]0].txt /mydir/    # copy a file named "arr[0].txt" to /mydir/

All new files and directories are created with a UID and GID of 0, unless the optional --chown flag specifies a given username, groupname, or UID/GID combination to request specific ownership of the copied content. The format of the --chown flag allows either username and groupname strings or direct integer UIDs and GIDs in any combination. Providing a username without a groupname, or a UID without a GID, will use the same numeric UID as the GID. If a username or groupname is provided, the container's root filesystem /etc/passwd and /etc/group files are used to perform the translation from name to integer UID or GID respectively. The following examples show valid definitions for the --chown flag:

COPY --chown=55:mygroup files* /somedir/
COPY --chown=bin files* /somedir/
COPY --chown=1 files* /somedir/
COPY --chown=10:11 files* /somedir/

If the container root filesystem does not contain either /etc/passwd or /etc/group files, and a username or groupname is used in the --chown flag, the build fails on the COPY operation. Using numeric IDs requires no lookup and does not depend on the container root filesystem content.

Optionally, COPY accepts a flag --from=<name|index> that can be used to set the source location to a previous build stage (created with FROM .. AS <name>) instead of a build context sent by the user. The flag also accepts a numeric index assigned to any previous build stage started with a FROM instruction. If a build stage with the specified name cannot be found, an image with the same name is used instead.

COPY follows the following rules:

  • The <src> path must be inside the context of the build; you cannot COPY ../something /something, because the first step of a docker build is to send the context directory (and subdirectories) to the Docker daemon.
  • If <src> is a directory, the entire contents of the directory are copied, including filesystem metadata. The directory itself is not copied, only its contents.
  • If <src> is any other kind of file, it is copied individually along with its metadata. In this case, if <dest> ends with a trailing slash /, it is considered a directory and the contents of <src> are written at <dest>/base(<src>).
  • If multiple <src> resources are specified, either directly or due to the use of a wildcard, then <dest> must be a directory, and it must end with a slash /.
  • If <dest> does not end with a trailing slash, it is considered a regular file and the contents of <src> are written at <dest>.
  • If <dest> does not exist, it is created along with all missing directories in its path.

Although ADD and COPY are functionally similar, generally speaking, COPY is preferred because it is more transparent than ADD. COPY only supports the basic copying of local files into the container, while ADD has features (like local-only tar extraction and remote URL support) that are not immediately obvious. Consequently, the best use for ADD is local tar file auto-extraction into the image, as in ADD rootfs.tar.xz /.

If you have multiple Dockerfile steps that use different files from your context, COPY them individually rather than all at once. This ensures that each step's build cache is only invalidated (forcing the step to be re-run) if the specifically required files change. Example:

COPY requirements.txt /tmp/
RUN pip install --requirement /tmp/requirements.txt
COPY . /tmp/

This results in fewer cache invalidations for the RUN step than if you put the COPY . /tmp/ before it. Because image size matters, using ADD to fetch packages from remote URLs is strongly discouraged; you should use curl or wget instead. That way you can delete the files you no longer need after they have been extracted, and you do not have to add another layer to your image. For example, you should avoid doing things like:

ADD http://example.com/big.tar.xz /usr/src/things/
RUN tar -xJf /usr/src/things/big.tar.xz -C /usr/src/things
RUN make -C /usr/src/things all

And instead do something like:

RUN mkdir -p /usr/src/things \
    && curl -SL http://example.com/big.tar.xz \
    | tar -xJC /usr/src/things \
    && make -C /usr/src/things all

For other items (files, directories) that do not require the tar auto-extraction capability of ADD, you should always use COPY.

ENTRYPOINT

  • ENTRYPOINT ["executable", "param1", "param2"] (exec form, preferred)
  • ENTRYPOINT command param1 param2

ENTRYPOINT allows you to configure a container that will run as an executable. For example, the following starts nginx with its default content, listening on port 80: docker run -i -t --rm -p 80:80 nginx

Command-line arguments to docker run <image> will be appended after all elements in an exec form ENTRYPOINT, and will override all elements specified using CMD. This allows arguments to be passed to the entry point; for instance, docker run <image> -d passes the -d argument to the entry point. You can override the ENTRYPOINT instruction using the docker run --entrypoint flag.

The shell form prevents any CMD or run command-line arguments from being used, but it has the disadvantage that your ENTRYPOINT is started as a subcommand of /bin/sh -c, which does not pass signals. This means that the executable is not the container's PID 1 and does not receive Unix signals, so your executable will not receive a SIGTERM from docker stop <container>.

Only the last ENTRYPOINT directive in the Dockerfile file is active.

Example

You can use the exec form of ENTRYPOINT to set fairly stable default commands and arguments, and then use either form of CMD to set additional defaults that are more likely to change.

FROM ubuntu
ENTRYPOINT ["top", "-b"]
CMD ["-c"]

When you run the container, you can see that top is the only process: docker run -it --rm --name test top -H

To examine the result further, you can use docker exec:

$ docker exec -it test ps aux
USER  PID %CPU %MEM   VSZ  RSS TTY  STAT START  TIME COMMAND
root    1  2.6  0.1 19752 2352 ?    Ss+  08:24  0:00 top -b -H
root    7  0.0  0.1 15572 2164 ?    R+   08:25  0:00 ps aux

And you can gracefully request top to shut down using docker stop test.

The following example shows using ENTRYPOINT to run Apache in the foreground:

FROM debian:stable
RUN apt-get update && apt-get install -y --force-yes apache2
EXPOSE 80 443
VOLUME ["/var/www", "/var/log/apache2", "/etc/apache2"]
ENTRYPOINT ["/usr/sbin/apache2ctl", "-D", "FOREGROUND"]

Example 2

You can specify a plain string for ENTRYPOINT and it will execute in /bin/sh -c. This form uses shell processing to substitute shell environment variables, and ignores any CMD or docker run command-line arguments. To ensure that docker stop signals a long-running ENTRYPOINT executable correctly, remember to start it with exec:

FROM ubuntu
ENTRYPOINT exec top -b

When you run this image, you see a single PID 1 process:

$ docker run -it --rm --name test top
Mem: 1704520K used, 352148K free, 0K shrd, 0K buff, 140368121167873K cached
CPU:   5% usr   0% sys   0% nic  94% idle   0% io   0% irq   0% sirq
Load average: 0.08 0.03 0.05 2/98 6
  PID  PPID USER     STAT   VSZ %VSZ %CPU COMMAND
    1     0 root     R     3164   0%   0% top -b

Which exits cleanly when docker stop is executed:

$ /usr/bin/time docker stop test
test
real	0m 0.20s
user	0m 0.02s
sys	0m 0.04s

If you forget to add exec to the beginning of your ENTRYPOINT:

FROM ubuntu
ENTRYPOINT top -b
CMD --ignored-param1


You can then run it (giving it a name so it can be stopped later):

$ docker run -it --name test top --ignored-param2
Mem: 1704184K used, 352484K free, 0K shrd, 0K buff, 140621524238337K cached
CPU:   9% usr   2% sys   0% nic  88% idle   0% io   0% irq   0% sirq
Load average: 0.01 0.02 0.05 2/101 7
  PID  PPID USER     STAT   VSZ %VSZ %CPU COMMAND
    1     0 root     S     3168   0%   0% /bin/sh -c top -b cmd cmd2
    7     1 root     R     3164   0%   0% top -b

You can see from the output of top that the specified ENTRYPOINT is not running as PID 1. If you then run docker stop test, the container will not exit cleanly; the stop command is forced to send a SIGKILL after the timeout:

$ docker exec -it test ps aux
PID   USER     COMMAND
    1 root     /bin/sh -c top -b cmd cmd2
    7 root     top -b
    8 root     ps aux
$ /usr/bin/time docker stop test
test
real	0m 10.19s
user	0m 0.04s
sys	0m 0.03s

Understand how the CMD and ENTRYPOINT directives interact

Both directives define the command that runs when a container starts. The following rules describe how they work together:

  • Dockerfile should specify at least one CMD or ENTRYPOINT command.
  • Define ENTRYPOINT when using containers as executables.
  • CMD should be used as a default parameter for the ENTRYPOINT command or as a method for executing ad-hoc commands in a container.
  • CMD is overridden when the container is run with an alternate parameter.
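
As a minimal sketch of these rules working together (the package, image name, and arguments below are chosen for illustration):

```dockerfile
FROM ubuntu
RUN apt-get update && apt-get install -y iputils-ping
# ENTRYPOINT fixes the executable the container always runs
ENTRYPOINT ["ping", "-c", "3"]
# CMD supplies a default argument for ENTRYPOINT
CMD ["localhost"]
```

Running docker run pingimage pings localhost by default; running docker run pingimage example.com replaces only the CMD part, so the container pings example.com instead.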

The following table shows the commands executed for different ENTRYPOINT/CMD combinations:

|                            | No ENTRYPOINT              | ENTRYPOINT exec_entry p1_entry | ENTRYPOINT ["exec_entry", "p1_entry"]          |
|----------------------------|----------------------------|--------------------------------|------------------------------------------------|
| No CMD                     | error, not allowed         | /bin/sh -c exec_entry p1_entry | exec_entry p1_entry                            |
| CMD ["exec_cmd", "p1_cmd"] | exec_cmd p1_cmd            | /bin/sh -c exec_entry p1_entry | exec_entry p1_entry exec_cmd p1_cmd            |
| CMD ["p1_cmd", "p2_cmd"]   | p1_cmd p2_cmd              | /bin/sh -c exec_entry p1_entry | exec_entry p1_entry p1_cmd p2_cmd              |
| CMD exec_cmd p1_cmd        | /bin/sh -c exec_cmd p1_cmd | /bin/sh -c exec_entry p1_entry | exec_entry p1_entry /bin/sh -c exec_cmd p1_cmd |

If CMD is defined in the base image, setting ENTRYPOINT resets CMD to an empty value. In this case, CMD must be given a value in the current image.
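
For example, assuming a hypothetical base image mybase that defines its own CMD, a child image that sets ENTRYPOINT must restate the default arguments:

```dockerfile
# mybase is a hypothetical parent image that defines its own CMD
FROM mybase
# Setting ENTRYPOINT resets the CMD inherited from mybase to an empty value,
# so the default arguments must be restated here if they are wanted
ENTRYPOINT ["/usr/local/bin/serve"]
CMD ["--port", "8080"]
```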

ENTRYPOINT is best used to set the image's main command, allowing the image to be run as if it were that command (with CMD then supplying default flags).

Let’s start with an example image for the command-line tool s3cmd:

ENTRYPOINT ["s3cmd"]
CMD ["--help"]

You can now run the following command to display the command-line help: docker run s3cmd

Or pass the right parameters to execute a command: docker run s3cmd ls s3://mybucket
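
You can also replace the ENTRYPOINT itself at run-time with the --entrypoint flag of docker run, for example to get a shell inside the image for debugging (assuming the image contains /bin/sh):

```
$ docker run -it --entrypoint /bin/sh s3cmd
```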

VOLUME

VOLUME ["/data"]

The VOLUME command creates a mount point with the specified name and marks it as holding externally mounted volumes from the native host or from other containers. The value can be a JSON array, VOLUME ["/var/log"], or a plain string with one or more paths, such as VOLUME /var/log or VOLUME /var/log /var/db.

The docker run command initializes the newly created volume with any data that exists at the location specified in the base image. For example, consider the following Dockerfile fragment:

FROM ubuntu
RUN mkdir /myvol
RUN echo "hello world" > /myvol/greeting
VOLUME /myvol

This Dockerfile produces an image that causes docker run to create a new mount point at /myvol and copy the greeting file into the newly created volume.
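
Building and running that image shows the volume being initialized (the tag volume-demo is illustrative):

```
$ docker build -t volume-demo .
$ docker run --rm volume-demo cat /myvol/greeting
hello world
```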

Notes on specifying volumes: pay special attention to the following points about volumes in a Dockerfile:

  • Volumes on Windows-based containers: When using Windows-based containers, the destination of a volume inside the container must be one of the following: a non-existing or empty directory, or a drive other than C:.
  • Change volumes from Dockerfile: If any build step changes the data in the volumes after declaration, those changes are discarded.
  • JSON format: The list is parsed as a JSON array. You must enclose words in double quotes (") rather than single quotes (').
  • The host directory is declared at container run-time: The host directory (the mount point) is, by its nature, host-dependent. This preserves image portability, since a given host directory is not guaranteed to exist on all hosts. For this reason, you cannot mount a host directory from within a Dockerfile: the VOLUME command does not support specifying a host-dir parameter. You must specify the mount point when you create or run the container.
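
Since the mount point can only be bound at run-time, a host directory (or a named volume) is supplied with the -v flag of docker run; the paths and image name below are illustrative:

```
$ docker run -v /host/logs:/var/log myimage   # bind-mount a host directory
$ docker run -v mydata:/var/db myimage        # mount a named volume
```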

The VOLUME directive applies to exposing any database storage areas, configuration stores, or files/folders created by the Docker container. It is strongly recommended that you use VOLUME for any mutable and/or user-maintainable portion of an image.

USER

USER <user>[:<group>] or
USER <UID>[:<GID>]

The USER directive sets the user name (or UID) and, optionally, the user group (or GID) to use when running the image and for any RUN, CMD, and ENTRYPOINT directives that follow it in the Dockerfile.

On Windows, if the user is not a built-in account, you must create it first. This can be done with the net user command called as part of the Dockerfile:

    FROM microsoft/windowsservercore
    # Create Windows user in the container
    RUN net user /add patrick
    # Set it for subsequent commands
    USER patrick
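
On Linux, the equivalent is to create a non-root user and group in the image before switching to it; a minimal sketch, with the user and group names chosen for illustration:

```dockerfile
FROM ubuntu
# Create a system group and user so the container does not run as root
RUN groupadd -r appgroup && useradd --no-log-init -r -g appgroup appuser
# All subsequent RUN, CMD, and ENTRYPOINT instructions run as this user
USER appuser
```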

WORKDIR

WORKDIR /path/to/workdir

The WORKDIR directive sets the working directory for any of the RUN, CMD, ENTRYPOINT, COPY, and ADD directives in Dockerfile. If the WORKDIR directory does not exist, it will be created even if it is not used in any subsequent Dockerfile directives. The WORKDIR directive can be used multiple times in a Dockerfile. If a relative path is provided, it will be relative to the path of the previous WORKDIR directive. Such as:

WORKDIR /a
WORKDIR b
WORKDIR c
RUN pwd

The final output of the pwd command in this Dockerfile is /a/b/c. The WORKDIR directive resolves environment variables previously set with ENV; you can only use environment variables explicitly set in the Dockerfile. For example:

ENV DIRPATH /path
WORKDIR $DIRPATH/$DIRNAME
RUN pwd

The output of the last pwd command in this Dockerfile will be /path/$DIRNAME. For clarity and reliability, you should always use absolute paths with WORKDIR. Also, use WORKDIR instead of instructions like RUN cd … && do-something, which are hard to read, troubleshoot, and maintain.
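
To sketch that last point, compare the two styles (the paths are illustrative):

```dockerfile
# Hard to read and maintain: the cd only affects this single RUN instruction
RUN cd /app/src && make all

# Clearer: WORKDIR persists for every subsequent instruction
WORKDIR /app/src
RUN make all
```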

ONBUILD

An ONBUILD command executes after the current Dockerfile build completes. It runs in any child image derived FROM the current image. Think of the ONBUILD command as an instruction the parent Dockerfile gives to the child Dockerfile.

Docker executes ONBUILD commands before any command in the child Dockerfile.

ONBUILD is useful for images that will be built from a given image. For example, you can use ONBUILD as a language stack image to build arbitrary user software written in that language in a Dockerfile, as shown in the ONBUILD variant of Ruby.
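
A minimal sketch of such a language-stack image; the tag mylang:onbuild and the build step are illustrative:

```dockerfile
# Parent image, built and tagged as e.g. mylang:onbuild (hypothetical)
FROM ubuntu
ONBUILD COPY . /app
ONBUILD RUN make -C /app
```

A child Dockerfile then needs only FROM mylang:onbuild: the ONBUILD COPY and RUN triggers fire automatically at the start of the child's build.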

Images built with ONBUILD should get a separate tag, for example: ruby:1.9-onbuild or ruby:2.0-onbuild.

Be careful when putting ADD or COPY in ONBUILD. If the new build's context is missing the resource being added, the "onbuild" image fails catastrophically. As mentioned above, using a separate tag helps mitigate this by letting the Dockerfile author make a deliberate choice.