Takeaway

Kubernetes starts with containerization, and containerization starts with the Dockerfile. This article will show you how to write an elegant Dockerfile.

The main contents of the article include:

  • Docker container

  • Dockerfile

  • Use multi-stage builds

Thanks to the company for providing the machine resources and time that made this practice possible, and to the projects and people who kept pushing this topic forward.

Docker container

1.1 Characteristics of containers

We all know that a container is a standard software unit that has the following characteristics:

  • Run anywhere: Containers can package code with configuration files and associated dependent libraries to ensure that it runs consistently in any environment.

  • High resource utilization: The container provides process-level isolation so that CPU and memory utilization can be fine-tuned to better utilize the computing resources of the server.

  • Fast scaling: Each container can be run as a separate process and can share the system resources of the underlying operating system, making it faster to start and stop containers.

1.2 Docker containers

The mainstream container engines on the market today include Docker, Rocket/rkt, OpenVZ/Odin, and so on, and Docker is by far the most widely used.

A Docker container is a set of processes isolated from the rest of the system, and all the files those processes need are provided by an image. From development through testing to production, Linux containers give us portability and consistency: containers run far faster than pipelines that repeatedly rebuild traditional test environments, and they can be deployed on mainstream cloud platforms (PaaS) as well as on-premises systems. Docker containers neatly solve the classic embarrassment of "it works fine in the development environment, but everything crashes as soon as it goes live."

Docker container features:

  • Lightweight: Containers provide process-level resource isolation, while VMs isolate at the operating-system level. A Docker container therefore saves more resource overhead than a virtual machine, because it no longer needs a guest OS.

  • Fast: Containers can be created and started in seconds or even milliseconds, because no guest OS has to boot.

  • Portability: Docker packages an application together with its dependent libraries and runtime environment into a container image that can run on different platforms.

  • Automation: Container orchestration tools in the container ecosystem (e.g. Kubernetes) help automate the management of containers.

Dockerfile

A Dockerfile is a text document that contains all the commands a user could run on the command line to assemble an image. With docker build, users can create an automated build that executes these instructions one after another.

By writing a Dockerfile to build the image, we give development and test teams a basically consistent environment, which improves their efficiency, removes the worry of environments drifting apart, and lets operations manage our images more conveniently.

Dockerfile syntax is very simple; only 11 instructions see common use.
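To give a taste of those instructions, here is a minimal sketch of a Dockerfile; the image name, paths, and app.py are illustrative, not from any real project:

```dockerfile
# FROM: every Dockerfile starts from a base image
FROM ubuntu:16.04
# ENV: set an environment variable, visible at build time and runtime
ENV APP_HOME=/app
# WORKDIR: working directory for the instructions that follow
WORKDIR ${APP_HOME}
# COPY: copy files from the build context into the image
COPY . ${APP_HOME}
# RUN: execute a command at build time, producing a new layer
RUN apt-get update && apt-get install -y python-pip
# EXPOSE: document the port the containerized app listens on
EXPOSE 8080
# CMD: default command when a container starts from this image
CMD ["python", "app.py"]
```

Such a file is built with docker build -t myapp . from the directory containing the Dockerfile.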

2.1 Write elegant Dockerfile

Writing an elegant Dockerfile requires attention to the following points:

  • The Dockerfile file should not be too long. The more layers you have, the bigger the image will be.

  • Build images that do not contain unnecessary content, such as logs, temporary installation files, etc.

  • Try to use the runtime base image instead of putting the build-time process into the runtime Dockerfile.

Just remember the above three points to write a good Dockerfile.

To help you understand, let’s use two Dockerfile instances for a simple comparison:

FROM ubuntu:16.04
RUN apt-get update
RUN apt-get install -y apt-utils libjpeg-dev \
    python-pip
RUN pip install --upgrade pip
RUN easy_install -U setuptools
RUN apt-get clean

FROM ubuntu:16.04
RUN apt-get update && apt-get install -y apt-utils \
    libjpeg-dev python-pip \
    && pip install --upgrade pip \
    && easy_install -U setuptools \
    && apt-get clean

Look at the first Dockerfile: at first glance it appears well organized and clearly structured, while the second is compact and not as easy to read.

  • The benefit of the first Dockerfile is that when the process in one layer fails, only that layer and the ones after it are rebuilt once the problem is fixed; layers that already succeeded are reused from cache. This greatly reduces the time of the next build, but because there are more layers, the image takes up more space.

  • The second Dockerfile does everything in a single layer, which reduces the image footprint to some extent. But if one component fails to compile while producing the base image, the whole layer is rebuilt from scratch after the fix: components that had already compiled in that layer must compile all over again, which costs more time.

The following listing shows the size of the images built from the two Dockerfiles:

$ docker images | grep ubuntu
REPOSITORY   TAG       IMAGE ID       CREATED     SIZE
ubuntu       16.04     9361ce633ff1   1 day ago   422MB
ubuntu       16.04-1   3f5b979df1a9   1 day ago   412MB

Hmm… the difference does not look dramatic here, but if your Dockerfile is very long you should consider reducing the number of layers, because a Dockerfile can have at most 127 layers.
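A related caveat when thinking about layers: deleting files in a later RUN instruction does not shrink the image, because the files still live in the earlier layer. Cleanup only helps if it happens in the same RUN that created the files, as in this sketch:

```dockerfile
FROM ubuntu:16.04
# Install and clean up in one layer, so the apt cache never lands in the image
RUN apt-get update \
    && apt-get install -y --no-install-recommends python-pip \
    && rm -rf /var/lib/apt/lists/*
```

Had the rm -rf been a separate RUN, the package lists would remain baked into the install layer and the image would be no smaller.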

Use multi-stage builds

Docker has supported multi-stage builds since version 17.05. To keep images compact, we package them with multi-stage builds. Before multi-stage builds existed, we typically used one or more Dockerfiles to build images.

3.1 Single-file build

Before multi-stage builds, a single-file build put the entire process (dependencies, compilation, testing, and packaging) into one Dockerfile:

FROM golang:1.11.4-alpine3.8 AS build-env
ENV GO111MODULE=off
ENV GO15VENDOREXPERIMENT=1
ENV BUILDPATH=github.com/lattecake/hello
RUN mkdir -p /go/src/${BUILDPATH}
COPY ./ /go/src/${BUILDPATH}
RUN cd /go/src/${BUILDPATH} && CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go install -v
CMD ["/go/bin/hello"]

There are some problems with this approach:

  • Dockerfiles get really long, and maintainability drops exponentially as more and more things are needed;

  • If there are too many image layers, the volume of the image gradually increases and the deployment becomes slower and slower.

  • The code is at risk of leakage.

 

Take Golang as an example: it does not depend on any environment at runtime; it only needs an environment to compile in, and that environment plays no role when the program runs. After compilation, neither the source code nor the compiler has any purpose in the image.

As you can see from the table above, the single-file build ended up taking up 312MB of space.

3.2 Multi-file build

Was there a good solution before multi-stage builds? Yes: either use multiple files to build, or install the compiler on the build server. We do not recommend the latter, because installing compilers on the build server makes it very bloated; it has to accommodate different language versions and dependencies, it is error-prone, and maintenance costs are high. So we will only cover multi-file builds.

A multi-file build essentially uses several Dockerfiles glued together by a script: Dockerfile.run, Dockerfile.build, and build.sh.

  • Dockerfile.run describes only the components the program needs at runtime; it contains the most minimal set of libraries;

  • Dockerfile.build is just for building.

  • build.sh glues Dockerfile.build and Dockerfile.run together: it builds the build image, copies the compiled binary out of it, builds the runtime image, and cleans up.

Dockerfile.build

FROM golang:1.11.4-alpine3.8 AS build-env
ENV GO111MODULE=off
ENV GO15VENDOREXPERIMENT=1
ENV BUILDPATH=github.com/lattecake/hello
RUN mkdir -p /go/src/${BUILDPATH}
COPY ./ /go/src/${BUILDPATH}
RUN cd /go/src/${BUILDPATH} && CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go install -v

Dockerfile.run

FROM alpine:latest
RUN apk --no-cache add ca-certificates
WORKDIR /root
ADD hello .
CMD ["./hello"]

Build.sh

#!/bin/sh
docker build --rm -t hello:build -f Dockerfile.build .
docker create --name extract hello:build
docker cp extract:/go/bin/hello ./hello
docker rm -f extract
docker build --no-cache --rm -t hello:run -f Dockerfile.run .
rm -rf ./hello

Execute build.sh to finish building the project.

As you can see from the table above, a multi-file build greatly reduces the footprint of the image, but it has three files to manage and is more expensive to maintain.

3.3 Multi-stage build

Finally, let’s look at the much anticipated multi-stage build.

A complete multi-stage build needs only one Dockerfile containing multiple FROM instructions. Each FROM can use a different base image, and each one begins a new build stage; we can copy artifacts from one stage into another, and only the result of the final stage remains in the finished image. This neatly solves the problems described earlier while requiring just a single Dockerfile. Note that Docker must be version 17.05 or above. Now let's walk through the specifics.

In a Dockerfile you can use AS to give a stage an alias, such as "build-env":

FROM golang:1.11.2-alpine3.8 AS build-env

Then copy files from the image of the previous stage, or from any image:

COPY --from=build-env /go/bin/hello /usr/bin/hello
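As a side note, COPY --from also accepts an image name instead of a stage alias, so you can lift a file out of any published image without defining a build stage at all. A small sketch using the official nginx image:

```dockerfile
FROM alpine:latest
# Pulls the stock config straight out of the nginx image
COPY --from=nginx:latest /etc/nginx/nginx.conf /nginx.conf
```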

Look at a simple example:

FROM golang:1.11.4-alpine3.8 AS build-env
ENV GO111MODULE=off
ENV GO15VENDOREXPERIMENT=1
ENV GITPATH=github.com/lattecake/hello
RUN mkdir -p /go/src/${GITPATH}
COPY ./ /go/src/${GITPATH}
RUN cd /go/src/${GITPATH} && CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go install -v

FROM alpine:latest
RUN apk --no-cache add ca-certificates
COPY --from=build-env /go/bin/hello /root/hello
WORKDIR /root
CMD ["/root/hello"]

Run docker build --rm -t hello3 . and then docker images shows the size of the resulting image:

Multi-stage builds bring us a lot of convenience. The biggest advantage is that they reduce the maintenance burden of Dockerfile while keeping the running image small enough. Therefore, we highly recommend using multi-stage builds to package your code into Docker images.
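Going one step further: because CGO_ENABLED=0 produces a fully static binary, the final stage can even start from the empty scratch image. This is a hedged variant of the example above, not something from the original project:

```dockerfile
# Build stage: same idea as the build-env stage above
FROM golang:1.11.4-alpine3.8 AS build-env
ENV GITPATH=github.com/lattecake/hello
COPY ./ /go/src/${GITPATH}
RUN cd /go/src/${GITPATH} && CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go install -v

# Final stage: scratch contains nothing but our binary
FROM scratch
COPY --from=build-env /go/bin/hello /hello
CMD ["/hello"]
```

You can also stop the build at a named stage with docker build --target build-env, which is handy for producing a debug image from the same Dockerfile. Note that scratch has no shell and no ca-certificates, so this variant only suits programs that need neither.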
