Top Image's panoramic business security risk control system is built on a new-generation risk control architecture and uses Docker technology to deploy both private and public clouds. This article shares the advantages Docker container technology brings to private deployments of the Top Image risk control system, as well as practical applications of Docker container technology inside Top Image.

Docker container Technology Overview

Docker is an open-source container engine. It treats the container as the basic unit of resource partitioning and scheduling, encapsulates the runtime environment of a piece of software, and serves as a platform for rapidly building, publishing, and running distributed applications.

A Docker container is essentially a process on the host. It achieves resource isolation through namespaces, resource limiting through cgroups, and efficient file operations through copy-on-write. A container is an abstraction at the application layer that packages code and dependencies together. Multiple containers can run on the same machine, sharing the operating system kernel, each running as an isolated process in user space.
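These mechanisms can be observed directly. A minimal sketch, assuming a host with Docker installed and the public nginx image available (the container name demo is arbitrary):

```shell
# Start an nginx container with a cgroups-enforced memory limit
docker run -d --name demo --memory 256m nginx

# The container is just a process on the host:
docker top demo

# Inside the container's own PID namespace, nginx runs as PID 1:
docker exec demo cat /proc/1/comm
```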

(Figure: Docker architecture, from the Docker website)

The Docker engine consists of the Docker daemon (the dockerd command), a REST API, and the Docker client (the docker command). Docker uses a client/server architecture: the Docker client communicates with the Docker daemon, which is responsible for building, running, and distributing Docker containers. Client and daemon can run on the same system, or a client can connect to a remote daemon; they communicate via the REST API over a UNIX socket or a network interface. Docker containers are based on open standards and run on all major Linux distributions, on Microsoft Windows, and on any infrastructure, including virtual machines, bare-metal machines, and the cloud.
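The client/server split is easy to see: the same REST endpoint the CLI uses can be queried over the daemon's UNIX socket. A sketch, assuming curl is installed and a local daemon is listening on the default socket path:

```shell
# Query the daemon's REST API directly over the default UNIX socket
curl --unix-socket /var/run/docker.sock http://localhost/version

# The docker CLI issues the same kind of REST call under the hood
docker version
```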

Technical advantages

Docker’s ability to separate applications from the infrastructure makes it possible to deliver software quickly. With Docker, you can manage infrastructure the same way you manage applications. The host does not need to care about the dependencies a particular container needs in order to run: as long as it can run Docker, it can run any Docker container. Containers isolate software from its surroundings and help reduce conflicts between teams running different software on the same infrastructure.

Docker containers standardize the application environment. We can build images for different applications and for their different versions, enabling continuous integration, rapid delivery, and rapid update and rollback of applications. Docker images are stored in layers: different containers share the basic file system layers and add only their own change layer on top, which greatly improves storage efficiency. With a sensible image-building approach, the storage space required does not grow linearly with the number of images. On Docker Hub, Docker's official image registry, more than 100,000 images are available. With images managed centrally in a registry, we only need to pull an image and issue a run command to quickly build the required environment and deploy the application. Using the Dockerfile mechanism Docker provides, we are free to build our own images on top of a base image.
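A minimal Dockerfile sketch of building on a base image; the application jar name and the openjdk base tag are illustrative assumptions, not the article's actual setup:

```dockerfile
# Each instruction produces a layer; images built from the same base
# share those base layers on disk, so only the change layer is new.
FROM openjdk:8-jre
WORKDIR /app
COPY app.jar /app/app.jar
EXPOSE 8080
CMD ["java", "-jar", "app.jar"]
```

Running `docker build -t myapp:1.0 .` in the directory containing this file produces the image.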

Compared with earlier virtual-machine-based deployment, Docker containers are more lightweight and portable, and external dependencies no longer need separate consideration. Just as Java's “Write once, run anywhere” relies on the JVM to shield the differences between platforms, Docker's “Build once, run anywhere; configure once, run anything” reflects its greater convenience and lower deployment cost: it effectively shields the differences between operating systems, and in mixed deployments it shields the impact of other applications, indirectly ensuring high availability and improving resource utilization.

Docker container orchestration and cluster management

As Docker containers multiply and application dependencies become complex enough to require multiple components, container orchestration comes into play. Orchestration is a broad concept covering container scheduling, cluster management, and possibly the provisioning and configuration of additional hosts. Docker provides the Compose and Swarm tools for quickly orchestrating container clusters. Compose is an application orchestrator that defines and runs multi-container, multi-service application stacks, including Swarm configurations: a YAML file configures the application's services, and a single command creates and starts all of the services in that configuration for rapid deployment.
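A sketch of such a YAML configuration; the service names and images here are illustrative placeholders:

```yaml
version: "3"
services:
  web:
    image: nginx:latest
    ports:
      - "80:80"
  cache:
    image: redis:latest
```

Running `docker-compose up -d` then creates and starts both services in one step.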

As applications scale out to multiple hosts, managing each host system and abstracting away the complexity of the underlying platform becomes more challenging. Swarm is a container cluster management tool that makes it easy to deploy container services across hosts. A single Docker engine cannot manage containers across hosts on its own, since a client connects to only one daemon; Swarm abstracts the Docker engines on multiple hosts into one virtual Docker host. Within a cluster, an application or component discovers its runtime environment and information about other applications or components through service discovery tools such as Consul, Etcd, and ZooKeeper. Docker 1.12 and later have SwarmKit built in. SwarmKit is an upgrade of Swarm with the key/value store built in, and it distributes tasks across machines by forming a Swarm cluster. A Swarm node plays one of two roles: manager or worker. Manager nodes use the Raft consensus algorithm to maintain a consistent global cluster state; combined with the Compose YML v3 syntax, they make it easy to manage the configuration of all applications and effectively implement cross-host container orchestration and cluster management.
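A sketch of the Swarm workflow on Docker 1.12+; the address and stack name are placeholders:

```shell
# On the manager node: initialize the cluster
docker swarm init --advertise-addr 10.0.0.1
# This prints a "docker swarm join --token ..." command
# to be run on each worker node.

# Deploy a Compose v3 file as a stack across the cluster
docker stack deploy -c docker-compose.yml mystack

# List the services the manager is scheduling
docker service ls
```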

When deploying the risk control system privately, Top Image had to consider not only business and user data security, but also infrastructure dependencies and isolation, along with rapid deployment, delivery, and updates. Every component and service must be able to scale quickly and elastically: meeting the needs of a small-scale business taking off, while also adapting to business growth and scaling dynamically to sustain high QPS.

Application of Docker container technology inside Top Image

At present, Docker container technology is deployed at scale inside Top Image, and all applications are deployed, delivered, and updated through Docker containers. Here are a few simple practical examples:

1. In a Docker container we usually run only one application. When orchestrating containers, different applications take different amounts of time to start, and the time depends on machine performance. Parameters such as depends_on and links can control the order in which services start, but they do not reveal whether the application inside a container has finished starting. When one service hard-depends on another, control the interval between container startups, build a wait into the application's start command, or orchestrate the two services to start separately.
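One way to express such a hard dependency is a health check combined with depends_on. A sketch using the Compose 2.1 file format, where the condition form is available; the images and password are illustrative:

```yaml
version: "2.1"
services:
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      interval: 5s
      retries: 10
  app:
    image: myapp:latest
    depends_on:
      db:
        condition: service_healthy   # start app only once db reports healthy
```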

2. Compose is also useful for building images. The build parameter in the YML file specifies the path to a Dockerfile, and the image parameter specifies the image name. docker-compose provides commands such as build, push, and images to batch-build images for all applications, or a service name can be given to build the image for a single service. The YML file also supports environment variables, which can be adjusted dynamically by setting them with the Linux export command. The docker-compose config command checks the YML file for correctness and previews the final file contents.
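A sketch combining build, image naming, and environment-variable substitution; the path, registry, and names are illustrative:

```yaml
services:
  app:
    build: ./app                             # directory containing the Dockerfile
    image: registry.example.com/app:${TAG}   # TAG is taken from the environment
```

After `export TAG=1.0.0`, `docker-compose config` previews the resolved file, and `docker-compose build && docker-compose push` builds and publishes the image.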

3. A Docker container's log output is the console log, stored under /var/lib/docker/containers in a directory named after the container ID. In most cases we only need to write the necessary log content to a file and mount that file onto the host. docker run provides the --log-opt parameter to control log size, and Compose provides the corresponding logging max-size option.
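The Compose variant looks like this (the image name is illustrative):

```yaml
services:
  app:
    image: myapp:latest
    logging:
      driver: json-file
      options:
        max-size: "10m"   # rotate once a log file reaches 10 MB
        max-file: "3"     # keep at most three rotated files
```

The docker run equivalent is `--log-opt max-size=10m --log-opt max-file=3`.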

4. Docker provides network modes such as bridge, host, overlay, and container. In practice, containers often need to communicate across hosts; choosing the appropriate network mode and distributing application deployment sensibly can improve application performance.
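For the cross-host case, an attachable overlay network is one option. A sketch, assuming swarm mode is active; the image names are placeholders:

```shell
# Create an overlay network that standalone containers may also join
docker network create -d overlay --attachable app-net

# Containers on the network reach each other by name via built-in DNS
docker run -d --name svc-a --network app-net image-a
docker run -d --name svc-b --network app-net image-b
docker exec svc-b ping -c 1 svc-a
```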

5. We use Jenkins to establish a continuous-integration environment: code is built automatically, applications are quickly packaged into images and deployed automatically, build results are sent to Sonar to show unit-test coverage and basic code defect detection, failed builds notify the relevant developers by email, and successful images are published to the image registry. With a private Docker registry, application releases and updates pull the image distribution from the registry; containers of different versions are given different names, and old-version containers are retained for timely rollback.
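A sketch of this release-and-rollback convention; the registry, image, and container names are placeholders:

```shell
# Build and publish the new version
docker build -t registry.example.com/app:v2 .
docker push registry.example.com/app:v2

# Start the new version under a version-specific name;
# the old app-v1 container is stopped but kept on disk
docker run -d --name app-v2 -p 8080:8080 registry.example.com/app:v2

# Rollback: stop v2 and restart the retained v1 container
docker stop app-v2 && docker start app-v1
```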

www.dingxiang-inc.com/blog