Although “microservices architecture” has been popular for a long time, it keeps growing in importance and shows no sign of fading. Companies and technologists share plenty of knowledge and experience about it, but the articles on the web tend to be either too shallow or too deep in the technical weeds. Few authors both command the technology and cater to newcomers by explaining microservices architecture in plain language from scratch. So we invited Su Huai from Qingliuyun to work with InfoQ on a series called Re: Microservices Architecture from Scratch, which opens the way for technical people not yet familiar with the field and also helps microservices veterans revisit the fundamentals and pick up something new.

This is the fifth article in the series, covering how to support microservices with Docker.

Articles in this series

  • Re: Revisiting microservices architecture
  • Experience a microservices architecture in minutes
  • Development and governance of microservice architecture APIs
  • How to ensure data consistency in a microservices architecture

Coincidentally, Docker released version 0.1 in March 2013, one year before Martin Fowler published his definition of microservices architecture. After several years of development, the two initially unrelated technologies have finally come together: using Docker to support the development and operation of microservices architectures.

We could even say that without the vigorous development of Docker, microservices architecture would never have taken root and flourished. SoundCloud’s Phil Calcado recently tweeted this:

“Microservices came to be because of containers” is a myth. We were already doing Microservices on Fvcking Weblogic.

At first I wondered why he would say this, but then I noticed the “Fvcking” in front of “Weblogic”. In an interview with InfoQ back in 2015, he admitted there had been a better way to build the pipeline for a microservices architecture: “That was the first time I realized that we had screwed up. There were simple, incremental ways to solve the problem of rapid build of development environments, basic monitoring, and rapid deployment of applications without having to build your own systems.” See:

www.infoq.com/cn/news/201…

What do you think?

But enough preamble. In previous articles we covered the strengths and weaknesses of microservices architecture, when to choose it, which technologies and components to use to develop microservices, how microservice APIs are governed, and more.

We also mentioned a Docker-based microservice platform for automating microservice development and operations, but did not go into depth. This article therefore discusses in detail how to use Docker to improve the development and operations efficiency of microservices.

This article is for readers who want to implement a lightweight microservices architecture with the help of Docker but don’t know where to start. I will continue the style of the previous articles and cover the following topics in plain language.

  • Docker core concepts

This section describes the concepts of Docker images, containers, and image repositories.

  • Why use Docker to implement microservices architecture

This section explains the advantages of using Docker to implement a microservices architecture.

  • Must-know Docker knowledge

This section describes image construction, container creation, container orchestration, cluster management, file storage, container network, container monitoring, and container logs.

  • Run a quick microservices architecture Demo

The demo includes a registry (Eureka), call-chain tracing (DCTrace), Service A, Service B, a gateway (Zuul), and a demo portal.

Docker core concepts

Image

We can think of it as: image = operating system + runtime environment + application. For example, we can build an image out of the CentOS 7 operating system, a JVM, and a Java application. Instead of delivering ZIP or WAR packages, we deliver images. Docker image technology makes the application’s runtime environment independent of the host environment.

Container

A container is a running instance of an image. You can quickly create one or more containers from an image with the docker run command.

Image repository (Registry)

During the iterations of a project or product, each version of the application is stored in an image repository, much as source code lives in a code repository or JAR files in a Maven repository.
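
For example, publishing a new application version to a private repository takes two commands; a sketch with a placeholder registry address and image name:

    # Tag the locally built image with the registry address and a version number
    docker tag myapp:1.1 registry.example.com/myteam/myapp:1.1

    # Push it to the image repository, where it sits alongside earlier versions
    docker push registry.example.com/myteam/myapp:1.1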

Why use Docker to implement microservices architecture

As mentioned in the earlier article on quickly building a microservices architecture, we need automated build and monitoring tools to tame the operational difficulty of a microservices architecture. Due to space constraints, we did not get into microservice operations there. So here we take a detailed look at what Docker brings to microservices development and operations.

Environment dependency isolation

We often run into the phenomenon of “the app works fine on my machine but breaks in production”. Docker isolates the application’s environment requirements: the underlying libraries and components the application depends on are baked into the image, ensuring consistency across development, testing, and production environments.

Computing Resource Isolation

Before Docker, it was very difficult to deploy multiple homogeneous, let alone heterogeneous, applications on the same machine: different applications might need conflicting versions of underlying libraries, and they might fight over computing resources such as CPU and memory. Docker uses Linux namespaces, control groups (cgroups), union file systems, and the container format to isolate resources such as CPU, memory, and I/O, ensuring that multiple applications on the same host do not steal resources from one another.

Higher computing resource utilization

Under a microservices architecture, the system is broken up from a single program into a number of independently deployed programs, each in its own web container, which inevitably increases the demand for computing resources. Docker can help offset this.

Thanks to the resource isolation described above, we can deploy many applications on one host, squeezing computing resources out of every corner of the machine. With Docker, we can improve computing resource utilization by a factor of 5 to 10.

For example, on a 4-core, 8 GB host, if an application actually needs only 1 GB of memory, we can deploy seven similar applications on the same host (leaving 1 GB for the operating system). For our company, this increase in utilization was particularly noticeable in development and test environments.
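
Such packing is dependable because each container’s share can be capped explicitly. A minimal sketch with illustrative values (the --cpus flag requires Docker 1.13 or later; the image name is hypothetical):

    # Cap this instance at 1 GB of memory and half a CPU core, so that
    # seven such applications fit comfortably on a 4-core, 8 GB host
    docker run -d -m 1g --cpus="0.5" --name app1 myapp:1.0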

Convenient migration

Before Docker, a release deliverable was a package plus an environment-configuration document. Now the deliverable is a Docker image that already contains the latest package and any runtime-environment changes. At go-live, a single command deploys the latest version. For anyone who has to release in the middle of the night, this is an absolute lifesaver.

Easier version management

Previously, version management mostly meant managing source code, such as opening a branch or adding a tag. Now a version is an image containing both the runtime environment and the package. If a release fails, you can quickly roll back to the previous version.

Orchestration support

In a microservices architecture, a system consists of multiple packages with dependencies between them. Docker orchestration tools can manage these dependencies and bring up the entire system with one click.

Faster operation and maintenance

Docker makes it easy to manage not only application systems but also middleware such as databases, Redis, and message queues. Beyond quickly creating and starting this middleware, we can bake configurations optimized for our system’s requirements into the images. What gets handed over is no longer a base installer plus a pile of configuration instructions but a standardized image.

The environment can be rebuilt

Previously, a runtime environment was a base environment plus a stack of configuration documents, which could drift from the real environment if managed carelessly. With Docker, the runtime environment is created from a Dockerfile: every change to the environment is a Dockerfile instruction, so building the environment becomes programmatic and standardized. The required runtime environment can thus be rebuilt quickly, further ensuring consistency across development, testing, and production.

As of now, Docker supports almost every development stack except .NET. As for .NET Core, version 1.0 was released in June 2016 and version 2.0 in August 2017, so it’s fair to say Microsoft is still pushing hard. I just haven’t heard of anyone around me moving .NET projects onto .NET Core; perhaps it needs more time to settle. We did build a .NET Core based demo and put it in the Qingliuyun image repository, so stay tuned. If any of you are using .NET Core in production, please feel free to comment.

Must-know Docker knowledge

Docker’s slogan is “Build, Ship and Run Any App, Anywhere”: containers and images make DevOps easier. Docker’s real promise, however, lies in supporting distributed, service-oriented design: a set of services that can be independently developed, deployed, and scaled, ensuring both business flexibility and stability.

At present, AWS, Microsoft, Alibaba Cloud, IBM, Red Hat, VMware, Huawei, Intel, and the other major public and private cloud providers are all investing heavily in Docker, which amounts to an acknowledgment of this trend.

To build a microservices architecture with Docker, we need to understand some essential Docker knowledge: image building, container creation, container orchestration, cluster management, file storage, container networking, container monitoring, and container logs.

Take a microservice system with components A, B, and C as an example. We use a continuous integration tool (such as Jenkins) to build the images and push them to an image repository (such as Docker Registry or Harbor), then create and start the containers with an orchestration tool such as Docker Compose.

Once the containers start, components A, B, and C start with them. At this point we need to consider how each component’s data files are persisted, how components spread across different hosts communicate over the network (Docker containers cannot communicate across hosts by default), how container resource usage is monitored, how container logs are viewed, and so on.

Below is a brief introduction to each of these topics.

Image building

At its simplest, a single command pulls a specific version of an image from a registry (similar to a code or Maven repository) to the local repository. For example, docker pull mysql:5.6 pulls the MySQL image whose version tag is 5.6.

In addition to pulling ready-made images from an image repository, you can also use a Dockerfile to build custom images.

Because access to Docker Hub from within China is slow, pulling images can take a long time. We can pull from a domestic image repository, configure Alibaba’s registry mirror for acceleration (see the sketch after this list), or build our own registry:

  • Netease Image Center: c.163.com/hub#/m/home…
  • Alibaba Developer Center: dev.aliyun.com/
  • Self-built registry: Docker Registry or Harbor
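
For the Alibaba accelerator mentioned above, the mirror address goes into the Docker daemon configuration file /etc/docker/daemon.json. A minimal sketch, assuming Linux and using a placeholder endpoint (Alibaba assigns each account its own):

    {
      "registry-mirrors": ["https://<your-id>.mirror.aliyuncs.com"]
    }

Restart the Docker daemon afterwards for the mirror to take effect.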

Another concept to understand about images is layers. An image consists of many layers: each Dockerfile instruction creates a new layer, but if an instruction produces the same result as it did in a previous build, Docker reuses the cached layer.

For example, suppose a Dockerfile has three steps: pull CentOS, install Tomcat, and add the application package. When building an image for a new release, CentOS and Tomcat are unchanged, so the build creates a new layer only for the “add the package” step.

This is why the second run of a Dockerfile build is much faster than the first. It is also why we should put the steps that change most often toward the end of the Dockerfile.
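
To make the layering concrete, here is a minimal Dockerfile sketch in the spirit of the CentOS + JVM + application image from the “Image” section (the jar name and paths are hypothetical, and a self-contained jar stands in for a Tomcat deployment):

    # Layer 1: base operating system, reused from cache on every rebuild
    FROM centos:7

    # Layer 2: runtime environment (OpenJDK), cached while this line is unchanged
    RUN yum install -y java-1.8.0-openjdk && yum clean all

    # Layer 3: the application package, the only thing that changes per release,
    # so it is deliberately the last step
    COPY target/myapp.jar /opt/myapp.jar

    EXPOSE 8080
    CMD ["java", "-jar", "/opt/myapp.jar"]

Rebuilding with docker build -t myapp:2.0 . then reuses the cached OS and JVM layers and rebuilds only the final layer when the jar changes.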

Container creation

If an image is analogous to a Java class, a container is an object instantiated from that class. Once the image exists, a single command creates a container instance from it:

    docker run -d --name=helloworld helloworld:2.0
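
A few companion commands for working with the container afterwards:

    docker ps                 # list running containers
    docker logs helloworld    # view the container's console output
    docker stop helloworld    # stop the container
    docker rm helloworld      # remove the stopped container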

Container orchestration

In a microservices architecture, a system has a great many components: a user center, configuration center, registry, business components, database, cache service, and so on. If we want to create and start all of them with one click, we need a container orchestration tool, such as Google’s Kubernetes or Docker’s native Docker Compose. Kubernetes offers more advanced features, while Docker Compose is easier to pick up.

Below is a code example that creates Spring Cloud Eureka and MySQL using Docker Compose; we will come back to this in the demo later.
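
A minimal sketch of such a docker-compose.yml (the Eureka image name and root password are placeholders, not the demo’s actual values):

    version: '2'
    services:
      eureka:
        image: myregistry/eureka-server:1.0    # hypothetical image name
        ports:
          - "8761:8761"
      mysql:
        image: mysql:5.6
        environment:
          MYSQL_ROOT_PASSWORD: example         # placeholder credential
        ports:
          - "3306:3306"
        volumes:
          - /data/mysql/data:/var/lib/mysql

With this file in place, docker-compose up -d creates and starts both containers in one command.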

Cluster management

In most cases, the system must be deployed on multiple hosts (multiple Docker engines) for performance and high availability, so manually choosing hosts and creating containers on each one individually is impractical.

Kubernetes and Docker Swarm both solve this problem well. Kubernetes is the more complete of the two: resource scheduling, service discovery, monitoring, scaling out and in, load balancing, canary upgrades, failover, disaster recovery, DevOps support, and more, enabling large-scale, distributed, highly available Docker clusters. Swarm’s advantage is that it comes from Docker itself and exposes an API consistent with the Docker API, which makes it simpler and more controllable to use.
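
To give a feel for Swarm’s simplicity, here is a minimal sketch of standing up a cluster and a replicated service (requires Docker 1.12 or later; the addresses, token, and image name are placeholders):

    # On the manager node: initialize the cluster
    docker swarm init --advertise-addr 192.168.1.10

    # On each worker node: join using the token printed by the command above
    docker swarm join --token <token> 192.168.1.10:2377

    # Back on the manager: run three replicas of a service across the cluster
    docker service create --name helloworld --replicas 3 -p 8080:8080 helloworld:2.0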

File storage

A prerequisite for running an application or database in a container, while still benefiting from the rapid creation, destruction, and scaling of container instances, is keeping the container stateless: the data files generated by the application or database must live outside the container. Docker provides four ways to persist application data.

  • Mount host files

When creating a container, add the -v parameter to specify a mount directory. For example, to run the MySQL image with its data directory mounted on the physical disk, run the following command:

    docker run --name=mysql -d -p 3306:3306 -v /data/mysql/data:/var/lib/mysql mysql:5.6

This command mounts the MySQL data directory in the container, /var/lib/mysql, to the /data/mysql/data directory on the host.

Note: when using the -v parameter, remember that files in the host’s mount directory hide files of the same name at that path in the container. If distributed storage is not required, this simple approach is the recommended way to persist data.

  • Add a data volume

This method also uses the -v parameter to persist data on the physical machine. The only difference from the first method is that you do not specify which host directory to mount.

For example, to run the MySQL image, execute:

    docker run --name=mysql -d -p 3306:3306 -v /var/lib/data mysql:5.6

No host directory is specified. We can use “docker inspect <container-name>” to find where the mounted directory actually lives on the physical machine.

Note: when the container is removed, the data volume is not deleted automatically. This method of data persistence is generally not recommended.

  • Use a data volume container

If data needs to be shared between containers, for example when container A needs to use container B’s data, Docker also provides data volume containers. First, create the data volume container B:

    docker run --name=containerB -v /dbstore training/postgres

Then create container A with the --volumes-from parameter to mount container B’s volumes:

    docker run --name=containerA --volumes-from containerB training/postgres

The /dbstore data directory is now accessible inside container A. This type of data persistence is also rarely used.

  • Third-party storage plug-in

Docker supports third-party storage through a plugin mechanism, which makes storage independent of the host. A wide range of storage plugins is available, such as Contiv, Convoy, Netshare, Azure, and more.

  • Contiv: Currently supports distributed storage Ceph and NFS file systems.
  • Convoy: Supports Devicemapper and NFS file systems.
  • Netshare: supports NFS 3/4, AWS EFS, and CIFS file systems.
  • Azure File Storage plugin: supports the Microsoft Azure file system.

In microservice systems, applications often need to share files across multiple hosts, so third-party storage plugins are the recommended way to expand storage resources.

In short, the simplest approach is to mount a local directory into the container and map that local directory to a file storage server, so that multiple container instances share the same file data, as sketched below.
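
A minimal sketch of that approach, assuming an NFS server is available (the server address, export, and paths are placeholders):

    # On every host: mount the shared NFS export at the same local path
    mount -t nfs 192.168.1.100:/export/appdata /data/appdata

    # Each container instance then binds the shared directory the usual way
    docker run -d -v /data/appdata:/app/data myapp:1.0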

Container network

Container networking can be regarded as the most complex part of Docker. Docker ships with three built-in network drivers: none, bridge, and host.

  • none: if the network is set to none when the container is created, the container has no network at all. This is the driver to choose when a third-party network plugin will manage the container’s networking.
  • bridge: the default driver used when no network is specified at container creation.
  • host: the container shares the host’s network namespace, so packets need no translation and network performance is essentially the same as the physical machine’s.
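
For illustration, this is how each built-in driver is selected at container creation (--network requires Docker 1.13+; older versions use --net; the images are just examples):

    # bridge: the default, no flag needed
    docker run -d --name web1 nginx

    # host: share the host's network namespace
    docker run -d --network host --name web2 nginx

    # none: no networking inside the container
    docker run -d --network none --name isolated busybox sleep 3600

    # list the networks Docker knows about
    docker network ls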

The drivers above apply to a single host only. Below are two kinds of SDN networks that communicate across hosts.

  • Calico

Calico is a pure layer-3 networking technology. It uses the Linux routing table and iptables to implement forwarding and isolation. As a result, Calico’s performance is closer to that of the physical network than other networking options. However, Calico is intrusive to the physical network.

A layer-3 network must first solve route discovery. Calico uses BGP to discover routes, but not all routing devices support BGP; in China, for example, Alibaba Cloud ECS does not support BGP route discovery between instances. For cloud environments without BGP support, Calico provides a tunnel mode, but its performance falls far short of the native mode.

  • Overlay

How containers communicate over an Overlay network: Overlay encapsulates the packets a container sends and forwards them over UDP to the destination host (say, host M); the tunnel endpoint (VTEP) on host M decapsulates the packets and delivers them to the target container. Because of this packing and unpacking, Overlay’s performance is not as good as a Calico network’s.

When a container on an Overlay network talks to external networks, it still goes through a bridge network, though not docker0; this bridge is called docker_gwbridge.

However, since Overlay became Docker’s built-in cross-host network in version 1.9, it holds an advantage in subsequent upgrades and maintenance: strong, active support from the Docker community.

When implementing microservices, if containers do not need to communicate across hosts, the bridge and host networks are recommended. If cross-host communication is required and network performance matters, Calico is recommended; if there are no special performance requirements, Overlay is recommended.

Container monitoring

Having allocated CPU, memory, and other computing resources to containers, we naturally need to monitor how those resources are used. This can be done with open source tools such as Google’s cAdvisor and Prometheus.

We haven’t done any work with Prometheus, so if you’re interested, give it a try. As for cAdvisor, it provides a graphical interface for viewing resource usage but has no alerting capability.

Therefore, we use cAdvisor + InfluxDB + Grafana to view container usage, and connect the InfluxDB data source to our in-house monitoring platform to provide alerting.
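
For reference, a typical single-host cAdvisor launch, following the invocation documented by the cAdvisor project (the host directories it needs are mounted read-only where possible):

    docker run -d --name=cadvisor -p 8080:8080 \
      -v /:/rootfs:ro \
      -v /var/run:/var/run:rw \
      -v /sys:/sys:ro \
      -v /var/lib/docker/:/var/lib/docker:ro \
      google/cadvisor:latest

For a quick ad-hoc check without any extra tooling, docker stats also prints live per-container CPU and memory usage.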

Container logs

Docker container logs come from the container’s standard output stream (stdout) and standard error stream (stderr). In microservice development, logging components such as Log4j, SLF4J, and Logback are typically configured to write logs to files and to the standard output stream.

When deploying in Docker, use the -v parameter at container creation to mount important log files onto the host disk for permanent storage. For logs written to the standard output stream, we can view them with the “docker logs” command.

Note: if the log is very large (more than 1 GB, say), docker logs can appear to hang. So when using docker logs, we recommend adding the --tail parameter to view only the last part of the log. For example:
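
    # Follow the log, starting from only the last 100 lines
    docker logs --tail 100 -f mysql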

Oversized logs also consume a lot of disk space. When creating a container, use the --log-driver and --log-opt options to cap the size and number of log files the container generates. For example:

    docker run --name=mysql -d --log-driver=json-file --log-opt max-size=100m --log-opt max-file=1 mysql:5.6

At this point, we have briefly covered what Docker offers for developing and operating a microservices architecture and the points that need attention. Below, we walk through a complete example with a demo.

Run a quick microservices architecture Demo

Before you start, you can check out the Demo at msa.qingliuyun.com/.

Request processing flow:

1. After internal services A and B start, they register their service addresses with Eureka.

2. The API gateway pulls the service addresses of service A and service B from the registry.

3. The Client invokes the API Gateway.

4. The API gateway forwards the call request to internal Service A.

5. After receiving the request, internal Service A forwards the request to Service B.

6. Internal service B returns its result, and the response is passed back along the chain to the client.

To keep this article from running even longer, we have uploaded the demo source code to GitHub:

Github.com/qingliuyun/…

If you’re interested, you can check it out. It’s very simple.

We have also prepared a document that describes two ways to install the demo:

  • Docker + Docker Compose;
  • the Qingliuyun microservice platform.

Link address:

Docs.qingliuyun.com/pro/images/…

The documentation contains the database initialization scripts and Docker Compose script files required during the installation process.

Closing words

So far, we have walked through the concept of microservices architecture, its advantages, disadvantages, and application scenarios, microservice development and API governance based on Spring Cloud, and how to use Docker to support microservice development and operations.

While I couldn’t cover every point in depth, I hope this series has given you a solid overview of microservices architecture. I also want to thank you for your questions, suggestions, and corrections during its publication. I sincerely hope a microservices architecture helps your systems run better.

Finally, I would like to thank Tian Guang for his suggestions, and I wish InfoQ an ever-brighter future.

About the author

Su Huai, WeChat ID Sulaohuai, is the R&D director of Qingliuyun and currently works on the Qingliuyun team at Digital China, after earlier stints at Oracle, Singapore Telecom, and other companies. Areas of expertise: container technology, microservices architecture, agile development, and technology management.

Next article

There is no next article; the series ends here. Thank you for following along, and thanks to Su Huai and Lamb for sharing!

Thanks to Yuta Tian Guang for planning and reviewing this article.