Translator: Wang Yanfei


More and more organizations are moving away from monolithic applications in favor of a microservices architecture, where business processes are divided into independent services.

In a flight booking system, for example, there may be several separate processes involved: booking a flight with an airline, paying for it, and sending a confirmation message to the customer once the flight is successfully booked.

A microservices architecture separates these processes into independent services along business boundaries. In the example above, the flight booking system can be split into booking, payment, and confirmation services, and the resulting microservices communicate with each other through well-defined interfaces.

So, how do microservices compare with monolithic applications?

Comparison 1: Network latency

A basic law of physics is at work in microservices: whenever one microservice calls another over the network, bytes have to be sent across the wire, converted into electrical signals or pulses of light, and then converted back into bytes on the other side. According to the simulation results, the latency of a microservice invocation is at least 24 ms. If we assume the actual processing takes about 100 ms, the total processing time looks like this:

The assumption is that, ideally, all invocations are independent of each other and can execute simultaneously; this is known as the fan-out pattern. The figure below shows how the total time decreases as more and more calls are executed in parallel.

When all calls are executed in parallel, the response can be returned to the consumer as soon as the longest call completes.

As you can see from the figure above, a monolithic application has no network latency because all of its calls are local, so even in a fully parallelized world the monolith is still faster. Microservices, on the other hand, always pay some network latency, even when calls are made in parallel, because multiple services have to communicate over the network.
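To make the numbers concrete, here is a minimal sketch of that arithmetic, using the 24 ms network latency and 100 ms processing time assumed above; the number of downstream calls is an illustrative choice, not a figure from the original simulation.

```java
// Sketch: total processing time for local vs. remote calls, assuming
// ~24 ms of network latency per remote call and ~100 ms of real work.
public class LatencyComparison {
    static final double NETWORK_MS = 24.0;  // assumed per-call network latency
    static final double WORK_MS = 100.0;    // assumed processing time per call

    public static void main(String[] args) {
        int calls = 5; // illustrative number of downstream calls

        // Monolith: all calls are local; fully parallelized, the total is
        // bounded by the processing time alone.
        double monolithParallel = WORK_MS;

        // Microservices, sequential: every call pays network latency plus work.
        double microSequential = calls * (NETWORK_MS + WORK_MS);

        // Microservices, full fan-out: all calls run in parallel, so the total
        // is the slowest single call, but it still includes the network hop.
        double microFanOut = NETWORK_MS + WORK_MS;

        System.out.printf("Monolith, fully parallel:    %.0f ms%n", monolithParallel);
        System.out.printf("Microservices, sequential:   %.0f ms%n", microSequential);
        System.out.printf("Microservices, full fan-out: %.0f ms%n", microFanOut);
    }
}
```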

This time, the monolithic application wins.

Comparison 2: Complexity

There are two sides to consider here: the complexity of developing the software and the complexity of running it.

On the development side, the code base can grow rapidly when building microservices-based software. The services live in multiple source repositories, may use different frameworks, and may even be written in different languages. And because microservices must remain independent of one another, code duplication is common.

In addition, because services are developed and released on different schedules, they may end up depending on different versions of the same libraries.

Logging and monitoring also become harder. In a monolithic application, troubleshooting is as simple as looking at a single log file. With microservices, tracing a problem can involve examining multiple log files: you not only have to find all the relevant log output, you also have to piece it together in the right order.
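One common way to make that reconstruction manageable is to tag every log line with a request-scoped trace ID that is passed from service to service. The sketch below is a simplified, hand-rolled illustration; the service names are hypothetical, and real systems would normally rely on a logging framework and a distributed-tracing tool rather than a helper like this.

```java
import java.util.UUID;

// Sketch: tagging every log line with a trace ID so that output from several
// services can be found and ordered later.
public class TracedLogging {

    // Prefix each line with a timestamp, the trace ID, and the service name.
    static void log(String traceId, String service, String message) {
        System.out.printf("%d [%s] %s - %s%n",
                System.currentTimeMillis(), traceId, service, message);
    }

    public static void main(String[] args) {
        // The edge service generates one trace ID per incoming request...
        String traceId = UUID.randomUUID().toString();

        log(traceId, "booking-service", "received booking request");
        // ...and passes it along (for example as an HTTP header) to downstream
        // services, which include it in their own log output.
        log(traceId, "payment-service", "payment authorized");
        log(traceId, "confirmation-service", "confirmation email queued");
    }
}
```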

On the operations side, complexity increases further when microservices run in a Kubernetes cluster. Kubernetes enables features such as elastic scaling, but it is not an easy system to manage. Deploying a monolithic application, by contrast, can be as simple as copying a file, and starting or stopping it usually takes a single command.

Transactions also add complexity compared with a monolith. When calls cross service boundaries, it is hard to keep data consistent; for example, retrying a call whose outcome is unknown might execute the same payment twice.
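As an illustration of that double-payment problem, here is a minimal sketch of an idempotency-key guard, one common way to make retries safe. The PaymentService class and its in-memory store are hypothetical and stand in for a real payment provider and durable storage.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch: the caller sends an idempotency key with each payment request; if
// the same key is seen again (for example after a retry following a timeout),
// the stored result is returned instead of charging the customer twice.
public class PaymentService {

    private final Map<String, String> processed = new ConcurrentHashMap<>();

    public String pay(String idempotencyKey, long amountCents) {
        // Only the first request with this key executes the charge; later
        // requests with the same key get the stored result back.
        return processed.computeIfAbsent(idempotencyKey, key -> {
            // ...call the real payment provider here...
            return "charged " + amountCents + " cents (key " + key + ")";
        });
    }

    public static void main(String[] args) {
        PaymentService service = new PaymentService();
        String key = "booking-42-payment";

        System.out.println(service.pay(key, 19900)); // original call
        System.out.println(service.pay(key, 19900)); // retried call: no second charge
    }
}
```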

This time, the monolithic application wins again.

Comparison 3: Reliability

In microservices, if service A calls service B over the network with 99.9% reliability (meaning that 1 in 1,000 calls fails due to network problems), and service B in turn calls service C with the same reliability, the end-to-end reliability drops to roughly 99.8%, because the success rates of the two hops multiply (0.999 × 0.999 ≈ 0.998).
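The degradation follows directly from multiplying the per-hop success rates; here is a small sketch of that arithmetic for chains of increasing length.

```java
// Sketch: how per-call reliability compounds along a chain of services.
// With 99.9% reliability per network hop, two hops (A -> B -> C) already
// drop to roughly 99.8%, and longer chains degrade further.
public class ChainedReliability {
    public static void main(String[] args) {
        double perHop = 0.999; // 99.9% success rate per network call

        for (int hops = 1; hops <= 5; hops++) {
            double endToEnd = Math.pow(perHop, hops);
            System.out.printf("%d hop(s): %.2f%% end-to-end reliability%n",
                    hops, endToEnd * 100);
        }
    }
}
```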

Therefore, when designing a microservices architecture, assume that at some point the network will fail. The microservices ecosystem offers solutions to this problem: Spring Cloud provides load balancing and network fault handling, and service meshes such as Istio provide similar capabilities for services written in different programming languages. When a service instance fails in the cluster, the cluster manager replaces it with another. This makes microservices architectures highly resilient.
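Reduced to its simplest form, the fault handling these tools provide amounts to retrying a failed call and falling back to a degraded response. The sketch below is a plain-Java illustration of that idea, not Spring Cloud's or Istio's actual mechanism; the failing call and the fallback value are made up for the example.

```java
import java.util.function.Supplier;

// Sketch: retry a remote call a few times and fall back to a degraded
// response instead of failing the whole request.
public class ResilientCall {

    static String callWithRetry(Supplier<String> call, int maxAttempts, String fallback) {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return call.get();                 // try the remote call
            } catch (RuntimeException e) {
                System.out.println("attempt " + attempt + " failed: " + e.getMessage());
            }
        }
        return fallback;                           // degrade gracefully instead of crashing
    }

    public static void main(String[] args) {
        String result = callWithRetry(
                () -> { throw new RuntimeException("network unreachable"); }, // stand-in for service B
                3,
                "cached flight list");             // fallback response
        System.out.println("result: " + result);
    }
}
```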

Netflix created a tool called Chaos Monkey that randomly terminates virtual machines and containers. Microservice developers can use Chaos Monkey to simulate problems such as network outages and instance failures in their test environments, so they can be confident their systems will handle downtime in production.

All calls in a monolithic application are local, so network failures are rarely an issue. However, a monolith cannot meet the elastic-scaling requirements of a cloud environment.

This time, microservices win.

Comparison 4: Resource usage

In general, microservices use more resources than a monolithic application. Even when running in Docker, one benchmark found that the number of connections a service could handle dropped by about 8%, and container orchestration, log aggregation, and monitoring consume resources of their own.

But microservices allow us to use resources more intelligently: since the cluster manager can allocate resources as needed, actual resource usage is likely to be much lower.

In software, 20% of the code typically does 80% of the work. If one instance of a monolithic application uses 8 GB of RAM, two instances use 16 GB, and so on. With microservices, we can extract the 20% of the code that carries the main workload into its own service and scale only that: one full instance (8 GB) plus a second instance of the hot service (about 1.6 GB) comes to roughly 9.6 GB instead of 16 GB.
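Here is that arithmetic as a small sketch; the 8 GB per instance and the 80/20 split are the assumptions stated above.

```java
// Sketch: estimated RAM usage when scaling to two instances, under the
// assumption that ~20% of the code does ~80% of the work.
public class ResourceUsage {
    public static void main(String[] args) {
        double monolithInstanceGb = 8.0;                 // assumed size of one full instance
        double hotServiceGb = 0.2 * monolithInstanceGb;  // the extracted "hot" service, ~1.6 GB

        double monolithScaled = 2 * monolithInstanceGb;                 // 16 GB
        double microservicesScaled = monolithInstanceGb + hotServiceGb; // ~9.6 GB

        System.out.printf("Monolith, two full instances:       %.1f GB%n", monolithScaled);
        System.out.printf("Microservices, hot service doubled: %.1f GB%n", microservicesScaled);
    }
}
```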

The following figure shows the difference in resource usage.

In terms of resource utilization, microservices win.

Comparison 5: Scaling precision

A monolithic application can be scaled in several ways: by running multiple instances, by running multiple threads, or by using non-blocking I/O. All three techniques also apply to microservices.

However, as client requests grow, a monolith can only be scaled as a whole, whereas a microservices architecture allows each service to be scaled on its own. Scaling microservices is therefore both simple and precise, and because only the services under load are replicated, fewer resources are wasted.

Precise scaling combined with lower resource usage makes this a clear win for microservices.

Comparison 6: Throughput

Let’s look at one more performance metric: throughput. In a microservices architecture, data has to be sent between services, which adds overhead, so a microservices system cannot match the throughput of a monolithic application. This round goes to the monolith.

Comparison 7: Deployment time

One of the reasons people choose microservices architectures is to save deployment time and allow for rapid iteration.

Because each microservice has a single responsibility, the scope of any change is clear. Changing the functionality of a monolith, however, can turn into a major undertaking.

In addition, microservices are easier to test. Because each service covers only a limited set of functions, it has few code dependencies, tests are easy to write, and they run quickly.

Also, because microservices consume fewer resources and scale well, they can be deployed without users noticing: for example, by starting the new version of a service on a subset of the cluster nodes, migrating a subset of users to it, and quickly rolling back to the old version if a problem appears.

Microservices win this round.

Comparison 8: Communication

Long before microservices came along, Fred Brooks wrote the seminal book The Mythical Man-Month, in which he observed, among other things, that the number of communication channels grows quickly with the number of team members. A team of two people has only one communication channel; a team of four already has six. The formula for the number of channels is n(n − 1)/2, so a team of 20 developers has 190 possible communication channels. Splitting those developers into smaller teams greatly reduces that number.

Let’s take the team of 20 developers and divide it into four microservice teams of five people each. Each team has 10 internal communication channels, and there are only six channels between the four teams, for a total of 4 × 10 + 6 = 46 channels, roughly a quarter of the 190 in a single team of 20.
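The same calculation as a minimal sketch of Brooks’ n(n − 1)/2 formula applied to the example above:

```java
// Sketch: Brooks' communication-channel formula n(n - 1) / 2 applied to one
// team of 20 vs. four teams of 5 plus the channels between the teams.
public class CommunicationChannels {

    static int channels(int people) {
        return people * (people - 1) / 2;
    }

    public static void main(String[] args) {
        int oneBigTeam = channels(20);                      // 190
        int fourSmallTeams = 4 * channels(5) + channels(4); // 4 * 10 + 6 = 46

        System.out.println("One team of 20:                 " + oneBigTeam + " channels");
        System.out.println("Four teams of 5 (+ inter-team): " + fourSmallTeams + " channels");
    }
}
```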

The figure below compares the number of communication channels in one large team with the number in the smaller microservice teams.

Therefore, for any development project, splitting a group of more than about 10 developers into smaller teams improves communication efficiency.

This is another clear victory for microservices.

Who is the winner?

The monolithic application won three rounds and microservices won five.

When looking at this tally, however, keep in mind that it is relative. Microservices are not a panacea for all development problems.

For example, a small team of five developers might be better served by a monolithic application. Not only is a monolith easier to manage, but if the software product receives only a few requests per second, a monolith may well be sufficient.

Here are some signs that a microservices architecture might be a good choice:

  • The system needs 24/7 reliability
  • Individual services must be scaled precisely
  • Peak load differs significantly from normal load
  • The team has more than 10 developers
  • The business domain can be divided into subdomains
  • Call chains between services are short
  • Calls can be made through REST APIs or event queues
  • Few transactions span multiple services

