I have to say, this is a rare, well-written book: it does not overhype microservice architecture simply because that is its subject. Two sentences in it strike me as especially insightful.


Architecture is about trade-offs, and the architect is the person who makes the trade-offs

The famous “CAP theorem” is a trade-off among consistency, availability, and partition tolerance. For example, Redis Cluster chooses high availability at the expense of strong consistency, while an HBase cluster ensures strong consistency but gives up high availability.

The LRU algorithm in Redis uses random sampling plus an eviction pool to approximate true LRU, saving memory at the cost of some accuracy; that, too, is a trade-off.
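To make the idea concrete, here is a minimal sketch of sampling-based approximate LRU. It follows the spirit of Redis's approach (random samples plus a pool of stale candidates) but is not Redis's actual implementation; the class name, sample size, and pool size are illustrative assumptions.

```python
import random
import time

class ApproxLRUCache:
    """Sketch of approximate LRU: instead of maintaining a full recency
    list, sample a few random keys on eviction and keep a small pool of
    stale candidates across rounds. Parameters are illustrative."""

    def __init__(self, capacity, sample_size=5, pool_size=16):
        self.capacity = capacity
        self.sample_size = sample_size
        self.pool_size = pool_size
        self.pool = []    # (last_access, key) candidates, stalest first
        self.data = {}    # key -> (value, last_access)

    def get(self, key):
        entry = self.data.get(key)
        if entry is None:
            return None
        self.data[key] = (entry[0], time.monotonic())  # refresh recency
        return entry[0]

    def put(self, key, value):
        if key not in self.data and len(self.data) >= self.capacity:
            self._evict_one()
        self.data[key] = (value, time.monotonic())

    def _evict_one(self):
        # Sample random keys, merge them into the pool, drop entries
        # whose key is already gone, and evict the stalest candidate.
        sampled = random.sample(list(self.data),
                                min(self.sample_size, len(self.data)))
        self.pool.extend((self.data[k][1], k) for k in sampled)
        self.pool = sorted({(t, k) for t, k in self.pool if k in self.data})
        self.pool = self.pool[:self.pool_size]
        _, victim = self.pool.pop(0)
        del self.data[victim]
```

The trade-off is visible in the code: no per-access list maintenance and almost no extra memory, but the evicted key is only *probably* the least recently used one.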

When designing database tables, we likewise add redundant fields to avoid multi-table joins, a trade-off of space for time.
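As a sketch of that space-for-time trade-off (the table and column names are made up for illustration), consider a redundant `user_name` column on an orders table:

```python
import sqlite3

# Sketch: the orders table stores a redundant copy of the user's name so
# that listing orders needs no join. Schema is illustrative only.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (
        id INTEGER PRIMARY KEY,
        user_id INTEGER,
        user_name TEXT          -- redundant field copied from users.name
    );
    INSERT INTO users VALUES (1, 'alice');
    INSERT INTO orders VALUES (100, 1, 'alice');
""")

# Without the redundant field, every listing pays for a join:
joined = conn.execute(
    "SELECT o.id, u.name FROM orders o JOIN users u ON o.user_id = u.id"
).fetchone()

# With it, a single-table read suffices, at the cost of extra storage and
# of keeping user_name in sync if the user is renamed:
denorm = conn.execute("SELECT id, user_name FROM orders").fetchone()
```

The price of the faster read is the update anomaly: renaming a user now requires touching every copy of the name.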

There is no best architecture, only the most suitable one; it is up to us architects to face the trade-offs and make them in light of the current system's characteristics and external circumstances.


No single technology can be called a “silver bullet,” and microservice architecture is no exception.

This phenomenon is just like the “middle platform” craze of the past couple of years: seemingly overnight, as if “a spring breeze had blown in during the night,” everyone in software development was dropping the term, and not mentioning it made you seem behind the times. Yet if you asked, “What's the difference between a middle platform and a platform?” I'd estimate 80% of people would mumble and fail to answer.

Microservice architecture is not a “silver bullet.” I absolutely do not recommend that, say, a graduation project for an online mall must be built with microservices. Similarly, if a startup team of two or three people is quickly building a business system for trial and error with zero accumulated infrastructure, I strongly advise against microservicing the system from the start in the name of “enough scalability” or “an architecture designed for the next few years.” A system's architecture must evolve over time, and different team situations call for different architectures.


Now let's analyze, item by item, the advantages of microservices as presented in the book:

Better fault tolerance:

This should be uncontroversial. If service A, or the database behind it, goes down due to resource exhaustion, or its response time balloons, services B and C are unaffected; but if the businesses of A, B, and C all live in a single application, they all go down together.

However, it is important to note that no matter how thoroughly your microservices are split, there will almost certainly be an aggregation service on top of them, responsible for fetching data from several underlying microservices, reprocessing and recombining it, and finally returning the result to the client.

Fault isolation among the underlying microservices is inherent in the microservices architecture, but fault tolerance at the aggregation layer is something the architecture itself cannot provide; in practice, it depends on the engineers' ability.

For example: how do you distinguish strong from weak dependencies? How do you set the timeout: one overall timeout, or separate timeouts for different downstream services? When a weakly depended-on downstream times out or throws an exception, do you degrade or trip a circuit breaker? For a strong dependency, how do you prepare a Plan B, even at the expense of some design principles? And in the most extreme case, how do you cut down to an MVP (minimum viable product)?
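To make these aggregation-layer questions concrete, here is a minimal Python sketch of per-dependency timeouts with a degrade path. The service names, payloads, and timeout values are illustrative assumptions, not anything from the book.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical downstream calls; in a real system these would be RPC or
# HTTP clients to other microservices.
def fetch_product(pid):           # strong dependency: page is useless without it
    return {"id": pid, "title": "demo product"}

def fetch_recommendations(pid):   # weak dependency: nice to have
    return ["p2", "p3"]

def get_product_page(pid):
    """Aggregation layer: separate timeouts per downstream, and a
    degrade path for the weak dependency."""
    with ThreadPoolExecutor(max_workers=2) as pool:
        product_future = pool.submit(fetch_product, pid)
        recs_future = pool.submit(fetch_recommendations, pid)

        # Strong dependency: a timeout or error here must fail the request.
        product = product_future.result(timeout=0.5)

        # Weak dependency: on timeout or error, degrade to an empty list
        # instead of failing the whole page. (A real circuit breaker
        # would also count failures here and stop calling for a while.)
        try:
            recommendations = recs_future.result(timeout=0.2)
        except Exception:
            recommendations = []

    return {"product": product, "recommendations": recommendations}
```

The point is that none of these decisions (which call is strong, which timeout applies, what the fallback is) comes from the architecture; each one is an engineer's judgment call.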


It’s easier to experiment with and adopt new technologies

If a new technology is used on a small scale inside one service, only the handful of engineers who maintain that service need to agree; the decision falls within their autonomous scope of work.

If a new technology is introduced company-wide, the situation becomes more complicated. For any new technology, a rigorous engineering team goes through a gradual process of practicing and understanding it: initial skepticism and awe, then full mastery and, finally, trust.

Thus, the microservices architecture is well suited to this trial-and-error process: roll the technology out from one service to many, and from non-core services to core services. Replacing the technology inside a monolithic application as large as an SOA system carries too high a trial-and-error cost, because a problem in the new technology brings down the whole system. If the problem surfaces immediately after launch, that is the lucky case; the worst case is when everything seems fine at launch, several versions of business requirements are built on top, and only then does the problem appear, by which time the cost of rolling back is far greater.

On the other side of the coin, if a new technology must be rolled out company-wide, service by service, the time cost is much higher than in a larger monolithic application like an SOA system.


Enables continuous delivery and deployment of large complex applications

The book argues this from three aspects, summarized as follows:

  • Testability: because each service is small, automated tests are easier to write and the code tends to have fewer bugs.
  • Deployability: a modified service can be deployed independently, without coordinating with engineers of other modules.
  • Team autonomy: teams are loosely coupled, and each team can develop, deploy, and scale its services independently of all other teams, making development faster.

Combined with my years of first-line development and architecture-design experience, these claims hold only when the scope of change brought by a business requirement is sufficiently controllable, i.e., in one of the following scenarios:

(1) A newly added (extension-type) interface that receives no traffic after going online

(2) An interface at the top of the call chain, invoked by no other microservice and visible only on the client

(3) A modified interface that is not itself important, where trial and error is acceptable

(4) A modified core interface, but with a small scope of change and controllable risk

(5) A modified core interface, released during an absolute low-peak period of the business, where trial and error is acceptable

Here are two counterexamples:

(1) In an e-commerce model, the commodity center, one of the most important services, changes a core interface, and the change is large. The interface is called by multiple systems on both the customer and merchant sides, business traffic is heavy, and there is no obvious low-peak period.

In that case, would you dare run your automated tests and deploy to production without notifying any of the callers? And this is hardly a low-probability scenario, is it?

(2) In the same e-commerce model, a product needs a new attribute, so a field is added on the commodity supply side, and business logic around the new field must change. As a result, the commodity supply side, the commodity center, the order and promotion platforms, the customer client, and the merchant side all need modification. After the changes, the teams must align across services with their downstreams, deploy to test and pre-release environments for verification (along with the most annoying part of microservices: troubleshooting across multiple services), and finally bring each service online in order.

In this case, testability, deployability, team autonomy, and loose coupling are all out of the question. As for faster development: isn't the microservices routine of cross-service deployment, troubleshooting, test-environment setup, and bringing multiple services online in sequence more time-consuming than a larger single application like an SOA system? And again, this is not a low-probability scenario.

But there is one big advantage of microservices that is absolutely important, and we'll talk about it next time.

To be continued.