  • My Favorite Interservice Communication Patterns for Microservices
  • Author: Fernando Doglio
  • Translated by the Nuggets Translation Project
  • Permanent link to this article: github.com/xitu/gold-m…
  • Translator: samyu2000
  • Proofreaders: CarlosChen, Finalwhy


Microservices are a fun way to build scalable, efficient application architectures, which is why all the major platforms use them. Without microservices, there would be no Netflix, Facebook, or Instagram.

The first step in designing a microservice system is to break the business logic down into small modules and deploy them as a distributed system. The next step is to figure out the best way for those modules to communicate with each other. Keep in mind that microservices are not only externally facing, serving end clients; within the same architecture, they sometimes act as clients themselves, consuming other services.

So how do you make two services talk to each other? An easy way is to reuse their public interface. For example, if a service exposes a REST HTTP API to the outside world, other services can communicate with it through that same API.

That works, but let’s see if there’s a better way.

HTTP API

Let’s start with the basics. After all, it’s very effective. In essence, an HTTP API works by sending messages back and forth, just as your browser or a desktop client like Postman does.

It follows the client-server model, which means communication can only be initiated by the client. It is also synchronous: once the client sends a request, the exchange ends only when the server responds.

This approach is very popular; it is how we browse the web. You can think of HTTP as the backbone of the Internet, which is why every major programming language provides HTTP modules, making this an easy pattern to adopt.
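As a minimal sketch of this synchronous request/response flow (the "orders" service, its /status route, and the port handling are all hypothetical; Python standard library only), one service exposes an HTTP endpoint and another blocks until it answers:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class OrderHandler(BaseHTTPRequestHandler):
    """A tiny hypothetical 'orders' service answering GET requests with JSON."""

    def do_GET(self):
        body = json.dumps({"service": "orders", "status": "ok"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the example's output quiet

# The "server" service listens on an ephemeral port in a background thread.
server = HTTPServer(("127.0.0.1", 0), OrderHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The "client" service calls it synchronously: urlopen blocks until the reply.
url = f"http://127.0.0.1:{server.server_port}/status"
with urllib.request.urlopen(url) as resp:
    payload = json.load(resp)

print(payload["status"])
server.shutdown()
```

Note that `urlopen` blocks: the calling service does nothing else until the reply arrives, which is exactly the synchronous behavior described above.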

But this model is not perfect. Let’s analyze it.

Advantages

  • Easy to implement: The HTTP protocol is simple to work with, and since every major programming language provides native HTTP support, developers rarely need to worry about its internals. The complexity is hidden away, abstracted into libraries ready for programmers to use.
  • If you add a REST-like architecture on top of HTTP (and implement it properly), you get a standard API that helps any client quickly understand how to interact with your business logic.
  • Since HTTP acts as the data transfer pipeline between client and server, it doesn’t matter which technology either side uses. You can write the server in Node.js and the client (or another service) in Java or C#; as long as both speak the same HTTP protocol, they can communicate.

Disadvantages

  • The channel adds latency to your business logic: HTTP is a reliable protocol, but that reliability comes from extra steps that make sure data is transferred correctly, and extra steps mean extra time. Consider a scenario where three or more microservices pass data along until the last one finishes: A sends data to B, B sends data to C, and only then do the responses start flowing back. On top of the execution time inside each service, you pay the latency of establishing all three communication channels.
  • Timeouts: Although in most scenarios the timeout can be configured, HTTP closes the connection by default if the server takes too long. How long is “too long”? That depends on the configuration of the systems and services involved, but either way it places an extra constraint on your business logic: it must execute quickly, or it won’t work.
  • Failures are hard to handle: Making a server failure-proof is not impossible, but it requires extra infrastructure. By default, the client-server model does not notify the client when the server fails, so the client often finds out too late. As I said, you can take steps to mitigate this, such as load balancers or API gateways, but all of that is extra work outside the plain client-server communication pattern.
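The timeout constraint above is easy to reproduce (a sketch with a deliberately slow, hypothetical handler; Python standard library only): if the server's logic takes longer than the client is willing to wait, the call fails even though the work might have succeeded.

```python
import threading
import time
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class SlowHandler(BaseHTTPRequestHandler):
    """Simulates business logic that takes longer than callers tolerate."""

    def do_GET(self):
        time.sleep(2)  # pretend this is a slow computation
        self.send_response(200)
        self.send_header("Content-Length", "2")
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):
        pass

class QuietServer(HTTPServer):
    def handle_error(self, request, client_address):
        pass  # ignore broken pipes from clients that gave up waiting

server = QuietServer(("127.0.0.1", 0), SlowHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

timed_out = False
try:
    # The client tolerates 0.5s; the server needs 2s, so this request fails.
    urllib.request.urlopen(
        f"http://127.0.0.1:{server.server_port}/", timeout=0.5
    )
except OSError:  # socket.timeout and URLError are both OSError subclasses
    timed_out = True

print("timed out:", timed_out)
```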

So, when your business logic is fast and reliable, and you have multiple clients using it, the HTTP API is the ideal solution. It is useful because it is a standardized workflow that allows teams to use familiar communication channels when working with different clients.

The HTTP API should not be used if multiple services are required to communicate with each other, or if the business logic within the service contains time-consuming operations.

Asynchronous messaging

This pattern places a message broker between the sender and the receiver of each message.

This is indeed one of my favorite ways to enable multiple services to communicate with each other, especially if you need to scale the processing power of the platform horizontally.

This pattern usually requires the addition of a message broker, so there is some additional complexity to deal with. But the benefits outweigh the costs.
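The broker itself can be approximated in-process with a queue, just to illustrate the mechanics (a real system would use RabbitMQ, Kafka, or similar; the worker names are made up): one producer's messages are spread across two parallel copies of the same consuming service.

```python
import queue
import threading

broker = queue.Queue()           # stands in for a real message broker
results = []
results_lock = threading.Lock()

def worker(name):
    """A 'copy' of the consuming service: pulls messages until poisoned."""
    while True:
        msg = broker.get()
        if msg is None:          # shutdown sentinel
            broker.task_done()
            return
        with results_lock:
            results.append((name, msg))
        broker.task_done()

# Two parallel copies of the same service share one queue.
for name in ("worker-1", "worker-2"):
    threading.Thread(target=worker, args=(name,), daemon=True).start()

# The producer fires and forgets; it never waits for a consumer.
for i in range(10):
    broker.put(f"event-{i}")

broker.join()                    # wait until every message was consumed
for _ in range(2):
    broker.put(None)             # stop both workers

print(f"{len(results)} messages processed")
```

Which worker handles which message is up to scheduling; that indifference is precisely what lets you add more copies when load grows.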

Advantages

  • Easy to scale: One of the main problems with direct client-server communication is that the server needs spare processing capacity to receive messages, and a single service can only handle so many parallel requests. If clients need to send more data, the server has to grow with them. Scaling the infrastructure vertically (faster processors, more memory) works, but cost puts a ceiling on it. Alternatively, you can keep using modest machines and run multiple parallel copies of the service. With a message broker in between, incoming messages can be distributed across those copies, which can receive the same data or different messages depending on your needs.
  • Easy to add new services: Plugging a new service into the workflow is as simple as creating it and subscribing to the message types it needs. The sender doesn’t need to know it exists; it only needs to know the types of messages being sent.
  • A more convenient retry mechanism: If the broker supports it, a message that fails to be delivered because of a server failure can be resent automatically by the broker, without you writing any code for it.
  • Event-driven: This pattern enables event-driven architectures, the most efficient way for microservices to communicate. You can write code so that a service is notified when data arrives, rather than blocking on an asynchronous response or polling a storage system for one. While it waits, the service is free to handle other work (such as the next incoming request). The result is faster data processing, more efficient use of resources, and a better overall communication experience.

Disadvantages

  • Debugging is difficult: With no single, explicit data flow, tracing the path a payload takes through the system can be a nightmare. It is therefore a good idea to assign a unique ID to every piece of data as it is received, so you can follow its path through the internal architecture via the logs.
  • No direct response: Given the asynchronous nature of this pattern, when a client makes a request the only immediate answer it can get is “received; I’ll notify you when it’s done.” The server can still validate the request’s schema and return a 400 error if it is invalid, but the client cannot directly retrieve the output of the business logic; that requires a separate request. Alternatively, the client can subscribe to certain messages from the broker and be notified as soon as the response message arrives.
  • The message broker becomes a single point of failure: An improperly configured message broker can compromise the whole architecture. Suddenly you find yourself maintaining a broker you barely know how to operate, instead of a flaky service you wrote yourself.
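The unique-ID advice from the first point above can be sketched as follows (the three pipeline stages are hypothetical; Python standard library only): a correlation ID is attached when data enters the system and logged at every hop, so the logs reveal the path each payload took.

```python
import logging
import uuid

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("pipeline")

def receive(payload):
    """Entry point: tag incoming data with a correlation ID."""
    msg = {"correlation_id": str(uuid.uuid4()), "payload": payload}
    log.info("received  id=%s", msg["correlation_id"])
    return msg

def enrich(msg):
    """A downstream service: logs the same ID so the path stays traceable."""
    log.info("enriched  id=%s", msg["correlation_id"])
    msg["payload"] = msg["payload"].upper()
    return msg

def store(msg):
    """Final stage: same ID again, closing the trace."""
    log.info("stored    id=%s", msg["correlation_id"])
    return msg

out = store(enrich(receive("hello")))
print(out["payload"])
```

Grepping the logs for one `id=` value then reconstructs the full journey of a single message, which is the closest you get to a stack trace in an asynchronous system.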

This pattern is really interesting and provides a lot of flexibility. Placing a buffer-like structure between producers and consumers improves the stability of the system when one side generates a large volume of messages.

Of course, processing can still be slow, but with a buffer in place it becomes much easier to scale.

Direct socket connections

Let’s move on to a very different approach. We can drop HTTP entirely and use a faster, structurally simpler technology for sending and receiving data: sockets.

At first glance, socket-based communication looks similar to the HTTP-based client-server model, but if you look closely you’ll notice some differences:

  • The protocol is simpler for the initiator of the communication, so it is faster. Of course, if you want more reliability you’ll need to write more code, but you avoid the overhead that HTTP adds.
  • Communication can be initiated by either party, not just the client. With a socket, once you open a channel it stays open until you close it. Think of it as an ongoing conversation in which both parties can speak at any time, not just the one who placed the call.
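Here is a minimal sketch of such a custom protocol (assumptions: length-prefixed UTF-8 frames over localhost, Python standard library only). Note the second property above in action: once the channel is open, the "server" side pushes a message without being asked, something a plain HTTP server cannot do.

```python
import socket
import struct
import threading

def send_frame(conn, text):
    """Custom protocol: 4-byte big-endian length, then a UTF-8 payload."""
    data = text.encode("utf-8")
    conn.sendall(struct.pack(">I", len(data)) + data)

def recv_exact(conn, n):
    buf = b""
    while len(buf) < n:
        chunk = conn.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed the channel")
        buf += chunk
    return buf

def recv_frame(conn):
    (length,) = struct.unpack(">I", recv_exact(conn, 4))
    return recv_exact(conn, length).decode("utf-8")

def service_b(server_sock):
    conn, _ = server_sock.accept()
    with conn:
        request = recv_frame(conn)
        send_frame(conn, f"ack:{request}")            # answers the request
        send_frame(conn, "server-initiated-ping")     # ...and speaks unprompted

listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
threading.Thread(target=service_b, args=(listener,), daemon=True).start()

client = socket.create_connection(listener.getsockname())
with client:
    send_frame(client, "hello")
    reply = recv_frame(client)   # response to our request
    push = recv_frame(client)    # message B initiated on its own

print(reply, push)
```

The frame format here is invented for the example, which is exactly the point made below: with raw sockets, the implementer defines the protocol.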

With that in mind, let’s take a quick look at the pros and cons of this approach:

Disadvantages

  • There is no standard: Compared to HTTP, socket-based communication can feel chaotic because there are no standards on top of it like SOAP or REST. In effect, the implementer defines the structure of the protocol, which makes creating and integrating new clients harder. However, if you are only doing this so your own services can talk to each other, you are effectively defining your own internal protocol, and that is fine.
  • The receiver can be overwhelmed: If one service starts generating a lot of messages for another and forcing it to process them, the receiving service may eventually become overloaded and crash. (This, incidentally, is exactly the problem the previous pattern solves.) With sockets there is only a tiny delay between sending and receiving, which means throughput can be higher, but it also requires the receiver to keep up with everything.

Advantages

  • Very lightweight: Basic socket communication needs no extra tooling. The details depend on your language, but some, like Node.js with the socket.io module, let two services communicate in just a few lines of code.
  • An optimized communication flow: Because there is a continuously open channel between the two services, both can react to a message the instant it arrives. This is a reactive approach, unlike, say, polling a database for new messages, and it is much faster.

Socket-based communication is a very efficient way for services to talk. For example, Redis uses it when deployed as a cluster, so it can automatically detect failing nodes and remove them. This is possible thanks to the speed and low cost of the communication: little added latency and a very small footprint on network resources.

You can use this pattern if you can control the amount of communication between services and are willing to customize the protocol to suit your situation.

Lightweight events

This approach combines the previous two. On one hand, multiple services communicate through a message bus, so communication is asynchronous. On the other hand, only lightweight messages travel over the bus; any additional information they imply is fetched through a REST call to the corresponding service when needed.

This pattern is useful when you need to keep network traffic under control, or when the message queue imposes packet-size limits. In those cases it is best to keep messages as small as possible and request additional information only when needed.
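As a sketch of the hybrid (the order data, the /orders endpoint, and the in-process queue standing in for the bus are all hypothetical): the bus carries only a tiny event, and the consumer makes a REST call for the full record only when it actually needs it.

```python
import json
import queue
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# A hypothetical "orders" service that serves full order details over REST.
ORDERS = {"42": {"id": "42", "item": "book", "qty": 3}}

class DetailHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        order = ORDERS.get(self.path.rsplit("/", 1)[-1])
        body = json.dumps(order).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass

server = HTTPServer(("127.0.0.1", 0), DetailHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
base_url = f"http://127.0.0.1:{server.server_port}"

bus = queue.Queue()  # stands in for the message bus

# Producer: publishes a lightweight event carrying an ID, not the full record.
bus.put({"type": "order_created", "order_id": "42"})

# Consumer: receives the tiny event, then fetches details only when needed.
event = bus.get()
with urllib.request.urlopen(f"{base_url}/orders/{event['order_id']}") as resp:
    details = json.load(resp)

print(details["item"], details["qty"])
server.shutdown()
```

If most consumers never make the follow-up call, the bus stays cheap; if every consumer does, you have rebuilt the HTTP pattern with extra steps, which is the trade-off discussed below.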

Advantages

  • The best of both worlds: 80-90% of the data travels through the buffer-like structure, keeping the benefits of asynchronous communication, while only a small fraction of the traffic goes through the slower, standard, API-based channel.
  • Optimized network traffic: If you know that in most cases services won’t need information beyond the event itself, keeping events minimal optimizes network traffic and keeps the demands on the message broker low.
  • With this pattern, the extra details of each event are kept out of the buffer. This in turn removes the coupling you would otherwise create by defining schemas for that information inside the broker. Keeping the buffer “dumb” makes it easier to swap implementations when migrating or scaling (for example, from RabbitMQ to AWS SQS).

Disadvantages

  • You can end up with too many API requests: If you don’t do your homework and apply this pattern to a project it doesn’t suit, you end up with expensive API traffic and delayed service responses, since every HTTP request sent between services adds yet more network overhead. If that is your reality, consider switching to a fully asynchronous communication model.
  • Two communication interfaces to maintain: Your services must offer two different ways to communicate. On one hand they implement the asynchronous model required by the message queue; on the other they also need an API-like interface. With two approaches this different, maintenance becomes harder.

This is an interesting hybrid pattern, and the amount of coding it requires depends on how much you need to mix the two approaches.

It can be a good optimization, but make sure that only around 10-20% of your use cases actually need to fetch additional information beyond the message payload; otherwise the benefit won’t justify the extra code.


The best way for two microservices to communicate is whichever one meets your needs. Your performance, reliability, and security requirements, and the information around them, are the basis for choosing the right pattern.

There is no one-size-fits-all model. Even if you have a favorite pattern, as I do, you should choose realistically based on your specific business scenario.

That said, we can still discuss: which pattern do you like best, and why? Leave a comment and let’s talk about ways to make microservices communicate with each other!
