Will Service Mesh become the dominant player in the microservice ecosystem this year? Judging by the trend, many enterprises are already adding tools for taming microservice complexity to their IT arsenal.

What is a Service Mesh?

As defined by William Morgan, CEO of Buoyant (the company behind Linkerd), a Service Mesh is an infrastructure layer that handles communication between services and is responsible for reliable request delivery across the complex service topology of cloud-native applications. In practice, a Service Mesh is usually implemented as a set of lightweight network proxies that are deployed alongside the application but are transparent to it.

What distinguishes a Service Mesh from a traditional infrastructure layer is that it forms a distributed, interconnected network of proxies. The proxies are deployed alongside each service as sidecars; the service itself is unaware of them, and all communication between services is routed through them.
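To make the sidecar idea concrete, here is a minimal sketch of a transparent forwarding proxy, assuming the application simply sends its outbound calls to a local port and names the target service in a header. The port, header name, and routing table are invented for illustration; real sidecar proxies (Envoy, Linkerd's proxy, and so on) do far more.

```python
# Minimal sketch of a sidecar-style forwarding proxy (illustrative only).
# Assumption: the application sends every outbound call to localhost:15001
# with an "X-Target-Service" header; the proxy resolves the real address
# and forwards the request.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

# Hypothetical routing table; a real sidecar gets this from the mesh.
ROUTES = {
    "orders": "http://10.0.0.12:8080",
    "billing": "http://10.0.0.27:8080",
}

class SidecarProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        service = self.headers.get("X-Target-Service", "")
        upstream = ROUTES.get(service)
        if upstream is None:
            self.send_error(502, f"unknown service: {service}")
            return
        # Forward the request to the resolved upstream and relay the response.
        with urlopen(Request(upstream + self.path)) as resp:
            body = resp.read()
        self.send_response(resp.status)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # The application talks only to this local port.
    HTTPServer(("127.0.0.1", 15001), SidecarProxy).serve_forever()
```

In a real mesh the application would not even set a header: traffic is redirected to the sidecar transparently (for example via iptables rules), which is what keeps the proxy invisible to the service.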

Why do we need a Service Mesh?

“Smart endpoints and dumb pipes” is a core principle that microservice architectures use to integrate services, a deliberate break with the bloated, centralized ESB (Enterprise Service Bus) of the past. It is undoubtedly a step in the right direction, but it raises some difficult questions. How smart should the endpoints be, and how much weight should each service carry? How should we handle the complexity of interactions between services: inside the service or outside of it? If inside, does that logic belong with the business code, or is it really infrastructure? If outside, how do we avoid repeating the ESB’s mistakes?

Without further ado, let’s take a look at some of the issues that need to be addressed when handling communication between services:

  • Service discovery
  • Load balancing
  • Routing
  • Flow control
  • Communication reliability
  • Resilience
  • Security
  • Monitoring/Logging

These concerns are commonplace in any distributed system that deals with a network; the difference is that they are amplified dramatically as the number of microservices grows.

One widely used solution is to leverage an API gateway to handle requests from outside and between services, providing capabilities such as service discovery, routing, monitoring, traffic control, and so on.
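As a rough sketch of the routing role a gateway plays, the snippet below maps path prefixes to upstream services and keeps a per-route request counter as a stand-in for monitoring; the paths, hostnames, and helper function are hypothetical.

```python
# Illustrative sketch of the routing side of an API gateway: a single entry
# point that maps path prefixes to upstream services and counts requests
# per route. Names and addresses are made up.
from collections import Counter

UPSTREAMS = {
    "/orders": "http://orders.internal:8080",
    "/billing": "http://billing.internal:8080",
}
request_counts = Counter()

def route(path: str) -> str:
    """Return the full upstream URL for an incoming path, or raise."""
    for prefix, upstream in UPSTREAMS.items():
        if path.startswith(prefix):
            request_counts[prefix] += 1   # crude per-route metric
            return upstream + path
    raise LookupError(f"no upstream configured for {path}")

# Example: route("/orders/42") -> "http://orders.internal:8080/orders/42"
```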

API gateways, however, have a fatal flaw: they tend to become single points of failure and, when poorly implemented, grow bloated. More importantly, the API gateway is user-facing at its core; it addresses traffic from users into the microservices, but not the traffic between them. What we need is a more complete solution, or at least tools that complement the API gateway.

Another option is to handle reliability, monitoring, flow control, and so on at the lower layers of the network stack (layers 3/4). The problem is that operating at those lower layers makes it hard to address concerns that are fundamentally application-level. Recalling the end-to-end argument, the concerns listed above really belong to the application layer and can only be implemented fully there.

Early adopters of SOA/microservices such as Netflix and Twitter addressed these issues by building internal libraries that were then made available to all services. The problem with this approach is that it is extremely difficult to scale such libraries to hundreds or thousands of microservices, and the libraries are relatively “fragile”: it is hard to guarantee that they fit every technology stack in use.
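To illustrate the library approach, here is a hedged sketch of the kind of client-side load balancing and retry wrapper such libraries embed in every service (Netflix's Hystrix and Ribbon are well-known real examples; the function below is purely hypothetical).

```python
# Sketch of an in-process "resilience library": naive client-side load
# balancing plus retries, the kind of logic library-based approaches
# compile into every service. Names and addresses are illustrative.
import random
import time
from urllib.request import urlopen
from urllib.error import URLError

def call_service(instances, path, retries=3, timeout=2.0):
    """Pick an instance at random and retry on failure with a short backoff."""
    last_error = None
    for attempt in range(retries):
        instance = random.choice(instances)        # naive load balancing
        try:
            with urlopen(f"http://{instance}{path}", timeout=timeout) as resp:
                return resp.read()
        except URLError as err:                    # network or HTTP failure
            last_error = err
            time.sleep(0.1 * (attempt + 1))        # linear backoff
    raise RuntimeError(f"all {retries} attempts failed") from last_error

# payload = call_service(["10.0.0.5:8080", "10.0.0.6:8080"], "/inventory/7")
```

Because this logic lives inside the service process, every language and framework in the stack needs its own version-matched implementation, which is exactly the scaling and fragility problem described above.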

In some ways a Service Mesh resembles these libraries, but it runs as a separate process next to the service. The service connects to its local proxy, which in turn communicates with the other proxies (over HTTP/1.1, HTTP/2, or gRPC). The proxies are independent, distributed processes operating at or just below the application layer, which addresses the drawbacks of both previous approaches.

Service Mesh architecture

A Service Mesh consists of a data plane, in which all services communicate through their sidecar proxies, and a control plane, which connects the independent sidecar proxies into a distributed network and sets the policies that the data plane enforces.

The control plane defines policies for service discovery, routing, and traffic control. These policies can be global or scoped to individual services. The data plane is responsible for applying and enforcing them as services communicate.
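A small sketch of that division of labour, assuming an invented policy format rather than any specific mesh's API: the control plane publishes a declarative routing policy (here, a weighted canary split), and each data-plane proxy consults it when forwarding a request.

```python
# Sketch of the control-plane / data-plane split. The control plane publishes
# declarative policy; each data-plane proxy enforces it per request.
# Field names are illustrative, not any particular mesh's API.
import random
from dataclasses import dataclass, field

@dataclass
class RoutingPolicy:
    service: str
    # Weighted traffic split across versions, e.g. a 90/10 canary.
    weights: dict = field(default_factory=dict)

# Control plane: defines the policy and pushes it to every sidecar.
policy = RoutingPolicy(service="orders", weights={"v1": 90, "v2": 10})

# Data plane: each sidecar applies the policy when routing a request.
def pick_version(p: RoutingPolicy) -> str:
    versions = list(p.weights)
    return random.choices(versions, weights=[p.weights[v] for v in versions])[0]

# Roughly 90% of calls go to version "v1", 10% to the "v2" canary.
print(pick_version(policy))
```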

Conclusion

To sum up, the Service Mesh is a product of its time. Container technologies such as Docker and Kubernetes, by making complex systems easy to deploy and manage, have directly driven the need for a Service Mesh.

It is not yet clear how the Service Mesh will evolve, whether it will become a ubiquitous standard like TCP/IP or remain fragmented across competing tools and platforms. But one thing is certain: its value to the microservice ecosystem is hard to ignore. It makes service governance simpler and more efficient, so why not?

https://www.shantala.io/service-mesh-for-microservices/

