Microservices Architecture

The core idea of microservices is to separate and decouple application functions in order to reduce the complexity of implementing business systems. A microservices architecture breaks application functionality into a set of loosely coupled services, each following the single-responsibility principle. This solves several inherent problems of the traditional monolithic architecture: each service can be deployed and delivered independently, greatly improving business agility, and each service can scale out and in independently to meet Internet-scale demand.

Microservices architecture returns to decentralized point-to-point invocation; it gains agility and scalability, but sacrifices the flexibility that comes from decoupling business logic from service-governance logic.

Pain points of existing microservices frameworks

  • Intrusive: Besides adding a dependency, integrating the SDK usually requires adding code, annotations, or configuration to the business code, so there is no clear line between business code and governance code.
  • High upgrade cost: Every upgrade requires each business application to bump the SDK version and rerun functional regression tests.
  • Serious version fragmentation: Because upgrades are costly while the middleware keeps evolving, over time the SDK versions referenced by different online services diverge and their capabilities become uneven, making unified governance difficult.
  • Difficult middleware evolution: Because of this fragmentation, the middleware must stay compatible with all kinds of legacy logic as it evolves; it moves forward in "shackles" and cannot iterate rapidly.
  • Large surface, high barrier to entry: Spring Cloud is often called the all-in-one suite ("family bucket") of microservices governance. It contains dozens of components, large and small, and it can take years to become familiar with the key ones. To use Spring Cloud as a complete governance framework, you need a deep understanding of its principles and implementation; otherwise problems are hard to diagnose.
  • Incomplete governance: Unlike an RPC framework, Spring Cloud is a typical governance suite but not omnipotent; advanced features such as protocol translation, multiple authentication mechanisms, dynamic request routing, fault injection, and gray (canary) releases are not covered.

Service Mesh

A Service Mesh is an infrastructure layer for handling service-to-service communication. It is responsible for reliably delivering requests through the complex service topology that makes up a modern cloud-native application. In practice, a Service Mesh is typically implemented as an array of lightweight network proxies deployed alongside application code, without the application needing to be aware of the proxy's presence.

Characteristics of Service Mesh

  • An intermediate communication layer between applications
  • A lightweight network proxy
  • Transparent to the application
  • Decouples retries/timeouts, monitoring, tracing, and service discovery from the application

Understanding the Service Mesh

  • Just as TCP/IP handles network communication between applications, the Service Mesh sits between microservices and is responsible for network calls, rate limiting, circuit breaking, and monitoring between services.

  • What is a sidecar?

In the sidecar pattern, cross-cutting functions of the application are split out into a separate process. In a service mesh, when a service container is created, a companion (sidecar) container is created alongside it; the sidecar takes over all traffic of the service container and manages and controls that traffic.
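In Kubernetes terms, the sidecar is simply a second container in the same Pod as the business container. A minimal illustrative Pod spec might look like this (the names and image tags are placeholders, and in a real mesh the proxy container is usually injected automatically rather than written by hand):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-service          # illustrative name
spec:
  containers:
  - name: app               # the business container
    image: example/my-service:1.0
    ports:
    - containerPort: 8080
  - name: istio-proxy       # the sidecar proxy that takes over the Pod's traffic
    image: istio/proxyv2:1.20.0
```

The two containers share the Pod's network namespace, which is what allows the proxy to intercept all inbound and outbound traffic without any change to the business code.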

How does Service Mesh work

  • The sidecar routes the service request to the destination address. Based on parameters in the request, it decides whether to route to the production, test, or staging environment (the service may be deployed in all three at once), and to a local or public-cloud environment. All of this routing information can be configured dynamically, either globally or per service.
  • Once the sidecar has determined the destination, it sends the traffic to the corresponding service-discovery endpoint (a Service in Kubernetes), which forwards it to a back-end instance.
  • The sidecar selects the most responsive of all the application instances, based on the latency it observed for recent requests.
  • The sidecar sends the request to that instance, recording the response type and latency.
  • If the instance hangs, does not respond, or its process is down, the sidecar retries the request against another instance.
  • If an instance keeps returning errors, the sidecar removes it from the load-balancing pool and periodically retries it later.
  • If a request's deadline has passed, the sidecar proactively marks the request as failed rather than adding yet more load with retries.
  • The sidecar captures the behavior above as metrics and distributed traces, which it sends to a centralized metrics system.
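The request flow above can be sketched as a simplified in-process model. This is only an illustration of the algorithm (latency-based instance selection, retries, error-based ejection, deadlines); the `Sidecar` and `Instance` names are assumptions, not any real proxy's API:

```python
import time


class Instance:
    """A backend instance: a callable plus the health bookkeeping a sidecar keeps."""

    def __init__(self, name, handler):
        self.name = name
        self.handler = handler          # callable simulating the backend
        self.latency = 0.0              # last observed latency (seconds)
        self.consecutive_errors = 0
        self.ejected_until = 0.0        # out of the pool until this monotonic time


class Sidecar:
    """Simplified sidecar request flow: route, retry, eject, respect deadlines."""

    MAX_RETRIES = 2
    EJECT_AFTER = 3                     # consecutive errors before ejection
    EJECT_SECONDS = 30.0                # how long an ejected instance stays out

    def __init__(self, instances):
        self.instances = instances

    def _pool(self):
        # Load-balancing pool: only instances that are not currently ejected.
        now = time.monotonic()
        return [i for i in self.instances if i.ejected_until <= now]

    def send(self, request, deadline=1.0):
        start = time.monotonic()
        for _attempt in range(self.MAX_RETRIES + 1):
            if time.monotonic() - start > deadline:
                # Past the deadline: fail fast instead of adding more load.
                raise TimeoutError("deadline exceeded")
            pool = self._pool()
            if not pool:
                raise RuntimeError("no healthy instances")
            # Pick the most responsive instance based on recent latency.
            inst = min(pool, key=lambda i: i.latency)
            t0 = time.monotonic()
            try:
                result = inst.handler(request)
            except Exception:
                # Record the failure; eject after repeated errors, then retry.
                inst.consecutive_errors += 1
                if inst.consecutive_errors >= self.EJECT_AFTER:
                    inst.ejected_until = time.monotonic() + self.EJECT_SECONDS
                continue
            inst.latency = time.monotonic() - t0
            inst.consecutive_errors = 0
            return result
        raise RuntimeError("all retries failed")
```

A real sidecar such as Envoy does this at the network level for arbitrary protocols and also emits the metrics and traces described above; the sketch only shows the control flow.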

Advantages of Service Mesh

  • Common components of a microservices architecture are provided out of the box.
  • Teams can use the best tools and programming languages for the job, without worrying about whether each platform has library and pattern support.

The Service Mesh revolution

  • Decoupling of microservices governance from business logic.

The service mesh takes most of the SDK's capabilities out of the application and splits them into a separate process deployed as a sidecar. By extracting service communication and the related management and control functions from the business program and sinking them into the infrastructure layer, the service mesh completely decouples them from the business system, letting developers focus on the business itself.

  • Unified management of heterogeneous systems.

As technologies evolve and teams turn over, applications and services written in different languages and frameworks often coexist within the same company. To govern these services uniformly, the previous practice was to develop a complete SDK for every language and framework, which is very expensive to maintain and poses a great challenge for the company's middleware team. With a service mesh, multi-language support becomes much easier: by sinking the main service-governance capabilities into the infrastructure, only a very lightweight SDK is needed, and in many cases no separate SDK at all, to achieve unified traffic control, monitoring, and other requirements across languages and protocols.

Three major advantages of a service mesh over traditional microservices frameworks

  • Observability.

The service mesh is a dedicated infrastructure layer through which all inter-service communication passes, so it occupies a unique position in the technology stack from which to provide uniform telemetry at the level of service invocations.

  • Traffic control.

The Service Mesh provides intelligent routing (blue-green deployment, canary releases, A/B testing), timeout and retry, circuit breaking, fault injection, and traffic mirroring for services.
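In Istio, for example, a canary release is expressed as a weighted route in a VirtualService. The sketch below (service name `reviews` and subsets `v1`/`v2` are illustrative, and the subsets would be defined in a companion DestinationRule) sends 90% of traffic to the stable version and 10% to the canary:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1      # stable version
      weight: 90
    - destination:
        host: reviews
        subset: v2      # canary version
      weight: 10
```

Shifting traffic is then just a matter of editing the weights; no application code or redeployment is involved.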

  • Security.

To some extent, a monolithic application is protected by its single address space. Once it is decomposed into multiple microservices, however, the network becomes a significant attack surface. More services mean more network traffic, which means more opportunities for attackers to intercept the flow of information. The service mesh provides the capability and infrastructure to secure network calls. Its security benefits fall into three core areas: authentication of services, encryption of inter-service communication, and enforcement of security-related policies.
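As a concrete example, Istio can enforce mutual TLS for all inter-service traffic with a single mesh-wide policy such as the following (applying it in the `istio-system` root namespace makes it mesh-wide; this is a sketch of one common configuration, not the only option):

```yaml
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system
spec:
  mtls:
    mode: STRICT   # sidecars accept only mutually authenticated TLS traffic
```

Because the sidecars terminate and originate the TLS connections, services get encrypted, mutually authenticated communication without any application changes.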

Istio

Istio overview

Istio is an open-source platform that provides behavioral insight into, and operational control over, an entire service mesh; it is a complete service-mesh solution. In sidecar mode, a companion container is injected alongside each service container; the sidecar takes over all traffic handling, achieving non-intrusive service governance.

Istio architecture diagram

Istio core components

The data plane

The data plane is composed of a set of service proxies (Envoy) deployed as sidecars. They mediate all network communication between microservices and, together with Mixer, enforce policy and collect telemetry.

Control plane

On the control plane, Pilot, Citadel, and Galley manage and configure the sidecar proxies. Pilot is mainly responsible for service discovery and service configuration (maintaining service-access rules), and distributes authorization policies and secure naming information to the proxies; its adapter mechanism can accommodate various service-discovery components (Eureka, Consul, Kubernetes), with Kubernetes supported best. Citadel handles key and certificate management. Galley validates configuration (the rules consumed by Pilot, Mixer, and Citadel).

Istio core capabilities

  • Secure inter-service communication through authentication and authorization.
  • A policy layer supporting access control, rate limits, and quotas.
  • Automatic load balancing for HTTP, gRPC, WebSocket, and TCP traffic.
  • Automatic metrics, logs, and traces for all traffic within a cluster, including cluster ingress and egress.
  • Control of inter-service communication through routing rules, failover, and fault injection.
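Fault injection, for instance, is configured declaratively on a route. The illustrative VirtualService below (the `ratings` service and `v1` subset are placeholders) adds a fixed 5-second delay to 10% of requests, letting teams test how downstream services behave under latency without touching any code:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ratings
spec:
  hosts:
  - ratings
  http:
  - fault:
      delay:
        percentage:
          value: 10.0   # inject into 10% of requests
        fixedDelay: 5s
    route:
    - destination:
        host: ratings
        subset: v1
```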