Today I'd like to share a hands-on look at Service Mesh, the next generation of microservices architecture. With microservices so prevalent today, the concept should be familiar to every internet technology practitioner. Most internet companies built their first-generation microservice systems on frameworks like Spring Cloud, so Java developers in particular should know the Spring Cloud ecosystem well!

This is not to say that other language stacks lack microservice frameworks; the Go world has Go-Micro, for example. But apart from Go-heavy companies like Toutiao (ByteDance), the server side of most internet companies is still a Java world! Spring Cloud therefore remains the first choice for most companies that have adopted, or plan to adopt, a microservices architecture. But would you think I was making it up if I said this system is on the verge of becoming obsolete? After all, the microservices we work with every day are still built on Spring Cloud!

Think of Spring Cloud Gateway, Zuul, Eureka, Consul, Nacos, Feign/Ribbon, Hystrix, Sentinel, Spring Cloud Config, Apollo… Are these excellent development frameworks and service components, which cover every aspect of service governance (service registration and discovery, rate limiting, circuit breaking, load balancing, service configuration, and so on), really becoming obsolete?

That's hard to accept, because these technologies are only just catching on! Yet measured against Service Mesh, the next generation of microservice architecture, most of these components are indeed becoming outdated. This is not to say the open-source components themselves are not technically excellent or worth studying further, but that the microservice architecture they were built for is already conceptually different from Service Mesh, a gap like the difference between the J-20 and the J-10. This may sound a bit sensational, but judging from the current trends and practice in microservices, it is the direction history is taking! Next I will analyze and demonstrate this at both the theoretical and the practical level.

Why enter the Service Mesh era

Is it really an exaggeration to say that the microservice system represented by Spring Cloud lags behind Service Mesh? To see why it isn't, let's first review the process of building a microservice system with Spring Cloud!

To build a microservice system, we first need to deploy an independent component service for service registration/discovery; the mainstream options today include Eureka, Consul, and Nacos. With the registry in place, we write a Java microservice. To register the service with the registry, we generally introduce the SDK that Spring Cloud provides for the corresponding registry and add the @EnableDiscoveryClient annotation to the application entry class. The logic in the SDK then performs the registration when the application starts and exposes the health-check endpoints the registry probes. This establishes the connection between the microservice and the registry, and in the same way we can register a whole set of microservices!
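To make this SDK coupling concrete, here is a minimal sketch of what such a registration configuration typically looks like, assuming a Eureka registry; the service name and registry URL are illustrative, not from any real project:

```yaml
# application.yml of a Spring Cloud service -- illustrative values only
spring:
  application:
    name: micro-order            # the name under which this service registers
eureka:
  client:
    service-url:
      # address of the separately deployed registry (hypothetical host)
      defaultZone: http://eureka-server:8761/eureka/
```

Every service in the system carries a variant of this configuration, plus the registry SDK dependency in its build file.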

And what about calls between services? Microservices are invoked through FeignClient interfaces. Under the hood, the Ribbon component integrated with Feign fetches the target service's address list from the registry, and Ribbon then makes load-balanced calls against that list. Whether the connection between a service and the registry stays valid depends on the collaboration mechanism between the registry and its SDK.

Going further, calls between services need not only load balancing but also circuit breaking and rate limiting. At the microservice entry point, this can be achieved by deploying a service gateway component (such as Zuul or Spring Cloud Gateway); between internal services, it is achieved by integrating components like Hystrix or Sentinel. The rules themselves are configured either locally in each client or remotely in a configuration center.
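As a hedged sketch of what such a gateway-level rule looks like (the route id, path, and fallback URI are illustrative), a Spring Cloud Gateway route wrapped in a Hystrix circuit breaker might be configured as:

```yaml
# application.yml of the gateway service -- illustrative route definition
spring:
  cloud:
    gateway:
      routes:
      - id: order-route                    # hypothetical route id
        uri: lb://micro-order              # resolved via the registry, load balanced
        predicates:
        - Path=/order/**
        filters:
        - name: Hystrix                    # wraps the route in a circuit breaker
          args:
            name: orderFallback
            fallbackUri: forward:/fallback # hypothetical local fallback endpoint
```

Note that even this one rule pulls the gateway into the Hystrix dependency chain.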

The above is roughly the general process of building a microservice system with Spring Cloud. If we look at this process carefully, we find that most of the service-governance logic is coupled into each microservice application in the form of SDKs: an SDK for service registration, an SDK for service invocation, and yet another SDK for circuit breaking and rate limiting. On top of that, to keep the whole system running we must maintain additional base services such as the service registry and the service gateway. What disadvantages does such a structure lead to? Specifically:

1. Too many frameworks/SDKs make subsequent upgrades and maintenance difficult

In this system, service-governance logic is embedded in microservices as SDK code dependencies. If we want to upgrade the registry SDK, or the version of circuit-breaking/rate-limiting components such as Hystrix or Sentinel, the number of microservices to upgrade could run into the hundreds or thousands. And because these components are bound to business applications, we must be cautious about business stability throughout the upgrade process, so you can imagine how hard upgrading the SDKs becomes!

2. High maintenance cost of multi-language microservice SDKs

If the system must also support microservices written in Go, Python, or other languages, do we have to maintain a separate set of governance SDKs for each language? Multi-language support thus becomes a real problem in this architecture!

3. Service governance policies are difficult to control uniformly

In a system built this way, the management of governance policies such as circuit breaking, rate limiting and load balancing is rather decentralized. Some developers put them in local configuration files, some hardcode them in the business logic, and some push them to a remote configuration center. In short, each governance policy is controlled by its own developers, making a unified control system hard to achieve!
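To make the decentralization concrete, here is an illustrative fragment (all values are hypothetical) of the kind of per-service tuning that ends up scattered across individual teams' application.yml files:

```yaml
# application.yml of one particular service -- settings chosen by that team alone
hystrix:
  command:
    default:
      execution:
        isolation:
          thread:
            timeoutInMilliseconds: 3000   # this team's own circuit-breaker timeout
ribbon:
  MaxAutoRetries: 1                       # retry policy decided locally
  ReadTimeout: 2000                       # per-call read timeout, also local
```

Another team may hardcode different values, and nothing in this architecture forces the two to agree or makes either visible centrally.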

4. Service governance logic is embedded in business applications and occupies business service resources

In this microservice system, the governance logic lives inside the microservice application process itself, occupying precious business server resources and affecting application performance!

5. Maintenance costs of additional service governance components

Whether it is the service registry or the service gateway, these governance components outside the microservice applications themselves must be maintained as middleware base services, which costs extra manpower and extra servers!

These are the disadvantages of the traditional microservice system represented by Spring Cloud. Now, what if I told you that under a Service Mesh none of these problems remain, and developers do not even need to think about them? We just write a plain Spring Boot service, with no registration SDK and no circuit-breaking or rate-limiting SDK, and still get most of the governance capabilities the Spring Cloud system used to provide. Would you believe it? Doesn't it sound just like writing a monolithic app?

No matter what you think, that is exactly what Service Mesh does! The goal of Service Mesh is to sink microservice governance into business-neutral infrastructure. From this point of view, if we don't study Service Mesh carefully, we will perceive less and less of what is going on: Spring Cloud at least lets us feel the presence of microservice governance, whereas in a Service Mesh the governance system becomes part of the infrastructure, ever more transparent to ordinary developers!

What is the Service Mesh solution

As mentioned earlier, the goal of Service Mesh is to sink the microservice governance system into business-neutral infrastructure. How should we understand this? Service Mesh governance technology was not born out of thin air: it is the natural iteration of technology in a context where container orchestration, represented by Kubernetes, has become the mainstream runtime environment for software, while the drawbacks of the traditional microservice stack represented by Spring Cloud have gradually surfaced. In short, everything was ready; only the east wind was missing!

So it is no surprise that most existing Service Mesh solutions, such as Istio, are deeply integrated with Kubernetes! Let's look at how the core logic of microservice governance is implemented in a Service Mesh, taking Istio + Envoy as the example.

To understand how governance logic is implemented in a Service Mesh, we have to look at the following diagram, a classic picture of the service mesh concept that is hard to grasp at first:

If you have encountered the Service Mesh concept before, this diagram is probably familiar. Each green square represents a normally deployed microservice, and each blue square represents a network proxy, commonly known as a Sidecar. In the Service Mesh architecture, every deployed microservice gets a corresponding proxy service; all interactions with the microservice itself go through its Sidecar, and the Sidecars form a mesh-like interaction fabric among themselves, which is exactly where the name "service mesh" comes from!

In a Service Mesh, when we deploy a service to Kubernetes, the Service Mesh component installed in Kubernetes (such as Istio) automatically starts a corresponding proxy process (such as istio-proxy) in the same Pod as the microservice. This nanny-style proxy process takes over, on behalf of the microservice, the governance functions the microservice itself had to perform in the Spring Cloud system: service registration, load balancing, circuit breaking, rate limiting, and so on. Moreover, these proxy processes do not work alone; they connect to the Service Mesh control components through protocols such as xDS.

This leads to two key concepts in the Service Mesh architecture: the control plane and the data plane. The Sidecars shown earlier (e.g. istio-proxy) form the data plane, where the governance logic is actually executed, while the control plane is the central control component of the Service Mesh (e.g. the Pilot component in Istio). Through the xDS protocol (subdivided into LDS, CDS, and so on), the control plane delivers all kinds of governance rules to the data plane: rate-limiting rules, routing rules, service node updates, and more.

This is the core design logic of the Service Mesh: a Sidecar proxies the governance logic for each microservice (the data plane), the control plane senses changes in the outside environment, and the xDS protocol enables centralized management and distribution of governance policies and rules. Both planes are integrated into the infrastructure environment, such as Kubernetes. To develop an ordinary microservice, all the developer has to do is deploy the application into the K8S cluster in the normal orchestrated way; everything related to governance is handled by the proxy data plane in collaboration with the control plane.

Here we use the architecture diagram of Istio, Service Mesh’s most famous open source solution, to illustrate the above logic:

Service registration and discovery can be implemented directly on top of Kubernetes' built-in discovery mechanism, by listening for changes to Kubernetes Pods, as shown in the following diagram:

The logic related to microservice governance, taking Istio as an example, is roughly as follows:

Administrators configure governance rules through Pilot, which delivers them to the Envoy proxies via the xDS protocol; the Envoys obtain these rules from Pilot and execute rate limiting, routing, and other governance logic according to them.
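As a concrete sketch of such a centrally delivered rule (the rule name and thresholds are illustrative, not from any real deployment), an Istio DestinationRule can declare circuit-breaking limits that Pilot then pushes to every Envoy over xDS:

```yaml
# Illustrative DestinationRule: circuit breaking for a hypothetical micro-order service
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: micro-order-circuit-breaker
spec:
  host: micro-order
  trafficPolicy:
    connectionPool:
      http:
        http1MaxPendingRequests: 100   # cap on requests queued to the service
    outlierDetection:
      consecutive5xxErrors: 5          # eject an instance after 5 consecutive 5xx
      interval: 10s                    # scan interval for ejection analysis
      baseEjectionTime: 30s            # how long an ejected instance stays out
```

The application code never sees this rule; applying it with kubectl is enough for every Sidecar to start enforcing it.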

Hands-on with the Istio + Envoy Service Mesh architecture

We have now introduced the core concepts and flow of the Service Mesh microservice architecture at the conceptual level. If you have already played with a Service Mesh, this is easy to follow; without hands-on practice, though, the concepts above may be hard to digest, especially if you lack a basic understanding of Kubernetes. For example, you may be struggling to picture questions like "how on earth do I deploy that so-called Sidecar proxy?" or "how do I develop services under a Service Mesh?"

If I stopped here, this article would be like most other introductions to Service Mesh: either a rehash of principles and concepts, or at best an example, and usually just a rerun of Istio's official demo!

For students who have developed Spring Cloud microservice applications, that is not very helpful! So for the rest of this hands-on practice I will stay as close as possible to a real development scenario, and introduce, from the perspective of a developer who has used the Spring Cloud framework, how to develop Service Mesh microservice applications with a popular Java framework (Spring Boot).

The specific process and steps are as follows:

01 Preparing the K8S environment and installing Istio

To implement the Service Mesh microservice architecture, the basic prerequisite is a fully functional Kubernetes environment. The K8S environment I use here is a Linux virtual machine on my development laptop, running a single-node Kubernetes cluster with only a Master node. In addition, because Istio has Kubernetes version requirements, the K8S version used here is v1.18.6.

Assuming your Kubernetes environment is ready, let's start installing Istio. The chosen version is Istio 1.8.4.

1) Download the Istio distribution package

Because the official download script is slow, you can find the corresponding Istio release directly on GitHub and download it with wget to a directory on a host that can reach the K8S cluster:

wget https://github.com/istio/istio/releases/download/1.8.4/istio-1.8.4-linux-amd64.tar.gz

After the download is successful, decompress the installation package:

tar -zxvf istio-1.8.4-linux-amd64.tar.gz

Go to the decompressed installation package directory:

cd istio-1.8.4/

2) Add the istioctl client to the system executable path

The istioctl command is required for the Istio installation. Add it to the system executable path as follows:

export PATH=$PWD/bin:$PATH

3) Run the Istio installation command

Execute the installation with the istioctl command as follows:

istioctl install --set profile=demo

Here "--set profile=demo" means installing an Istio demo environment! The following information is displayed after a successful installation:

Detected that your cluster does not support third party JWT authentication. Falling back to less secure first party JWT. See https://istio.io/v1.8/docs/ops/best-practices/security/#configure-third-party-service-account-tokens for details.
This will install the Istio 1.8.4 Demo profile with ["Istio core" "Istiod" "Ingress gateways" "Egress gateways"] components into the cluster. Proceed? (y/N) y
✔ Istio core installed
✔ Istiod installed
✔ Ingress gateways installed
✔ Egress gateways installed
✔ Installation complete

Once the installation succeeds, you can run kubectl to check that the Istio-related components are running in Kubernetes:

kubectl get svc -n istio-system
NAME                   TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                                                                       AGE
istio-egressgateway    ClusterIP      10.101.134.226   <none>        80/TCP,443/TCP,15443/TCP                                                      8m12s
istio-ingressgateway   LoadBalancer   10.96.167.106    <pending>     15021:31076/TCP,80:31032/TCP,443:31438/TCP,31400:32751/TCP,15443:31411/TCP   8m11s
istiod                 ClusterIP      10.102.112.111   <none>        15010/TCP,15012/TCP,443/TCP,15014/TCP

The core components of Istio (istiod, istio-ingressgateway, and istio-egressgateway) are now running in the Kubernetes cluster as Service resources.

4) Enable automatic Envoy Sidecar injection in the K8S default namespace

This is a key step. If our microservice applications are going to be deployed in the default namespace of K8S, we need to enable automatic Sidecar injection for that namespace. This is the key setting, mentioned earlier, that makes K8S automatically start a proxy process in the same Pod every time we start a microservice application!

Specific commands are as follows:

$ kubectl label namespace default istio-injection=enabled
namespace/default labeled

5) Deploy Istio's observability add-ons

Kiali is a management console for the Istio service mesh; it provides data dashboards and observability features, and also lets us manipulate the mesh configuration. Use the following command to quickly deploy a Kiali for demonstration purposes:

$ kubectl apply -f samples/addons
serviceaccount/grafana created
configmap/grafana created
service/grafana created
deployment.apps/grafana created
configmap/istio-grafana-dashboards created
configmap/istio-services-grafana-dashboards created
deployment.apps/jaeger created
service/tracing created
service/zipkin created
service/jaeger-collector created
Warning: apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
customresourcedefinition.apiextensions.k8s.io/monitoringdashboards.monitoring.kiali.io created
serviceaccount/kiali created
configmap/kiali created
clusterrole.rbac.authorization.k8s.io/kiali-viewer created
clusterrole.rbac.authorization.k8s.io/kiali created
clusterrolebinding.rbac.authorization.k8s.io/kiali created
role.rbac.authorization.k8s.io/kiali-controlplane created
rolebinding.rbac.authorization.k8s.io/kiali-controlplane created
service/kiali created
deployment.apps/kiali created
serviceaccount/prometheus created
configmap/prometheus created
clusterrole.rbac.authorization.k8s.io/prometheus created
clusterrolebinding.rbac.authorization.k8s.io/prometheus created
service/prometheus created
deployment.apps/prometheus created
...

This installs and deploys Prometheus, Grafana, Zipkin, and other metric and trace collection services. Many components are installed and they consume resources, so if cluster resources are tight the startup may be slow. Once deployment succeeds, check the Pod status with the following command:

# kubectl get pod -n istio-system -o wide
NAME                                    READY   STATUS    RESTARTS   AGE   IP           NODE         NOMINATED NODE   READINESS GATES
grafana-94f5bf75b-4mkcn                 1/1     Running   1          30h   10.32.0.11   kubernetes   <none>           <none>
istio-egressgateway-7f79bc776-w6rqn     1/1     Running   3          30h   10.32.0.3    kubernetes   <none>           <none>
istio-ingressgateway-74ccb8977c-gnhbb   1/1     Running   2          30h   10.32.0.8    kubernetes   <none>           <none>
istiod-5d4dbbb8fc-lhgsj                 1/1     Running   2          30h   10.32.0.5    kubernetes   <none>           <none>
jaeger-5c7675974-4ch8v                  1/1     Running   3          30h   10.32.0.13   kubernetes   <none>           <none>
kiali-667b888c56-8xm6r                  1/1     Running   3          30h   10.32.0.6    kubernetes   <none>           <none>
prometheus-7d76687994-bhsmj             2/2     Running   7          30h   10.32.0.14   kubernetes   <none>           <none>

Because automatic Sidecar injection was not enabled in the istio-system namespace when we installed Istio (its label is istio-injection=disabled), and because we want to access the control panels of Kiali, Prometheus, Grafana, and Tracing from outside the K8S cluster (together they form the observability system of the Service Mesh), we can expose their ports via NodePort.

Expose Kiali via NodePort

Export the deployed Kiali Service file to a directory on the host, for example:

kubectl get svc -n istio-system kiali -o yaml > kiali-nodeport.yaml

Then edit the exported file: under metadata, delete the annotations, resourceVersion, selfLink, and uid fields; under spec, change the value of type from ClusterIP to NodePort and specify a nodePort port; finally, delete the status field. Details as follows:

spec:
  ...
  ports:
  - name: http
    nodePort: 31001
  ...
  type: NodePort

After editing, run the following command:

kubectl apply -f kiali-nodeport.yaml

To view the service port, run the following command:

kubectl get svc -n istio-system kiali
NAME    TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)                          AGE
kiali   NodePort   10.100.214.196   <none>        20001:31001/TCP,9090:30995/TCP   41h

Now you can access the Kiali control panel from outside the K8S cluster via the node IP plus port 31001, as shown below:

As with Kiali, we can export and modify the Service files of the already deployed Prometheus, Grafana, Tracing, and Zipkin services, set NodePort ports, and thus reach their observability interfaces from outside the K8S cluster. For example:

#Prometheus
kubectl get svc -n istio-system prometheus -o yaml > prometheus-nodeport.yaml
kubectl apply -f prometheus-nodeport.yaml

#Grafana
kubectl get svc -n istio-system grafana -o yaml > grafana-nodeport.yaml
kubectl apply -f grafana-nodeport.yaml

#Jaeger
kubectl get svc -n istio-system tracing -o yaml > tracing-nodeport.yaml
kubectl apply -f tracing-nodeport.yaml
...

(Edit each exported file to type NodePort, as we did for Kiali, before applying it.)

The Grafana console, once exposed, looks like this:

02 Spring Boot microservice development

Having completed the previous steps, we have built the infrastructure environment for an Istio-based Service Mesh microservice system. Compared with our previous development experience on the Spring Cloud framework, how should we now develop microservice applications under the Istio system?

Next, we will demonstrate how to develop a Service Mesh microservice application based on Istio through a practical example. The service invocation link is as follows:

The links shown above are described as follows:

1) To fully demonstrate microservice development under the Service Mesh, we define three microservices. The micro-api service is an API service for external client access and is exposed over HTTP;

2) micro-api and micro-order make internal service calls based on the microservice registration/discovery mechanism, using HTTP;

3) micro-order and micro-pay likewise make internal calls based on the registration/discovery mechanism. To cover more development scenarios, these two microservices communicate over gRPC.
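One Istio-specific detail worth flagging for the gRPC leg (the service name and port here are illustrative, not from the original project): Istio's protocol detection relies on the Service port name prefix, so a gRPC port should be declared with a grpc-style name:

```yaml
# Illustrative K8S Service for a hypothetical micro-pay gRPC endpoint
apiVersion: v1
kind: Service
metadata:
  name: micro-pay
spec:
  type: ClusterIP
  ports:
  - name: grpc            # the "grpc" name prefix tells Istio to treat traffic as gRPC
    port: 18888
    targetPort: 18888
  selector:
    app: micro-pay
```

Without the prefix, the Sidecar may fall back to treating the traffic as plain TCP and lose protocol-aware routing.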

With the application architecture planned, we can get into the detailed development! At the code level there is no microservice framework to introduce at all: you just build a few plain Spring Boot applications, with no service-governance components whatsoever. They are simple, pure Spring Boot apps that do not connect to a registry and do not pull in components like OpenFeign, Hystrix, or Sentinel.

The specific code structure is shown in the figure below:

You can see that there is no service discovery annotation in the application entry class! Now for the highlights:

First, in Spring Cloud-based microservice calls over HTTP, we generally introduce OpenFeign: the service provider publishes a FeignClient interface definition that callers import directly, and at runtime the Ribbon component integrated with OpenFeign fetches the target service's address list from the registry and performs load-balanced calls.

In the Service Mesh architecture, however, load balancing and service discovery are already handled by the Sidecar in Istio, so we cannot introduce OpenFeign as before! What to do? To keep the earlier programming style and the simplicity of the service communication code, we need a custom framework similar to OpenFeign. It can be derived from the OpenFeign source code, with the governance logic (service load balancing, circuit breaking, rate limiting, and so on) stripped out, leaving a framework that simply makes HTTP service calls.

There is currently no official adapter framework of this kind, so some companies implementing the Service Mesh architecture adapt and wrap the Spring Cloud microservice stack themselves for a compatible migration. Here I found a personal modification on GitHub and adapted it further; it tests OK! Its capabilities are as follows:

1. Fast calls between services under the Istio service mesh (the experience is similar to the original Spring Cloud Feign);

2. Multi-environment configuration: for example, the call address of a microservice can be configured as local in the local environment, while in other environments the service inside the Kubernetes cluster is used by default;

3. Link tracing: the following headers are propagated transparently by default, automatically supporting Jaeger, Zipkin, and other tracing systems:

`"x-request-id", "x-b3-traceid", "x-b3-spanid", "x-b3-sampled", "x-b3-flags", "x-b3-parentspanid","x-ot-span-context", "x-datadog-trace-id", "x-datadog-parent-id", "x-datadog-sampled", "end-user", "user-agent"`

The final actual programming style looks like this:

@FakeClient(name = "micro-order")
@RequestMapping("/order")
public interface OrderServiceClient {

    /**
     * Create an order
     */
    @PostMapping("/create")
    ResponseResult<CreateOrderBO> create(@RequestBody CreateOrderDTO createOrderDTO);
}

This is the call interface that the micro-order microservice provides to micro-api; the micro-api service imports it and calls it directly. In programming style it is very similar to the previous Spring Cloud development model, except that so far you have not seen any logic related to service registration and discovery!

Second, the core governance logic is handled by Istio and the Sidecar proxy. Once application development is done, we only need to write K8S deployment files and deploy the services into the Kubernetes cluster with the Istio environment installed. Deploying a Java service to the K8S cluster involves the usual CI/CD flow of "package Docker image -> publish to image repository -> K8S deployment pulls the image".

Let's focus on the K8S release files of micro-api and micro-order to see what is special about them:

Micro-order service K8S distribution file (micro-order.yaml):

apiVersion: v1
kind: Service
metadata:
  name: micro-order
  labels:
    app: micro-order
    service: micro-order
spec:
  type: ClusterIP
  ports:
  - name: http
    port: 80
    targetPort: 9091
  selector:
    app: micro-order
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: micro-order-v1
  labels:
    app: micro-order
    version: v1
spec:
  replicas: 2
  selector:
    matchLabels:
      app: micro-order
      version: v1
  template:
    metadata:
      labels:
        app: micro-order
        version: v1
    spec:
      containers:
      - name: micro-order
        image: 10.211.55.2:8080/micro-service/micro-order:1.0-SNAPSHOT
        imagePullPolicy: Always
        tty: true
        ports:
        - name: http
          protocol: TCP
          containerPort: 19091

As shown above, this is the K8S deployment file for the micro-order service; it just defines the application's normal Service resource and Deployment orchestration resource. To demonstrate load-balanced calls to the service, I deliberately deployed the application with two replicas!
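Since the Deployment labels its Pods with version: v1, here is a hedged sketch of how those replicas could later be addressed as a named subset (the rule name is illustrative, and none of this is required for plain round-robin load balancing, which Envoy performs across endpoints by default):

```yaml
# Illustrative DestinationRule defining a version subset for micro-order
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: micro-order-versions
spec:
  host: micro-order
  subsets:
  - name: v1
    labels:
      version: v1             # matches the Deployment's Pod labels
  trafficPolicy:
    loadBalancer:
      simple: ROUND_ROBIN     # explicit LB policy across the two replicas
```

Subsets like this are what make canary or v1/v2 traffic splitting possible later, without touching the application.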

Next, look at the K8S distribution file (micro-api.yaml) of the caller micro-API service:

apiVersion: v1
kind: Service
metadata:
  name: micro-api
spec:
  type: ClusterIP
  ports:
  - name: http
    port: 19090
    targetPort: 9090
  selector:
    app: micro-api
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: micro-api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: micro-api
  template:
    metadata:
      labels:
        app: micro-api
    spec:
      containers:
      - name: micro-api
        image: 10.211.55.2:8080/micro-service/micro-api:1.0-SNAPSHOT
        imagePullPolicy: Always
        tty: true
        ports:
        - name: http
          protocol: TCP
          containerPort: 19090

As with micro-order, this only defines the application's normal K8S release resources; there is no indication anywhere of how micro-api calls the micro-order service. Next we publish the services to the K8S cluster with these files (note: into the default namespace with Sidecar auto-injection enabled)!

After the deployment is successful, check the Pods information as follows:

# kubectl get pods 
NAME                                      READY   STATUS    RESTARTS   AGE
micro-api-6455654996-57t4z                2/2     Running   4          28h
micro-order-v1-84ddc57444-dng2k           2/2     Running   3          23h
micro-order-v1-84ddc57444-zpmjl           2/2     Running   4          28h

As shown above, one micro-api Pod and two micro-order Pods are up and running! Notice, though, that the READY field of each Pod shows 2/2, meaning two containers were started in each Pod: one is the microservice application itself, the other is the automatically injected Sidecar proxy process. To understand this, let's look at the Pod description with the following command:

# kubectl describe pod micro-api-6455654996-57t4z
Name:         micro-api-6455654996-57t4z
...
IPs:
  IP:  10.32.0.10
Controlled By:  ReplicaSet/micro-api-6455654996
Init Containers:
  istio-init:
    Container ID:  docker://eb0298bc8456f5f1336dfe2e8baab6035fccce898955469353da445aceab15cb
    Image:         docker.io/istio/proxyv2:1.8.4
    Image ID:      docker-pullable://istio/proxyv2@sha256:6a4ac67c1a74f95d3b307a77ad87e3abb4fcd64ddffe707f99a4458f39d9ce85
    ...
Containers:
  micro-api:
    Container ID:  docker://ebb45c5fa826f78c354877fc0a4c07d6b2fae4c6304e15729268b1cc6a69abca
    Image:         10.211.55.2:8080/micro-service/micro-api:1.0-SNAPSHOT
    Image ID:      docker-pullable://10.211.55.2:8080/micro-service/micro-api@sha256:f303016a604f30b99df738cbb61f89ffc166ba96d59785172c7b769c1c75a18d
  istio-proxy:
    Container ID:  docker://bba9dc648b9e1a058e9c14b0635e0872079ed3fe7d55e34ac90ae03c5e5f3a66
    Image:         docker.io/istio/proxyv2:1.8.4
    Image ID:      docker-pullable://istio/proxyv2@sha256:6a4ac67c1a74f95d3b307a77ad87e3abb4fcd64ddffe707f99a4458f39d9ce85
(remaining output omitted)

You can see that in a namespace with automatic Sidecar injection enabled, every Pod that starts is initialized by an istio-init Init Container, and a corresponding istio-proxy (Envoy) Sidecar container is started alongside the application container. By now you should have a concrete sense of what a Sidecar really is!

03 Deploying the Istio microservice gateway

In the previous steps we developed the microservice applications and deployed them to the K8S cluster, and the Sidecar proxies started normally. But how do we access the services?

Generally speaking, to access services inside a Kubernetes cluster from outside, you can expose ports through NodePort mapping or an Ingress. Istio, however, adopts a new model, the Istio Gateway, to replace the Ingress resource type in Kubernetes. In an Istio service mesh, all external traffic should enter through the Gateway and be forwarded by the Gateway to the corresponding internal microservices!

Based on unified control-plane configuration, Istio can centrally manage the Gateway's traffic access rules and thereby control external traffic. When we deployed Istio earlier, an "istio-ingressgateway" ingress traffic gateway was already running in the K8S cluster as part of the Istio architecture, as shown below:

# kubectl get svc -n istio-system | grep istio-ingressgateway
istio-ingressgateway   LoadBalancer   10.100.69.24   <pending>   15021:31158/TCP,80:32277/TCP,443:30508/TCP,31400:30905/TCP,15443:30595/TCP   46h

Next we need to set up the logic for accessing the micro-api microservice through this gateway, by writing a gateway deployment file (micro-gateway.yaml):

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: micro-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: micro-gateway
spec:
  hosts:
    - "*"
  gateways:
    - micro-gateway
  http:
    - match:
        - uri:
            exact: /api/order/create
      route:
        - destination:
            host: micro-api
            port:
              number: 19090

As shown above, the deployment file defines a routing match rule so that all requests to the /api/order/create URI are forwarded to port 19090 of the micro-api service!

With the gateway's routing and forwarding rules configured, we can now try to access the microservice interface through istio-ingressgateway. The full call chain is: "external call -> istio-ingressgateway -> micro-api -> micro-order".

However, since istio-ingressgateway is itself a Pod inside the K8S cluster, we configure a NodePort mapping for it for the time being. You can do so with the following commands:

export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')
export SECURE_INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="https")].nodePort}')
export INGRESS_HOST=127.0.0.1
export GATEWAY_URL=$INGRESS_HOST:$INGRESS_PORT

The commands above look up the HTTP/HTTPS NodePort ports of istio-ingressgateway and export them as environment variables. After running them, check the NodePort mapping:

# kubectl get svc -n istio-system | grep istio-ingressgateway
istio-ingressgateway   LoadBalancer   10.100.69.24   <pending>   15021:31158/TCP,80:32277/TCP,443:30508/TCP,31400:30905/TCP,15443:30595/TCP   46h

You can see that istio-ingressgateway can be accessed through HTTP port 32277 and HTTPS port 30508. The URL format is http://{K8S cluster IP}:32277/{interface URI}. The access result looks like this:

From the call result, we can see that the Istio-based Service Mesh microservice system is running successfully. Judging by the programming experience alone, you can hardly tell that a microservice framework is involved at all! You may still be confused, though: how are the services actually called? How is service discovery done? How does load balancing work? Although these questions no longer require your attention as a developer, they may still puzzle you. Next, let's get a simple sense of the call logic by examining the logs along the link!

04 Mechanism Analysis of Link Call Logs

After the Postman call returns a result, let's look at the logs of the services along the link. The istio-ingressgateway container log is as follows:

# kubectl logs istio-ingressgateway-74ccb8977c-gnhbb -n istio-system
...
2021-03-18T08:02:30.863243Z  info  xdsproxy  Envoy ADS stream established
2021-03-18T08:02:30.865335Z  info  xdsproxy  connecting to upstream XDS server: ...
[2021-03-18T08:14:00.224Z] "POST /api/order/create HTTP/1.1" 200 - "-" 66 75 7551 6144 "10.32.0.1" "PostmanRuntime/7.26.8" "..." "10.211.55.12:32277" "10.32.0.10:9090" outbound|19090||micro-api.default.svc.cluster.local 10.32.0.8:57460 10.32.0.8:8080 10.32.0.1:33229
[2021-03-18T08:14:32.465Z] "POST /api/order/create HTTP/1.1" 200 - "-" 66 75 3608 3599 "10.32.0.1" "PostmanRuntime/7.26.8" "ccf56049-88e8-9170-a1f5-93affbf6e098" "10.211.55.12:32277" "10.32.0.10:9090" outbound|19090||micro-api.default.svc.cluster.local 10.32.0.8:57460 10.32.0.8:8080 10.32.0.1:33229
[2021-03-18T08:16:37.242Z] "POST /api/order/create HTTP/1.1" 200 - "-" 66 75 68 67 "10.32.0.1" "PostmanRuntime/7.26.8" "98ecbd52-91c6-a097-9ce6-d8f6094560e0" "10.211.55.12:32277" "10.32.0.10:9090" outbound|19090||micro-api.default.svc.cluster.local 10.32.0.8:57460 10.32.0.8:8080 10.32.0.1:33229

As shown above, the istio-ingressgateway log confirms that access to the /api/order/create interface was indeed forwarded to the Pod IP of micro-api (10.32.0.10), which matches the gateway routing rules configured earlier.

Next let's look at the log of micro-api's istio-proxy:

# kubectl logs micro-api-6455654996-57t4z istio-proxy
...
[2021-03-18T08:41:10.750Z] "POST /order/create HTTP/1.1" 200 - "-" 49 75 19 18 "-" "PostmanRuntime/7.26.8" "886390ea-e881-9c45-b859-1e0fc4733680" "micro-order" "10.32.0.7:9091" outbound|80||micro-order.default.svc.cluster.local 10.32.0.10:54552 10.99.132.246:80 10.32.0.10:39452 - default
[2021-03-18T08:41:10.695Z] "POST /api/order/create HTTP/1.1" 200 - "-" 66 75 104 103 "10.32.0.1" "PostmanRuntime/7.26.8" "886390ea-e881-9c45-b859-1e0fc4733680" "10.211.55.12:32277" "127.0.0.1:9090" inbound|9090|| 127.0.0.1:52782 10.32.0.10:9090 10.32.0.1:0 outbound_.19090_._.micro-api.default.svc.cluster.local default
...
[2021-03-18T08:47:22.215Z] "POST /order/create HTTP/1.1" 200 - "-" 49 75 78 70 "-" "PostmanRuntime/7.26.8" "bbd3a3c9-c486-943f-999a-bc9a1dc02c35" "micro-order" "10.32.0.9:9091" outbound|80||micro-order.default.svc.cluster.local 10.32.0.10:54326 10.99.132.246:80 10.32.0.10:44338 - default
[2021-03-18T08:47:22.173Z] "POST /api/order/create HTTP/1.1" 200 - "-" 66 75 ... "10.32.0.1" "PostmanRuntime/7.26.8" "..." "10.211.55.12:32277" "127.0.0.1:9090" inbound|9090|| 127.0.0.1:57672 10.32.0.10:9090 10.32.0.1:0 outbound_.19090_._.micro-api.default.svc.cluster.local default

We called the interface twice here, and the logs show that micro-api's Sidecar proxy called two different instances of the micro-order service (10.32.0.7 and 10.32.0.9) in a load-balanced manner!
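As a side note, this load-balancing behavior can be spotted mechanically by extracting the upstream address field from the access-log lines. A minimal shell sketch (the sample line below is abridged from the micro-api Sidecar log above, so the exact field layout is an assumption based on Envoy's default access-log format):

```shell
# A sample Envoy access-log line, abridged from the micro-api Sidecar
# log above (request id and byte counts omitted for brevity).
line='[2021-03-18T08:41:10.750Z] "POST /order/create HTTP/1.1" 200 "PostmanRuntime/7.26.8" "micro-order" "10.32.0.7:9091" outbound|80||micro-order.default.svc.cluster.local 10.32.0.10:54552'

# The quoted host:port immediately before the outbound|... cluster name
# is the upstream endpoint Envoy picked, i.e. the concrete micro-order
# instance this request was load-balanced to.
upstream=$(echo "$line" | grep -oE '"[0-9.]+:[0-9]+" outbound' | grep -oE '[0-9.]+:[0-9]+')
echo "$upstream"   # 10.32.0.7:9091
```

Running this over both log entries above yields 10.32.0.7:9091 and 10.32.0.9:9091, which is exactly the two micro-order instances.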

Finally, the istio-proxy log of micro-order:

# kubectl logs micro-order-v1-84ddc57444-dng2k istio-proxy
...
2021-03-18T08:33:06.146178Z  info  xdsproxy  Envoy ADS stream established
2021-03-18T08:33:06.146458Z  info  xdsproxy  connecting to upstream XDS server: ...
[2021-03-18T08:34:59.055Z] "POST /order/create HTTP/1.1" 200 - "-" 49 75 8621 6923 "-" "PostmanRuntime/7.26.8" "b1685670-a915-9e54-9970-5c5dd18debc8" "micro-order" "127.0.0.1:9091" inbound|9091|| 127.0.0.1:36420 10.32.0.7:9091 10.32.0.10:54552 outbound_.80_._.micro-order.default.svc.cluster.local default
[2021-03-18T08:41:10.751Z] "POST /order/create HTTP/1.1" 200 - "-" 49 75 17 16 "-" "PostmanRuntime/7.26.8" "886390ea-e881-9c45-b859-1e0fc4733680" "micro-order" "127.0.0.1:9091" inbound|9091|| 127.0.0.1:41398 10.32.0.7:9091 10.32.0.10:54552 outbound_.80_._.micro-order.default.svc.cluster.local default
...

As you can see, the request was forwarded to the concrete micro-order instance through micro-order's own istio-proxy!

Through the analysis of the above logs, even without going into the detailed principles, at least one conclusion can be drawn: in Istio's Service Mesh microservice architecture, service forwarding and routing logic are indeed handled by the Sidecar proxies. The logs also show that the Envoy proxies maintain a persistent connection to the control-plane service Istiod and receive service-governance rule updates via the xDS protocol!

Afterword

Based on the general principles of Service Mesh, this article has demonstrated, through a practical development case, how to build a microservice system on a Service Mesh architecture. You should now be able to get started with Service Mesh. It also helps fill the gap of hands-on Service Mesh articles available online.

That said, while Service Mesh greatly reduces the cost of developing microservice applications, it also sinks the microservice governance system into the infrastructure, which in turn raises the bar for DevOps engineers! After all, to use a Service Mesh architecture well, you need not only development skills, but also a deep understanding of the Service Mesh architecture and the source code of its frameworks, as well as solid familiarity with the Kubernetes infrastructure!

In short, Service Mesh is advanced, but introducing it into production is risky when the team lacks the necessary skills and knowledge. So this is just the beginning. In the future, I will continue to share the practice and principles of Service Mesh and Istio. If you are interested, please stay tuned!

One last thing

Welcome to follow my WeChat official account [calm as code], where plenty of Java-related articles and learning materials will be posted and organized.

If you think this was well written, give it a like and a follow so you don't miss future updates!