By Rinor Maloku


Translator: Yin Longfei


Proofreader: Sun Haizhou


Original: https://medium.com/google-cloud/back-to-microservices-with-istio-p1-827c872daa53



Istio is an open source project, developed by teams from Google, IBM, and Lyft, that offers a solution to the complexity of microservice-based applications. To name just a few areas:

  • Traffic management: timeouts, retries, load balancing,

  • Security: end-user authentication and authorization,

  • Observability: tracing, monitoring, and logging.

All of this can be addressed in the application layer, but then your services are no longer "micro": all the extra work of implementing it is a drain on the company's resources, resources that could be spent delivering business value. Let's take an example:

PM: How long will it take to add a feedback feature?

Developer: Two sprints (an agile development term; a sprint usually lasts two to four weeks).

PM: What…? That's just CRUD!

Developer: Creating the CRUD is the easy part, but we also need to authenticate and authorize users and services. And since the network is unreliable, we need to implement retries and circuit breakers on the clients, and to make sure we don't take down the whole system we need timeouts and bulkheads. On top of that, to detect problems we need monitoring and tracing […]

PM: Then let's just squeeze it into the Products service. Ouch!

You get the idea: all this ceremony must be in place before we can add one huge service (with a lot of code that is not business functionality). In this article, we will show how Istio removes all of these cross-cutting concerns from our services.

Note: This article assumes knowledge of Kubernetes. If that's not the case, I recommend you read my introduction to Kubernetes and then continue with this article.

About Istio

In a world without Istio, one service makes direct requests to another, and in the event of a failure the service has to handle it itself: retrying, timing out, circuit breaking, and so on.

To solve this problem, Istio takes an ingenious approach: it stays completely separate from the services and intercepts all network traffic. With that, it can implement:

  • Fault tolerance – using response status codes, it understands when a request has failed and retries it.

  • Canary rollouts – forwarding only a specified percentage of requests to a new version of a service.

  • Monitoring and metrics – for example, the time a service takes to respond.

  • Tracing and observability – it adds special headers to each request and traces them across the cluster.

  • Security – extracting the JWT token and authenticating and authorizing users.

These are just a few examples (really, just a few!) to keep you interested. Let's dive into some technical details.

Istio architecture

Istio intercepts all network traffic by injecting a smart proxy as a sidecar into each pod and applying a set of rules to it. Together, the proxies make up the data plane, and they are dynamically configured by the control plane.

The data plane

The injected proxies make it easy for Istio to meet our requirements. As an example, let's look at the retry and circuit-breaking capabilities.

To sum up:

  1. Envoy sends the request to the first instance of service B, and it fails.

  2. The Envoy sidecar retries. (1)

  3. On failure, it returns the failed request to the calling proxy.

  4. This opens the circuit breaker and calls the next service instance on subsequent requests. (2)

This means you don't have to use yet another retry library, and you don't have to roll your own implementations of circuit breaking and service discovery in programming languages X, Y, and Z. All of this comes out of the box with Istio, and none of it requires code changes.
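To give you a taste of what that looks like, here is a minimal sketch of such a resilience policy expressed as Istio configuration (the service name service-b is a placeholder for illustration; the fields follow Istio's v1alpha3 networking API, and we will apply a real version of this to our own services in the timeouts and retries section):

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: service-b           # hypothetical service, for illustration only
spec:
  hosts:
    - service-b
  http:
  - route:
    - destination:
        host: service-b
    timeout: 8s             # give up on a request after 8 seconds in total
    retries:
      attempts: 3           # retry a failed call up to 3 times
      perTryTimeout: 3s     # each attempt gets at most 3 seconds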

Very good! Now you want to join the Istio voyage, but you still have some doubts, some unanswered questions. It claims to be a one-size-fits-all solution, and you are rightly skeptical, because such solutions so often turn out to fit no one!

You finally whisper the question, “Is this configurable?”

Welcome aboard the cruise, my friend. Let me introduce you to the control plane.

Control plane

The control plane consists of three components: Pilot, Mixer, and Citadel. Together they configure the Envoys to route traffic, enforce policies, and collect telemetry data, as shown in the figure below.

The Envoys (i.e. the data plane) are configured using Kubernetes Custom Resource Definitions defined by Istio. This means that, for you, it's just another Kubernetes resource with a familiar syntax. Once created, it is picked up by the control plane and applied to the Envoys.

The relationship between services and Istio

We described the relationship between Istio and our services, but let's think about it the other way around: what is the relationship between our services and Istio?

Frankly, our services know about Istio's presence about as much as fish know about water, asking themselves, "What the hell is this water?"

This means you can take a working cluster, deploy Istio's components, and the services keep working; in the same way, you can remove the components and everything will still be fine. Understandably, you lose the capabilities that Istio provides.

Enough theory; let's put it into practice!

Istio in practice

Istio requires a Kubernetes cluster with at least 4 vCPUs and 8 GB of RAM. To set up a cluster quickly and follow along with this article, I recommend Google Cloud Platform, which offers a $300 free trial for new users.
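If you go with GCP, a couple of commands along these lines will create a suitable cluster (the cluster name istio-demo and the zone are arbitrary choices of mine; three n1-standard-2 nodes, at 2 vCPUs and 7.5 GB each, comfortably meet the requirements):

$ gcloud container clusters create istio-demo \
    --zone europe-west1-b \
    --machine-type n1-standard-2 \
    --num-nodes 3
$ gcloud container clusters get-credentials istio-demo \
    --zone europe-west1-b

The second command configures kubectl to talk to the new cluster.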

After creating the cluster and configuring access with the kubectl command-line tool, we are ready to install Istio using the Helm package manager.

Install Helm

Follow the instructions in the official documentation to install the Helm client on your computer. We’ll use it in the next section to generate the Istio installation template.
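As a sketch of what that boils down to (assuming macOS with Homebrew; in the Helm 2 era relevant to this article, the formula was named kubernetes-helm):

$ brew install kubernetes-helm
$ helm version --client

Any installation method from the official docs works equally well, as long as the helm client ends up on your PATH.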

Install Istio

Download the latest Istio release and extract its contents into a directory that we will refer to as [istio-resources].
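One way to do this, per the Istio documentation of that era (the script downloads a directory named istio-<version>, which we then rename):

$ curl -L https://git.io/getLatestIstio | sh -
$ mv istio-* istio-resources

Downloading and extracting a release archive from Istio's releases page by hand achieves the same result.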

To easily identify Istio's resources, create the istio-system namespace in your Kubernetes cluster:

 $ kubectl create namespace istio-system

Then go to the [istio-resources] directory and complete the installation by executing the following command:

$ helm template install/kubernetes/helm/istio \
    --set global.mtls.enabled=false \
    --set tracing.enabled=true \
    --set kiali.enabled=true \
    --set grafana.enabled=true \
    --namespace istio-system > istio.yaml

The command above writes Istio's core components to the file istio.yaml. We customized the template with the following parameters:

  • global.mtls.enabled is set to false to keep the introduction focused,

  • tracing.enabled enables tracing of requests with Jaeger,

  • kiali.enabled installs Kiali in our cluster to visualize services and traffic,

  • grafana.enabled installs Grafana to visualize the collected metrics.

Run the following command to apply the generated resources:

 $ kubectl apply -f istio.yaml

This marks the completion of the Istio installation in our cluster! Wait until all the pods in the istio-system namespace are in the Running or Completed state; you can check by executing the following command:

 $ kubectl get pods -n istio-system

Now we are ready to move on to the next section, where we will get the sample application up and running.

Sentiment Analysis App Architecture

We will use the same microservice application as in the Kubernetes introduction article; it is sufficient to demonstrate Istio's capabilities in practice.

The application consists of four microservices:

  • The sa-frontend service: serves the front-end ReactJS application.

  • The sa-web-app service: handles requests for sentiment analysis.

  • The sa-logic service: performs the sentiment analysis.

  • The sa-feedback service: receives feedback from users about the accuracy of the analysis.

In Figure 6, besides the services, we see an Ingress Controller that routes incoming requests to the appropriate service in Kubernetes. Istio uses a similar concept called the Ingress Gateway, which will be introduced later in this article.

Run the application using Istio Proxies

To follow along, clone the istio-mastery repository (github.com/rinormaloku…), which contains the application and the manifests for Kubernetes and Istio.

Sidecar Injection

Injection can be done automatically or manually. To enable automatic sidecar injection, we need to label the namespace with istio-injection=enabled by executing the following command:

$ kubectl label namespace default istio-injection=enabled
namespace/default labeled
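You can double-check that the label is in place with the -L flag, which adds a column for the given label; the ISTIO-INJECTION column should show the value enabled:

$ kubectl get namespace default -L istio-injection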

From now on, every pod deployed in the default namespace will get a sidecar injected. To verify this, let's deploy the sample application by going to the root folder of the [istio-mastery] repository and executing the following command:

$ kubectl apply -f resource-manifests/kube
persistentvolumeclaim/sqlite-pvc created
deployment.extensions/sa-feedback created
service/sa-feedback created
deployment.extensions/sa-frontend created
service/sa-frontend created
deployment.extensions/sa-logic created
service/sa-logic created
deployment.extensions/sa-web-app created
service/sa-web-app created

With the services deployed, verify that the pods have two containers (the service and its sidecar) by executing kubectl get pods and making sure the READY column shows the value "2/2", indicating that both containers are running:

$ kubectl get pods
NAME                           READY     STATUS    RESTARTS   AGE
sa-feedback-55f5dc4d9c-c9wfv   2/2       Running   0          12m
sa-frontend-558f8986-hhkj9     2/2       Running   0          12m
sa-logic-568498cb4d-2sjwj      2/2       Running   0          12m
sa-logic-568498cb4d-p4f8c      2/2       Running   0          12m
sa-web-app-599cf47c7c-s7cvd    2/2       Running   0          12m

The visual representation is shown in Figure 7.

Now that the application is up and running, we need to allow incoming traffic to reach our application.

The Ingress Gateway

The best practice for letting traffic into the cluster is through Istio's Ingress Gateway, which positions itself at the edge of the cluster and applies Istio's features, such as routing, load balancing, security, and monitoring, to incoming traffic.

During the installation of Istio, the Ingress Gateway component and a service exposing it externally were installed in the cluster. To get the service's external IP address, execute the following command:

$ kubectl get svc -n istio-system -l istio=ingressgateway
NAME                   TYPE           CLUSTER-IP     EXTERNAL-IP
istio-ingressgateway   LoadBalancer   10.0.132.127   13.93.30.120

In the rest of this article, we will access the application at this IP (referred to as EXTERNAL-IP). Save it in a variable for convenience by executing the following command:

$ EXTERNAL_IP=$(kubectl get svc -n istio-system \
  -l app=istio-ingressgateway \
  -o jsonpath='{.items[0].status.loadBalancer.ingress[0].ip}')

If you access this IP in your browser, you will get a Service Unavailable error, because by default Istio blocks all incoming traffic until we define a Gateway.

The Gateway resource

A Gateway is a Kubernetes Custom Resource Definition, defined when Istio was installed in our cluster, that lets us specify the ports, protocols, and hosts for which we want to allow incoming traffic.

In our scenario, we want to allow HTTP traffic on port 80 for all hosts. That is achieved with the following definition:

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: http-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"

All of the configuration is self-explanatory except for the selector istio: ingressgateway. With this selector, we specify which Ingress Gateway the configuration applies to; in our case, it is the default ingress gateway controller installed with Istio.

To apply the configuration, run the following command:

$ kubectl apply -f resource-manifests/istio/http-gateway.yaml
gateway.networking.istio.io/http-gateway created

The gateway now allows access on port 80, but it doesn't know where to route the requests. That is done using Virtual Services.

The VirtualService resource

A VirtualService instructs the Ingress Gateway on how to route the requests that were allowed into the cluster.

For our application, requests coming through the http-gateway must be routed to the sa-frontend, sa-web-app, and sa-feedback services (shown in Figure 8).

Let's break down the requests that should be routed to sa-frontend:

  • The exact path / should be routed to sa-frontend to get index.html,

  • The prefix path /static/* should be routed to sa-frontend to get any static files needed by the front end, such as CSS and JavaScript files,

  • Paths matching the regex '^.*\.(ico|png|jpg)$' should be routed to sa-frontend, as the image resources should be served by the front end.

This is done with the following configuration:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: sa-external-services
spec:
  hosts:
  - "*"
  gateways:
  - http-gateway                      # 1
  http:
  - match:
    - uri:
        exact: /
    - uri:
        exact: /callback
    - uri:
        prefix: /static
    - uri:
        regex: '^.*\.(ico|png|jpg)$'
    route:
    - destination:
        host: sa-frontend             # 2
        port:
          number: 80

The important points here are:

  1. This VirtualService applies to requests coming through the http-gateway.

  2. destination defines the service to which the requests are routed.

Note: The above configuration is in the file sa-virtualservice-external.yaml, which also contains the configuration for routing to sa-web-app and sa-feedback; it has been shortened here for brevity.

Run the following command to apply VirtualService:

$ kubectl apply -f resource-manifests/istio/sa-virtualservice-external.yaml
virtualservice.networking.istio.io/sa-external-services created

Note: When we apply an Istio resource, the Kubernetes API server creates an event that is picked up by the Istio control plane, which then applies the new configuration to the Envoy proxies of every pod. The Ingress Gateway controller is just another Envoy configured by the control plane, as shown in Figure 9.

You can now access the Sentiment Analysis app at http://{EXTERNAL-IP}/. Don't worry if you get a Not Found status; sometimes it takes a moment for the configuration to take effect and for the Envoy caches to update.

Before moving on to the next section, use the application to generate some traffic.
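A single request looks like this (it uses the /sentiment endpoint of sa-web-app, the same endpoint we will hit in a loop later in this article):

$ curl -H "Content-Type: application/json" \
    -d '{"sentence": "I love yogobella"}' \
    http://$EXTERNAL_IP/sentiment

Run it a handful of times, or just click around in the app, so that Kiali has something to show.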

Kiali – Observability

To access Kiali’s Admin UI, execute the following command:

$ kubectl port-forward \
    $(kubectl get pod -n istio-system -l app=kiali \
    -o jsonpath='{.items[0].metadata.name}') \
    -n istio-system 20001

Then open http://localhost:20001/ and log in with admin (without quotes) as both user and password. There are many useful features, for example checking the configuration of Istio components, and visualizing the services using the information collected by intercepting network requests to answer questions like "Who is calling whom?", "Which version of a service has failures?", and so on. Take some time to check out Kiali's features before moving on to the next section: visualizing metrics with Grafana!

Grafana – Metric visualization

The metrics collected by Istio are scraped into Prometheus and visualized using Grafana. To access Grafana's Admin UI, execute the command below and then open http://localhost:3000.

$ kubectl -n istio-system port-forward \
    $(kubectl -n istio-system get pod -l app=grafana \
    -o jsonpath={.items[0].metadata.name}) 3000

Click the Home menu in the top-left corner, select Istio Service Dashboard, then pick the service starting with sa-web-app in the top-left corner, and you will see the collected metrics, as shown below:

Holy moly! That's a view without any data, and management would never approve of that. Let's generate some load by executing the following command:

$ while true; do \
    curl -i http://$EXTERNAL_IP/sentiment \
    -H "Content-Type: application/json" \
    -d '{"sentence": "I love yogobella"}'; \
    sleep .8; done

Now we have even prettier charts! Plus, we have the amazing combination of Prometheus for monitoring and Grafana for visualizing metrics, which lets us keep track of our services' performance and health and spot regressions or improvements over time!

Finally, we’ll look at trace requests throughout the service.

Jaeger – Tracing

We need tracing because the more services we have, the harder it is to pin down the cause of a failure. Let's look at the simple example in the picture below:

A request comes in, and it fails. What's the cause? The first service? Or the second? There are exceptions in both, so let's go through each one's logs. How many times have you caught yourself doing this? We work more like software detectives than developers.

This is a common problem in microservices, and it is solved with distributed tracing systems: services pass a unique header to one another and forward this information to the distributed tracing system, where the request traces are stitched together. An example is shown in Figure 13.

Istio uses Jaeger Tracer, which implements the OpenTracing API, a vendor-neutral framework. To access Jaeger's UI, execute the following command:

$ kubectl port-forward -n istio-system \
    $(kubectl get pod -n istio-system -l app=jaeger \
    -o jsonpath='{.items[0].metadata.name}') 16686

Then open the UI at http://localhost:16686 and select the sa-web-app service. If the service is not shown in the drop-down list, generate some activity on the page and hit refresh. Next, click the Find Traces button, which displays the most recent traces; select any of them, and a detailed breakdown of the whole trace will be displayed, as shown in Figure 14.

The trace shows:

  1. The request comes to istio-ingressgateway (this is the first contact with one of our services, so a trace ID is generated for the request), and the gateway forwards the request to the sa-web-app service.

  2. In the sa-web-app pod, the request is picked up by the Envoy sidecar, which creates a span (that's why we see it in the traces) and forwards the request to the sa-web-app container instance.

  3. Here the method sentimentAnalysis handles the request. These traces are generated by the application itself, which means code changes were required.

  4. From this point, a POST request to sa-logic is started. The trace ID needs to be propagated by sa-web-app.

  5. …

Note: At point 4, our application needs to pick up the headers generated by Istio and pass them along in the next request, as shown in the figure below.

Istio does most of the heavy lifting: it generates the headers on incoming requests, creates new spans in every sidecar, and passes them along. But without our services forwarding the headers, we lose the full trace of the request.

The headers to propagate are:

x-request-id
x-b3-traceid
x-b3-spanid
x-b3-parentspanid
x-b3-sampled
x-b3-flags
x-ot-span-context

Although this is a simple task, there are many libraries that simplify the process. For example, in the sa-web-app service, the RestTemplate client propagates these headers simply by adding the Jaeger and OpenTracing libraries to its dependencies.

Note: The Sentiment Analysis app shows implementations for Flask, Spring, and ASP.NET Core.

Now, having examined what we get out of the box (or mostly out of the box), let's get to the main topic: fine-grained routing, network traffic management, security, and more!

Traffic management

Using the Envoys, Istio provides your cluster with a number of new capabilities:

  • Dynamic request routing: canary deployments, A/B testing,

  • Load balancing: simple and consistent-hash balancing,

  • Failure recovery: timeouts, retries, circuit breakers,

  • Fault injection: delays, aborted requests, and more.

In this series, we will demonstrate these capabilities in our application, introducing some new concepts along the way. The first concept we will explore is DestinationRules, and using it we will enable A/B testing.

A/B testing – Destination rules in practice

A/B testing is used when we have two versions of an application (usually versions that differ visually) and we are not 100% sure which one will increase user interaction, so we try both versions at the same time and collect metrics.

Execute the following command to deploy the second version of the front end required to demonstrate A/B testing:

$ kubectl apply -f resource-manifests/kube/ab-testing/sa-frontend-green-deployment.yaml
deployment.extensions/sa-frontend-green created

The deployment manifest of the green version differs in two ways:

  1. The image is based on a different tag: istio-green,

  2. The pods are labeled with version: green.

Since both deployments label their pods with app: sa-frontend, requests routed by the sa-external-services virtual service to the sa-frontend service are forwarded to all of its instances, with the load balanced round-robin, which leads to the problem presented in Figure 16.

The requested files cannot be found, because they are named differently in the different versions of the application. Let's verify:

$ curl --silent http://$EXTERNAL_IP/ | tr '"' '\n' | grep main
/static/css/main.c7071b22.css
/static/js/main.059f8e9c.js
$ curl --silent http://$EXTERNAL_IP/ | tr '"' '\n' | grep main
/static/css/main.f87cd8c9.css
/static/js/main.f7659dbb.js

This means that an index.html served by one version of the app can request static files that end up load-balanced to a pod of the other version, where, understandably, those files do not exist.

This means that, for our application to work properly, we need to introduce the restriction that "the version of the application that served index.html must serve all subsequent requests."

We'll achieve this using Consistent Hash Load Balancing, the process of forwarding requests from the same client to the same backend instance based on a predefined property, such as an HTTP header. Made possible, you guessed it, by DestinationRules.

DestinationRules

After a VirtualService routes a request to the correct service, we can use DestinationRules to specify policies that apply to the traffic destined for this service's instances, as shown in Figure 17.

Note: Figure 17 visualizes how Istio resources affect network traffic in an easy-to-understand way. But, to be precise, the decision about which instance to forward a request to is made in the Ingress Gateway's Envoy, which is configured by these CRDs.

Using destination rules, we can configure load balancing to use consistent hashing and ensure that the same user is always answered by the same service instance. This is achieved with the following configuration:

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: sa-frontend
spec:
  host: sa-frontend
  trafficPolicy:
    loadBalancer:
      consistentHash:
        httpHeaderName: version   # 1
  1. Generate a consistent hash based on the contents of the “version” header.

Apply the configuration by executing the following command and give it a try!

$ kubectl apply -f resource-manifests/istio/ab-testing/destinationrule-sa-frontend.yaml
destinationrule.networking.istio.io/sa-frontend created

Execute the following command and verify that the same file was obtained when the version header was specified:

$ curl --silent -H "version: yogo" http://$EXTERNAL_IP/ | tr '"' '\n' | grep main

Note: To facilitate testing in your browser, you can use this Chrome extension to add different values to the version header.

DestinationRules offer more load-balancing options; you can find all the details in the official documentation.
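For instance, switching the same service to a different algorithm is just a matter of swapping the loadBalancer block. The sketch below uses the simple LEAST_CONN policy from the documentation, which favors the instance with the fewest active requests (shown only for illustration; we stick with consistent hashing in this article):

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: sa-frontend
spec:
  host: sa-frontend
  trafficPolicy:
    loadBalancer:
      simple: LEAST_CONN    # favor the instance with the fewest active requests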

Before continuing to explore VirtualServices in more detail, remove the green version of the application and the destination rule by executing the following commands:

$ kubectl delete -f resource-manifests/kube/ab-testing/sa-frontend-green-deployment.yaml
deployment.extensions "sa-frontend-green" deleted
$ kubectl delete -f resource-manifests/istio/ab-testing/destinationrule-sa-frontend.yaml
destinationrule.networking.istio.io "sa-frontend" deleted

Shadowing – Virtual Services in practice

Shadowing (or mirroring) is used when we want to test a change in production without affecting end users: we mirror requests into a second instance that has the change and evaluate it there.

Put more simply: it's for when one of your colleagues picks the most critical issue and creates a humongous merge request that nobody can really review.

To test this functionality, create a second, buggy instance of sa-logic by executing the following command:

$ kubectl apply -f resource-manifests/kube/shadowing/sa-logic-service-buggy.yaml
deployment.extensions/sa-logic-buggy created

Execute the following command and verify that all instances are labeled with their respective versions, in addition to the label app=sa-logic:

$ kubectl get pods -l app=sa-logic --show-labels
NAME                              READY   LABELS
sa-logic-568498cb4d-2sjwj         2/2     app=sa-logic,version=v1
sa-logic-568498cb4d-p4f8c         2/2     app=sa-logic,version=v1
sa-logic-buggy-76dff55847-2fl66   2/2     app=sa-logic,version=v2
sa-logic-buggy-76dff55847-kx8zz   2/2     app=sa-logic,version=v2

Since the sa-logic service targets the pods labeled app=sa-logic, all incoming requests are load-balanced across all of its instances, as shown in Figure 18.

But we want to route requests to the instances of version v1 and mirror them to the instances of version v2, as shown in Figure 19.

This is achieved by using a VirtualService in combination with a DestinationRule, where the destination rule specifies the subsets and the virtual service routes to a specific subset.

Specifying subsets with destination rules

We define the subset using the following configuration:

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: sa-logic
spec:
  host: sa-logic    # 1
  subsets:
  - name: v1        # 2
    labels:
      version: v1   # 3
  - name: v2
    labels:
      version: v2  
  1. host defines that this rule applies only to routing toward the sa-logic service,

  2. name is the subset name used when routing to instances of the subset,

  3. labels defines the key-value pairs that an instance must match to be part of the subset.

Apply the configuration by executing the following command:

$ kubectl apply -f resource-manifests/istio/shadowing/sa-logic-subsets-destinationrule.yaml
 destinationrule.networking.istio.io/sa-logic created

With the subsets defined, we can move on and configure a VirtualService that applies to requests to sa-logic and:

  1. Routes them to the subset called v1,

  2. Mirrors them to the subset called v2.

This is done with the following manifest:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: sa-logic
spec:
  hosts:
    - sa-logic          
  http:
  - route:
    - destination:
        host: sa-logic  
        subset: v1      
    mirror:             
      host: sa-logic     
      subset: v2

Since everything is self-explanatory, let's just apply it:

$ kubectl apply -f resource-manifests/istio/shadowing/sa-logic-subsets-shadowing-vs.yaml
virtualservice.networking.istio.io/sa-logic created

Add some load by executing the following command:

$ while true; do \
    curl -v http://$EXTERNAL_IP/sentiment \
    -H "Content-Type: application/json" \
    -d '{"sentence": "I love yogobella"}'; \
    sleep .8; done

Examine the results in Grafana, where we can see that about 60% of requests to the buggy version fail, yet none of the failures affect end users, because they are answered by the currently active service.

In this section, we saw for the first time a VirtualService applied to one of our own services: when sa-web-app makes a request to sa-logic, the request goes through the sidecar Envoy, which, configured with this VirtualService, routes the request to the v1 subset and mirrors it to the v2 subset of the sa-logic service.

I can see you thinking, "Darn, man, Virtual Services are easy!" In the next section, we'll expand that sentence to "simply amazing!"

Canary deployment

Canary deployment is the process of rolling out a new version of an application to a small number of users, as a step to verify that there are no issues, thereby providing a higher guarantee of quality when releasing to the wider audience.

We will continue using the same buggy subset of sa-logic to demonstrate canary deployments.

Let's start boldly by sending 20% of users to the buggy version (this represents the canary deployment) and 80% to the healthy service, by applying the VirtualService below:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: sa-logic
spec:
  hosts:
    - sa-logic    
  http:
  - route: 
    - destination: 
        host: sa-logic
        subset: v1
      weight: 80         # 1
    - destination: 
        host: sa-logic
        subset: v2
      weight: 20         # 1
  1. weight specifies the percentage of requests to be forwarded to the destination or to a subset of the destination.

Update the previous sa-logic virtual service configuration by executing the following command:

$ kubectl apply -f resource-manifests/istio/canary/sa-logic-subsets-canary-vs.yaml
virtualservice.networking.istio.io/sa-logic configured

We can immediately see that some of our requests fail:

$ while true; do \
    curl -i http://$EXTERNAL_IP/sentiment \
    -H "Content-Type: application/json" \
    -d '{"sentence": "I love yogobella"}' \
    --silent -w "Time: %{time_total}s \t Status: %{http_code}\n" \
    -o /dev/null; sleep .1; done
Time: 0.153075s Status: 200
Time: 0.137581s Status: 200
Time: 0.139345s Status: 200
Time: 30.291806s Status: 500

VirtualServices enabled canary deployments, and with that we reduced the potential damage of the buggy version to 20% of the user base. Beautiful! Now, whenever we feel insecure about our code, we can use shadowing and canary deployments; in other words, always. 😜

Timeout and retry

It's not always the code that's buggy, though. In the list of the "8 fallacies of distributed computing," the first fallacy is: "The network is reliable." The network is not reliable, and that is why we need timeouts and retries.

For demonstration purposes, we will keep using the buggy version of sa-logic, whose random failures simulate the unreliability of the network.

One third of the time the buggy service takes too long to respond, one third of the time it ends in an Internal Server Error, and the rest of the requests complete successfully.

To alleviate these problems and improve the user experience, we can:

  1. Time out if the service takes longer than 8 seconds to respond,

  2. Retry failed requests.

This is achieved through the following resource definitions:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: sa-logic
spec:
  hosts:
    - sa-logic
  http:
  - route: 
    - destination: 
        host: sa-logic
        subset: v1
      weight: 50
    - destination: 
        host: sa-logic
        subset: v2
      weight: 50
    timeout: 8s           # 1
    retries:
      attempts: 3         # 2
      perTryTimeout: 3s   # 3
  1. The request times out after 8 seconds,

  2. Three attempts are made,

  3. An attempt is marked as failed if it takes longer than 3 seconds.

This is an optimization: users will not wait longer than 8 seconds, and in case of failure we retry up to three times, increasing the chance that they end up getting a successful response.

Use the following command to apply the updated configuration:

$ kubectl apply -f resource-manifests/istio/retries/sa-logic-retries-timeouts-vs.yaml
virtualservice.networking.istio.io/sa-logic configured

Then look at the Grafana chart to see the improvement in the success rate (Figure 21).

Before moving into the next section, delete sa-logic-buggy and the VirtualService by executing the following commands:

$ kubectl delete deployment sa-logic-buggy
deployment.extensions "sa-logic-buggy" deleted
$ kubectl delete virtualservice sa-logic
virtualservice.networking.istio.io "sa-logic" deleted

Circuit breakers and bulkheads

These are two important patterns in microservice architectures that enable services to self-heal.

A circuit breaker is used to stop requests from going to an instance of an unhealthy service, allowing it to recover; in the meantime, client requests are forwarded to the healthy instances of that service (increasing the success rate).

The bulkhead pattern isolates failures so that they don't take down the whole system. An example: service B is in a broken state and another service (a client of service B) keeps making requests to it; the client will exhaust its own thread pool and become unable to serve other requests (even those unrelated to service B). Bulkheads prevent this kind of failure propagation.
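To give a flavor of what this looks like in Istio, below is a minimal sketch of a DestinationRule combining both ideas: the connection pool settings cap how much outstanding work a client may pile onto sa-logic (the bulkhead), and outlier detection ejects misbehaving instances from the load-balancing pool (the circuit breaker). The field names follow the v1alpha3 API of this Istio era; treat it as an illustration rather than a drop-in config:

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: sa-logic
spec:
  host: sa-logic
  trafficPolicy:
    connectionPool:
      http:
        http1MaxPendingRequests: 1   # bulkhead: cap queued requests per client
        maxRequestsPerConnection: 1
    outlierDetection:
      consecutiveErrors: 3           # eject an instance after 3 consecutive 5xx errors
      interval: 10s                  # scan instances every 10 seconds
      baseEjectionTime: 30s          # keep an ejected instance out for 30s per ejection
      maxEjectionPercent: 100        # allow ejecting every unhealthy instance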

I'll skip the full implementation of these patterns, since you can look them up in the official documentation; I'm eager to show authentication and authorization, which will be the subject of the next article.

Part I – Summary

In this article, we deployed Istio into a Kubernetes cluster and, using its Custom Resource Definitions (Gateways, VirtualServices, DestinationRules, and combinations of them), enabled the following capabilities:

  • Observability of our services with Kiali: seeing which services are running, how they perform, and how they relate to each other.

  • Metric collection with Prometheus and visualization with Grafana.

  • Request tracing with Jaeger (German for "hunter").

  • Full, fine-grained control over network traffic, implementing canary deployments, A/B testing, and shadowing.

  • Easy implementation of retries, timeouts, and circuit breakers.

All of this without any code changes or additional dependencies, keeping your services small and easy to operate and maintain.

For your development team, removing these cross-cutting concerns and centralizing them in Istio's control plane means that new services are easy to introduce and don't consume many resources, since developers can focus on solving business problems. And so far, no developer has complained about "having to solve interesting business problems!"

I'd love to hear your thoughts in the comments below, and feel free to reach out to me on Twitter or on my page rinormaloku.com. Stay tuned for the next article, where we tackle the final layers: authentication and authorization!