The Istio environment is now fully set up, so we can start to look at the mechanisms Istio provides for a microservice mesh, which are what this article's title refers to (traffic replication, timeout control, circuit breaking, and traffic control). The official team has prepared an excellent set of sample programs, so nobody needs to write their own demo to test with; we can get straight to running them.

Attach:

A meow blog: w-blog.cn

Istio official site: preliminary.istio.io/zh

Istio Chinese documentation: preliminary.istio.io/zh/docs/

PS: This section is based on the latest Istio version, 1.0.3

I. Timeout control

In real request handling we usually give each service call a timeout to protect the user experience. Hard-coding that timeout is not ideal, so Istio provides a way to control it at the routing layer:

1. Restore all route configurations:

> kubectl apply -n istio-test -f istio-1.0.3/samples/bookinfo/networking/virtual-service-all-v1.yaml

You can set a request timeout for HTTP requests in the timeout field of the routing rule. By default the timeout is 15 seconds; in this task the reviews service timeout is overridden to half a second. To see the effect of this setting, you also need to add a two-second delay to calls to the ratings service.

2. Route all reviews traffic to the v2 version:

> kubectl apply -n istio-test -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
    - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v2
EOF

3. Add a two-second delay to calls to the ratings service:

> kubectl apply -n istio-test -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ratings
spec:
  hosts:
  - ratings
  http:
  - fault:
      delay:
        percent: 100
        fixedDelay: 2s
    route:
    - destination:
        host: ratings
        subset: v1
EOF

4. Next, add a half-second request timeout for calls to the reviews:v2 service:

> kubectl apply -n istio-test -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v2
    timeout: 0.5s
EOF

The page now comes back after about 1 second: even though the timeout is set to half a second, the response takes 1 second because there is a hard-coded retry in the productpage service, so it calls the timing-out reviews service twice before returning.
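
If you prefer to check the timing from the command line instead of the browser, something like the following works (a rough sketch: GATEWAY_URL is assumed to hold the address of your Istio ingress gateway from the earlier setup articles; substitute whatever you used there):

> time curl -s -o /dev/null http://$GATEWAY_URL/productpage

With only the two-second delay in place the call takes roughly 2 seconds; once the half-second timeout (plus the built-in retry) kicks in, it drops to roughly 1 second.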

II. Circuit breaking

In a microservice architecture some services matter more than others. Kubernetes can cap a service's CPU consumption, but a CPU limit alone cannot control how many concurrent requests a service handles. For example, you may want to allow service A 100 concurrent requests but service B only 10; a concurrency limit expresses this directly, whereas trying to achieve it through CPU limits is imprecise.

1. Run the test program

> kubectl apply -n istio-test -f istio-1.0.3/samples/httpbin/httpbin.yaml

2. Create a DestinationRule (target rule) that sets up circuit breaking for the httpbin service:

> kubectl apply -n istio-test -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: httpbin
spec:
  host: httpbin
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 1
      http:
        http1MaxPendingRequests: 1
        maxRequestsPerConnection: 1
    outlierDetection:
      consecutiveErrors: 1
      interval: 1s
      baseEjectionTime: 3m
      maxEjectionPercent: 100
EOF
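
Optionally, you can read the rule back to confirm it was created as expected (a routine check, not required for the remaining steps):

> kubectl get destinationrule httpbin -n istio-test -o yaml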

3. Next you will use Fortio, a simple load-testing client that can control the number of connections, the level of concurrency, and the delays of outgoing HTTP calls. With it you can effectively trigger the circuit breaker policy set in the DestinationRule above.

> kubectl apply -n istio-test -f istio-1.0.3/samples/httpbin/sample-client/fortio-deploy.yaml
> FORTIO_POD=$(kubectl get -n istio-test pod | grep fortio | awk '{ print $1 }')
> kubectl exec -n istio-test -it $FORTIO_POD -c fortio /usr/local/bin/fortio -- load -curl http://httpbin:8000/get
HTTP/1.1 200 OK
server: envoy
date: Wed, 07 Nov 2018 06:52:32 GMT
content-type: application/json
access-control-allow-origin: *
access-control-allow-credentials: true
content-length: 365
x-envoy-upstream-service-time: 113

{
  "args": {},
  "headers": {
    "Content-Length": "0",
    "Host": "httpbin:8000",
    "User-Agent": "istio/fortio-1.0.1",
    "X-B3-Sampled": "1",
    "X-B3-Spanid": "a708e175c6a077d1",
    "X-B3-Traceid": "a708e175c6a077d1",
    "X-Request-Id": "62D09db5-550A-9b81-80d9-6d8f60956386"
  },
  "origin": "127.0.0.1",
  "url": "http://httpbin:8000/get"
}

4. The circuit-breaker settings above specify maxConnections: 1 and http1MaxPendingRequests: 1. This means that if more than one connection and one pending request are attempted at the same time, istio-proxy will start short-circuiting the extra requests and connections. Let's try to trigger it:

> kubectl exec -n istio-test -it $FORTIO_POD -c fortio /usr/local/bin/fortio -- load -c 2 -qps 0 -n 20 -loglevel Warning http://httpbin:8000/get
06:54:16 I logger.go:97> Log level is now 3 Warning (was 2 Info)
Fortio 1.0.1 running at 0 queries per second, 4->4 procs, for 20 calls: http://httpbin:8000/get
Starting at max qps with 2 thread(s) [gomax 4] for exactly 20 calls (10 per thread + 0)
Ended after 96.058168ms : 20 calls. qps=208.21
Aggregated Function Time : count 20 avg 0.0084172288 +/- 0.004876 min 0.000583248 max 0.016515793 sum 0.168344576
# range, mid point, percentile, count
>= 0.000583248 <= 0.001 , 0.000791624 , 5.00, 1
> 0.001 <= 0.002 , 0.0015 , 25.00, 4
> 0.006 <= 0.007 , 0.0065 , 30.00, 1
> 0.007 <= 0.008 , 0.0075 , 35.00, 1
> 0.008 <= 0.009 , 0.0085 , 55.00, 4
> 0.009 <= 0.01 , 0.0095 , 65.00, 2
> 0.01 <= 0.011 , 0.0105 , 75.00, 2
> 0.011 <= 0.012 , 0.0115 , 80.00, 1
> 0.012 <= 0.014 , 0.013 , 85.00, 1
> 0.014 <= 0.016 , 0.015 , 95.00, 2
> 0.016 <= 0.0165158 , 0.0162579 , 100.00, 1
# target 99% 0.0164126
# target 99.9% 0.0165055
Sockets used: 7 (for perfect keepalive, would be 2)
Code 200 : 15 (75.0 %)
Code 503 : 5 (25.0 %)
Response Header Sizes : count 20 avg 172.7 +/- 99.71 min 0 max 231 sum 3454
Response Body/Total Sizes : count 20 avg 500.7 +/- 163.8 min 217 max 596 sum 10014
All done 20 calls (plus 0 warmup) 8.417 ms avg, 208.4 qps

As you can see, almost all of the requests still made it through; istio-proxy does allow for some leeway:

Code 200 : 15 (75.0 %)
Code 503 : 5 (25.0 %)

5. Next increase the number of concurrent connections to 3:

> kubectl exec -n istio-test -it $FORTIO_POD -c fortio /usr/local/bin/fortio -- load -c 3 -qps 0 -n 30 -loglevel Warning http://httpbin:8000/get
06:55:28 I logger.go:97> Log level is now 3 Warning (was 2 Info)
Fortio 1.0.1 running at 0 queries per second, 4->4 procs, for 30 calls: http://httpbin:8000/get
Starting at max qps with 3 thread(s) [gomax 4] for exactly 30 calls (10 per thread + 0)
Ended after 59.921126ms : 30 calls. qps=500.66
Aggregated Function Time : count 30 avg 0.0052897259 +/- 0.006296 min 0.000633091 max 0.024999538 sum 0.158691777
# range, mid point, percentile, count
>= 0.000633091 <= 0.001 , 0.000816546 , 16.67, 5
> 0.001 <= 0.002 , 0.0015 , 63.33, 14
> 0.002 <= 0.003 , 0.0025 , 66.67, 1
> 0.008 <= 0.009 , 0.0085 , 73.33, 2
> 0.009 <= 0.01 , 0.0095 , 80.00, 2
> 0.01 <= 0.011 , 0.0105 , 83.33, 1
> 0.011 <= 0.012 , 0.0115 , 86.67, 1
> 0.012 <= 0.014 , 0.013 , 90.00, 1
> 0.014 <= 0.016 , 0.015 , 93.33, 1
> 0.02 <= 0.0249995 , 0.0224998 , 100.00, 2
# target 99% 0.0242496
# target 99.9% 0.0249245
Sockets used: 22 (for perfect keepalive, would be 3)
Code 200 : 10 (33.3 %)
Code 503 : 20 (66.7 %)
Response Header Sizes : count 30 avg 76.833333 +/- 108.7 min 0 max 231 sum 2305
Response Body/Total Sizes : count 30 avg 343.16667 +/- 178.4 min 217 max 596 sum 10295
All done 30 calls (plus 0 warmup) 5.290 ms avg, 500.7 qps

At this point you can see the circuit breaker behaving as designed: only 33.3% of the requests are accepted, and the rest are blocked by circuit breaking.

You can query the istio-proxy stats to get more information:

> kubectl exec -n istio-test -it $FORTIO_POD -c istio-proxy -- sh -c 'curl localhost:15000/stats' | grep httpbin | grep pending
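
In that output the counter worth watching is upstream_rq_pending_overflow, which Envoy increments for every request that was flagged for circuit breaking. If you only care about that value you can tighten the filter (same command as above, just a narrower grep):

> kubectl exec -n istio-test -it $FORTIO_POD -c istio-proxy -- sh -c 'curl localhost:15000/stats' | grep httpbin | grep upstream_rq_pending_overflow

A non-zero, growing value after the Fortio runs confirms that the circuit breaker is what is rejecting the requests.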

Finally, clean up the rules and services

kubectl delete -n istio-test destinationrule httpbin
kubectl delete -n istio-test deploy httpbin fortio-deploy
kubectl delete -n istio-test svc httpbin

III. Traffic replication

Traffic splitting was covered earlier in the traffic-control article, where v1 and v2 each carried 50% of the traffic. Istio traffic replication addresses a different scenario: it is a powerful feature for bringing changes into production with as little risk as possible.

When we launch a new version that we are not yet confident in, we usually want to run it for a while to observe its stability, without letting users hit the potentially unstable service. This is where traffic replication comes in: 100% of the requests still go to v1, while a copy of each request is also sent to v2, whose responses are simply ignored.

1. Let’s create two versions of httpbin services for our experiment

httpbin-v1:

> kubectl apply -n istio-test -f - <<EOF
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: httpbin-v1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: httpbin
        version: v1
    spec:
      containers:
      - image: docker.io/kennethreitz/httpbin
        imagePullPolicy: IfNotPresent
        name: httpbin
        command: ["gunicorn", "--access-logfile", "-", "-b", "0.0.0.0:80", "httpbin:app"]
        ports:
        - containerPort: 80
EOF

httpbin-v2:

> kubectl apply -n istio-test -f - <<EOF
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: httpbin-v2
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: httpbin
        version: v2
    spec:
      containers:
      - image: docker.io/kennethreitz/httpbin
        imagePullPolicy: IfNotPresent
        name: httpbin
        command: ["gunicorn", "--access-logfile", "-", "-b", "0.0.0.0:80", "httpbin:app"]
        ports:
        - containerPort: 80
EOF

httpbin Kubernetes service:

> kubectl apply -n istio-test -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: httpbin
  labels:
    app: httpbin
spec:
  ports:
  - name: http
    port: 8000
    targetPort: 80
  selector:
    app: httpbin
EOF

Finally, deploy a sleep pod so we can use curl to send requests to the service:

> kubectl apply -n istio-test -f - <<EOF
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: sleep
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: sleep
    spec:
      containers:
      - name: sleep
        image: tutum/curl
        command: ["/bin/sleep","infinity"]
        imagePullPolicy: IfNotPresent
EOF
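
Before adding any routing rules, it does no harm to confirm that the two httpbin deployments and the sleep client are all up and running (just a sanity check):

> kubectl get pods -n istio-test -l app=httpbin
> kubectl get pods -n istio-test -l app=sleep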

By default, Kubernetes load-balances across both versions of the httpbin service. In this step we change that behaviour so that all traffic goes to v1.

Create a default routing rule to route all traffic to service v1:

> kubectl apply -n istio-test -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: httpbin
spec:
  hosts:
    - httpbin
  http:
  - route:
    - destination:
        host: httpbin
        subset: v1
      weight: 100
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: httpbin
spec:
  host: httpbin
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
EOF

Send some traffic to the service:

> export SLEEP_POD=$(kubectl get -n istio-test pod -l app=sleep -o jsonpath={.items..metadata.name})
> kubectl exec -n istio-test -it $SLEEP_POD -c sleep -- sh -c 'curl http://httpbin:8000/headers' | python -m json.tool
{
    "headers": {
        "Accept": "*/*",
        "Content-Length": "0",
        "Host": "httpbin:8000",
        "User-Agent": "curl/7.35.0",
        "X-B3-Sampled": "1",
        "X-B3-Spanid": "8e32159d042d8a75",
        "X-B3-Traceid": "8e32159d042d8a75"
    }
}

Check the logs of the httpbin v1 and v2 pods. You can see an access-log entry for v1 and nothing for v2:

> export V1_POD=$(kubectl get -n istio-test pod -l app=httpbin,version=v1 -o jsonpath={.items..metadata.name})
> kubectl logs -n istio-test -f $V1_POD -c httpbin
127.0.0.1 - - [07/Nov/2018:07:22:50 +0000] "GET /headers HTTP/1.1" 200 241 "-" "curl/7.35.0"

> export V2_POD=$(kubectl get -n istio-test pod -l app=httpbin,version=v2 -o jsonpath={.items..metadata.name})
> kubectl logs -n istio-test -f $V2_POD -c httpbin
<none>

2. Apply a rule that mirrors traffic to v2:

> kubectl apply -n istio-test -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: httpbin
spec:
  hosts:
    - httpbin
  http:
  - route:
    - destination:
        host: httpbin
        subset: v1
      weight: 100
    mirror:
      host: httpbin
      subset: v2
EOF

This routing rule sends 100% of the traffic to v1, while the mirror section specifies that the traffic should also be mirrored to the httpbin:v2 service. When traffic is mirrored, the copied requests are sent to the mirrored service with -shadow appended to their Host/Authority header; for example, cluster-1 becomes cluster-1-shadow.

In addition, it is important to note that these mirrored requests are "fire and forget": the responses they trigger are discarded.

3. Try sending traffic again

> kubectl exec -n istio-test -it $SLEEP_POD -c sleep -- sh -c 'curl http://httpbin:8000/headers' | python -m json.tool

4. Now there are access logs on both v1 and v2. The access log in v2 is generated by the mirrored traffic; the actual target of these requests is still v1:

> kubectl logs -n istio-test -f $V1_POD -c httpbin
127.0.0.1 - - [07/Nov/2018:07:22:50 +0000] "GET /headers HTTP/1.1" 200 241 "-" "curl/7.35.0"
127.0.0.1 - - [07/Nov/2018:07:26:58 +0000] "GET /headers HTTP/1.1" 200 241 "-" "curl/7.35.0"

> kubectl logs -n istio-test -f $V2_POD -c httpbin
127.0.0.1 - - [07/Nov/2018:07:28:37 +0000] "GET /headers HTTP/1.1" 200 281 "-" "curl/7.35.0"

5. Clean up

istioctl delete -n istio-test virtualservice httpbin
istioctl delete -n istio-test destinationrule httpbin
kubectl delete -n istio-test deploy httpbin-v1 httpbin-v2 sleep
kubectl delete -n istio-test svc httpbin