Preface

Starting with 1.5, the Istio architecture has changed again: the control-plane components are consolidated into a single binary called istiod, which greatly simplifies deployment and operations. The Mixer component has been removed; HTTP telemetry is now provided by the in-proxy Stats filter by default, and in-proxy extensions can be developed with WebAssembly.

The Istio architecture diagram is shown below:

Previous architecture (pre-1.5):

Current architecture (1.5):

Environmental Requirements [1]

Kubernetes version support

  • Officially, Istio 1.5 has been tested on the following Kubernetes releases: 1.14, 1.15, 1.16.
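Before installing, you can confirm that the cluster runs one of these releases, for example:

# Print the client and server versions; the server version should be 1.14, 1.15, or 1.16
$ kubectl version --short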

Kubernetes Pod and Service requirements

To be part of the Istio service mesh, Pods and Services in the Kubernetes cluster must meet the following requirements:

  • Named Service ports: Service ports must be named, and the port name must use the format name: <protocol>[-<suffix>]. See Protocol Selection for more information.
  • Service association: Each Pod must belong to at least one Kubernetes Service, even if the Pod exposes no ports. If a Pod belongs to multiple Kubernetes Services, those Services must not use different protocols (e.g. HTTP and TCP) on the same port number.
  • Deployments with app and version labels: It is recommended to explicitly add app and version labels to Deployments. Adding these labels to the Pod template of a Deployment adds contextual information to the metrics and telemetry Istio collects (see the sketch after this list).
    • app label: Each Deployment should have a distinct, meaningful app label. The app label is used to add context to distributed traces.
    • version label: Indicates the version of the application deployed by a particular Deployment.
  • Application UIDs: Make sure your Pods do not run applications as a user with user ID (UID) 1337.
  • NET_ADMIN: If your cluster enforces Pod security policies, Pods must be granted the NET_ADMIN capability. This is not needed if you use the Istio CNI plugin. To learn more about NET_ADMIN, see the required pod capabilities.
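A minimal sketch of a Service and Deployment that satisfy these requirements, following the heredoc style used later in this article (the httpbin name, port, image, and labels are hypothetical placeholders):

$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: httpbin
spec:
  selector:
    app: httpbin
  ports:
  - name: http-web        # named port in the <protocol>[-<suffix>] format
    port: 8000
    targetPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpbin-v1
spec:
  selector:
    matchLabels:
      app: httpbin
      version: v1
  template:
    metadata:
      labels:
        app: httpbin      # app label adds context to metrics and traces
        version: v1       # version label identifies this deployment's version
    spec:
      containers:
      - name: httpbin
        image: docker.io/kennethreitz/httpbin  # hypothetical example image
        ports:
        - containerPort: 80
EOF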

Download Istio 1.5.1

Note: the following steps assume a Linux operating system.

1. Download the Istio 1.5.1 installation package

$ cd /usr/local/src
$ wget https://github.com/istio/istio/releases/download/1.5.1/istio-1.5.1-linux.tar.gz
$ tar xf istio-1.5.1-linux.tar.gz

2. Switch to the Istio package directory

$ cd istio-1.5.1

The installation directory contains the following contents:

  • The install/kubernetes directory contains YAML installation files for Kubernetes
  • The samples/ directory contains sample applications
  • The bin/ directory contains the istioctl client binary. The istioctl tool can be used to manually inject the Envoy sidecar proxy (see the example after this list).
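For example, manual injection into an existing workload manifest could look like the following (deployment.yaml is a placeholder for your own manifest):

# Render the manifest with the Envoy sidecar added, then apply it
$ istioctl kube-inject -f deployment.yaml | kubectl apply -f -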

3. Add the istioctl command to environment variables

# Add a line to ~/.bashrc
$ vim ~/.bashrc

PATH="$PATH: / usr/local/SRC/istio - 1.5.1 / bin"

# Application takes effect
$ source ~/.bashrc
Copy the code

4. Configure istioctl command auto-completion

$ vim ~/.bashrc

source /usr/local/src/istio-1.5.1/tools/istioctl.bash

# Apply the change
$ source ~/.bashrc

Deployment

istioctl provides several built-in installation profiles that differ in which components and add-ons are enabled.
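You can list the built-in profiles and inspect what a given profile contains before customizing it, for example:

# List the built-in installation profiles
$ istioctl profile list

# Print the full IstioOperator configuration of the default profile
$ istioctl profile dump default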

The installation configuration used here (relative to the default profile):

  • Based on the default profile, with the Grafana, istio-tracing, and Kiali add-ons enabled
  • The CNI plugin is disabled, but its related parameters are pre-configured
  • mTLS is disabled globally
  • Grafana, istio-tracing, Kiali, and Prometheus are exposed through the istio-ingressgateway
  • Sidecar traffic interception is limited to the 192.168.16.0/20 and 192.168.32.0/20 segments (the Kubernetes Service and Pod CIDRs)
  • The Ingress Gateway and Pilot run with a minimum of 2 Pods (default: 1 Pod)
  • Pods are bound to nodes with the label zone: sz
  • The Ingress Gateway uses hostNetwork mode
  • The overlays field is used to modify individual resource objects in a component's manifest
  • The PDB configuration is adjusted
  • The grafana and kiali Secrets used for login must be created before installation
  • For security, the Ingress Gateway should not expose unnecessary ports; only the HTTP, HTTPS, and metrics ports are exposed

Configure the grafana and kiali Secrets

Create the istio-system namespace

$ kubectl create ns istio-system

Configure the kiali Secret

$ KIALI_USERNAME=$(echo -n 'admin' | base64)
$ KIALI_PASSPHRASE=$(echo -n 'admin' | base64)
$ NAMESPACE=istio-system

$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: kiali
  namespace: $NAMESPACE
  labels:
    app: kiali
type: Opaque
data:
  username: $KIALI_USERNAME
  passphrase: $KIALI_PASSPHRASE
EOF

Configure the grafana Secret

$ GRAFANA_USERNAME=$(echo -n 'admin' | base64)
$ GRAFANA_PASSPHRASE=$(echo -n 'admin' | base64)
$ NAMESPACE=istio-system

$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: grafana
  namespace: $NAMESPACE
  labels:
    app: grafana
type: Opaque
data:
  username: $GRAFANA_USERNAME
  passphrase: $GRAFANA_PASSPHRASE
EOF
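Before installing, you can confirm that both Secrets exist in the istio-system namespace:

$ kubectl get secret kiali grafana -n istio-system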

It is recommended to install using the Operator mode; the default profile is used here (default is also what is used in production).

Create the IstioOperator YAML configuration

$ vim istio-1.5.1.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  namespace: istio-system
  name: istio-1.5.1-controlplane
spec:
  hub: docker.io/istio
  profile: default # use the default profile
  tag: 1.5.1

  addonComponents:
    grafana:
      enabled: true # default: false
      k8s:
        replicaCount: 1
    kiali:
      enabled: true # default: false
      k8s:
        replicaCount: 1
    prometheus:
      enabled: true # default: true
      k8s:
        replicaCount: 1
    tracing:
      enabled: true # default: false

  values:
    global:
      imagePullPolicy: IfNotPresent # image pull policy
      mtls:
        enabled: false # disable mTLS globally
      defaultResources: # default container resources
        requests:
          cpu: 30m
          memory: 50Mi
      proxy:
        accessLogFile: "/dev/stdout"
        includeIPRanges: "192.168.16.0/20,192.168.32.0/20"
        autoInject: disabled # if enabled, pods are injected automatically unless annotated sidecar.istio.io/inject: "false"; if disabled, pods must be annotated sidecar.istio.io/inject: "true" to be injected
        clusterDomain: cluster.local # cluster DNS domain
        resources:
          requests:
            cpu: 30m # default: 100m
            memory: 50Mi # default: 128Mi
          limits:
            cpu: 400m # default: 2000m
            memory: 500Mi # default: 1024Mi
      proxy_init:
        resources:
          limits:
            cpu: 100m
            memory: 50Mi
          requests:
            cpu: 30m # default: 10m
            memory: 50Mi # default: 10Mi
    sidecarInjectorWebhook:
      enableNamespacesByDefault: false # if true, auto-injection is enabled for all namespaces; if false, only namespaces labeled istio-injection=enabled are auto-injected
      rewriteAppHTTPProbe: false # if true, the webhook or istioctl injector rewrites HTTP liveness probes in the PodSpec to redirect requests through the sidecar, so health checks work even when mTLS is enabled
    cni:
      excludeNamespaces: # namespaces excluded when CNI is enabled
        - istio-system
        - kube-system
        - monitoring
        - kube-node-lease
        - kube-public
        - kubernetes-dashboard
        - ingress-nginx
      logLevel: info
    pilot:
      autoscaleEnabled: true
      autoscaleMax: 5
      autoscaleMin: 1
      cpu:
        targetAverageUtilization: 80
    prometheus:
      contextPath: /prometheus # default: /prometheus
      hub: docker.io/prom
      resources: # default unlimited
        requests:
          cpu: 30m
          memory: 50Mi
        limits:
          cpu: 500m
          memory: 1024Mi
      nodeSelector:
        zone: "sz"
      retention: 7d # default: 6h
      scrapeInterval: 15s
      security:
        enabled: true
      tag: v2.15.1
    grafana:
      contextPath: /grafana # default: /grafana
      accessMode: ReadWriteMany
      image:
        repository: grafana/grafana
        tag: 6.5.2
      resources:
        requests:
          cpu: 30m
          memory: 50Mi
        limits:
          cpu: 300m
          memory: 500Mi
      nodeSelector:
        zone: "sz"
      security: # authentication is disabled by default
        enabled: true # default: false
        passphraseKey: passphrase # create the grafana Secret first
        secretName: grafana
        usernameKey: username # create the grafana Secret first
    kiali:
      contextPath: /kiali # default: /kiali
      createDemoSecret: false
      dashboard:
        grafanaInClusterURL: http://grafana.example.com # default: http://grafana:3000
        jaegerInClusterURL: http://tracing.example.com # default: http://tracing/jaeger
        passphraseKey: passphrase # Create Kiali Secret first
        secretName: kiali
        usernameKey: username # Create Kiali Secret first
        viewOnlyMode: false
      hub: kiali # default: quay.io/kiali
      resources:
        limits:
          cpu: 300m
          memory: 900Mi
        requests:
          cpu: 30m
          memory: 50Mi
      nodeSelector:
        zone: "sz"
      tag: v1.14
    tracing:
      provider: jaeger # select the tracing provider
      jaeger:
        accessMode: ReadWriteMany
        hub: docker.io/jaegertracing
        tag: "1.16"
        resources:
          limits:
            cpu: 300m
            memory: 900Mi
          requests:
            cpu: 30m
            memory: 100Mi
      nodeSelector:
        zone: "sz"
      opencensus:
        exporters:
          stackdriver:
            enable_tracing: true
        hub: docker.io/omnition
        resources:
          limits:
            cpu: "1"
            memory: 2Gi
          requests:
            cpu: 100m # default: 200m
            memory: 300Mi # default: 400Mi
        tag: 0.1.9
      zipkin:
        hub: docker.io/openzipkin
        javaOptsHeap: 700
        maxSpans: 500000
        node:
          cpus: 2
        resources:
          limits:
            cpu: 300m
            memory: 900Mi
          requests:
            cpu: 30m # default: 150m
            memory: 100Mi # default: 900Mi
        tag: 2.14.2

  components:
    cni:
      enabled: false # CNI is disabled by default
    ingressGateways:
    - enabled: true
      k8s:
        hpaSpec:
          minReplicas: 2 # default minimum: 1 pod
        service:
          type: ClusterIP # default type: LoadBalancer
        resources:
          limits:
            cpu: 1000m # default: 2000m
            memory: 1024Mi # default: 1024Mi
          requests:
            cpu: 100m # default: 100m
            memory: 128Mi # default: 128Mi
        strategy:
          rollingUpdate:
            maxSurge: 100%
            maxUnavailable: 25%
        nodeSelector:
          zone: "sz"
        overlays:
        - apiVersion: apps/v1 # expose the ingress gateway in hostNetwork mode
          kind: Deployment
          name: istio-ingressgateway
          patches:
          - path: spec.template.spec.hostNetwork
            value:
              true
          - path: spec.template.spec.dnsPolicy
            value:
              ClusterFirstWithHostNet
        - apiVersion: v1 # for security, expose only the HTTP, HTTPS, and metrics ports on the ingress gateway Service
          kind: Service
          name: istio-ingressgateway
          patches:
          - path: spec.ports
            value:
            - name: status-port
              port: 15020
              targetPort: 15020
            - name: http2
              port: 80
              targetPort: 80
            - name: https
              port: 443
              targetPort: 443
    pilot:
      enabled: true
      k8s:
        hpaSpec:
          minReplicas: 2 # default minimum: 1 pod
        resources:
          limits:
            cpu: 1000m # default: unlimited
            memory: 1024Mi # default: unlimited
          requests:
            cpu: 100m # default: 500m
            memory: 300Mi # default: 2048Mi
        strategy:
          rollingUpdate:
            maxSurge: 100%
            maxUnavailable: 25%
        nodeSelector:
          zone: "sz"
        overlays: # adjust the PDB configuration
        - apiVersion: policy/v1beta1
          kind: PodDisruptionBudget
          name: istiod
          patches:
          - path: spec.selector.matchLabels
            value:
              app: istiod
              istio: pilot
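With autoInject set to disabled and enableNamespacesByDefault set to false in the configuration above, sidecar injection is fully opt-in: a namespace must carry the istio-injection=enabled label, and each workload must be annotated explicitly. A sketch for a hypothetical namespace demo and Deployment myapp:

# Enable the injection webhook for the namespace
$ kubectl label namespace demo istio-injection=enabled

# Because autoInject is disabled, the pod template must also opt in via annotation
$ kubectl patch deployment myapp -n demo -p '{"spec":{"template":{"metadata":{"annotations":{"sidecar.istio.io/inject":"true"}}}}}'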

Deploy Istio

$ istioctl manifest apply -f istio-1.5.1.yaml
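After the install completes, you can optionally render the final manifests from the same IstioOperator file and verify the in-cluster resources against them (the output file name is arbitrary):

# Render the manifests corresponding to the IstioOperator configuration
$ istioctl manifest generate -f istio-1.5.1.yaml > generated-manifest.yaml

# Check that the resources in the cluster match the generated manifest
$ istioctl verify-install -f generated-manifest.yaml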

Expose Grafana, istio-tracing, Kiali, and Prometheus through the istio-ingressgateway

$ vim istio-addon-components-gateway.yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: istio-addon-components-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "grafana.example.com"
    - "tracing.example.com"
    - "kiali.example.com"
    - "prometheus.example.com"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: istio-grafana
  namespace: istio-system
spec:
  hosts:
  - "grafana.example.com"
  gateways:
  - istio-addon-components-gateway
  http:
  - route:
    - destination:
        host: grafana
        port:
          number: 3000
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: istio-tracing
  namespace: istio-system
spec:
  hosts:
  - "tracing.example.com"
  gateways:
  - istio-addon-components-gateway
  http:
  - route:
    - destination:
        host: tracing
        port:
          number: 80
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: istio-kiali
  namespace: istio-system
spec:
  hosts:
  - "kiali.example.com"
  gateways:
  - istio-addon-components-gateway
  http:
  - route:
    - destination:
        host: kiali
        port:
          number: 20001
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: istio-prometheus
  namespace: istio-system
spec:
  hosts:
  - "prometheus.example.com"
  gateways:
  - istio-addon-components-gateway
  http:
  - route:
    - destination:
        host: prometheus
        port:
          number: 9090

Deploy the Gateway and VirtualServices for Grafana, istio-tracing, Kiali, and Prometheus:

$ kubectl apply -f istio-addon-components-gateway.yaml

After the deployment is complete, check the status of each component:

$ kubectl get svc,pod,hpa,pdb,Gateway,VirtualService -n istio-system
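Because the ingress gateway runs in hostNetwork mode with a ClusterIP Service, the add-on UIs can be reached through any gateway node's IP by setting the Host header, for example (replace <gateway-node-ip> with an actual node address):

# Should print an HTTP status code (e.g. 200 or a redirect) from the Kiali UI
$ curl -s -o /dev/null -w '%{http_code}\n' -H 'Host: kiali.example.com' http://<gateway-node-ip>/kiali/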

References

  • [1] istio.io/docs/ops…
  • istio.io/docs/setup/…
  • www.danielstechblog.io/high-availa…

This article was published by YP Station.