Istio 1.4 Deployment Guide

Istio is iterating rapidly, and its deployment methods keep changing. The installation steps I described for version 1.0 no longer apply to the latest 1.4 release. Now that istioctl has become the mainstream deployment method, Helm can gradually be set aside ~~

Before deploying Istio, make sure you have a Kubernetes cluster (version 1.13 or later is recommended) up and running, and that your local kubectl client is configured to access it.

1. Kubernetes environment preparation

To quickly prepare a Kubernetes environment, we can use sealos to deploy one in the following steps:

Prerequisites

  • Download the Kubernetes offline installation package
  • Download the latest sealos
  • Server clocks must be synchronized
  • Hostnames must be unique

Install the Kubernetes cluster

$ sealos init --master 192.168.0.2 \
    --node 192.168.0.3 \
    --node 192.168.0.4 \
    --node 192.168.0.5 \
    --user root \
    --passwd your-server-password \
    --version v1.16.3 \
    --pkg-url /root/kube1.16.3.tar.gz

Check whether the installation is normal:

$ kubectl get node

NAME       STATUS   ROLES    AGE   VERSION
sealos01   Ready    master   18h   v1.16.3
sealos02   Ready    <none>   18h   v1.16.3
sealos03   Ready    <none>   18h   v1.16.3
sealos04   Ready    <none>   18h   v1.16.3

2. Download the Istio deployment file

You can download Istio from the GitHub Releases page or directly with the following command:

$ curl -L https://istio.io/downloadIstio | sh -

When the download is complete, you will get an istio-1.4.2 directory containing:

  • install/kubernetes: Installation file for Kubernetes platform
  • samples: Example application
  • bin: the istioctl binary, which can be used to manually inject the sidecar proxy (see the example below)
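
For example, once istioctl is installed (see below), you can manually inject the sidecar into an application manifest by piping it through istioctl. Shown here with the bundled Bookinfo sample, run from inside the istio-1.4.2 directory:

$ istioctl kube-inject -f samples/bookinfo/platform/kube/bookinfo.yaml | kubectl apply -f -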

Enter the istio-1.4.2 directory and view its layout:

$ cd istio-1.4.2
$ tree -L 1 ./
./
├── bin
├── demo.yaml
├── install
├── manifest.yaml
├── README.md
├── samples
└── tools

4 directories, 4 files

Copy istioctl to /usr/local/bin/:

$ cp bin/istioctl /usr/local/bin/
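
You can verify that istioctl is now on your PATH; passing --remote=false prints only the client version without contacting a control plane (which we have not installed yet):

$ istioctl version --remote=false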

Enable istioctl auto-completion

bash

Copy istioctl.bash from tools to $HOME:

$ cp tools/istioctl.bash ~/

Add a line to ~/.bashrc:

source ~/istioctl.bash

Apply the changes:

$ source ~/.bashrc

zsh

Copy _istioctl from tools to $HOME:

$ cp tools/_istioctl ~/

Add a line to ~/.zshrc:

source ~/_istioctl

Apply the changes:

$ source ~/.zshrc

3. Deploy Istio

istioctl ships with several built-in installation configuration profiles, which can be listed with the following command:

$ istioctl profile list

Istio configuration profiles:
    minimal
    remote
    sds
    default
    demo

The differences between them are as follows:

                         default   demo   minimal   sds   remote
Core components
istio-citadel               X        X                X      X
istio-egressgateway                  X
istio-galley                X        X                X
istio-ingressgateway        X        X                X
istio-nodeagent                                       X
istio-pilot                 X        X       X        X
istio-policy                X        X                X
istio-sidecar-injector      X        X                X      X
istio-telemetry             X        X                X
Add-ons
grafana                              X
istio-tracing                        X
kiali                                X
prometheus                  X        X                X

An X indicates that the profile installs the corresponding component.

If you just want a quick trial with the full set of features, deploy directly with the demo profile.
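
Before deploying, you can dump the full configuration of any profile to see exactly what it will install:

$ istioctl profile dump demo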

Before formal deployment, two points need to be noted:

Istio CNI Plugin

Currently, the default way to redirect a user pod's traffic to the sidecar proxy is a privileged init container named istio-init, which runs a script that writes iptables rules and therefore needs the NET_ADMIN capability. If you are not familiar with Linux capabilities, check out my Linux Capabilities series.

The main design goal of the Istio CNI plugin is to remove this privileged init container and achieve the same functionality through the Kubernetes CNI mechanism instead. Kubernetes CNI plugins form a chain, and Istio appends its processing logic to the end of that chain, hooking into pod creation and destruction to configure the pod's networking: it writes the iptables rules that redirect traffic in the pod's network namespace to the proxy process.

Please refer to the official documentation for details.

Using the Istio CNI plugin to create the sidecar iptables rules is clearly the way forward, so let's try it now.

Kubernetes Critical Add-on Pods

The core components of Kubernetes run on the master nodes, but there are also add-ons that are critical to the whole cluster, such as DNS and metrics-server; these are called critical add-ons. Kubernetes uses PriorityClass to ensure that critical add-ons are scheduled and keep running. To mark an application as a critical add-on, simply set its priorityClassName to system-cluster-critical or system-node-critical, of which system-node-critical has the highest priority.

Note: critical add-ons can only run in the kube-system namespace!

Please refer to the official documentation for details.
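
As a minimal sketch (the Deployment name and image here are hypothetical), marking a workload as a critical add-on only takes one extra field:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-critical-addon        # hypothetical add-on
  namespace: kube-system         # critical add-ons must run in kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-critical-addon
  template:
    metadata:
      labels:
        app: my-critical-addon
    spec:
      priorityClassName: system-cluster-critical   # marks the pod as cluster-critical
      containers:
      - name: app
        image: my-addon:1.0                        # hypothetical image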

To install Istio, run the following command:

$ istioctl manifest apply --set profile=demo \
   --set cni.enabled=true --set cni.components.cni.namespace=kube-system \
   --set values.gateways.istio-ingressgateway.type=ClusterIP
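
If you would rather review the rendered Kubernetes manifests before applying them, istioctl can also write them to a file instead of applying directly (pass the same --set overrides as above):

$ istioctl manifest generate --set profile=demo > generated-manifest.yaml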

istioctl supports two APIs:

  • IstioControlPlane API
  • Helm API

In the installation command above, the cni.* parameters belong to the IstioControlPlane API, while the values.* parameters are passed through to the Helm API.

After the deployment is complete, check the status of each component:

$ kubectl -n istio-system get pod

NAME                                      READY   STATUS    RESTARTS   AGE
grafana-6b65874977-8psph                  1/1     Running   0          36s
istio-citadel-86dcf4c6b-nklp5             1/1     Running   0          37s
istio-egressgateway-68f754ccdd-m87m8      0/1     Running   0          37s
istio-galley-5fc6d6c45b-znwl9             1/1     Running   0          38s
istio-ingressgateway-6d759478d8-g5zz2     0/1     Running   0          37s
istio-pilot-5c4995d687-vf9c6              0/1     Running   0          37s
istio-policy-57b99968f-ssq28              1/1     Running   1          37s
istio-sidecar-injector-746f7c7bbb-qwc8l   1/1     Running   0          37s
istio-telemetry-854d8556d5-6znwb          1/1     Running   1          36s
istio-tracing-c66d67cd9-gjnkl             1/1     Running   0          38s
kiali-8559969566-jrdpn                    1/1     Running   0          36s
prometheus-66c5887c86-vtbwb               1/1     Running   0          39s
$ kubectl -n kube-system get pod -l k8s-app=istio-cni-node

NAME                   READY   STATUS    RESTARTS   AGE
istio-cni-node-k8zfb   1/1     Running   0          10m
istio-cni-node-kpwpc   1/1     Running   0          10m
istio-cni-node-nvblg   1/1     Running   0          10m
istio-cni-node-vk6jd   1/1     Running   0          10m

The CNI plugin has been installed successfully. Check whether its configuration has been appended to the end of the CNI plugin chain:

$ cat /etc/cni/net.d/10-calico.conflist

{
  "name": "k8s-pod-network",
  "cniVersion": "0.3.1",
  "plugins": [
    {
      ...
    },
    {
      "type": "istio-cni",
      "log_level": "info",
      "kubernetes": {
        "kubeconfig": "/etc/cni/net.d/ZZZ-istio-cni-kubeconfig",
        "cni_bin_dir": "/opt/cni/bin",
        "exclude_namespaces": [
          "istio-system"
        ]
      }
    }
  ]
}

By default, the Istio CNI plugin watches pods in all namespaces except istio-system. This is not sufficient for our needs; a more rigorous approach is to have the Istio CNI plugin ignore at least two namespaces: kube-system and istio-system.

This is easy to fix. Remember the IstioControlPlane API mentioned earlier? You can override the previous configuration directly by creating an IstioControlPlane custom resource. For example:

$ cat cni.yaml

apiVersion: install.istio.io/v1alpha2
kind: IstioControlPlane
spec:
  cni:
    enabled: true
    components:
      namespace: kube-system
  values:
    cni:
      excludeNamespaces:
       - istio-system
       - kube-system
       - monitoring
  unvalidatedValues:
    cni:
      logLevel: info
$ istioctl manifest apply -f cni.yaml

Delete all istio-cni-node pods so the DaemonSet recreates them with the new configuration:

$ kubectl -n kube-system delete pod -l k8s-app=istio-cni-node

Look again at the configuration of the CNI plug-in chain:

$ cat /etc/cni/net.d/10-calico.conflist

{
  "name": "k8s-pod-network",
  "cniVersion": "0.3.1",
  "plugins": [
    {
      ...
    },
    {
      "type": "istio-cni",
      "log_level": "info",
      "kubernetes": {
        "kubeconfig": "/etc/cni/net.d/ZZZ-istio-cni-kubeconfig",
        "cni_bin_dir": "/opt/cni/bin",
        "exclude_namespaces": [
          "istio-system",
          "kube-system",
          "monitoring"
        ]
      }
    }
  ]
}

4. Expose the Dashboards

There is not much to say about this: just expose them through an Ingress controller, as described in my earlier article Istio 1.0 Deployment (1): Using Contour to Take Over Kubernetes' North-South Traffic.
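
For reference, a minimal sketch of exposing Grafana through an Ingress controller could look like this (the host name is a placeholder; Istio's bundled Grafana service listens on port 3000):

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: grafana
  namespace: istio-system
spec:
  rules:
  - host: grafana.example.com          # placeholder domain
    http:
      paths:
      - backend:
          serviceName: grafana         # Istio's bundled Grafana service
          servicePort: 3000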

Here's a new way to do it: istioctl provides a subcommand to open the various dashboards locally:

$ istioctl dashboard --help

Access to Istio web UIs

Usage:
  istioctl dashboard [flags]
  istioctl dashboard [command]

Aliases:
  dashboard, dash, d

Available Commands:
  controlz    Open ControlZ web UI
  envoy       Open Envoy admin web UI
  grafana     Open Grafana web UI
  jaeger      Open Jaeger web UI
  kiali       Open Kiali web UI
  prometheus  Open Prometheus web UI
  zipkin      Open Zipkin web UI

For example, to open the Grafana page locally, simply execute the following command:

$ istioctl dashboard grafana
http://localhost:36813

My cluster is not deployed locally, and this command cannot specify the address to listen on, so the page cannot be opened in a local browser directly. The workaround is to install the kubectl and istioctl binaries on your local machine, connect to the cluster through kubeconfig, and then run the command above locally to open the page. Windows users, forget I said anything…
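
A minimal sketch of that workaround, assuming the cluster's kubeconfig has been copied to a hypothetical local path:

$ export KUBECONFIG=$HOME/.kube/my-cluster.conf   # hypothetical local copy of the cluster kubeconfig
$ istioctl dashboard grafana
http://localhost:36813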

5. Expose the Gateway

To expose the Ingress Gateway, we can run it in host network mode. However, you will find that the IngressGateway pod fails to start: when a pod is set to hostNetwork=true, its dnsPolicy is forced from ClusterFirst to Default, and the Ingress Gateway cannot start because it needs to reach other components, such as Pilot, through DNS names.

We can solve this by explicitly setting dnsPolicy to ClusterFirstWithHostNet; see the Kubernetes advanced DNS guide for details.

The modified IngressGateway Deployment configuration file is as follows:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: istio-ingressgateway
  namespace: istio-system
  ...
spec:
  ...
  template:
    metadata:
      ...
    spec:
      affinity:
        nodeAffinity:
          ...
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/hostname
                operator: In
                values:
                - 192.168.0.4   # suppose you want to schedule the gateway to this host
      ...
      dnsPolicy: ClusterFirstWithHostNet
      hostNetwork: true
      restartPolicy: Always
      ...
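
Once the gateway pod is running on that node, services exposed through an Istio Gateway become reachable at that host's address. For example, assuming the Bookinfo sample and its Gateway/VirtualService have been applied (a hypothetical setup, for illustration only):

$ curl -s http://192.168.0.4/productpage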

We can then access the services in the service mesh from a browser through the Gateway URL. Next I'm going to start a series of hands-on tutorials, and all the steps in this article lay the groundwork for those experiments. If you want to follow along, be sure to complete the preparation described here.
