Author | Wang Xining, Senior Technical Expert at Alibaba Cloud

This article is excerpted from the book “Istio Service Mesh Technology Analysis and Practice” by Wang Xining, a senior technical expert at Alibaba Cloud. Using multi-cluster deployment management with Istio as an example, it illustrates how a service mesh supports multi-cloud, multi-cluster, and hybrid deployments.

Previous articles in this series:

  • How to Manage Multiple Cluster Deployment using Istio: VPN Connection Topology on a Single Control Plane
  • How to Manage Multiple Cluster Deployment using Istio: Gateway Connection Topology on a Single Control Plane

In a multi-control-plane topology, each Kubernetes cluster has an identical Istio control plane installed, and each control plane manages only the service endpoints within its own cluster. Using Istio gateways, a shared root certificate authority (CA), and ServiceEntry resources, multiple clusters can be configured into a logically single service mesh. This approach has no special network requirements, so it is generally considered the simplest option when there is no universal network connectivity between the Kubernetes clusters.

In this topology, cross-cluster communication requires mutual TLS connections between services. To enable mutual TLS across clusters, the Citadel in each cluster is configured with an intermediate CA certificate issued by a shared root CA, as shown in the figure.

Deploying the control plane

Generating an intermediate CA certificate for each cluster's Citadel from the shared root CA is what enables mutual TLS communication across the clusters. For illustration purposes, both clusters use the sample root CA certificate provided in the Istio installation under the samples/certs directory. In a real deployment, you would typically use a different CA certificate for each cluster, all signed by a common root CA.
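As an optional sanity check before creating the secret, you can confirm that the sample intermediate certificate chains back to the sample root CA (a quick sketch; the paths assume you are working from the Istio release directory):

# Should print "samples/certs/ca-cert.pem: OK" if the intermediate certificate is signed by the sample root.
openssl verify -CAfile samples/certs/root-cert.pem samples/certs/ca-cert.pem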

Perform the following steps in each Kubernetes cluster to deploy the same Istio control plane configuration across all clusters.

  1. Create a Kubernetes secret for the generated CA certificates using the following command:

kubectl create namespace istio-system
kubectl create secret generic cacerts -n istio-system \
  --from-file=samples/certs/ca-cert.pem \
  --from-file=samples/certs/ca-key.pem \
  --from-file=samples/certs/root-cert.pem \
  --from-file=samples/certs/cert-chain.pem

  2. Install Istio’s CRDs and wait a few seconds for them to be committed to the Kubernetes API server:

for i in install/kubernetes/helm/istio-init/files/crd*yaml; do kubectl apply -f $i; done

  3. Deploy the Istio control plane. If the Helm dependencies are missing or out of date, update them with helm dep update. Note that because istio-cni is not used, you can temporarily remove it from the dependencies in requirements.yaml before running the update. Then run the following commands:

helm template install/kubernetes/helm/istio --name istio --namespace istio-system \
  -f install/kubernetes/helm/istio/values-istio-multicluster-gateways.yaml > ./istio.yaml
kubectl apply -f ./istio.yaml

Ensure that these steps complete successfully in every Kubernetes cluster. Note that the helm command that generates istio.yaml only needs to be executed once.
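A minimal verification sketch, assuming the default names used above, to confirm in each cluster that the secret exists and the control-plane pods have come up:

# The cacerts secret created in step 1 should exist.
kubectl get secret cacerts -n istio-system
# The control-plane pods, including the istiocoredns add-on used later for DNS, should be Running or Completed.
kubectl get pods -n istio-system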

Setting up DNS

Providing DNS resolution for services in remote clusters allows existing applications to work without modification, since applications typically expect to resolve a service by its DNS name and access the resulting IP address. Istio itself does not use DNS to route requests between services. Services within the same Kubernetes cluster share a common DNS suffix (for example, svc.cluster.local), and Kubernetes DNS provides resolution for them. To offer a similar setup for services in remote clusters, those services are addressed with names of the form <name>.<namespace>.global.

The Istio installation ships with a CoreDNS server that provides DNS resolution for these services. To take advantage of it, configure the Kubernetes DNS service to point to this CoreDNS service, which acts as the DNS server for the .global domain.
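The ConfigMaps below look up the ClusterIP of this CoreDNS service via command substitution; you can also inspect it directly:

# The istiocoredns service is deployed into istio-system as part of the Istio installation above.
kubectl get svc istiocoredns -n istio-system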

For clusters that use kube-dns, create the following ConfigMap, or update it if it already exists:

kubectl apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
data:
  stubDomains: |
    {"global": ["$(kubectl get svc -n istio-system istiocoredns -o jsonpath={.spec.clusterIP})"]}
EOF

For clusters that use CoreDNS, create the following ConfigMap, or update it if it already exists:

kubectl apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           upstream
           fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        proxy . /etc/resolv.conf
        cache 30
        reload
        loadbalance
    }
    global:53 {
        errors
        cache 30
        proxy . $(kubectl get svc -n istio-system istiocoredns -o jsonpath={.spec.clusterIP})
    }
EOF
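After applying whichever ConfigMap matches your cluster's DNS implementation, a hedged way to check that the change landed:

# On kube-dns clusters: the stub domain should reference the istiocoredns ClusterIP.
kubectl get configmap kube-dns -n kube-system -o jsonpath='{.data.stubDomains}'; echo
# On CoreDNS clusters: the Corefile should contain the global:53 block shown above.
kubectl get configmap coredns -n kube-system -o jsonpath='{.data.Corefile}'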

Deploying the sample application

To demonstrate cross-cluster access, deploy the sleep service in one Kubernetes cluster and the httpbin service in a second cluster, then verify that the sleep application can invoke the remote cluster's httpbin service.

  1. Deploy the sleep service in cluster1 by running the following commands:

kubectl create namespace app1
kubectl label namespace app1 istio-injection=enabled
kubectl apply -n app1 -f samples/sleep/sleep.yaml
export SLEEP_POD=$(kubectl get -n app1 pod -l app=sleep -o jsonpath={.items..metadata.name})

  2. Deploy the httpbin service in the second cluster, cluster2, by running the following commands:

kubectl create namespace app2
kubectl label namespace app2 istio-injection=enabled
kubectl apply -n app2 -f samples/httpbin/httpbin.yaml

  3. Obtain the gateway address of cluster2:

export CLUSTER2_GW_ADDR=$(kubectl get svc --selector=app=istio-ingressgateway \
  -n istio-system -o jsonpath="{.items[0].status.loadBalancer.ingress[0].ip}")

  4. To enable the sleep service in cluster1 to access the httpbin service in cluster2, create a ServiceEntry resource for the httpbin service in cluster1. The host name of the ServiceEntry should be of the form <name>.<namespace>.global, where name and namespace correspond to the name and namespace of the remote service in cluster2.

To provide DNS resolution for services in the *.global domain, these services must be assigned IP addresses, and each service in the .global DNS domain must have an address that is unique within the cluster. These IP addresses are not routable outside the pod. In this example we use the 127.255.0.0/16 range to avoid conflicts with other addresses. Application traffic sent to these addresses is captured by the sidecar proxy and routed to the appropriate remote service.

Create the ServiceEntry for the httpbin service in cluster1 by running the following command:

kubectl apply -n app1 -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: httpbin-app2
spec:
  hosts:
  # must be of form name.namespace.global
  - httpbin.app2.global
  # Treat remote cluster services as part of the service mesh
  # as all clusters in the service mesh share the same root of trust.
  location: MESH_INTERNAL
  ports:
  - name: http1
    number: 8000
    protocol: http
  resolution: DNS
  addresses:
  # the IP address to which httpbin.app2.global will resolve
  # must be unique for each remote service, within a given cluster.
  # This address need not be routable. Traffic for this IP will be captured
  # by the sidecar and routed appropriately.
  - 127.255.0.2
  endpoints:
  # This is the routable address of the ingress gateway in cluster2 that
  # sits in front of the httpbin service in the app2 namespace. Traffic from
  # the sidecar will be routed to this address.
  - address: ${CLUSTER2_GW_ADDR}
    ports:
      http1: 15443 # Do not change this port value
EOF

The configuration above routes all traffic in cluster1 addressed to httpbin.app2.global, on any port, to the endpoint ${CLUSTER2_GW_ADDR}:15443 over a mutual TLS connection.

The gateway on port 15443 is a special SNI-aware Envoy proxy that was preconfigured and installed as part of the multi-cluster Istio installation in the control-plane deployment steps above. Traffic entering port 15443 is load-balanced across the pods of the appropriate internal service in the target cluster.
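As a hedged sanity check (run against cluster2; the port name can differ between Istio releases), confirm that the ingress gateway service actually exposes port 15443:

# The output should include an entry for port 15443 on the istio-ingressgateway service.
kubectl get svc istio-ingressgateway -n istio-system \
  -o jsonpath='{.spec.ports[?(@.port==15443)]}'; echo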

Check the istiocoredns container logs in cluster1 to confirm that the domain-name mapping for the ServiceEntry has been loaded:

export ISTIO_COREDNS=$(kubectl get -n istio-system po -l app=istiocoredns -o jsonpath={.items..metadata.name})
kubectl logs --tail 2 -n istio-system ${ISTIO_COREDNS} -c istio-coredns-plugin

The following information is displayed:

  5. Verify that the sleep service in cluster1 can invoke the httpbin service in cluster2 by running the following command in cluster1:

kubectl exec $SLEEP_POD -n app1 -c sleep -- curl httpbin.app2.global:8000/headers

The following information is displayed:

At this point, cluster1 and cluster2 are connected under the multi-control-plane configuration.

Version-based routing across clusters

From the previous articles, we have seen that many of Istio's features, such as basic version routing, are easy to implement on a single Kubernetes cluster. In many real business scenarios, however, microservice-based applications are not that simple; their services must be distributed and run across clusters in multiple locations. The question, then, is whether Istio's capabilities remain just as easy to use in these genuinely complex environments.

Let's take a look at an example to see how Istio's traffic management capabilities work in a multi-cluster mesh with a multi-control-plane topology.

  1. First, deploy version v1 of the helloworld service to the first cluster, cluster1:

kubectl create namespace hello
kubectl label namespace hello istio-injection=enabled
kubectl apply -n hello -f samples/sleep/sleep.yaml
kubectl apply -n hello -f samples/helloworld/service.yaml
kubectl apply -n hello -f samples/helloworld/helloworld.yaml -l version=v1

  2. Deploy versions v2 and v3 of the helloworld service to the second cluster, cluster2:

kubectl create namespace hello
kubectl label namespace hello istio-injection=enabled
kubectl apply -n hello -f samples/helloworld/service.yaml
kubectl apply -n hello -f samples/helloworld/helloworld.yaml -l version=v2
kubectl apply -n hello -f samples/helloworld/helloworld.yaml -l version=v3

  3. As described in the previous section, remote services in a multi-control-plane topology are accessed through DNS names that carry the .global suffix.

In our case the name is helloworld.hello.global, so we need to create a ServiceEntry and a DestinationRule for it in cluster1. The ServiceEntry uses cluster2's ingress gateway as the endpoint address for reaching the service.

Create the ServiceEntry and DestinationRule for the helloworld service in cluster1 with the following command:

kubectl apply -n hello -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: helloworld
spec:
  hosts:
  - helloworld.hello.global
  location: MESH_INTERNAL
  ports:
  - name: http1
    number: 5000
    protocol: http
  resolution: DNS
  addresses:
  - 127.255.0.8
  endpoints:
  - address: ${CLUSTER2_GW_ADDR}
    labels:
      cluster: cluster2
    ports:
      http1: 15443 # Do not change this port value
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: helloworld-global
spec:
  host: helloworld.hello.global
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL
  subsets:
  - name: v2
    labels:
      cluster: cluster2
  - name: v3
    labels:
      cluster: cluster2
EOF
  4. Create destination rules in both clusters. Create the destination rule for subset v1 in cluster1 by running the following command:
kubectl apply -n hello -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: helloworld
spec:
  host: helloworld.hello.svc.cluster.local
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL
  subsets:
  - name: v1
    labels:
      version: v1
EOF

Create the destination rules for subsets v2 and v3 in cluster2 by running the following command:

kubectl apply -n hello -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: helloworld
spec:
  host: helloworld.hello.svc.cluster.local
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL
  subsets:
  - name: v2
    labels:
      version: v2
  - name: v3
    labels:
      version: v3
EOF
  5. Create a virtual service to route traffic.

Applying the virtual service below directs traffic for helloworld from the user jason to versions v2 and v3 in cluster2, with 70% going to v2 and 30% to v3. Traffic for helloworld from any other user goes to version v1 in cluster1:

kubectl apply -n hello -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: helloworld
spec:
  hosts:
    - helloworld.hello.svc.cluster.local
    - helloworld.hello.global
  http:
  - match:
    - headers:
        end-user:
          exact: jason
    route:
    - destination:
        host: helloworld.hello.global
        subset: v2
      weight: 70
    - destination:
        host: helloworld.hello.global
        subset: v3
      weight: 30
  - route:
    - destination:
        host: helloworld.hello.svc.cluster.local
        subset: v1
EOF
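To exercise these rules, you can call the service repeatedly from the sleep pod deployed earlier in the hello namespace of cluster1, once with and once without the end-user header. A minimal sketch, assuming the standard helloworld sample (port 5000, /hello path); the HELLO_SLEEP_POD variable is introduced here only for illustration:

# Requests carrying "end-user: jason" should be served from cluster2 (v2/v3 per the weights above);
# all other requests should be served by v1 in cluster1.
export HELLO_SLEEP_POD=$(kubectl get pod -n hello -l app=sleep -o jsonpath='{.items[0].metadata.name}')
kubectl exec $HELLO_SLEEP_POD -n hello -c sleep -- curl -s -H "end-user: jason" helloworld.hello.global:5000/hello
kubectl exec $HELLO_SLEEP_POD -n hello -c sleep -- curl -s helloworld.hello.global:5000/hello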

After making several calls like these, the results show that the routing rules take effect. This demonstrates that routing rules in a multi-control-plane topology are defined in the same way as in a single local cluster:

The easiest way to set up a multi-cluster mesh is the multi-control-plane topology, because it has no special network requirements. As the examples above show, routing functionality that runs on a single Kubernetes cluster runs just as easily across multiple clusters.

Readers of “Istio Service Mesh Technology Analysis and Practice” can try Alibaba Cloud Service Mesh (ASM) for free. Learn more about ASM, Alibaba Cloud's service mesh product: www.aliyun.com/product/servicemesh

About the author

Wang Xining, Senior Technical Expert at Alibaba Cloud and technical lead for Alibaba Cloud's service mesh product ASM and Istio on Kubernetes, focuses on Kubernetes, cloud native, service mesh, and related fields. He previously worked at the IBM China Development Center, where he chaired the patent technology review board, and holds more than 40 international technology patents in related fields. His book “Istio Service Mesh Technology Analysis and Practice” explains Istio's fundamentals and development practice in detail, with many selected examples and downloadable reference code to help readers get started with Istio quickly. Gartner believes that by 2020, service mesh will become the standard technology for all leading container management systems. The book is suitable for all readers interested in microservices and cloud native, and in-depth reading is recommended.

Course recommendation

To help more developers benefit from Serverless, we have brought together more than 10 Alibaba technical experts in the Serverless field to create a Serverless open course tailored for developers, so you can learn it and apply it immediately, easily embracing Serverless, the new paradigm of cloud computing.

Click for the free course: developer.aliyun.com/learning/ro…

“Alibaba Cloud Native focuses on microservices, Serverless, containers, Service Mesh, and other technical fields, follows cloud-native trends and large-scale cloud-native implementation practices, and is the public account that best understands cloud-native developers.”