In a previous article on gateways in the Istio Service Mesh, I described a simple scheme for exposing the Ingress Gateway. That scheme was only meant for temporary testing and is not suitable for larger-scale scenarios. This article explores more robust schemes for exposing the Ingress Gateway.

HostNetwork

The first method is relatively simple: run the Ingress Gateway directly in HostNetwork mode. However, you will find that the Ingress Gateway Pod cannot start, because when a Pod is set to hostNetwork=true, its dnsPolicy is forced from ClusterFirst to Default. The Ingress Gateway then fails to start, since it needs to reach other components, such as Pilot, through DNS domain names.

We can solve this problem by explicitly setting dnsPolicy to ClusterFirstWithHostNet; see the Kubernetes advanced DNS guide for details.

The modified IngressGateway Deployment configuration file is as follows:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: istio-ingressgateway
  namespace: istio-system
  ...
spec:
  ...
  template:
    metadata:
    ...
    spec:
      affinity:
        nodeAffinity:
          ...
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/hostname
                operator: In
                values:
                - 192.168.123.248   # Let's say you want to schedule to this host
      ...
      dnsPolicy: ClusterFirstWithHostNet
      hostNetwork: true
      restartPolicy: Always
      ...

We can then access the services in the service mesh from a browser through the Gateway URL.
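
As a concrete check from the command line, the request might look roughly like the following. This assumes, purely for illustration, that the Bookinfo sample is deployed and a Gateway listening on port 80 routes to its productpage service; adjust the Host header and path to whatever your own Gateway and VirtualService declare, and use the node IP from the example above.

$ curl -s -H "Host: bookinfo.example.com" http://192.168.123.248/productpage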

However, as the traffic entry point of the service mesh, the Ingress Gateway must be highly available. The single point of failure is the first problem to solve, and multi-replica deployment is the usual answer. The scheme above, however, only works when the Ingress Gateway runs as a single instance (the Deployment has one replica). To support a multi-replica Deployment architecture, we need a better exposure scheme.

Use Envoy as a front-end proxy

We already know that the Ingress Gateway actually runs an Envoy proxy inside, so we can solve the high-availability problem by adding a layer of proxy in front of the Ingress Gateway. You can then scale the Ingress Gateway out to multiple replicas, and the front-end proxy only needs to reach the back-end gateways through the Service name. In addition, it is advisable to deploy the front-end proxy on dedicated nodes, to avoid resource contention between business applications and the front-end proxy.
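
For example, scaling the gateway and reserving a node for the proxy could look like the following sketch. The node name and taint key are placeholders, and the front-proxy Deployment would still need a matching toleration and node affinity.

$ kubectl -n istio-system scale deployment istio-ingressgateway --replicas=3
$ kubectl taint node <front-proxy-node> dedicated=front-proxy:NoSchedule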

The front-end proxy can be generic load-balancing software (such as HAProxy, Nginx, etc.) or Envoy. Envoy is recommended because it is the default data plane in the Istio Service Mesh.

The Envoy project provides a set of official use cases, from which we only need the Dockerfile. First clone Envoy’s repository and go to the examples/front-proxy directory:

$ git clone https://github.com/envoyproxy/envoy
$ cd envoy/examples/front-proxy

Modify the front-envoy.yaml configuration file as follows:

static_resources:
  listeners:
  - address:
      socket_address:
        address: 0.0.0.0
        port_value: 80
    filter_chains:
    - filters:
      - name: envoy.tcp_proxy  # ①
        config:
          stat_prefix: ingress_tcp
          cluster: ingressgateway
          access_log:
            - name: envoy.file_access_log
              config:
                path: /dev/stdout
  - address:
      socket_address:
        address: 0.0.0.0
        port_value: 443
    filter_chains:
    - filters:
      - name: envoy.tcp_proxy
        config:
          stat_prefix: ingress_tcp
          cluster: ingressgateway_tls
          access_log:
            - name: envoy.file_access_log
              config:
                path: /dev/stdout
  clusters:
  - name: ingressgateway
    connect_timeout: 0.25s
    type: strict_dns
    lb_policy: round_robin
    http2_protocol_options: {}
    hosts:
    - socket_address:
        address: istio-ingressgateway.istio-system  # ②
        port_value: 80
  - name: ingressgateway_tls
    connect_timeout: 0.25s
    type: strict_dns
    lb_policy: round_robin
    http2_protocol_options: {}
    hosts:
    - socket_address:
        address: istio-ingressgateway.istio-system
        port_value: 443
admin:
  access_log_path: "/dev/null"
  address:
    socket_address:
      address: 0.0.0.0
      port_value: 8001
  • ① envoy.tcp_proxy is the name of the filter to be instantiated. The name must match a built-in supported filter, that is, the value of this field cannot be arbitrary and must be one of the specified values. Here envoy.tcp_proxy means the TCP proxy is used. For details, see listener.Filter.
  • ② istio-ingressgateway.istio-system is the DNS domain name of the Ingress Gateway service inside the cluster.

For additional configuration analysis, see: Envoy’s architecture and basic terms
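
Before building anything, you can optionally let Envoy validate the modified configuration using its --mode validate option. This is a sketch that assumes Docker is available locally; depending on the image version, the way the envoy binary is invoked may differ slightly.

$ docker run --rm -v "$PWD/front-envoy.yaml":/etc/front-envoy.yaml \
    envoyproxy/envoy:latest envoy --mode validate -c /etc/front-envoy.yaml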

Next, build a Docker image from Dockerfile-frontenvoy and front-envoy.yaml. First, take a look at the contents of the Dockerfile:

FROM envoyproxy/envoy:latest

RUN apt-get update && apt-get -q install -y \
    curl
CMD /usr/local/bin/envoy -c /etc/front-envoy.yaml --service-cluster front-proxy

The /etc/front-envoy.yaml in the container is our local front-envoy.yaml. In Kubernetes it can be mounted through a ConfigMap, so we also need to create a ConfigMap:

$ kubectl -n istio-system create cm front-envoy --from-file=front-envoy.yaml
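
If you want to confirm that the ConfigMap holds the expected content, a quick check is:

$ kubectl -n istio-system describe cm front-envoy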

You can either push the built image to a private or public repository, or use the image I’ve uploaded.
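
If you build and push it yourself, the steps are roughly the following, where <your-registry> is a placeholder for your own image repository:

$ docker build -t <your-registry>/front-envoy -f Dockerfile-frontenvoy .
$ docker push <your-registry>/front-envoy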

Finally, we can deploy the front-end proxy using this image. We need to create a Deployment configuration file with the following contents:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: front-envoy
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: front-envoy
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/hostname
                operator: In
                values:
                - 192.168.123.248   # Let's say you want to schedule to this host
      containers:
      - name: front-envoy
        image: yangchuansheng/front-envoy
        ports:
        - containerPort: 80
        volumeMounts:
        - name: front-envoy
          mountPath: /etc/front-envoy.yaml
          subPath: front-envoy.yaml
      hostNetwork: true
      volumes:
        - name: front-envoy
          configMap:
            name: front-envoy

You can replace the image with your own image and deploy it using the YAML file:

$ kubectl -n istio-system create -f front-envoy-deploy.yaml
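
Once it is created, you can check whether the Pod is running and which node it landed on:

$ kubectl -n istio-system get pods -l app=front-envoy -o wide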

We can then access the services in the service mesh from a browser using the URL of the node where the front-end proxy resides.
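
From the command line, the same check might look like this, again assuming for illustration the Bookinfo sample behind a Gateway on port 80; <front-proxy-node-ip> is a placeholder, and several requests in a row let you see the front proxy load-balancing across the gateway replicas.

$ for i in 1 2 3; do curl -s -o /dev/null -w "%{http_code}\n" -H "Host: bookinfo.example.com" http://<front-proxy-node-ip>/productpage; done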

Going further, we can also make the front-end proxy itself highly available. For a Kubernetes cluster that exposes only one access point to the outside, keepalived can be used to eliminate the single point of failure. The implementation is similar to the high availability of Ingress; for details, see the Ingress high-availability solution.
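
As a very rough sketch only (the package manager, interface name, router ID, and virtual IP below are placeholders that must match your environment), keepalived on each front-proxy node could float a virtual IP between them like this:

$ apt-get install -y keepalived
$ cat > /etc/keepalived/keepalived.conf <<'EOF'
vrrp_instance VI_1 {
    state MASTER              # BACKUP on the other front-proxy nodes
    interface eth0            # network interface that carries the virtual IP
    virtual_router_id 51
    priority 100              # use a lower priority on BACKUP nodes
    advert_int 1
    virtual_ipaddress {
        192.168.123.250       # the virtual IP clients will access
    }
}
EOF
$ systemctl restart keepalived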