This article covers best practices for using the Kong microservice gateway as the unified entry point of a Kubernetes cluster. In a previous article, Use Kubernetes Ingress to expose the service to the outside, I used the NGINX Ingress Controller as the cluster's unified traffic entry point. Compared with the NGINX Ingress Controller, Kong offers more powerful features and is better suited to a microservice architecture:

◾ Kong has a large plugin ecosystem that makes it easy to extend its functionality, e.g. API authentication, rate limiting, and access control;

◾ The Kong proxy itself and the Admin API run in a single process, separated only by port, which keeps deployment simple;

◾ Kong node configuration is persisted in a database shared by all nodes, so an Ingress update is synchronized to every node in real time. The NGINX Ingress Controller, by contrast, applies Ingress updates through a reload mechanism, which is relatively costly and may cause a brief service interruption;

◾ Kong has mature third-party administration UIs that integrate with the Admin API, enabling visual management of Kong's configuration.

This article first introduces the architecture of the Kong microservice gateway in Kubernetes, and then puts that architecture into practice.

1. Kong microservice gateway architecture in Kubernetes

Kubernetes simplifies the microservice architecture by making the Service its unit, with each Service representing one microservice. However, the Kubernetes cluster network is isolated from the outside world, and a microservice architecture generally needs a gateway to act as the entry point for all of its APIs. A microservice architecture in Kubernetes is no different: the gateway serves as the cluster's unified entry point and as the middleware between service consumers and providers. Kong can play this gateway role, providing a single external traffic entry point for the cluster, while services inside the cluster call each other by Service name:
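For example, a call between two in-cluster services addressed by Service name might look like this (the service and namespace names here are purely illustrative):

# Call another microservice from inside the cluster via its Service DNS name
curl http://user-service.default.svc.cluster.local/api/users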

So how does Kong run on a Kubernetes cluster, and by what mechanism?

As the service access layer, Kong not only receives and forwards external traffic, but also exposes an Admin API, through which routing, forwarding, and other Kong configuration is managed. Both functions are served by a single process.

In Kubernetes, each Kong node runs as a Pod, managed through a Deployment or DaemonSet. All Kong nodes share one database, so every node is synchronously aware of changes made through the Admin API. Since Kong runs inside the cluster as Pods, it must be exposed so that external traffic can reach it: locally this can be done with NodePort or hostNetwork, while on a cloud platform it is usually done with a LoadBalancer. A common deployment best practice is to separate Kong's Admin function into its own Pod dedicated to the unified configuration management of all other nodes; it only provides configuration and no external traffic forwarding, while the remaining Kong nodes do nothing but forward traffic.
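A rough sketch of that split, for illustration only (this is not how the Helm chart in this article deploys Kong): Kong's KONG_ADMIN_LISTEN and KONG_PROXY_LISTEN settings can turn each role off per Deployment, with both Deployments pointing at the same database. All names below are assumptions:

# Traffic-only Kong nodes: Admin API switched off
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kong-traffic
spec:
  replicas: 3
  selector:
    matchLabels:
      app: kong-traffic
  template:
    metadata:
      labels:
        app: kong-traffic
    spec:
      containers:
      - name: kong
        image: kong
        env:
        - name: KONG_DATABASE
          value: postgres
        - name: KONG_PG_HOST
          value: kong-postgresql   # shared database
        - name: KONG_ADMIN_LISTEN
          value: "off"             # no Admin API on traffic nodes
---
# Dedicated Admin node: configuration only, no traffic forwarding
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kong-admin
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kong-admin
  template:
    metadata:
      labels:
        app: kong-admin
    spec:
      containers:
      - name: kong
        image: kong
        env:
        - name: KONG_DATABASE
          value: postgres
        - name: KONG_PG_HOST
          value: kong-postgresql   # same shared database
        - name: KONG_PROXY_LISTEN
          value: "off"             # admin node does configuration only
        - name: KONG_ADMIN_LISTEN
          value: "0.0.0.0:8001"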

A word about the Kong Ingress Controller. Kong alone is actually enough; the Kong Ingress Controller exists to implement the Kubernetes Ingress resource object. An Ingress only defines traffic routing rules, and a routing rule by itself does nothing: an Ingress Controller must convert those rules into the actual configuration of the corresponding proxy. The Kong Ingress Controller converts Ingresses into Kong configuration. Unlike the NGINX Ingress Controller, it does not serve external traffic itself; it only parses and transforms Kubernetes Ingress resources, writing the results (Kong configuration such as Service and Route entities) to Kong's database via the Kong Admin API. The Kong Ingress Controller therefore needs connectivity to the Kong Admin API. So when we need to configure a route for Kong, we can do it either by creating a Kubernetes Ingress or by calling the Kong Admin API directly.
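For illustration, a plain Ingress like the one below (the backend service name my-api is hypothetical) would be picked up by the Kong Ingress Controller and translated into a Kong Service plus a Kong Route via the Admin API:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-api-ingress
spec:
  rules:
  - host: api.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: my-api
          servicePort: 80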

2. Deploy Kong with Helm

Note: this is a local cluster deployment. For convenience, Kong Proxy and Kong Admin are not split out separately; they share a single process, which provides both traffic forwarding and the Admin API.

The official Helm Chart is stable/kong[3]. Since I deployed on a local bare-metal cluster, many cloud features such as LoadBalancer, PV, and PVC are unavailable, so the Chart values file must be customized to fit the local environment (a consolidated sketch of the resulting values appears after this list):

1. Since the local bare-metal cluster does not support LoadBalancer, NodePort is used to expose the Kong Proxy and Kong Admin services. NodePort is already the Chart's default mode; the ports are pinned here, with Kong Proxy on 80 and 443 and Kong Admin on 8001: proxy.http.nodePort: 80, proxy.tls.nodePort: 443, admin.nodePort: 8001;

Note: the default Kubernetes NodePort range is 30000-32767; manually assigning ports outside this range will cause an error! This limit can be adjusted, as described in the previous article, “Kubernetes Adjusts NodePort Port Range”.

2. Enable the Kong Admin and Kong Proxy Ingresses, so the deployment creates the corresponding Ingress resources and the services become accessible from outside: admin.ingress.enabled: true, proxy.ingress.enabled: true. The external domain names must also be set (if you have no real domain, make one up and bind it in /etc/hosts): admin.ingress.hosts: [admin.kong.com], proxy.ingress.hosts: [proxy.kong.com];

3. For convenience in this exercise, switch Kong Admin to listen on plain HTTP port 8001: admin.useTLS: false, admin.servicePort: 8001, admin.containerPort: 8001. The Pod probes must also be switched to the HTTP scheme: livenessProbe.httpGet.scheme: HTTP, readinessProbe.httpGet.scheme: HTTP;

4. Enable HTTPS on the Kong Proxy Ingress, so that Kong can later proxy both HTTP and HTTPS. Here is the detailed process:

Create a TLS certificate with the domain name proxy.kong.com:

openssl req -x509 -nodes -days 65536 -newkey rsa:2048 -keyout proxy-kong.key -out proxy-kong.crt -subj "/CN=proxy.kong.com/O=proxy.kong.com"

Create the Kubernetes Secret resource using the generated certificate:

kubectl create secret tls proxy-kong-ssl --key proxy-kong.key --cert proxy-kong.crt -n kong

Edit the values file to enable TLS on the Kong Proxy Ingress, referencing the Secret created above:

proxy:
  ingress:
    tls:
    - hosts:
      - proxy.kong.com
      secretName: proxy-kong-ssl

5. Enable the Kong Ingress Controller, which is not deployed by default: ingressController.enabled: true;

6. Disable PostgreSQL data persistence, because the local bare-metal environment has no PV storage: pass --set postgresql.persistence.enabled=false when running helm install. PostgreSQL storage is then mounted as an emptyDir, so the data is lost when the Pod restarts; that is fine for local experimentation. For something more robust, you can build your own NFS-backed PV resources.
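Putting the settings above together, the customized values look roughly like this (a sketch; the key names follow the stable/kong chart of that era, so check them against your chart version):

proxy:
  http:
    nodePort: 80
  tls:
    nodePort: 443
  ingress:
    enabled: true
    hosts:
    - proxy.kong.com
    tls:
    - hosts:
      - proxy.kong.com
      secretName: proxy-kong-ssl
admin:
  useTLS: false
  servicePort: 8001
  containerPort: 8001
  nodePort: 8001
  ingress:
    enabled: true
    hosts:
    - admin.kong.com
livenessProbe:
  httpGet:
    scheme: HTTP
readinessProbe:
  httpGet:
    scheme: HTTP
ingressController:
  enabled: true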

The full customized values file is here: https://raw.githubusercontent…. s.yml

Helm deployment:

helm install stable/kong --name kong --set postgresql.persistence.enabled=false -f https://raw.githubusercontent… --namespace kong

Verify deployment:

[root@master kong]# kubectl get pod -n kong
NAME                                   READY   STATUS      RESTARTS   AGE
kong-kong-controller-76d657b78-r6cj7   2/2     Running     1          58s
kong-kong-d889cf995-dw7kj              1/1     Running     0          58s
kong-kong-init-migrations-c6fml        0/1     Completed   0          58s
kong-postgresql-0                      1/1     Running     0          58s
[root@master kong]# kubectl get ingress -n kong
NAME              HOSTS            ADDRESS   PORTS     AGE
kong-kong-admin   admin.kong.com             80        84s
kong-kong-proxy   proxy.kong.com             80, 443   84s

Curl test:

[root@master kong]# curl -i http://admin.kong.com
HTTP/1.1 200 OK
Content-Type: application/json
...
[root@master kong]# curl http://proxy.kong.com
{"message":"no Route matched with those values"}

3. Deploy Konga

The Kong platform is now running in the Kubernetes cluster with the Kong Ingress Controller enabled. However, Kong routing can currently only be configured by calling the Kong Admin API with curl, which is inconvenient. So we deploy Konga, a UI management service for Kong, into the cluster and connect it to Kong, allowing Kong to be configured visually. Since deploying Konga is simple and no official Chart is available, we create the related resources from a YAML file.

To save resources, Konga shares one PostgreSQL instance with Kong; both occupy very little database capacity, so it makes perfect sense for two such services to share one database. The Kubernetes resource file is below; the service is exposed externally through a Kong Ingress, with the domain name set to konga.kong.com:

The database password lives in the Secret created by the Chart when Kong was installed:

kubectl get secret kong-postgresql -n kong -o yaml | grep password | awk -F ':' '{print $2}' | tr -d ' ' | base64 -d

konga.yml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: konga
  name: konga
spec:
  replicas: 1
  selector:
    matchLabels:
      app: konga
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: konga
    spec:
      containers:
      - env:
        - name: DB_ADAPTER
          value: postgres
        - name: DB_URI
          value: "postgresql://kong:K9IV9pHTdS@kong-postgresql:5432/konga_database"
        image: pantsel/konga
        imagePullPolicy: Always
        name: konga
        ports:
        - containerPort: 1337
          protocol: TCP
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: konga
spec:
  ports:
  - name: http
    port: 1337
    targetPort: 1337
    protocol: TCP
  selector:
    app: konga
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: konga-ingress
spec:
  rules:
  - host: konga.kong.com
    http:
      paths:
      - path: /
        backend:
          serviceName: konga
          servicePort: 1337

Deploy Konga with kubectl: kubectl create -f konga.yml -n kong
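To check that Konga came up, something like this works (the Pod name suffix will differ in your cluster):

kubectl get pod -n kong -l app=konga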

After the deployment, bind a hosts entry pointing konga.kong.com at a cluster node IP to access the UI:
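For example, an /etc/hosts entry like this (the node IP is illustrative):

192.168.1.100  konga.kong.com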

In Konga you can connect to the Kong Admin address directly via its in-cluster Service name and port:
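Assuming the chart named the Admin Service kong-kong-admin (matching the Ingress names shown above), the in-cluster Admin URL would be:

http://kong-kong-admin:8001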

Once the connection succeeds, the main page displays Kong's global information:

4. Example: Exposing the Kubernetes Dashboard through Konga

Previously, we exposed the Kubernetes Dashboard with the NGINX Ingress Controller. Now we expose it through the in-cluster Kong platform, configured visually via Konga.

Configuring the service for external access through Konga requires only two steps:

1. Create the corresponding Service (not a Kubernetes Service, but Kong's Service concept: an abstraction of the upstream service being reverse-proxied);

2. Create a route corresponding to the Service.

Let's walk through configuring external access to the Kubernetes Dashboard service, with the external domain name dashboard.kube.com (any name works, as long as you bind it in your hosts file). The equivalent Kong Admin API calls are sketched after the two steps below.

Create the Kong Service:

Create a service route:
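If you prefer the Admin API to the Konga UI, the same two steps look roughly like this (a sketch: the Dashboard's Service name/namespace and the Admin URL are assumptions for illustration):

# 1. Create the Kong Service pointing at the Dashboard's in-cluster Service
curl -X POST http://admin.kong.com/services \
  --data name=kubernetes-dashboard \
  --data protocol=https \
  --data host=kubernetes-dashboard.kube-system \
  --data port=443

# 2. Create a Route on that Service for the external domain
curl -X POST http://admin.kong.com/services/kubernetes-dashboard/routes \
  --data 'hosts[]=dashboard.kube.com' \
  --data 'protocols[]=https'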

With configuration complete, test access in a browser: https://dashboard.kube.com

 

Source: Distributed Lab