An overview of Nginx Ingress deployment solutions on TKE

The most widely used open source Ingress controller implementation is Nginx Ingress, which is powerful and performs well. Nginx Ingress can be deployed in many different ways. This article introduces several solutions for deploying Nginx Ingress on TKE, explains their principles and trade-offs, and offers suggestions for choosing among them.

Introduction to Nginx Ingress

Before introducing how to deploy Nginx Ingress, let’s take a quick look at what Nginx Ingress is.

Nginx Ingress is an implementation of Kubernetes Ingress. It watches the Ingress resources of the Kubernetes cluster, converts the Ingress rules into Nginx configuration, and then lets Nginx perform layer 7 traffic forwarding:
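
To make this concrete, here is a minimal example of that translation (all names here are hypothetical): the controller would render an Ingress like the one below into an Nginx server block whose server_name matches the host and whose location blocks proxy to the backend Service's Pods.

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: example # hypothetical name
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: foo.example.com # rendered as "server_name foo.example.com" in nginx.conf
    http:
      paths:
      - path: / # rendered as a "location /" block proxying to the Service's endpoints
        backend:
          serviceName: foo # hypothetical backend Service
          servicePort: 80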

There are actually two implementations of Nginx Ingress:

  1. github.com/kubernetes/…
  2. github.com/nginxinc/ku…

The first is the implementation maintained by the Kubernetes open source community; the second is the official implementation from Nginx. The Kubernetes community implementation is the one usually used, and it is the focus of this article.

Nginx Ingress deployment solutions on TKE

So how do you deploy Nginx Ingress on TKE? There are three solutions; each is described below along with its deployment method.

Solution 1: Deployment + LB

The simplest way to deploy Nginx Ingress on TKE is to run the Nginx Ingress Controller as a Deployment and create a LoadBalancer Service for it (which either automatically creates a CLB or binds an existing one), so that the CLB receives external traffic and forwards it to Nginx Ingress:

The default implementation of a LoadBalancer Service on TKE is based on NodePort: the CLB binds the NodePort of each node as its backend RS and forwards traffic to the nodes' NodePorts, from where requests are routed via iptables or IPVS to one of the Service's backend Pods, i.e. the Nginx Ingress Controller Pods. When nodes are added or removed, the CLB automatically updates its NodePort bindings.
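
Conceptually, the entry point in this solution is just a LoadBalancer Service selecting the controller Pods. A minimal sketch follows (the name and labels are assumptions chosen to match the examples later in this article; the authoritative definition is the manifest linked below):

apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-controller
  labels:
    app: nginx-ingress
    component: controller
spec:
  type: LoadBalancer # TKE provisions a CLB and binds each node's NodePort as backend RS
  selector:
    app: nginx-ingress
    component: controller
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443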

This is the simplest way and can be installed directly with the following command:

kubectl create ns nginx-ingress
kubectl apply -f https://raw.githubusercontent.com/TencentCloudContainerTeam/manifest/master/nginx-ingress/nginx-ingress-deployment.yaml -n nginx-ingress

Solution 2: DaemonSet + HostNetwork + LB

Solution 1 is simple, but traffic has to pass through a NodePort layer and be forwarded one extra time. This approach has some disadvantages:

  1. The forwarding path is long. After traffic reaches the NodePort, it goes through Kubernetes' internal load balancing and is forwarded to Nginx via iptables or IPVS, which adds network latency.
  2. Passing through the NodePort necessarily involves SNAT. If traffic is too concentrated, source ports may be exhausted, or conntrack insertion conflicts may cause packet loss, leading to traffic anomalies.
  3. The NodePort on each node also acts as a load balancer. If the CLB is bound to the NodePorts of many nodes, load-balancing state is scattered across the nodes, which can lead to global load imbalance.
  4. The CLB health-checks each NodePort, and those probe packets are eventually forwarded to the Nginx Ingress Pods. If the CLB is bound to many nodes while there are only a few Nginx Ingress Pods, the probes put considerable pressure on Nginx Ingress.

Instead, Nginx Ingress can use hostNetwork, with the CLB binding node IP + port (80, 443) directly. Because hostNetwork is used, Nginx Ingress Pods cannot be scheduled onto the same node, or their port listeners would conflict. The usual practice is to plan ahead: choose some nodes as dedicated edge nodes for Nginx Ingress, label them, and then deploy Nginx Ingress onto those nodes with a DaemonSet. Here is the architecture diagram:

Installation steps:

  1. Label the chosen edge nodes: kubectl label node 10.0.0.3 nginx-ingress=true (replace with your node names).

  2. Deploy Nginx Ingress on these nodes:

    kubectl create ns nginx-ingress
    kubectl apply -f https://raw.githubusercontent.com/TencentCloudContainerTeam/manifest/master/nginx-ingress/nginx-ingress-daemonset-hostnetwork.yaml -n nginx-ingress
  3. Manually create the CLB, create TCP listeners for ports 80 and 443, and bind ports 80 and 443 on those nodes where the Nginx Ingress is deployed, respectively.
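
For orientation, here is a minimal sketch of what the DaemonSet in the manifest above boils down to (the image and version are illustrative assumptions; the linked YAML is authoritative). The key parts are hostNetwork and the nodeSelector matching the label applied in step 1:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ingress-controller
  labels:
    app: nginx-ingress
    component: controller
spec:
  selector:
    matchLabels:
      app: nginx-ingress
      component: controller
  template:
    metadata:
      labels:
        app: nginx-ingress
        component: controller
    spec:
      hostNetwork: true # listen on the node's ports 80/443 directly, no NodePort in the path
      dnsPolicy: ClusterFirstWithHostNet # keep in-cluster DNS resolution working under hostNetwork
      nodeSelector:
        nginx-ingress: "true" # run only on the edge nodes labeled in step 1
      containers:
      - name: controller
        image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.30.0 # illustrative version
        ports:
        - containerPort: 80
        - containerPort: 443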

Solution 3: Deployment + LB passthrough to Pod

Although Solution 2 has some advantages over Solution 1, it introduces operational overhead: the CLB and the Nginx Ingress nodes must be maintained manually, the nodes have to be planned in advance and bound and unbound by hand on the CLB console, and automatic scaling is impossible. If your network mode is VPC-CNI, all Pods use elastic network interfaces (ENIs), and the CLB can bind the Nginx Ingress Pods directly through their ENIs, bypassing NodePort without any manual CLB management:

If your network mode is Global Router (most clusters use this mode), you can enable VPC-CNI for the cluster on the cluster information page:

After ensuring that the cluster supports VPC-CNI, you can use the following command to install Nginx Ingress:

kubectl create ns nginx-ingress
kubectl apply -f https://raw.githubusercontent.com/TencentCloudContainerTeam/manifest/master/nginx-ingress/nginx-ingress-deployment-eni.yaml -n nginx-ingress
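
The key difference from Solution 1 is an annotation on the controller Service that tells the TKE service controller to bind the CLB directly to the Pods' ENIs instead of to NodePorts. A minimal sketch, assuming the direct-access annotation documented for TKE (treat the exact key as an assumption and cross-check against the YAML linked above):

apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-controller
  labels:
    app: nginx-ingress
    component: controller
  annotations:
    service.cloud.tencent.com/direct-access: "true" # assumed key: CLB binds the Pod ENIs directly
spec:
  type: LoadBalancer
  selector:
    app: nginx-ingress
    component: controller
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443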

Choosing a deployment solution

Three solutions for deploying Nginx Ingress on TKE have been described above, along with their advantages and disadvantages. Here is a brief summary and some selection advice:

  1. Solution 1 is relatively simple and universal, but may run into performance problems in large-scale, high-concurrency scenarios. Consider it if your performance requirements are not stringent.
  2. Solution 2 uses hostNetwork and performs well, but requires manual maintenance of the CLB and the Nginx Ingress nodes and cannot scale automatically, so it is not recommended.
  3. Solution 3 performs well and needs no manual CLB maintenance, making it the most ideal option. It requires the cluster to support VPC-CNI: if your cluster uses the VPC-CNI network plugin, or uses the Global Router network plugin with VPC-CNI support enabled (the two modes are mixed), this solution is recommended.

Q&A

How to support an intranet Ingress?

Because Solution 2 manages the CLB manually, you can create either a public or a private (intranet) CLB yourself. Solutions 1 and 3 create a public CLB by default; to use an intranet CLB instead, modify the deployment YAML and add an annotation to the nginx-ingress-controller Service, with key service.kubernetes.io/qcloud-loadbalancer-internal-subnetid and, as its value, the ID of the subnet in which the intranet CLB should be created. Example:

apiVersion: v1
kind: Service
metadata:
  annotations:
    service.kubernetes.io/qcloud-loadbalancer-internal-subnetid: subnet-xxxxxx # replace with the ID of one of the subnets in the cluster's VPC
  labels:
    app: nginx-ingress
    component: controller
  name: nginx-ingress-controller

How to reuse an existing LB?

By default, Solutions 1 and 3 automatically create a new CLB, and the IP address of the Ingress traffic entry point is that of the newly created CLB. If your service depends on the entry IP (for example, DNS already resolves to an existing CLB's IP and you do not want to switch addresses), or if you want to use a monthly-subscription CLB (the automatically created one is pay-as-you-go), you can have Nginx Ingress bind an existing CLB instead.

The method is again to modify the deployment YAML and add an annotation to the nginx-ingress-controller Service, with key service.kubernetes.io/tke-existed-lbid and the CLB ID as its value. Example:

apiVersion: v1
kind: Service
metadata:
  annotations:
    service.kubernetes.io/tke-existed-lbid: lb-6swtxxxx # replace with your CLB ID
  labels:
    app: nginx-ingress
    component: controller
  name: nginx-ingress-controller

What is the public network bandwidth of Nginx Ingress?

What is the public network bandwidth of my Nginx Ingress? Can it support my service's concurrency?

Some background first: Tencent Cloud accounts come in two types, depending on where public network bandwidth is managed:

  1. Bill-by-CVM (non-bandwidth-upmoved): bandwidth is managed on the cloud virtual machines (CVM).
  2. Bill-by-IP (bandwidth-upmoved): bandwidth is managed on the CLB or IP.

For bill-by-CVM (non-bandwidth-upmoved) accounts, if Nginx Ingress uses a public network CLB, its public bandwidth is the sum of the bandwidths of the TKE nodes bound to the CLB. With Solution 3 (CLB passthrough to Pod), what the CLB binds is not TKE nodes but ENIs, so the public bandwidth of Nginx Ingress is the sum of the bandwidths of all nodes on which Nginx Ingress Controller Pods are scheduled. For example, if the controller Pods land on three nodes that each have 100 Mbps of public bandwidth, the total entry bandwidth is roughly 300 Mbps.

For bill-by-IP (bandwidth-upmoved) accounts, the Nginx Ingress bandwidth equals that of the CLB you purchased. The default is 10 Mbps (pay-as-you-go), which you can adjust as needed.

For historical reasons, most accounts registered earlier are of the bill-by-CVM type. See the Tencent Cloud account types documentation to determine which type your account is.

How to create an Ingress?

Productized support for Nginx Ingress on TKE is not yet complete, so if you deploy Nginx Ingress on TKE yourself and want it to manage your Ingresses, you cannot create them from the TKE console (web page) for now. Create them with YAML instead, adding an Ingress class annotation to each Ingress. Example:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    kubernetes.io/ingress.class: nginx # this is the key part
spec:
  rules:
  - host: "*" # replace with your domain
    http:
      paths:
      - path: /
        backend:
          serviceName: nginx-v1
          servicePort: 80

How to monitor?

The Nginx Ingress installed by the methods above already exposes a metrics port that Prometheus can scrape. If prometheus-operator is installed in the cluster, the following ServiceMonitor can be used to collect Nginx Ingress monitoring data:

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: nginx-ingress-controller
  namespace: nginx-ingress
  labels:
    app: nginx-ingress
    component: controller
spec:
  endpoints:
  - port: metrics
    interval: 10s
  namespaceSelector:
    matchNames:
    - nginx-ingress
  selector:
    matchLabels:
      app: nginx-ingress
      component: controller

Here’s an example of a native Prometheus configuration:

- job_name: nginx-ingress
  scrape_interval: 5s
  kubernetes_sd_configs:
  - role: endpoints
    namespaces:
      names:
      - nginx-ingress
  relabel_configs:
  - action: keep
    source_labels:
    - __meta_kubernetes_service_label_app
    - __meta_kubernetes_service_label_component
    regex: nginx-ingress;controller
  - action: keep
    source_labels:
    - __meta_kubernetes_endpoint_port_name
    regex: metrics

Now that we have the data, let's configure Grafana dashboards to display it. The Nginx Ingress community provides ready-made dashboards: github.com/kubernetes/…

Import them by copying the JSON into Grafana. nginx.json is a dashboard showing the general monitoring metrics of Nginx Ingress:

request-handling-performance.json is a dashboard focused on the performance of Nginx Ingress:

Conclusion

This article has walked through three solutions for deploying Nginx Ingress on TKE, together with a number of practical suggestions; it should serve as a good reference for anyone who wants to run Nginx Ingress on TKE. Because demand for Nginx Ingress is high, we are also working on productized support for it, with one-click deployment, integrated logging and monitoring, and performance optimizations. In the near future you will be able to use Nginx Ingress on TKE even more easily and efficiently, so stay tuned!

References

  1. TKE Service YAML example: cloud.tencent.com/document/pr…
  2. TKE Service: using an existing CLB: cloud.tencent.com/document/pr…
  3. Tencent Cloud account types: cloud.tencent.com/document/pr…
