Introduction: When deploying Kubernetes in practice, developers need to choose an appropriate Ingress solution to solve problems such as Pod migration, Node and Pod port management, and dynamic domain name allocation. With so many Ingress products on the market, how can developers tell their strengths and weaknesses apart, and how should they match a solution to their own technology stack? In this article, Li Hui, a core R&D engineer for Tencent Cloud middleware, explains how to choose a Kubernetes Ingress controller.

Noun explanation

Familiarize yourself with the following basic concepts:

  • Cluster: A collection of cloud resources required by containers, including several cloud servers and load balancers.
  • Instance (Pod): An instance consists of one or more related containers that share the same storage and network space.
  • Workload: A Kubernetes resource object that manages the creation, scheduling, and automatic control of Pod replicas throughout their life cycle.
  • Service: A microservice composed of multiple instances (Pods) with the same configuration, plus the rules used to access them.
  • Ingress: A set of rules for routing external HTTP(S) traffic to Services (a minimal example follows this list).
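To make the Ingress concept concrete, here is a minimal sketch of an Ingress resource; the host name, Service name, and port are hypothetical placeholders rather than values from the article.

```yaml
# Minimal Ingress sketch: route HTTP traffic for shop.example.com
# to the Service "web" on port 80. Names and hosts are illustrative.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  rules:
    - host: shop.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web
                port:
                  number: 80
```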

Kubernetes access status

External access to Kubernetes

In Kubernetes, Service and Pod IP addresses are mainly used for access within the cluster and are not visible to applications outside it. How can external applications reach services inside the Kubernetes cluster? The usual solutions are NodePort and LoadBalancer.

Both schemes have some disadvantages:

  • The downside of NodePort is that only one Service can be exposed per port, and an additional load balancer is still required for high availability.
  • The drawback of LoadBalancer is that each Service needs its own IP address, internal or external, and in most cases you have to rely on a cloud provider to guarantee the load balancer's capacity. (A sketch of both Service types follows this list.)
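For comparison, here is a hedged sketch of the two Service types discussed above; the names, ports, and selector labels are illustrative only.

```yaml
# NodePort: expose the Service on a static port (here 30080) of every node.
apiVersion: v1
kind: Service
metadata:
  name: web-nodeport
spec:
  type: NodePort
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 30080
---
# LoadBalancer: ask the cloud provider to provision a dedicated load balancer
# (and therefore a dedicated IP) for this single Service.
apiVersion: v1
kind: Service
metadata:
  name: web-lb
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
```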

In practical Kubernetes deployments, the Ingress solution was created to solve problems such as Pod migration, Node and Pod port management, dynamic domain name allocation, and dynamic updates of Pod backend addresses.

Selecting an Ingress

Disadvantages of Nginx Ingress

Ingress is a very important entry point for external traffic into Kubernetes. The default recommendation in Kubernetes is Nginx Ingress, which I will call Kubernetes Ingress here to distinguish it from the commercial Ingress provided by Nginx Inc.

Kubernetes Ingress, as the name indicates, is built on Nginx, currently the most popular HTTP server in the world. It also has the advantage that Nginx Ingress requires very little configuration to get traffic into a Kubernetes cluster, and there is plenty of documentation to guide you on how to use it. This makes Nginx Ingress a great choice for most people who are new to Kubernetes, and for startups.

However, a number of problems arise when Nginx Ingress is used in some environments:

  • First problem: Nginx Ingress uses some OpenResty features, but the final configuration load still depends on the original Nginx config reload. When the routing configuration is very large, an Nginx reload takes a long time (several seconds or even more than ten seconds), which seriously affects or even interrupts service.
  • Second problem: Plugin development for Nginx Ingress is very difficult. If you decide the built-in Nginx Ingress plugins are not enough and you need custom plugins, that extra development work can be very painful, because Nginx Ingress itself has very limited plugin capability and extensibility.

Ingress selection principles

Since Nginx Ingress has so many problems, why not choose another open-source Ingress that works better? Besides Kubernetes Ingress, there are at least a dozen other Ingress controllers on the market, so how do you pick a suitable one?

An Ingress is ultimately built on an HTTP gateway, and there are only a few gateway families on the market: Nginx, Golang's native gateways, and the emerging Envoy. Since every developer has a different technology stack, the right Ingress will differ as well.

So how do we choose a more user-friendly Ingress? Or to narrow it down a bit, which Ingress should a developer familiar with Nginx or OpenResty choose?

Here are some of my experiences with Ingress controller selection.

Basic features

First of all, I think an Ingress controller must have the following basic capabilities; if it lacks them, you can rule it out immediately.

  • It must be open source; if it is not open source, it cannot be used.
  • Pods change very frequently in Kubernetes, so service discovery is critical.
  • HTTPS is ubiquitous now, so TLS/SSL capabilities such as certificate management are very important (see the TLS sketch after this list).
  • It should support common protocols such as WebSocket and, in some cases, HTTP/2 and QUIC.
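As an illustration of the TLS capability, here is a minimal sketch of an Ingress that terminates HTTPS using a certificate stored in a Kubernetes Secret; the host and the Secret name are hypothetical.

```yaml
# TLS termination sketch: the certificate and key for shop.example.com
# live in the Secret "shop-tls"; the controller uses them to serve HTTPS.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress-tls
spec:
  tls:
    - hosts:
        - shop.example.com
      secretName: shop-tls
  rules:
    - host: shop.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web
                port:
                  number: 80
```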

Underlying software

As mentioned earlier, not everyone is proficient in the same technology platform, so it is important to choose an Ingress built on an HTTP gateway you already know well, whether that is Nginx, HAProxy, Envoy, or a Golang-native gateway. Being familiar with how the underlying gateway works lets you get it into production quickly.

High performance matters in production, but high availability matters even more: the gateway you choose must be highly available and stable so that your services stay stable.

Functional requirements

Beyond the two points above, there are the special requirements your company's business places on the gateway. If you choose an open-source product, it should ideally meet those needs out of the box. For example, if you need gRPC protocol conversion, you will naturally want a gateway that already provides it. Here is a brief list of factors that affect the choice:

  • Protocols: whether HTTP/2 and HTTP/3 are supported;
  • Load-balancing algorithms: whether basic weighted round-robin (WRR) and consistent-hashing algorithms meet your needs, or whether a more sophisticated algorithm such as EWMA is required (a consistent-hashing sketch follows this list);
  • Authentication and authorization: whether simple authentication is enough or more advanced authentication is needed, and whether you need the kind of integration that lets you quickly build authentication features such as Tencent Cloud IM's. Beyond the problems mentioned earlier, the slow Nginx reload and the weak plugin extensibility, Kubernetes Ingress also has the drawback that adjusting the weights of backend nodes is not flexible enough.
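As a concrete example of the load-balancing criterion, here is a sketch of enabling consistent hashing on Nginx Ingress through its upstream-hash-by annotation; the host and Service names are placeholders, and annotation behavior can vary between controller versions.

```yaml
# Consistent hashing on Nginx Ingress: hash requests by URI so the same
# path keeps landing on the same backend Pod. Names are illustrative.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-chash
  annotations:
    nginx.ingress.kubernetes.io/upstream-hash-by: "$request_uri"
spec:
  rules:
    - host: shop.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web
                port:
                  number: 80
```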

Select APISIX as Ingress Controller

I personally prefer APISIX to Kubernetes Ingress. Although it has fewer features than Kong, APISIX's routing capabilities, flexible plugin mechanism, and high performance make up for many of the shortcomings in Ingress selection. For Nginx or OpenResty developers who are not satisfied with the existing Ingress options, I recommend using APISIX as the Ingress.

How do you use APISIX as an Ingress? First, let's distinguish between Ingress, which in Kubernetes is just the rule definition, and the Ingress controller, the component that synchronizes Kubernetes cluster state to the gateway. APISIX itself is only an API gateway, so how do we turn it into an Ingress controller? Let's start with a brief overview of what implementing an Ingress involves.

Implementing an Ingress essentially consists of two parts:

  • Part one: synchronize the configuration, or rather the state, of the Kubernetes cluster to the APISIX cluster.
  • Part two: define APISIX concepts such as routes, services, and upstreams as CRDs (Custom Resource Definitions) in Kubernetes.

Once the second part is in place, APISIX-specific configuration can be generated quickly from the Kubernetes Ingress configuration through the APISIX Ingress Controller. To make APISIX usable as a Kubernetes Ingress quickly, we created an open-source project called APISIX Ingress Controller.
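As a rough illustration of the second part, here is a sketch of an ApisixRoute CRD; the apiVersion and field names follow my recollection of the apisix-ingress-controller CRDs and may differ between versions, so treat it as a shape rather than a verbatim manifest.

```yaml
# Sketch of an APISIX route expressed as a Kubernetes CRD: match requests
# for shop.example.com/* and forward them to the Service "web" on port 80.
apiVersion: apisix.apache.org/v2
kind: ApisixRoute
metadata:
  name: web-route
spec:
  http:
    - name: web-rule
      match:
        hosts:
          - shop.example.com
        paths:
          - /*
      backends:
        - serviceName: web
          servicePort: 80
```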

The figure above shows the overall architecture of the Ingress Controller project. On the left is the Kubernetes cluster, where you can apply YAML files to change the Kubernetes configuration. On the right is the APISIX cluster, with its control plane and data plane. As the architecture diagram shows, APISIX Ingress acts as a connector between the Kubernetes cluster and the APISIX cluster: it listens for changes to resources in the Kubernetes cluster and synchronizes the cluster state to the APISIX cluster. In addition, since Kubernetes advocates that every component be highly available, we designed APISIX Ingress from the start to run in a two-node or multi-node mode to guarantee the high availability of the APISIX Ingress Controller.
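To show what the two-node high-availability mode can look like in deployment terms, here is a hedged sketch of running the controller with two replicas; the namespace, labels, and image tag are illustrative, and the project's actual manifests or Helm chart may differ.

```yaml
# HA sketch: run two replicas of the ingress controller so that one instance
# can take over if the other fails. Image tag and labels are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: apisix-ingress-controller
  namespace: ingress-apisix
spec:
  replicas: 2
  selector:
    matchLabels:
      app: apisix-ingress-controller
  template:
    metadata:
      labels:
        app: apisix-ingress-controller
    spec:
      containers:
        - name: controller
          image: apache/apisix-ingress-controller:latest
```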

Conclusion

Let's compare the pros and cons of APISIX Ingress with the popular Ingress controllers on the market. The table above was made by developers overseas for Kubernetes Ingress selection; based on the original table, I added the features of APISIX Ingress according to my own understanding. As you can see, APISIX is on the far left, followed by Kubernetes Ingress and Kong Ingress; Traefik is a Golang-based Ingress; HAProxy is a fairly common load balancer that used to be very popular; and Istio and Ambassador are two Ingresses that are very popular abroad.

Next, we summarize the advantages and disadvantages of each Ingress:

  • APISIX Ingress: As mentioned earlier, APISIX Ingress has very powerful routing capabilities, flexible plugin extensibility, and excellent performance. At the same time, its disadvantages are also obvious: although APISIX gained many features after being open sourced, there are few real-world implementation cases and little documentation to guide people on how to use those features.
  • Kubernetes Ingress: Kubernetes recommends using the default Nginx Ingress. Its main advantages are simplicity and easy access. The downside is that Nginx reload takes a long time. In addition, although there are many plug-ins available, plug-in extensibility is very weak.
  • Nginx Ingress: The main advantage of Nginx Ingress (the one provided by Nginx Inc., as distinguished earlier) is that it fully supports TCP and UDP, but it lacks other functions such as authentication and traffic scheduling.
  • Kong: Kong is an API gateway in its own right, and it more or less pioneered bringing an API gateway into Kubernetes as an Ingress. As an edge gateway, Kong does very well in authentication, rate limiting, gray (canary) releases, and similar areas. Kong Ingress also has the advantage that some of its API and service definitions can be abstracted into Kubernetes CRDs and synchronized to the Kong cluster through the Kubernetes Ingress configuration. The downsides are that deployment is rather difficult and that its high availability pales in comparison to APISIX.
  • Traefik: a Golang-based Ingress that is itself a microservice gateway and is widely used in Ingress scenarios. It is built on Golang, supports many protocols natively, and has no major disadvantages overall. If you are familiar with Golang, it is also worth recommending.
  • HAProxy: a well-known load balancer. Its main advantage is very strong load-balancing capability, but it does not stand out in other respects.
  • Istio Ingress and Ambassador Ingress are both based on the very popular Envoy. To be honest, I don't see many drawbacks in either; perhaps their only drawback is that they are built on Envoy, a platform most of us are unfamiliar with, so the barrier to entry is relatively high.

To sum up, after understanding the advantages and disadvantages of each Ingress, we can quickly choose an Ingress that suits us according to our own situation.

About the author

Li Hui is a core R&D member of the Tencent Cloud middleware API gateway team and an Apache APISIX PPMC member. He loves open source, enjoys sharing, and is active in the Apache APISIX community.
