Continuing the previous article's in-depth look at K8s networking principles

Service applications are visible inside the K8s cluster, but the applications we publish need to be accessible from the external network or even the public Internet. How can K8s expose its internal services?

In K8s's four-layer network model, only the Node network can communicate with the external network. So the question becomes: how can the layer-2 Service network be exposed through the layer-0 Node network?

Another question to consider: in the K8s service discovery diagram, which component knows all the information about the Service network, can communicate with the Pod network, and can also communicate with the Node network? The answer is kube-proxy, and service exposure is implemented through this component. All you need to do is have kube-proxy open a listening port on the node, and that is where NodePort comes into play.

NodePort

After a Service is published, K8s opens its NodePort on every node, and behind this port sits kube-proxy. When external traffic wants to access a K8s service, it hits the NodePort and is forwarded by kube-proxy to the internal Service abstraction layer, and from there to the target Pod.
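As an illustration, here is a minimal sketch of a NodePort Service manifest, assuming a workload whose Pods carry the label app: my-app and listen on container port 8080 (all names and ports are illustrative, not from the original article):

```bash
# Minimal NodePort Service (illustrative names/ports).
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: my-app-nodeport
spec:
  type: NodePort
  selector:
    app: my-app        # matches the target Pods' label
  ports:
  - port: 80           # Service (clusterIP) port
    targetPort: 8080   # container port inside the Pod
    nodePort: 30080    # opened on every node (default range 30000-32767)
EOF
# The service is now reachable at <any-node-ip>:30080
```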

LoadBalancer

If you run a K8s environment on Alibaba Cloud and set the Service type to LoadBalancer, Alibaba Cloud's K8s will automatically create a NodePort for port forwarding and provision an SLB with an independent public IP, which is automatically mapped to the NodePort of the K8s cluster. In production, the NodePort inside the cluster is reached through the public IP exposed by the SLB; in development and test environments the NodePort can be accessed directly. However, if exposing one service requires buying one LB+IP, exposing 10 services requires buying 10 LB+IPs, so the cost is high. Is there a way to buy one LB+IP and expose more services? That is where Ingress comes in: deploy a separate reverse proxy service inside K8s to do the proxying and forwarding.
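A minimal sketch of a LoadBalancer Service; with a cloud provider such as Alibaba Cloud configured, the cloud controller watches for this type and provisions the external LB and public IP automatically (names and ports are again illustrative):

```bash
# Minimal LoadBalancer Service (the cloud provider allocates the public IP).
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: my-app-lb
spec:
  type: LoadBalancer   # implies a NodePort plus an external LB in front
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
EOF
# EXTERNAL-IP is populated once the LB (e.g. an SLB) is provisioned:
kubectl get service my-app-lb
```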

Ingress

Ingress is a special service exposed through node ports 80/443. By setting up routing rules, Ingress can forward traffic by path or domain name to the Service abstraction layer and then on to the Pods; in its manifest, kind is set to Ingress. Ingress provides a layer-7 reverse proxy; exposing layer-4 services, or advanced functions such as security authentication, monitoring, rate limiting, and certificates, requires capabilities beyond a plain Ingress. With Ingress, you can buy a single LB+IP and expose multiple services in the K8s cluster.
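A sketch of an Ingress routing by domain name and path to a backend Service, assuming an ingress controller is already deployed in the cluster and reusing the illustrative names from above:

```bash
# One Ingress routing a host/path to a backend Service behind one LB+IP.
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
spec:
  rules:
  - host: app.example.com        # route by domain name
    http:
      paths:
      - path: /api               # route by path
        pathType: Prefix
        backend:
          service:
            name: my-app         # forwarded to the Service abstraction layer
            port:
              number: 80
EOF
```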

Quick ways to access services for local development and debugging

kubectl proxy

kubectl proxy creates a proxy service on the local host that can access any HTTP service in the K8s cluster. It goes through the API server on the master, which can indirectly reach services in the cluster because the master knows the information of all services in the cluster. This approach is limited to layer-7 HTTP forwarding.
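For example (the namespace and service name are assumptions; the /proxy/ path is the API server's built-in service proxy endpoint):

```bash
# Start a local proxy to the API server (defaults to 127.0.0.1:8001).
kubectl proxy --port=8001 &

# Reach an in-cluster HTTP service through the API server's service proxy;
# namespace and service name are illustrative.
curl http://127.0.0.1:8001/api/v1/namespaces/default/services/my-app:80/proxy/
```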

kubectl port-forward

kubectl port-forward opens a forwarding port on the local machine and indirectly forwards traffic to a Pod port in K8s. This mode supports both HTTP and TCP forwarding.
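For example, assuming a Pod named my-app-pod serving on port 80:

```bash
# Forward local port 8080 to port 80 of a Pod (name is illustrative).
kubectl port-forward pod/my-app-pod 8080:80

# In another terminal, the Pod answers on the local port:
curl http://127.0.0.1:8080/
```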

kubectl exec

Use this command to connect directly to a Pod and execute commands inside it.
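For example, again assuming a Pod named my-app-pod:

```bash
# Run a one-off command inside the Pod (name is illustrative):
kubectl exec my-app-pod -- ls /
# Or open an interactive shell:
kubectl exec -it my-app-pod -- /bin/sh
```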

summary

Understand Kube-proxy in depth

Kube-proxy is implemented indirectly through two mechanisms provided by the Linux kernel: netfilter and iptables. Netfilter is a hook framework in the Linux kernel that lets other kernel modules register callbacks which can intercept network packets and change their destination routing. iptables is a set of user-space programs that can check, forward, modify, redirect, or discard IP network packets; it is the user-space interface to netfilter and indirectly manipulates netfilter's routing rules.

In short, kube-proxy uses the iptables program to manipulate the routing rules in kernel-space netfilter, and netfilter intercepts the underlying IP packets and modifies their routing.
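As a standalone illustration of this division of labor (not kube-proxy's own rules), the user-space iptables tool can install a kernel-space netfilter DNAT rule of the kind userspace-mode kube-proxy relies on, here using the example addresses from the next section:

```bash
# Illustration only: a user-space tool installing a kernel-space DNAT rule
# that redirects traffic for a (made-up) clusterIP to a local port such as
# the one userspace-mode kube-proxy listens on.
sudo iptables -t nat -A OUTPUT -p tcp -d 10.104.14.67 --dport 80 \
  -j DNAT --to-destination 10.100.0.2:10400

# Remove the rule again after experimenting:
sudo iptables -t nat -D OUTPUT -p tcp -d 10.104.14.67 --dport 80 \
  -j DNAT --to-destination 10.100.0.2:10400
```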

Kube-proxy working modes

  • User-space proxy mode
Most network functions, including setting packet routing rules and load balancing, are performed directly by kube-proxy running in user space: it listens for requests, performs routing and load balancing, and forwards requests to the target Pod. In this mode kube-proxy also needs to switch frequently between user space and kernel space, because it needs to interact with iptables to implement load balancing.

The workflow in this mode:

1. kube-proxy watches the master for Service create, update, and delete events.
2. It also watches the endpoint addresses corresponding to these Services; if a Pod IP changes, kube-proxy synchronizes the change.
3. kube-proxy opens a random port on the node, for example 10400 on 10.100.0.2, through which requests can be forwarded to a corresponding endpoint (Pod), and installs a netfilter rule mapping the Service's clusterIP, for example 10.104.14.67:80, to 10.100.0.2:10400.
4. A client sends a request to the Service's clusterIP 10.104.14.67:80.
5. netfilter intercepts the request and forwards it to 10.100.0.2:10400, which is the port kube-proxy is listening on.
6. kube-proxy accepts the request and forwards it to a Pod via load balancing.

Steps 1-3 are the service discovery phase; steps 4-6 are the operation phase. When a request is forwarded to port 10400, kube-proxy first switches to the kernel to accept the request packet and then switches back to user space to perform the load-balancing call. Because of this frequent context switching, the performance of this mode is not ideal, so iptables mode was introduced.

iptables mode

When a new Service is created, kube-proxy uses iptables to set the forwarding rules for its clusterIP. This mode performs well, but iptables supports neither advanced load-balancing policies nor an automatic failure-retry mechanism, so readiness probes are generally needed to cooperate with it, and it is only suitable for small and medium-sized K8s clusters. Suppose a cluster of 5,000 nodes with 2,000 Services and 10 Pods per Service: about 20,000 records (2,000 × 10) need to be synchronized on each node. Moreover, in a cloud environment back-end Pod IPs can change at any time, which is a huge overhead for the Linux kernel. IPVS proxy mode was introduced to support larger K8s clusters.
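On a node of a cluster running kube-proxy in iptables mode, you can inspect the chains it installs; the chain names below are the ones kube-proxy actually creates, though the exact output depends on your cluster:

```bash
# Entry chain for all Service traffic:
sudo iptables -t nat -L KUBE-SERVICES -n | head

# Each Service gets a KUBE-SVC-* chain that fans out to per-endpoint
# KUBE-SEP-* chains, whose DNAT rules rewrite the clusterIP to a Pod IP:
sudo iptables -t nat -L -n | grep -E 'Chain KUBE-(SVC|SEP)-' | head
```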

IPVS Proxy mode

IPVS is a virtualization and load-balancing technology built into the Linux kernel. It is based on netfilter, is designed for high-performance load balancing at the kernel transport layer, and is a major component of LVS. It supports not only the default round robin but also weighted round robin, least connections, and destination/source hashing load-balancing algorithms. Because it stores network routing rules in efficient hash tables, it can significantly reduce the synchronization overhead of iptables and greatly increase the achievable cluster scale. kube-proxy creates and synchronizes IPVS rules by calling the netfilter interface; the actual routing, forwarding, and load balancing are IPVS's responsibility. IPVS mode is the most efficient, scalable, and configurable.
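A sketch of switching kube-proxy to IPVS mode and inspecting the result; --proxy-mode and --ipvs-scheduler are kube-proxy's own flags, though in most clusters kube-proxy runs as a DaemonSet configured via its ConfigMap rather than started by hand:

```bash
# Flags as kube-proxy itself accepts them (normally set via its ConfigMap):
kube-proxy --proxy-mode=ipvs --ipvs-scheduler=rr   # rr, wrr, lc, sh, dh, ...

# On a node, ipvsadm lists the kernel's virtual servers (clusterIPs)
# and their real servers (Pod endpoints):
sudo ipvsadm -Ln
```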

summary

  • The user-space proxy mode is obsolete
  • iptables mode is suitable for small and medium-sized K8s clusters
  • IPVS mode is used in production for large K8s clusters, at the cost of more complex configuration