
Preface

The previous two articles introduced the core concepts of K8s and their respective roles, which you should make sure you understand. Today I'd like to share the K8s network model, which is more complex but also very important. Let's start with the basics.

The Node network

This is the most basic network: communication between the machine Nodes themselves, and the foundation of the whole K8s cluster. The operations engineer is responsible for making sure every Node can reach every other Node.

The worker Node (virtual machine) shown above communicates over IP + port, much like machines on an internal LAN.

Pod network

The smallest unit in K8s is the Pod. Each worker Node runs multiple Pods, and a Pod can contain multiple containers. So what does their network communication model look like?

We just need to ensure that every Pod can communicate with every other Pod, as shown in the figure below.

Let’s take a look at how different containers interact with each other within the same Pod.

Network communication between containers in the same Pod

Eth0, Docker0, and Veth0 are shown in the figure above:

  • Eth0: the Node's network device; Nodes communicate with each other through it.
  • Docker0: a virtual bridge, which can be understood as a virtual switch; it handles communication between different Pods on the same Node.
  • Veth0: a virtual NIC inside a Pod; it connects the containers within the Pod, and its IP address is assigned by Docker0.
  • Each Pod in K8s manages a group of Docker containers, and these containers share the same network namespace.
  • Every Docker container in a Pod shares the Pod's IP and port space, and because they are in the same network namespace they can reach each other via localhost. What mechanism puts multiple containers into one namespace? It is Docker's container network mode: `--net=container`.

In container mode, a newly created Docker container shares a network namespace with an existing container rather than with the host. The new container does not create its own NIC or configure its own IP; it shares the IP, port range, and so on with the specified container.

Each Pod gets its own network namespace from the system. By starting a Docker container inside a Pod with `--net=container:<id>`, the container joins the network namespace owned by the Pod.

This is why you can see a pause container in every Pod: it holds the Pod's network namespace, and the business containers join it.
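The shared network namespace can be seen in a minimal two-container Pod. The following is a sketch, not from the original article: the names, images, and port are illustrative assumptions. The sidecar reaches the web container via localhost because both containers share the Pod's (pause container's) network namespace.

```yaml
# A minimal Pod with two containers sharing one network namespace.
# Names and images are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: shared-netns-demo
spec:
  containers:
    - name: web
      image: nginx:alpine        # listens on port 80 inside the Pod
    - name: sidecar
      image: busybox:1.36
      # The sidecar can reach the web container via localhost:80
      # because both containers share the Pod's network namespace.
      command: ["sh", "-c", "while true; do wget -qO- http://localhost:80 >/dev/null; sleep 5; done"]
```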

Network communication between different Pods on the same Node

Docker0 assigns an IP address to each Pod and acts as a virtual bridge between the Pods on the same Node.

In this example, the Pod IP address space on the Node is 172.17.0.0/24, while the Node network is 10.100.0.0/24.

How do pods from different nodes communicate with each other?

Pod network communication between Nodes

The figure above illustrates Pod communication across different Nodes. The Node IP address space is 10.100.0.0/24, and the Pod IP address space is 172.17.0.0/16. A Pod's IP is unique across the entire K8s cluster, which K8s guarantees. The Node network and the Pod network are not in the same address space, so how can the two Pods communicate?

When Pod1 sends a packet to Pod2, the packet is first encapsulated into a packet on the Node network, routed to the target Node, decapsulated there, and finally delivered to the target Pod. The whole process requires knowing which Pod lives on which Node.

Pod1 and Pod2 are not on the same host. The Pod addresses are in the same segment as Docker0, but the Docker0 segment and the host's NIC belong to two completely different IP segments, and traffic between Nodes can only travel through the hosts' physical NICs.

The solution is to associate each Pod's IP address with the IP address of the Node where the Pod resides; with this mapping in place, Pods on different Nodes can reach each other.

ClusterIP network model for Service

As introduced in the previous article, a service can be backed by multiple Pods. So when another service calls this one, which Pod receives the request? See the figure below.

In the figure, the User service runs three Pods, each with its own Pod IP. How does the yellow Pod find a Pod of the User service? What happens when a User-service Pod restarts and its IP changes? What about adding or removing Pods?

K8s provides the ClusterIP network model to solve service discovery. The yellow Pod does not need to know how many Pods the User service has or how their IPs change; it simply accesses the User service through the service's ClusterIP, which tracks Pod changes in the backend.

The ClusterIP also acts as a load balancer; the default algorithm is random.
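As a sketch, a ClusterIP Service for the User service above might look like the following; the name, label selector, and ports are assumptions for illustration, not from the original article.

```yaml
# A ClusterIP Service fronting the User-service Pods (illustrative names).
apiVersion: v1
kind: Service
metadata:
  name: user-service
spec:
  type: ClusterIP          # the default Service type
  selector:
    app: user              # matches the labels on the User-service Pods
  ports:
    - port: 80             # port exposed on the ClusterIP
      targetPort: 8080     # port the Pods actually listen on
```

Consumers then address the service by name (e.g. `user-service`), and the traffic is spread across whichever Pods currently match the selector.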

Service registration and discovery

So how does ClusterIP service discovery work?

The figure above shows service registration and discovery, which is very similar to the registry in a microservice architecture. After a Pod is instantiated, the Kubelet registers it with the K8s Master node; the registered information is the ServiceName-to-ClusterIP mapping and the ClusterIP-to-PodIP mapping.

Kube-proxy and kube-dns watch this information on the K8s Master. The job of kube-dns is to resolve a ServiceName to its ClusterIP.

When a consumer Pod accesses a ServiceName, it uses this registration information to find the corresponding ClusterIP and then the PodIP.

Requests to a ClusterIP are intercepted and forwarded using iptables or IPVS.

External Traffic Access

Inside K8s, Pods access other Pods through a ClusterIP Service.

But how does the outside world reach a Pod inside the K8s cluster? The previous article introduced the NodePort type of Service.

A Service of type NodePort opens the same port on every Node for external access; each Node exposes that NodePort. So how is load-balanced access achieved?
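A NodePort Service is the same kind of definition with `type: NodePort`. The sketch below uses illustrative names and port numbers (by default, `nodePort` must fall in the 30000-32767 range):

```yaml
# A NodePort Service: the same selector as before, plus a port
# opened on every Node (names and ports are illustrative).
apiVersion: v1
kind: Service
metadata:
  name: user-service-nodeport
spec:
  type: NodePort
  selector:
    app: user
  ports:
    - port: 80             # ClusterIP port (still created internally)
      targetPort: 8080     # port the Pods listen on
      nodePort: 30080      # opened on every Node; external clients use <NodeIP>:30080
```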

A LoadBalancer is a component that must be provisioned separately for production cloud deployments, and it costs extra. For development and test environments, NodePort access is all you need.

External requests go through the load balancer, which forwards them to a NodePort on one of the Nodes.

Ingress

The NodePort and LoadBalancer approaches above each serve a single Service. But a business has many such Services, and if every Service needs its own LoadBalancer, the cost becomes too high.

Ideally, a single purchased LoadBalancer would support multiple Services. This is where the Ingress component comes in.

As the figure shows, Ingress is essentially a layer-7 reverse proxy that performs routing and forwarding, similar to a gateway: different paths are forwarded to different Services.

There are many ways to implement an Ingress, for example: Nginx, Kong, Envoy, Zuul, Spring Cloud Gateway, etc.
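As a sketch, path-based routing with an Ingress resource might look like the following. The host, paths, and service names are assumptions for illustration, and an Ingress controller (e.g. the Nginx ingress controller) must be installed in the cluster for the rules to take effect:

```yaml
# Route different URL paths to different Services through one entry point
# (host, paths, and service names are illustrative).
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress
spec:
  rules:
    - host: demo.example.com
      http:
        paths:
          - path: /user
            pathType: Prefix
            backend:
              service:
                name: user-service
                port:
                  number: 80
          - path: /order
            pathType: Prefix
            backend:
              service:
                name: order-service
                port:
                  number: 80
```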

Conclusion

K8s has several network models. The comparison diagram below should make them easier to memorize and understand.
