Preface

You need to set up an environment and practice hands-on; reading alone will not solve the problems. For now, two Pod articles are still missing from the series; they will be added after Service and storage have been introduced, since stateful Pods depend on both topics.

Kubernetes series:
  1. Introduction to Kubernetes
  2. Kubernetes environment setup
  3. Introduction to Kubernetes – kubectl
  4. Introduction to Kubernetes – Pod (1)
  5. Introduction to Kubernetes – Pod (2): Lifecycle
  6. Introduction to Kubernetes – Pod (3): Pod scheduling
  7. Introduction to Kubernetes – Pod (4): Deployment

Why Service is needed

As an example, we create a set of Nginx Pods in the application, consisting of three replicas, each container listening on port 80:

  1. Edit nginx-deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: backend
  replicas: 3
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        resources:
          limits:
            memory: "128Mi"
            cpu: "128m"
        ports:
        - containerPort: 80
  2. Create the Deployment resource:
kubectl apply -f nginx-deployment.yaml
  3. View the IP addresses of the Pods:
kubectl get pods -o wide

  4. Access a Pod via its IP:
curl 10.100.1.92:80

This is problematic because a Pod's lifetime is limited: if the Pod is restarted, its IP may change. If our service hardcodes the Pod IP address, the backend becomes unreachable whenever that IP changes. We could of course update the backend IP manually, for example by editing an Nginx upstream configuration, or we could register our services with a service discovery center such as Consul, ZooKeeper, or etcd and let these tools update the Nginx configuration dynamically without manual intervention.

Kubernetes provides the Service object, which defines a logical set of Pods and a policy for accessing them, a concept very similar to a microservice. The set of Pods backing a Service is typically determined by a Label Selector. With a Service we no longer care how the backend Pods change; we only need to address the Service, because a layer of service discovery sits in between: when a Pod is destroyed or restarted, its new address is registered with the service discovery center. This abstraction of services gives us the decoupling we need.

On the Principle of Service

Using Nginx as an example, we create a Service to load-balance across the three Pods:

  1. There are two ways to create a Service: with the kubectl expose command, or from a YAML file. Here we use the YAML approach (an equivalent kubectl expose command is sketched after the YAML); create nginx-service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  # Select backend Pods labeled app=backend
  selector:
    app: backend
  ports:
  # service port number
  - port: 80
    # Port number of pod
    targetPort: 80
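For comparison, a roughly equivalent Service could be created imperatively with kubectl expose. This is only a sketch, assuming the nginx-deployment above already exists; kubectl expose reuses the Deployment's selector:
kubectl expose deployment nginx-deployment --name=nginx-service --port=80 --target-port=80
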
  2. Create the Service object:
kubectl apply -f nginx-service.yaml
  3. View the Service's IP address:
kubectl get svc
  4. Access the Service via its IP and port:
curl 10.96.165.211:80

  5. View the Service's endpoint list:
kubectl describe svc nginx-service

Endpoint is a Kubernetes resource object, stored in etcd, that records the access addresses of all Pods backing a Service. The Endpoint Controller automatically creates the corresponding Endpoint object only when the Service is configured with a selector; otherwise no Endpoint object is created. The Endpoint Controller is responsible for the following:

  1. Generating and maintaining all Endpoint objects.
  2. Watching Services and their Pods for changes: when a Service is deleted, the Endpoint object with the same name is deleted; when a new Service is created, the Pod list is obtained from the Service definition and a corresponding Endpoint object is created; when a Service is updated, the related Pod list is re-fetched and the Endpoint object is updated; when a Pod event is observed, the Endpoint object of the corresponding Service is updated with the Pod's IP. A quick way to inspect the resulting object is shown below.
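For instance, to look at the Endpoint object the controller maintains for nginx-service (standard kubectl subcommands; the addresses you see will depend on your cluster):
kubectl get endpoints nginx-service
kubectl get endpoints nginx-service -o yaml
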

Endpoint takes care of service discovery, while load balancing from the Service IP to the backend Pods is implemented by kube-proxy running on each Node. Kube-proxy listens for updates to Services and Endpoints and invokes its proxy module to refresh the routing and forwarding rules on the host. This describes the first generation of kube-proxy; the implementation has since been adjusted, but I think it is the easiest version to understand.

The second- and third-generation implementations are described below.

Service Load Balancing

Service routing and forwarding are implemented by the kube-proxy component. A Service exists only as a virtual ClusterIP; kube-proxy forwards requests addressed to the Service to the backend Pod instances. The routing and forwarding rules are implemented by kube-proxy's proxy module, which currently has three implementations: userspace, iptables, and IPVS, each described below.
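Before looking at each mode, it can be handy to check which mode your own kube-proxy is running in. This is a hedged sketch: the /proxyMode endpoint is served on kube-proxy's metrics port (10249 by default), and the ConfigMap lookup assumes a kubeadm-provisioned cluster:
# Ask kube-proxy directly on a node (metrics port 10249 by default)
curl http://localhost:10249/proxyMode
# Or, on a kubeadm cluster, read the kube-proxy ConfigMap
kubectl -n kube-system get configmap kube-proxy -o yaml | grep mode
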

userspace

In userspace mode, the Kube-proxy process is a real TCP/UDP proxy that forwards access traffic from Service to Pod, as shown below:

In userspace mode, kube-proxy tracks Service and Endpoint changes in real time through the API Server's Watch interface and updates iptables rules dynamically. For each Service, it opens a port on its node as the Service's proxy port; requests sent to this port are forwarded, according to a load-balancing policy, to one of the Service's backend Pods. Kube-proxy also installs iptables rules on the local node that capture requests to the Service's ClusterIP and port and redirect them to that proxy port. If DNS is used, there is also a DNS resolution layer.

The biggest problem with userspace mode is that a Service request travels from user space into the kernel's iptables and then back to user space, where kube-proxy selects the backend Endpoint and does the proxying. The system overhead is therefore high.

iptables

Kubernetes has used iptables as the default kube-proxy mode since version 1.2. In iptables mode, kube-proxy no longer acts as a proxy itself. Its core function is to track Service and Endpoint changes in real time through the API Server's Watch interface and update the corresponding iptables rules; client request traffic is then routed directly to the target Pod through the NAT mechanism of iptables.

Compared with the first-generation userspace mode, iptables mode works entirely in kernel space and does not need to pass through the user-space kube-proxy, so it performs better. The iptables mode is simple to implement, but it has an unavoidable drawback: as Services and Pods proliferate in the cluster, the iptables rules swell rapidly, causing significant performance degradation and, in extreme cases, rules being lost in ways that are difficult to reproduce and troubleshoot.
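To get a feel for what these rules look like, you can dump the NAT table on a node. This is only an illustrative sketch: the KUBE-SERVICES/KUBE-SVC/KUBE-SEP chain names are what current kube-proxy versions generate, but the exact output varies by version and network setup:
# Dump the NAT rules kube-proxy generates for Services and their endpoints
sudo iptables-save -t nat | grep -E 'KUBE-(SERVICES|SVC|SEP)' | head -n 20
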

IPVS

Kubernetes introduced the third-generation IPVS (IP Virtual Server) mode in version 1.8, and it graduated to GA in Kubernetes 1.11. Although iptables and IPVS are both built on Netfilter, they differ fundamentally because of their positioning: iptables was designed for firewalls, while IPVS was designed for high-performance load balancing and uses more efficient data structures (hash tables) that allow almost unlimited scaling. This solves the iptables problem described above.

IPVS has the following obvious advantages over Iptables:

  1. Better scalability and performance for large clusters;

  2. Support for more sophisticated load balancing algorithms than iptables (least load, least connections, weighted, etc.);

  3. Supports server health check and connection retry.

  4. An ipset can be modified dynamically, even while iptables rules are referencing it;

Because IPVS does not provide packet filtering, hairpin masquerade tricks, SNAT, and so on, it is used together with iptables in certain scenarios (such as the NodePort implementation).

In IPVS mode, kube-proxy made another important change: it uses the ipset extension of iptables rather than calling iptables directly to generate rule chains. An iptables rule chain is a linear data structure, whereas ipset introduces an indexed data structure, so looking up and matching rules stays efficient even when there are many of them.
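On a node of an IPVS-mode cluster you can inspect the virtual servers and ipsets that kube-proxy maintains. This is a sketch, assuming the ipvsadm and ipset utilities are installed on the node:
# List IPVS virtual servers (one per Service IP:port) and their real servers (Pod endpoints)
sudo ipvsadm -Ln
# List the ipsets kube-proxy references from its iptables rules
sudo ipset list | head -n 20
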

Sticky session mechanism

By setting sessionAffinity, a Service supports session persistence based on the client IP address: the first request from a client IP is forwarded to some backend Pod, and subsequent requests from the same client IP are forwarded to the same Pod. The configuration parameter is spec.sessionAffinity. You can also set the maximum duration of a sticky session via spec.sessionAffinityConfig.clientIP.timeoutSeconds; after it expires, requests are routed by the normal policy again. You can refer to the following configuration:

apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 1000
  # Select backend Pods labeled app=backend
  selector:
    app: backend
  ports:
  # Service port number
  - port: 80
    # Pod (container) port number
    targetPort: 80

Service Types

By default, there are four types of Service: ClusterIP, NodePort, LoadBalancer, and ExternalName. The following describes the application scenarios of each type.

ClusterIP

ClusterIP is the default way a Kubernetes cluster exposes a Service. It can only be used for communication inside the cluster and is reachable from every Pod. You can also specify the ClusterIP manually; make sure the address falls within the cluster's configured ClusterIP range and is not already used by another Service. A manually assigned ClusterIP is sketched below, and the overall access flow follows.
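A minimal sketch, reusing the earlier nginx-service spec and adding a manually chosen clusterIP; the address 10.96.100.100 is only an example and must fall inside your cluster's service CIDR:
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  # Example address; must lie inside the cluster's service CIDR and be unused
  clusterIP: 10.96.100.100
  selector:
    app: backend
  ports:
  - port: 80
    targetPort: 80
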

The overall cluster structure can be seen in the following model:

NodePort

Not all access comes from inside the cluster; there is also traffic from outside, and ClusterIP alone cannot serve it. NodePort is one way to provide external access. A Service of type NodePort reserves the same port on every node of the Kubernetes cluster; external clients first connect to nodeIP:port, and the connection is then forwarded to one of the Service's backend Pods.

The overall access flow and cluster structure can be seen in the following model:
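A NodePort Service for the Nginx Deployment above might look like the following sketch; the name nginx-nodeport is just an example, and the nodePort value 30080 is an arbitrary choice within the default 30000-32767 range (omit it to let Kubernetes pick one):
apiVersion: v1
kind: Service
metadata:
  name: nginx-nodeport
spec:
  type: NodePort
  selector:
    app: backend
  ports:
  - port: 80          # Service (ClusterIP) port
    targetPort: 80    # Pod port
    nodePort: 30080   # Port opened on every node (default range 30000-32767)
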

LoadBalancer

A LoadBalancer Service is an extension of a NodePort Service. The Service is accessed through an external load balancer that forwards requests to the nodes' NodePorts, much like putting an Nginx load balancer in front of the node ports.

LoadBalancer is not a component of Kubernetes itself; it is usually provided by a specific vendor (a cloud provider), and different vendors' Kubernetes offerings implement it differently. The provisioned load balancer's information is published in the Service's status.loadBalancer field.
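A minimal sketch of a LoadBalancer Service; it assumes you are on a platform (or using an add-on such as MetalLB) that can actually provision the load balancer, and the name nginx-loadbalancer is just an example:
apiVersion: v1
kind: Service
metadata:
  name: nginx-loadbalancer
spec:
  type: LoadBalancer
  selector:
    app: backend
  ports:
  - port: 80
    targetPort: 80
Once the balancer is provisioned, its address shows up under status.loadBalancer, for example in the EXTERNAL-IP column of kubectl get svc nginx-loadbalancer.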

The overall structure can be seen in the following model:

ExternalName

An ExternalName Service maps an external service into the Kubernetes cluster as a Service. The externalName field specifies the external service's address, which should be a DNS name (using a raw IP address is not recommended, because the value is published as a CNAME record). Clients inside the cluster can then access the external service through this Service name. This type of Service has no backend Pods, so no Label Selector is needed.
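A sketch of an ExternalName Service; the Service name external-db and the domain my.database.example.com are placeholders for whatever external endpoint you want to alias:
apiVersion: v1
kind: Service
metadata:
  name: external-db
spec:
  type: ExternalName
  # In-cluster lookups of external-db.<namespace>.svc resolve to a CNAME for this name
  externalName: my.database.example.com
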

The overall structure can be seen in the following model:

The End

Thanks for reading. Feel free to follow and like!