Author | Xiheng, Alibaba technical expert

1. Origin of the requirement

Why is service discovery needed

In a K8s cluster, applications are deployed as pods. Unlike a traditional deployment, where an application is installed on a given machine and we know the IP address of that machine to call it, pods in K8s are short-lived. Over a pod's lifecycle, for example when it is destroyed and recreated, its IP address changes, so the traditional approach of assigning a fixed IP address to reach a specific application no longer works.

In addition, as we saw earlier with the Deployment model, K8s deploys an application as a group of pods. This group then needs a unified entry point, along with a way to load-balance traffic across the group. Moreover, a test environment, a pre-release environment, and a production environment need to keep the same deployment template and access method, so that the same set of application templates can be published directly in each environment.

Service: Service discovery and load balancing in Kubernetes

Finally, application services need to be exposed to external access so that outside users can call them. As we learned in the last section, the pod network is not on the same segment as the machine network, so how do we expose the pod network to external access? This is where service discovery comes in.

In K8s, service discovery and load balancing are provided by the K8s Service. The figure above shows the Service architecture. A K8s Service sits between the external network and the pod network: upward, it can be accessed from the external network; downward, it connects to the pod network.

Downward, the K8s Service connects to a group of pods, that is, it can load-balance traffic to a group of pods. This solves the problems raised above: it provides a unified access address for service discovery among pods, and that same entry point can also be used for access from the external network.

2. Use case interpretation

How do we declare a K8s Service, and how do we use it?

Service syntax

Let's look at the syntax of a K8s Service. A Service is declared with the standard K8s object structure, and it has a lot in common with the K8s objects introduced earlier: for example, labels to declare its own labels, and a selector to choose its backend pods.

What is new here is that we define the protocol and port used for service discovery. Continuing with this template, we declare a K8s Service named my-service, which carries the label app: my-service and selects pods labeled app: MyApp as its backend.

Finally, we define the protocol and port for service discovery: in this example TCP, with port 80 and targetPort 9376. The effect is that any access to port 80 of this service is load-balanced to port 9376 of the backend pods carrying the app: MyApp label.
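
Putting the pieces together, the declaration described above corresponds to a manifest like this:

    apiVersion: v1
    kind: Service
    metadata:
      name: my-service
      labels:
        app: my-service
    spec:
      selector:
        app: MyApp          # pods carrying this label become the backend
      ports:
        - protocol: TCP
          port: 80          # the port the service exposes
          targetPort: 9376  # the pod port that traffic is routed to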

Create and view the Service

How do we create the declared Service object, and what does it look like once created? With a simple command:

  • kubectl apply -f service.yaml
Or:

  • kubectl create -f service.yaml
The command above simply creates such a service. Once created, you can use:

  • kubectl describe service
to see the state of the service after it is created.
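
Where the article showed a screenshot, the describe output looks roughly like this (the IP and endpoints are illustrative; yours will differ):

    $ kubectl describe service my-service
    Name:              my-service
    Namespace:         default
    Labels:            app=my-service
    Selector:          app=MyApp
    Type:              ClusterIP
    IP:                172.29.3.27
    Port:              <unset>  80/TCP
    TargetPort:        9376/TCP
    Endpoints:         172.29.1.3:9376,172.29.2.5:9376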

Once the service is created, you can see that its name is my-service; the namespace, labels, and selector are all as we declared; and an IP address has been generated. This is the service's virtual IP address, which any other pod in the cluster can access: it provides unified access to the backend pods and is the basis of service discovery.

There is also an Endpoints field. Through Endpoints we can see which pods were chosen by the selector we declared, and what state those pods are in. Endpoints lists each selected pod's IP address together with the targetPort those pods expose.

The actual architecture is shown in the figure above. After a service is created, it has a virtual IP address and port in the cluster, and all pods and nodes in the cluster can use them to access the service. The service mounts the pods it selects, with their IP addresses, as its backend. When the service's IP address is accessed, traffic is load-balanced across these backend pods.

When a pod's lifecycle changes, for example when one of the pods is destroyed, the Service automatically removes that pod from its backend. The effect: even though pod lifecycles change, the address that clients access does not.

Access Service within the cluster

How do other pods in the cluster access the service we created? There are three ways:

  • First, via the service's virtual IP. For example, for the my-service we just created, kubectl get svc or kubectl describe service shows that its virtual IP address is 172.29.3.27 and its port is 80. A pod inside the cluster can use this virtual IP and port to access the service directly.
  • Second, via the service name, which relies on DNS resolution. A pod in the same namespace can access the service by its name directly. From a different namespace, append the service's namespace after a dot, as in my-service.default. With curl, for example, my-service:80 reaches the service.
  • Third, via environment variables. When a pod starts in the same namespace, K8s injects the service's IP address, port, and some basic configuration into the pod as environment variables. After the pod's containers start, they can read the addresses and port numbers of other services in the namespace from these variables. For example, in a cluster pod, curl $MY_SERVICE_SERVICE_HOST fetches the IP address: MY_SERVICE comes from the my-service we declared, and MY_SERVICE_SERVICE_PORT carries its port number. This way the my-service service in the cluster can also be requested, as the sketch after this list shows.
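
In concrete terms, the three access methods look like this (the virtual IP is the one from our example; the environment-variable names follow the K8s naming convention for a service named my-service):

    # 1) Via the virtual IP (ClusterIP)
    curl 172.29.3.27:80

    # 2) Via the service name, resolved by DNS
    curl my-service:80          # same namespace
    curl my-service.default:80  # cross-namespace: <service>.<namespace>

    # 3) Via the injected environment variables
    curl $MY_SERVICE_SERVICE_HOST:$MY_SERVICE_SERVICE_PORT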

Headless Service

A special form of Service is the headless Service. When creating a service you can specify clusterIP: None, telling K8s that no cluster IP is needed, and K8s then does not allocate a virtual IP address to the service. Without a virtual IP address, how does it achieve unified access and load balancing?

It works like this: a pod resolves the service name directly through DNS A records, and the A records return the IP addresses of all backend pods. The client then picks one of the backend IP addresses itself. The A records change with pod lifecycles: as pods come and go, the list of returned A records changes, so the client application has to re-resolve, take the full list of pod IPs from the A records, and choose a suitable address to access the pods.
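
The headless variant of our earlier template differs by a single line:

    apiVersion: v1
    kind: Service
    metadata:
      name: my-service
    spec:
      clusterIP: None   # headless: no virtual IP is allocated
      selector:
        app: MyApp
      ports:
        - protocol: TCP
          port: 80
          targetPort: 9376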

As you can see, the only difference from the template we declared earlier is the added clusterIP: None, which states that no virtual IP is required. When a pod in the cluster resolves my-service, DNS directly returns the IP addresses of all pods behind the service, and the client pod then selects one IP address to access directly.

Expose the Service outside the cluster

So far, nodes and pods have accessed the service from inside the cluster. How do we expose the application to the public network? There are two service types for this: NodePort and LoadBalancer.

  • NodePort exposes a port on every node of the cluster. Accessing that port on a node forwards the traffic to the service's virtual IP on the host, and from there to the backend pods.

  • The LoadBalancer type builds on NodePort: it puts a load balancer in front of the per-node ports. On Alibaba Cloud, for example, an SLB is attached. This load balancer provides a unified entry point and spreads all the traffic it receives across the NodePort of each cluster node; the NodePort then converts the traffic to the ClusterIP, which reaches the actual pods. A sketch follows this list.
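
A minimal sketch of exposing our example service externally, reusing the ports from before:

    apiVersion: v1
    kind: Service
    metadata:
      name: my-service
    spec:
      type: LoadBalancer   # or NodePort to expose a fixed port on every node
      selector:
        app: MyApp
      ports:
        - protocol: TCP
          port: 80
          targetPort: 9376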

3. Hands-on demonstration

The following is a hands-on demonstration of how to use K8s Service in Alibaba Cloud Container Service.

Create a Service

We have already created an Alibaba Cloud container cluster and configured the connection between the local terminal and the cluster.

First, kubectl get cs confirms that we are connected to the Alibaba Cloud Container Service cluster normally.

Today I will use these templates to experience the K8s Service on Alibaba Cloud. There are three templates. The first is the client, used to simulate accessing the application through the K8s Service, which then load-balances across the group of pods declared by our service.
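
The client template itself is not reproduced in the text; a minimal stand-in (assuming a busybox image, which matches the later observation that the pod has wget but no curl) could be:

    apiVersion: v1
    kind: Pod
    metadata:
      name: client
    spec:
      containers:
        - name: client
          image: busybox
          command: ["sleep", "3600"]   # keep the pod alive so we can exec into it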

The second template is the K8s Service itself, the one doing the load balancing. Its selector picks pods carrying the run: nginx label as its backend.
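
A sketch of the service template, assuming the usual 80-to-80 port mapping for nginx:

    apiVersion: v1
    kind: Service
    metadata:
      name: nginx
    spec:
      selector:
        run: nginx       # pods with this label form the backend
      ports:
        - port: 80
          targetPort: 80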

Then we create a group of pods carrying that label. How do we create the pods? Through a Deployment we can easily create a group of pods, declaring run: nginx as the label, with two replicas, so two pods run at the same time.
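
A Deployment matching that description (the image name is assumed):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx
    spec:
      replicas: 2              # two pods run at the same time
      selector:
        matchLabels:
          run: nginx
      template:
        metadata:
          labels:
            run: nginx         # the label the service selects on
        spec:
          containers:
            - name: nginx
              image: nginx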

Start by creating the group of pods, that is, the K8s Deployment, with kubectl create -f. Once the Deployment is created, check whether its pods are up. As the figure below showed, the two pods created by the Deployment are already running. Use kubectl get pod -o wide to see their IP addresses, filtering by label with -l run=nginx. The two pods have the IPs 10.0.0.135 and 10.0.0.12, and both carry the run=nginx label.
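
The commands from this step, collected:

    kubectl create -f deployment.yaml    # filename is illustrative
    kubectl get pod -o wide -l run=nginx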

Next, create the K8s Service that selects those two pods. The service is now created.

As described earlier, kubectl describe svc shows the actual state of the service. Its selector is run=nginx, and K8s has generated a virtual IP address for it inside the cluster; through this virtual IP it can load-balance across the two pods behind it.

Next, create the client pod from its yaml template, and confirm with kubectl get pod that the client pod is running.

Enter the client pod with kubectl exec. There is no curl in this pod, so test with wget against the virtual IP address. You can see that this unified virtual entry point reaches the actual nginx backends.
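
Roughly, with the pod name and ClusterIP depending on your cluster:

    kubectl exec -it client -- sh
    # inside the pod: wget against the service's virtual IP
    wget -q -O - http://<cluster-ip>:80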

The second way is to access the service by name: using wget against the service name nginx gives the same result as before.

You can also access the service from a different namespace by appending the namespace to the name, for example nginx.default, since this service's namespace is default.

The service can also be accessed through environment variables. Inside this pod, execute env directly to see which environment variables were actually injected; the various settings of the nginx service are all registered there.

We can also plug these environment variables into wget and reach the service that way, as sketched below.
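
For a service named nginx, the K8s naming convention yields variables such as NGINX_SERVICE_HOST and NGINX_SERVICE_PORT, so inside the pod:

    env | grep NGINX_SERVICE
    wget -q -O - http://$NGINX_SERVICE_HOST:$NGINX_SERVICE_PORT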

Having introduced the three in-cluster access methods, let's look at how to access the service from a network outside the cluster. We modify the yaml of the service we just created directly with vim.

At the end we add a type field, LoadBalancer, the external access method described earlier.
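
Roughly, the service now looks like this (ports as assumed earlier):

    apiVersion: v1
    kind: Service
    metadata:
      name: nginx
    spec:
      type: LoadBalancer   # the newly added line
      selector:
        run: nginx
      ports:
        - port: 80
          targetPort: 80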

Then use kubectl apply to apply the change directly to the service we created.

Now what has changed on the service? Running kubectl get svc -o wide, we can see that the nginx service has gained an external access endpoint (EXTERNAL-IP) in addition to the virtual IP inside the cluster.

Now let's actually access the external IP address 39.98.21.187 to get a feel for exposing an application service through a Service. Opening it directly, we can see that the service is reachable through the application's external access endpoint. Isn't that easy?

Finally, a look at the service discovery that K8s implements: the access address of a service is independent of the pod lifecycle. First check the current service; it has selected these two pod IP addresses.

Now delete one of the pods, using kubectl delete to remove the earlier pod.

As we know, the Deployment automatically generates a new pod to replace it; its IP address now ends in 137.

The cluster IP of the service has not changed, and the LoadBalancer IP has not changed. Without affecting client access in any way, the new backend pod's IP is automatically registered into the service backend.

This means application components can call each other without having to care about changes in the pod lifecycle.

That’s all the demo.

4. Architecture design

Finally, a brief analysis of the design of K8s Service and some of its implementation principles.

Kubernetes service discovery architecture

As shown in the figure above, K8s service discovery and the K8s Service are an integral part of the K8s architecture.

K8s is divided into master node and worker node:

  • the master node carries the K8s control plane;
  • the worker nodes are where user applications actually run.
On the K8s master node there is the APIServer, the unified management point for all K8s objects. All components register with the APIServer to watch changes to these objects, for example the pod lifecycle events we just saw.

There are three key components:

  • One is the Cloud Controller Manager, which is responsible for configuring a LoadBalancer for external access.
  • Another is CoreDNS, which watches changes of services and their backend pods in the APIServer and configures DNS resolution for services, so that a service's virtual IP can be accessed directly by the service name, or, for a headless service, the list of pod IPs can be resolved.
  • On every node there is a component called kube-proxy, which watches service and pod changes and then configures iptables or IPVS rules on the node, so that nodes and pods can actually reach the virtual IP addresses in the cluster.
What does the actual access path look like? For access from inside the cluster, say from the client pod Pod3 (similar to the effect just demonstrated): Pod3 first resolves the service IP through CoreDNS, which returns the service IP corresponding to the service name. Pod3 then makes a request to that service IP. The request is intercepted by the iptables or IPVS rules that kube-proxy configured and load-balanced to one of the actual backend pods. This achieves load balancing and service discovery.
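
You can observe the first half of this path yourself from any pod: resolving the service's fully qualified name (assuming the default cluster domain cluster.local) returns the ClusterIP that the kube-proxy rules then intercept:

    nslookup my-service.default.svc.cluster.local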

For external traffic, for example a request from the public network: an external load balancer, configured by the Cloud Controller Manager as it watches service changes, receives the request and forwards it to a NodePort on one of the nodes. The NodePort traffic is then converted by the iptables rules that kube-proxy configured into the ClusterIP, and from there into the IP address of a backend pod, achieving load balancing and service discovery. That is the whole picture of K8s service discovery and the overall structure of K8s Service.

What's next

In a later, more advanced part, we will dig further into the implementation principles of K8s Service and how to diagnose and fix Service network problems.

Summary

That is the end of the main content of this article. A brief summary:

  1. Why cloud-native scenarios need service discovery and load balancing;
  2. How to use a Kubernetes Service for service discovery and load balancing in a Kubernetes cluster;
  3. The components involved in Kubernetes Service and a rough sketch of their implementation principles.
I believe that, having studied and absorbed this article, you can use Kubernetes Service to orchestrate complex enterprise applications quickly and in a standard way.

Original link

This article is original content from the Alibaba Cloud Yunqi Community and may not be reproduced without permission.