This is the 20th day of my participation in the Genwen Challenge.

1. K8s components

  • configMap
  • kube-apiserver
  • kube-scheduler
  • etcd
  • kube-controller-manager
  • kube-proxy


Kube-proxy provides service discovery and load balancing for Services inside a K8s cluster. It is the cluster-internal load balancer and a distributed proxy server, with one instance running on every node. This design scales well: the more nodes there are accessing a Service, the more kube-proxy instances there are to share the load-balancing work, and the higher the availability. A Service's ClusterIP can be used to load-balance between services inside the cluster. If the pods behind a Service also need to be reachable from outside the cluster, another Service type, NodePort, can be used.
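As a minimal sketch of the NodePort variant (the name web-server-nodeport and the node port 30080 are illustrative assumptions, not values from the cluster described here):

apiVersion: v1
kind: Service
metadata:
  name: web-server-nodeport       # hypothetical name, for illustration only
  namespace: default
spec:
  type: NodePort                  # exposes the Service on a port of every node
  ports:
    - name: web-server
      port: 80                    # ClusterIP port inside the cluster
      targetPort: web-server-port # named container port on the backing pods
      nodePort: 30080             # assumed port in the default 30000-32767 range
  selector:
    app: web-server

With this in place, the same pods can be reached from outside the cluster at http://<any-node-ip>:30080, while in-cluster callers keep using the ClusterIP.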

2. Implement microservice load balancing based on Service

In Java, as in most other languages, implementing load balancing between services traditionally requires fairly heavyweight components, such as Dubbo, Spring Cloud, or even Spring Cloud Alibaba. Languages such as Python and Go that expose RESTful APIs typically integrate standard proxy plug-ins to do conventional load balancing. In the cloud-native era, containerizing services makes access between microservices much simpler: the load balancing provided by a K8s Service is language-agnostic, so no language barrier needs to be considered. As long as a service exposes an ordinary RESTful API, a Service can handle load balancing between microservices inside the cluster.

Inside a K8s cluster, a Service can be reached simply by its service name. For example:

apiVersion: v1
kind: Service
metadata:
  name: web-server-service
  namespace: default
spec:
  ports:
    - name: web-server
      port: 80                    # port exposed by the Service
      targetPort: web-server-port # named container port on the selected pods
  selector:
    app: web-server               # pods carrying this label back the Service
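The selector above only matches if there are pods labeled app: web-server that expose a container port named web-server-port. A minimal Deployment sketch under those assumptions (the image and replica count are placeholders, not taken from the original setup):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-server
  namespace: default
spec:
  replicas: 2                        # illustrative replica count
  selector:
    matchLabels:
      app: web-server                # must match the Service selector
  template:
    metadata:
      labels:
        app: web-server
    spec:
      containers:
        - name: web-server
          image: nginx:1.21          # placeholder image for illustration
          ports:
            - name: web-server-port  # named port referenced by targetPort
              containerPort: 80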

The Service above is created with the default type, ClusterIP. The selector binds it to the backing pods, which can then be reached from inside the cluster through the Service's DNS name:

curl http://$service_name.$namespace.svc.cluster.local:$service_port/api/v1/***


Here K8s assigns the Service a virtual cluster IP, and kube-proxy (whose role was described above) provides service discovery and load balancing for it inside the cluster. Describing the Service with kubectl describe service web-server-service shows the ClusterIP and the pod endpoints it balances across:

Name:              web-server-service
Namespace:         default
Type:              ClusterIP
IP:                20.16.249.134
Port:              <unset>  80/TCP
TargetPort:        80/TCP
Endpoints:         20.162.35.223:80
Session Affinity:  None
Events:            <none>

3. High availability cases

3.1 Traditional microservice request cases

Traditional microservice architectures are built in many different languages and generally communicate directly over HTTP. Within a single language, integration frameworks emerged to implement the microservice architecture, such as Dubbo and Spring Cloud in Java, but their elaborate framework structures make the services heavyweight.

3.2 Communication of microservices across languages

In a K8s cluster, the combination of kube-proxy, Services, and a few supporting components handles invocation between microservices well, whether they are written in the same language or across languages, including high availability, load balancing, and service governance.


Any pod in a K8s cluster can call the services of other pods over HTTP:

root@rest-server-ver2-ds-vcfc7:/usr/src/app# curl http://web-server-service.kube-system.svc.cluster.local:80/api/v1/healthz
{
  "status": {
    "code": 0,
    "msg": "success"
  },
  "data": "success"
}root@rest-server-ver2-ds-vcfc7:/usr/src/app#

Access can be restricted by namespace or by the access controls of the service itself, but in every case the service is consumed over plain HTTP with no dependence on the caller's language.
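As one illustration of the namespace-level control mentioned above, here is a NetworkPolicy sketch that only admits traffic to the web-server pods from an assumed trusted namespace (the policy name and the namespace name "trusted" are hypothetical, and enforcing it requires a CNI plugin that supports NetworkPolicy):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-trusted-ns          # hypothetical policy name
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: web-server                  # applies to the pods behind the Service
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: trusted   # assumed caller namespace
      ports:
        - protocol: TCP
          port: 80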