• Service Resource Overview
    • Creating a Service Resource
    • Requesting services from a Service object
    • Service session stickiness
    • Service discovery
    • Exposing Services
  • Ingress and Ingress Controller
    • Ingress resources
    • Ingress controller

Service Resource Overview

Service is one of the core resource types of Kubernetes. It defines a logical set of Pod objects and a policy for accessing that set. When the Pod objects managed by a controller such as Deployment are recreated or scaled, the IP addresses of the Pods in the set change. A Service resource defines a group of Pods as a logical unit based on a label selector, and proxies requests to the Pod objects in the group through its own IP address and port. It hides from the client the real Pod resources that handle the user's requests, making those requests look as if they were handled and responded to directly by the Service. The Service is loosely coupled to its Pod objects through the label selector, so it can be created before the Pod objects exist without error.

Creating a Service Resource

Basic configuration list of Service resources:

apiVersion: v1
kind: Service
metadata:
  name: myapp-svc
spec:
  selector:
    app: myapp  
  ports:
  - name: http
    port: 80
    targetPort: 80
    protocol: TCP

The Service resource myapp-svc is associated with Pod objects labeled "app=myapp" through the label selector. Kubernetes automatically creates an Endpoints resource object, also named myapp-svc, and automatically assigns the Service a ClusterIP. The port exposed by the Service is specified by the port field, and the port of each backend Pod object is given by targetPort. Check the status of the Service and its Endpoints respectively:

kubectl get svc myapp-svc
kubectl get endpoints myapp-svc

Requesting services from a Service object

The default type of a Service resource is ClusterIP, which can only receive access requests from clients running in Pod objects inside the cluster. So for testing purposes, create a dedicated Pod object and use its interactive shell to access Service resources:

kubectl run cirros-$RANDOM --rm -it --image=cirros -- sh

This launches a Pod running the CirrOS image, a Linux micro-distribution designed for testing cloud computing environments that ships with HTTP client tools such as curl. Access ClusterIP:Port from the container's interactive shell:

/ # curl http://10.105.246.145:80
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>

The default Service proxy mode in a Kubernetes cluster is iptables, which uses a random scheduling algorithm, so the Service randomly schedules client requests to one of its associated backend Pod resources.
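
To observe this, one can issue repeated requests from the test Pod and watch the responding Pod name vary across requests (the ClusterIP is the illustrative one from above, and hostname.html is the page linked in the response):

/ # while true; do curl http://10.105.246.145/hostname.html; sleep 1; done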

Service session stickiness

Service resources also support session affinity, which consistently forwards requests from the same client to the same backend Pod object. This is useful when, for example, private information must be kept per client and user activity tracked on the basis of that information. Session affinity is configured through two fields, spec.sessionAffinity and spec.sessionAffinityConfig (a combined example follows the list below):

  • The sessionAffinity field defines the type of session affinity to use and supports only the values "None" and "ClientIP".
    • None: session affinity is not used; this is the default.
    • ClientIP: identifies a client by its IP address and always schedules requests from the same source IP address to the same Pod object.
  • The sessionAffinityConfig field sets the session retention duration. The valid range is 1 to 86400 seconds; the default is 10800 seconds (3 hours).
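
For example, a minimal sketch enabling ClientIP-based affinity (the timeout shown is simply the default made explicit):

spec:
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800  # session retention: 1-86400 seconds, default 10800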

However, the session affinity mechanism of Service resources can only identify clients by IP address, so all clients whose source addresses are translated by the same NAT gateway appear as a single client. The scheduling granularity is therefore coarse and the effect poor, and this method is not recommended for sticky sessions in practice.

Service discovery

A Service provides a stable access point for the application in its Pods, but a client application in a Pod still needs to learn the IP address and port of a particular Service resource, which requires a service discovery mechanism. Service discovery is generally implemented by deploying in advance a service registry (also called a service bus) with a stable network location: service providers register their location information with the registry and update it promptly after changes, while service consumers periodically retrieve the latest location information of providers from the registry to "discover" the target service. In Kubernetes, service discovery can be done through CoreDNS, or even through simple environment variables.
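
For example, assuming the myapp-svc Service from earlier lives in the default namespace and the cluster uses the default cluster.local domain, a Pod in the cluster can discover it either way (the names follow the standard Kubernetes patterns):

# DNS-based discovery via CoreDNS: <service>.<namespace>.svc.<cluster-domain>
/ # curl http://myapp-svc.default.svc.cluster.local

# Environment variables injected into Pods created after the Service existed
/ # echo $MYAPP_SVC_SERVICE_HOST $MYAPP_SVC_SERVICE_PORT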

Exposing Services

By default, the IP address of the Service is reachable only within the cluster. To access the Service from outside the cluster, the Service must be exposed first.

Service types

There are four Service types: ClusterIP, NodePort, LoadBalancer, and ExternalName.

  1. ClusterIP: exposes the Service on an internal cluster IP address. This address is reachable only from within the cluster and cannot be accessed by external clients.
  2. NodePort: builds on the ClusterIP type and additionally exposes the Service on a static port (NodePort) of each node's IP address; the NodePort routes to the ClusterIP. A NodePort is a port selected on each working node that forwards user requests from outside the cluster to the ClusterIP and Port of the target Service. A Service of this type can still be accessed by Pods inside the cluster just like a ClusterIP Service, while requests from external clients arriving at the socket NodeIP:NodePort are also accepted;
  3. LoadBalancer: builds on the NodePort type and exposes the Service outside the cluster through a load balancer provided by the cloud provider. A LoadBalancer Service points to a load-balancing device external to the Kubernetes cluster, and the device sends request traffic into the cluster through the NodePorts on the working nodes. The advantage of this type is that it can schedule requests from external clients across the NodePorts of all (or some) nodes, instead of relying on clients to decide which node to connect to; the service therefore does not become unavailable just because the node a client chose has failed;
  4. ExternalName: exposes the Service by mapping it to the host name given in the externalName field, which the DNS service resolves as a CNAME record. This type does not define a service provided inside the Kubernetes cluster; rather, it maps a service outside the cluster into the cluster in the form of a DNS CNAME record, so that Pod resources in the cluster can access the external service. A Service of this type has no ClusterIP or NodePort, and no label selector for selecting Pod resources.

NodePort

A NodePort-type Service resource uses a configuration manifest similar to the default ClusterIP type above, except that type: NodePort is explicitly specified under spec. The node port can also be set with the nodePort field, but its value must fall within the range 30000 to 32767; it is advisable to let the cluster allocate the port automatically to avoid port conflicts.
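
A minimal sketch of such a manifest, reusing the myapp selector from earlier (the explicit nodePort is illustrative; omit it to let the cluster choose):

apiVersion: v1
kind: Service
metadata:
  name: myapp-svc-nodeport
spec:
  type: NodePort
  selector:
    app: myapp
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 32223  # optional; must be within 30000-32767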

LoadBalancer

Service resources of the NodePort type can be accessed from outside the cluster, but external clients must know the NodePort and the IP address of at least one node in the cluster in advance. When the chosen node fails, clients must switch to requesting another node, and sometimes nodes are not allowed to be reached from outside at all. The LoadBalancer type solves this by scheduling request traffic to the NodePorts of the nodes. To create one, specify type: LoadBalancer.
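
A sketch of the relevant spec fragment (the external IP is assigned by the cloud provider once the load balancer is provisioned):

spec:
  type: LoadBalancer
  selector:
    app: myapp
  ports:
  - port: 80
    targetPort: 80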

ExternalName

A Service resource of type ExternalName is used to publish a service located outside the cluster into the cluster, so that applications in Pods can access it. Both type: ExternalName and the externalName field need to be specified, for example externalName: redis.ilinux.io. The externalName attribute defines a CNAME record, i.e. an alias of the external host that actually provides the service; the client then obtains the host's IP address from the resolved CNAME record value.

spec:
  type: ExternalName
  externalName: redis.ilinux.io

When the Service name is used to access the service, the cluster DNS resolves it in CNAME format to the name given in the spec.externalName field, and DNS resolution of that name then yields the IP address of the external host.
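
For example, assuming the Service above is named redis-external and lives in the default namespace (the fragment omits metadata, so these names are illustrative), resolution from inside a Pod returns a CNAME pointing at redis.ilinux.io:

/ # nslookup redis-external.default.svc.cluster.local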

Ingress and Ingress Controller

In Kubernetes, the IP addresses of Service and Pod resources can only be used for communication within the cluster network; this traffic cannot cross the edge router to enable communication between the inside and the outside of the cluster. Kubernetes provides two built-in load-balancing mechanisms for publishing applications externally:

  • One is the Service resource. Whether in the iptables or the IPVS model, it is configured on Netfilter in the Linux kernel and performs layer-4 scheduling. It is a general-purpose scheduler that supports application-layer services such as HTTP and MySQL.

However, because it works at the transport layer it has limitations: it has no URL-based request scheduling mechanism, for example, and Kubernetes does not support configuring any type of health check for it.

  • The other is the Ingress resource, which implements an "HTTP(S) load balancer", an application-layer load-balancing mechanism. It provides features such as customizable URL mapping and TLS termination, and supports various kinds of backend server health checks.

Ingress is one of the standard resource types of the Kubernetes API. It is essentially a set of rules that forward requests to specified Service resources based on DNS host name or URL path, and it is used to route request traffic from outside the cluster to Services inside it, completing service publishing. Ingress itself is only the rule set; an Ingress controller does the actual matching and routing of request traffic according to those rules.

Ingress resources

Ingress resources define forwarding rules based on HTTP virtual hosts or URLs. An example Ingress resource definition:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  annotations: 
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: www.ilinux.io
    http:
      paths:
        - path: /
          pathType: Prefix
          backend:
            service:
              name: myapp-svc
              port: 
                number: 80

The following three fields are nested under the spec field:

  • rules: defines the list of forwarding rules for the current Ingress resource; if no rules are defined or no rule matches a request, all traffic is forwarded to the default backend defined by backend.
  • backend: the default backend serving requests that match no rule; when defining an Ingress resource, at least one of backend or rules should be present. This field lets the load balancer specify a global default backend. In the networking.k8s.io/v1 API it is named defaultBackend, and its nested service object carries two mandatory fields, name and port, identifying the backend Service that traffic is forwarded to (older API versions used serviceName and servicePort instead).
  • tls: a list of TLS configurations; currently only the default port 443 can be used to provide services. If list members point to different hosts, they must be multiplexed on the same port through the SNI TLS extension mechanism.
Ingress resource types
Single-Service Ingress

A single Service can be exposed using NodePort or LoadBalancer, but it can also be done with Ingress: you only need to specify the default backend, and the Ingress controller assigns the Ingress an IP address at which to receive request traffic and passes the requests on to the specified Service, as in:

spec:
  defaultBackend:
    service:
      name: my-svc
      port:
        number: 80
Distributing traffic based on URL paths

Ingress also supports traffic distribution based on URL paths, for example:

spec:
  rules:
  - host: www.ilinux.io
    http:
      paths:
        - path: /web
          pathType: Prefix
          backend:
            service:
              name: myapp-svc-web
              port: 
                number: 80
        - path: /api
          pathType: Prefix
          backend:
            service:
              name: myapp-svc-api
              port: 
                number: 80

The rules defined above forward requests for www.ilinux.io/web to the Service myapp-svc-web and requests for www.ilinux.io/api to the Service myapp-svc-api.

Virtual host based on host name

If services are divided by domain name, the Ingress can be defined with name-based virtual hosts as follows:

spec:
  rules:
  - host: web.ilinux.io
    ...
  - host: api.ilinux.io
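
A fuller sketch of such a rule set, reusing the backend Service names from the path-based example above (both are illustrative):

spec:
  rules:
  - host: web.ilinux.io
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myapp-svc-web
            port:
              number: 80
  - host: api.ilinux.io
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myapp-svc-api
            port:
              number: 80
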
Ingress resource of TLS type

If a Service needs to be published over HTTPS, you can configure a TLS-type Ingress resource. It relies on a Secret object containing the private key and certificate (Secrets are covered in a later section); by referencing this Secret in the Ingress resource, the Ingress controller loads it and configures the HTTPS service:

spec:
  tls:
  - secretName: ilinux-secret
  defaultBackend:
    ...
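
A TLS Secret of this kind can be created from an existing certificate and key pair, for example (the file paths are illustrative):

kubectl create secret tls ilinux-secret --cert=tls.crt --key=tls.key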

Ingress controller

The Ingress controller itself is a container application running in a Pod, typically a proxy and load-balancing daemon such as Nginx or Envoy. It watches the state of Ingress objects via the API Server, generates configuration in the application's own format from the Ingress rules, and makes the new configuration take effect by reloading or restarting the daemon. For Nginx, for example, Ingress rules have to be translated into Nginx configuration directives.

Because the Ingress controller is itself a Pod resource, it too needs to receive external traffic. This can be done with a NodePort or LoadBalancer Service object that admits request traffic from outside the cluster. Alternatively, a DaemonSet controller can run one Ingress controller Pod instance on all (or some) of the cluster's working nodes, with each Pod object receiving external traffic on its node via hostPort or hostNetwork.
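
A minimal sketch of the DaemonSet approach (the names and labels are illustrative, and the image tag matches the 0.24.1 release used below; the command-line flags a real deployment needs are omitted):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
spec:
  selector:
    matchLabels:
      app: nginx-ingress-controller
  template:
    metadata:
      labels:
        app: nginx-ingress-controller
    spec:
      hostNetwork: true  # bind directly to each node's network namespace
      containers:
      - name: nginx-ingress-controller
        image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.24.1
        args:
        - /nginx-ingress-controller
        ports:
        - containerPort: 80
        - containerPort: 443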

For example, to deploy ingress-nginx and receive its traffic through a dedicated Service, first apply mandatory.yaml, which creates all the resources ingress-nginx requires in the ingress-nginx namespace:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.24.1/deploy/mandatory.yaml

Then define a dedicated NodePort Service resource that uses the same selector as the nginx-ingress-controller Pods:

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
  - port: 80
    name: http   
    nodePort: 30080
  - port: 443
    name: https   
    nodePort: 30443

The controller is then accessible from outside the cluster at any node's IP on port 30080 (the HTTP nodePort defined above). Alternatively, you can configure /etc/hosts with the host name specified by an Ingress; for example, for the following Ingress configuration:

spec:
  rules:
  - host: tomcat.ilinux.io

Edit /etc/hosts (for example with sudo vim /etc/hosts) and add a line:

127.0.0.1 tomcat.ilinux.io

The service can then be accessed at tomcat.ilinux.io:30080.
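
With that hosts entry in place, a quick check from the same machine (assuming the NodePort Service defined above) might look like:

curl http://tomcat.ilinux.io:30080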

Learning materials

Kubernetes进阶实战 (Kubernetes Advanced Practice), by Ma Yongliang