A summary of kubelet

1. Kubelet Role

In a Kubernetes cluster, a kubelet service process runs on each Node (historically also called a Minion). This process carries out the tasks that the Master assigns to its node and manages the Pods, and the containers within them, on that node. Each kubelet registers its node's information with the API Server, periodically reports the node's resource usage to the Master, and monitors container and node resources through cAdvisor.

2. Node Management

The node decides whether to register itself with the API Server via the kubelet startup parameter --register-node. If this parameter is true, the kubelet attempts to register itself with the API Server. In self-registration mode, the kubelet is also started with the following parameters:

  • --api-servers: the location of the API Server.
  • --kubeconfig: the kubeconfig file used to access the API Server securely.
  • --cloud-provider: the cloud service provider (IaaS); used only in public cloud environments.
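Put together, a self-registering kubelet start line might look like the following sketch. The addresses and file paths here are made-up placeholders, and note that in recent Kubernetes versions the API server address is taken from the kubeconfig file rather than a dedicated flag:

```shell
# Illustrative only -- addresses and paths are placeholders, not defaults.
kubelet \
  --register-node=true \
  --api-servers=https://192.168.0.10:6443 \
  --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
  --cloud-provider=aws
```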

Tip: By default, each kubelet is effectively granted permission to create and modify any Node object. In a production environment, it is recommended to restrict each kubelet's permissions so that it can only create and modify the Node object for the node on which it runs.

If the cluster is running short of resources, users can easily scale it out by adding machines and relying on the kubelet's self-registration mode.

In some cases, some kubelets in a Kubernetes cluster do not use self-registration mode. The user must then configure the Node's resource information manually and tell the kubelet on that Node where the API Server is located; the Node object has to be created and modified by hand.

If you need to create Node information manually, you can turn off self-registration mode by setting the kubelet startup parameter --register-node=false.

The kubelet registers node information through the API Server at startup and periodically sends node status updates to the API Server. Upon receiving this information, the API Server writes it to etcd. The kubelet startup parameter --node-status-update-frequency sets how often the kubelet reports node status to the API Server; the default is 10 seconds.
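In newer kubelet versions, the same setting can also be expressed in a KubeletConfiguration file instead of a command-line flag; a minimal fragment equivalent to the default might look like:

```yaml
# Minimal KubeletConfiguration fragment (newer kubelet versions);
# equivalent to --node-status-update-frequency=10s.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
nodeStatusUpdateFrequency: 10s
```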

3. Pod Management

The kubelet obtains the list of Pods to run on its Node in the following ways:

  1. File: the configuration file directory specified by the kubelet startup parameter --config (the default directory is /etc/kubernetes/manifests/). The --file-check-frequency parameter sets the interval for checking this directory; the default is 20 seconds.
  2. HTTP endpoint (URL): set with the --manifest-url parameter. The --http-check-frequency parameter sets the interval for checking the HTTP endpoint; the default is 20 seconds.
  3. API Server: the kubelet listens to the etcd directory through the API Server and synchronizes the Pod list.


All Pods created in a non-API-Server manner are called Static Pods. The kubelet reports the status of a Static Pod to the API Server, which creates a matching Mirror Pod for it. The status of the Mirror Pod reflects the actual status of the Static Pod. When a Static Pod is deleted, its Mirror Pod is deleted as well.
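A Static Pod is just an ordinary Pod manifest dropped into the --config directory; a minimal sketch (the file path, Pod name, and image are illustrative, not prescribed):

```yaml
# /etc/kubernetes/manifests/static-web.yaml (illustrative path and name)
apiVersion: v1
kind: Pod
metadata:
  name: static-web
  labels:
    role: static
spec:
  containers:
  - name: web
    image: nginx
    ports:
    - containerPort: 80
```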

To get the Pod list from the API Server, the kubelet uses the API Server client's Watch plus List mechanism, filtering by its own node name, to listen on the /registry/pods directory, and synchronizes the results into its local cache.
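The List-then-Watch pattern described above can be sketched as follows. This is a simplified illustration of how a local cache stays in sync with a stream of events, not kubelet's actual implementation; the function and event names are made up for the example.

```python
def sync_cache(initial_list, events):
    """Maintain a local Pod cache from an initial List plus a stream of
    Watch events (ADDED / MODIFIED / DELETED), mirroring how a client keeps
    a local copy of /registry/pods in sync. Illustrative only."""
    # Seed the cache from the initial List call.
    cache = {pod["name"]: pod for pod in initial_list}
    # Apply each Watch event to the cache in order.
    for event_type, pod in events:
        if event_type in ("ADDED", "MODIFIED"):
            cache[pod["name"]] = pod
        elif event_type == "DELETED":
            cache.pop(pod["name"], None)
    return cache
```

For example, starting from one listed Pod and applying an add, a modify, and a delete event leaves the cache holding only the modified Pod.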

The kubelet listens on etcd, and all operations on Pods are observed by the kubelet. If it finds a new Pod bound to its node, it creates the Pod as described in the Pod manifest. If it finds that a local Pod has been modified, the kubelet makes the corresponding changes, for example deleting a container in the Pod through the Docker client.

If it finds that a Pod on its node has been deleted, it deletes the Pod and removes the Pod's containers through the Docker client.

When the kubelet reads information about a Pod creation or modification task, it does the following:

  1. Create a data directory for the Pod.
  2. Read the Pod manifest from the API Server.
  3. Mount the external volumes (ExternalVolume) for the Pod.
  4. Download the Secrets used by the Pod.
  5. Check whether the Pod is already running on the node. If the Pod has no containers, or its Pause container is not started, first stop the processes of all containers in the Pod. If there are containers in the Pod that need to be deleted, delete them.
  6. Create the Pod's infrastructure container from the "kubernetes/pause" image. The Pause container takes over the network of all the other containers in the Pod. Each time a new Pod is created, the kubelet first creates the Pause container and then the other containers. The "kubernetes/pause" image is about 200 KB, making it a very small container image.
  7. Do the following for each container in the Pod:
    • Compute a hash value for the container, then look up the hash of the corresponding Docker container by container name. If the two hashes differ, stop the container's Docker process and the associated Pause container process; if they are the same, do nothing.
    • If the container was terminated and no restart policy (restartPolicy) is specified for it, do nothing.
    • Otherwise, call the Docker client to download the container image and run the container.
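The per-container decision in step 7 can be sketched as below. This is a simplified model, not kubelet's actual code: the function names are illustrative, and the real kubelet's hash computation and restart-policy handling are more nuanced.

```python
import hashlib
import json
from typing import Optional

def container_hash(spec: dict) -> str:
    """Hash the desired container spec deterministically (illustrative)."""
    return hashlib.sha256(json.dumps(spec, sort_keys=True).encode()).hexdigest()

def reconcile(desired: dict, running_hash: Optional[str],
              terminated: bool, restart_policy: str) -> str:
    """Return the action a kubelet-like sync loop would take for one container."""
    want = container_hash(desired)
    if running_hash is None:
        # No running container: respect the restart policy for terminated ones.
        if terminated and restart_policy == "Never":
            return "do-nothing"
        return "pull-and-run"
    if running_hash != want:
        # Spec changed: stop the container (and its Pause container), recreate.
        return "stop-and-recreate"
    return "do-nothing"
```

For example, a container whose recorded hash matches the desired spec is left alone, while a changed image triggers a stop-and-recreate.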

4. Container Health Check

4.1 Health Check Methods

A Pod checks its containers' health with two types of probes: LivenessProbe and ReadinessProbe.

4.2 LivenessProbe probe

The LivenessProbe determines whether a container is healthy and reports the result to the kubelet. If the LivenessProbe detects that a container is unhealthy, the kubelet deletes the container and acts according to the container's restart policy. If a container does not define a LivenessProbe, the kubelet assumes that the probe always returns Success.

The kubelet periodically invokes the container's LivenessProbe to diagnose its health. The LivenessProbe has the following three implementations:

  1. ExecAction: Execute a command inside the container. If the exit status code of this command is 0, the container is healthy.
  2. TCPSocketAction: Performs TCP checks based on the IP address and port number of the container. If the port can be accessed, the container is healthy.
  3. HTTPGetAction: Calls the HTTP GET method using the container's IP address, port number, and path. If the status code of the response is greater than or equal to 200 and less than 400, the container is considered healthy.
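The two examples below cover the HTTPGetAction and ExecAction variants. For completeness, a TCPSocketAction probe fragment might look like this (the port and timing values are illustrative):

```yaml
livenessProbe:
  tcpSocket:
    port: 8080
  initialDelaySeconds: 5
  timeoutSeconds: 1
```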


The LivenessProbe is defined under spec.containers[] for a given container in the Pod definition.

Example 1: HTTP check mode

[root@k8smaster01 study]# vi myweb-liveness.yaml

```yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: myweb
spec:
  containers:
  - name: myweb
    image: kubeguide/tomcat-app:v1
    ports:
    - containerPort: 8080
    livenessProbe:
      httpGet:
        path: /index.html
        port: 8080
        httpHeaders:
        - name: X-Custom-Header
          value: Awesome
      initialDelaySeconds: 5
      timeoutSeconds: 1
# The kubelet sends an HTTP request to localhost, on the specified port and path,
# to check the health of the container.
```

Example 2: Run a specific command.

[root@k8smaster01 study]# vi myweb-liveness.yaml

```yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: myweb
spec:
  containers:
  - name: myweb
    image: kubeguide/tomcat-app:v1
    ports:
    - containerPort: 8080
    livenessProbe:
      exec:
        command:
        - cat
        - /tmp/health
      initialDelaySeconds: 5
      timeoutSeconds: 1
# The kubelet runs "cat /tmp/health" in the container. If the command's exit
# code is 0, the container is healthy; otherwise it is unhealthy.
```

4.3 ReadinessProbe probe

The other probe is the ReadinessProbe, which determines whether a container has started and is ready to receive requests. If the ReadinessProbe detects that a container is failing, the Pod's status is changed, and the Endpoint Controller removes, from the Service's Endpoints, the entry containing the IP address of the Pod to which the container belongs.
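A ReadinessProbe is declared the same way as a LivenessProbe; a minimal fragment (the path, port, and timing values are illustrative):

```yaml
readinessProbe:
  httpGet:
    path: /healthz   # illustrative path
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 10
```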

5. cAdvisor Resource Monitoring

5.1 cAdvisor overview

In a Kubernetes cluster, information over an application's life cycle can be monitored at different levels, such as containers, Pods, Services, and the entire cluster.

Kubernetes provides users with as much detailed information as possible about resource usage at all levels to gain insight into application execution and identify potential bottlenecks in the application.

cAdvisor is an open-source agent that analyzes container resource usage and performance characteristics. It was created for containers and natively supports Docker containers. In the Kubernetes project, cAdvisor is integrated into the kubelet's code, and the kubelet uses cAdvisor to obtain data about its node and its containers.

5.2 Principles and functions of cAdvisor

cAdvisor automatically discovers all containers on its Node and collects statistics on CPU, memory, file system, and network usage. Typically, cAdvisor exposes a simple UI through port 4194 of the Node on which it resides.

The kubelet acts as a bridge between the Kubernetes Master and the nodes, managing the Pods and containers that run on its node. The kubelet resolves each Pod into its member containers, obtains each container's usage statistics from cAdvisor, and then exposes the aggregated per-Pod resource usage statistics through its REST API.
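The aggregation step can be sketched as follows. This is an illustrative model of summing per-container statistics into per-Pod totals, with made-up field names, not the kubelet's actual data structures.

```python
from collections import defaultdict

def aggregate_pod_stats(container_stats):
    """Sum per-container, cAdvisor-style usage samples into per-Pod totals.

    container_stats: list of dicts with illustrative keys
    'pod', 'cpu_millicores', and 'memory_bytes'.
    """
    totals = defaultdict(lambda: {"cpu_millicores": 0, "memory_bytes": 0})
    for s in container_stats:
        # Accumulate each container's usage under its owning Pod.
        totals[s["pod"]]["cpu_millicores"] += s["cpu_millicores"]
        totals[s["pod"]]["memory_bytes"] += s["memory_bytes"]
    return dict(totals)
```

For example, two containers belonging to the same Pod contribute a single combined entry in the result.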

cAdvisor keeps only about 2 to 3 minutes of monitoring data and does not persist performance data. Therefore, in early Kubernetes versions, Heapster was needed to collect and query the performance metrics of all containers in the cluster.

Starting with Kubernetes 1.8, the performance metrics query interface was upgraded to the standard Metrics API, with the Metrics Server as the new back-end service. As a result, the UI and API that cAdvisor served on port 4194 were deprecated in Kubernetes 1.10 and completely removed in 1.12.

If you still need these services, you can manually deploy a DaemonSet that runs a cAdvisor on each Node to provide the UI and API. See github.com/google/cadv…

In the new Kubernetes monitoring architecture, the Metrics Server provides Core Metrics, including CPU and memory usage data for Nodes and Pods. Other Custom Metrics are collected and stored by third-party components such as Prometheus.
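With the Metrics Server installed in a cluster, the core metrics it serves can be queried from the command line; a brief usage sketch:

```shell
# Requires a cluster with Metrics Server deployed.
kubectl top node        # CPU/memory usage per Node
kubectl top pod -A      # CPU/memory usage per Pod, across all namespaces
```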