1. Install metrics-server to collect resource usage

In earlier versions of Kubernetes, CPU and memory usage in the cluster was collected by Heapster. This article uses Kubernetes 1.18, where resource usage is collected by installing metrics-server instead. With metrics-server we can monitor CPU and memory usage across the cluster, which lays the foundation for the automatic scaling technology HPA (Horizontal Pod Autoscaler). Download address:

github.com/kubernetes-…

Extract:

tar xvzf metrics-server-0.3.6.tar.gz

The deployment files are located in metrics-server-0.3.6/deploy/1.8+:

[root@kubernetes-master01 1.8+]# ls -lh
-rw-rw-r--. 1 root root  397 Jul 24 2020 aggregated-metrics-reader.yaml
-rw-rw-r--. 1 root root  303 Jul 24 2020 auth-delegator.yaml
-rw-rw-r--. 1 root root  324 Jul 24 2020 auth-reader.yaml
-rw-rw-r--. 1 root root  298 Jul 24 2020 metrics-apiservice.yaml
-rw-rw-r--. 1 root root 1.3K Mar 15 21:22 metrics-server-deployment.yaml
-rw-rw-r--. 1 root root  297 Jul 24 2020 metrics-server-service.yaml
-rw-rw-r--. 1 root root  532 Jul 24 2020 resource-reader.yaml

Before installation, the metrics-server-deployment.yaml file needs to be modified in a few places:

  • Image: change k8s.gcr.io/metrics-server/metrics-server:v0.3.6 to rancher/metrics-server:v0.3.6. The default image cannot be pulled because of firewall restrictions, so the metrics-server:v0.3.6 image from Rancher is used instead.
  • Add two parameters:
    • --kubelet-preferred-address-types=InternalIP,Hostname: by default metrics-server connects to each Node by its hostname, but the metrics-server container cannot resolve Node hostnames through CoreDNS, so the Node's InternalIP is preferred instead.
    • --kubelet-insecure-tls: skip verification of the kubelet's TLS serving certificate, since the self-signed certificates on the Nodes cannot otherwise be verified by metrics-server.
args:
  - --kubelet-preferred-address-types=InternalIP,Hostname
  - --kubelet-insecure-tls
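After both changes, the container section of metrics-server-deployment.yaml looks roughly like the sketch below (the --cert-dir and --secure-port arguments come from the stock v0.3.6 manifest; everything else is left unchanged):

containers:
- name: metrics-server
  image: rancher/metrics-server:v0.3.6                       # swapped from the k8s.gcr.io image
  imagePullPolicy: IfNotPresent
  args:
    - --cert-dir=/tmp
    - --secure-port=4443
    - --kubelet-preferred-address-types=InternalIP,Hostname  # connect to Nodes via InternalIP
    - --kubelet-insecure-tls                                 # skip kubelet certificate verification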

After the modification, run the following command to install metrics-server:

kubectl apply -f metrics-server-0.3.6/deploy/1.8+/

Installation results:

clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
serviceaccount/metrics-server created
deployment.apps/metrics-server created
service/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created

Check whether the metrics-server is installed successfully. The following output shows that the metrics-server is running properly.

[root@kubernetes-master01 1.8+]# kubectl get deployment -n kube-system
NAME             READY   UP-TO-DATE   AVAILABLE   AGE
coredns          2/2     2            2           35h
metrics-server   1/1     1            1           54m
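You can also confirm that the metrics API registered during the install above is reachable; its AVAILABLE column should read True once metrics-server is healthy:

[root@kubernetes-master01 1.8+]# kubectl get apiservice v1beta1.metrics.k8s.io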

Use kubectl top to view Node and Pod resource usage. For example, the CPU usage of kubernetes-master01 below is 137m, i.e. 137/1000 of one CPU core (1 core = 1000m):

[root@kubernetes-master01 ~]# kubectl top node
NAME                  CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
kubernetes-master01   137m         6%     1165Mi          67%
kubernetes-worker01   39m          1%     690Mi           40%
kubernetes-worker02   37m          1%     721Mi           41%
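The CPU% column is the usage divided by the Node's allocatable CPU, and MEMORY% works the same way. Assuming each Node here has 2 allocatable cores (2000m) — the actual value can be read from kubectl describe node — the master's figure works out roughly as:

137m / 2000m = 6.85%, displayed (truncated) as 6%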
[root@kubernetes-master01 ~]# kubectl top pod
NAME             CPU(cores)   MEMORY(bytes)
pod-busy-nginx   0m           1Mi

In addition to viewing resource usage directly with the commands above, we can also query the metrics-server API:

  • Run kubectl proxy --port=8080 to enable the proxy port.

    [root@kubernetes-master01 ~]# kubectl proxy --port=8080
    Starting to serve on 127.0.0.1:8080
  • Open another SSH connection and run curl localhost:8080/apis/metrics.k8s.io/v1beta1/. As shown below, the API returns the following content:

    [root@kubernetes-master01 ~]# curl localhost:8080/apis/metrics.k8s.io/v1beta1/
    {
      "kind": "APIResourceList",
      "apiVersion": "v1",
      "groupVersion": "metrics.k8s.io/v1beta1",
      "resources": [
        {
          "name": "nodes",
          "singularName": "",
          "namespaced": false,
          "kind": "NodeMetrics",
          "verbs": [
            "get",
            "list"
          ]
        },
        {
          "name": "pods",
          "singularName": "",
          "namespaced": true,
          "kind": "PodMetrics",
          "verbs": [
            "get",
            "list"
          ]
        }
      ]
    }
  • To view resource usage of a specific Pod, run the following command to specify a Pod and namespace:

    [root@kubernetes-master01 ~]# curl localhost:8080/apis/metrics.k8s.io/v1beta1/namespaces/default/pods/pod-busy-nginx
    {
      "kind": "PodMetrics",
      "apiVersion": "metrics.k8s.io/v1beta1",
      "metadata": {
        "name": "pod-busy-nginx",
        "namespace": "default",
        "selfLink": "/apis/metrics.k8s.io/v1beta1/namespaces/default/pods/pod-busy-nginx",
        "creationTimestamp": "2021-03-16T03:28:49Z"
      },
      "timestamp": "2021-03-16T03:28:21Z",
      "window": "30s",
      "containers": [
        {
          "name": "busybox",
          "usage": {
            "cpu": "0",
            "memory": "32Ki"
          }
        },
        {
          "name": "nginx",
          "usage": {
            "cpu": "0",
            "memory": "1440Ki"
          }
        }
      ]
    }
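Node metrics are exposed the same way: the resource list above also shows a nodes resource with get and list verbs. The proxy URL below is the natural counterpart of the Pod query, and as an alternative that needs no proxy at all, kubectl can issue the raw API request directly. Both are sketches of the same call:

[root@kubernetes-master01 ~]# curl localhost:8080/apis/metrics.k8s.io/v1beta1/nodes
[root@kubernetes-master01 ~]# kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes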

2. Pod resource quota practice

When a service is deployed on a virtual machine, its resources are isolated from those of the physical machine: the service can only use the resources allotted to the VM. In the Kubernetes world, the microservice applications you run in production are deployed as Pods. A Pod in Kubernetes plays a role similar to a virtual machine, so its resources likewise need to be limited to keep the cluster running stably.

The main resource constraints fall into two categories, CPU and memory.

Resource requirements are expressed as requests (guaranteed resources) and limits (maximum allowances):

  • Request: the baseline resources a container needs. When a Pod is scheduled onto a Node, the scheduler ensures the Node has at least the resources declared in the Pod's requests; otherwise the Pod will not be scheduled onto that Node.
  • Limit: the maximum amount of resources a Pod may use. If a Pod tries to exceed its CPU limit it is throttled, and exceeding a memory limit can get the container killed (see the fragment after this list).
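Requests and limits are declared per container under resources in the Pod spec. A minimal fragment looks like the sketch below (note that cpu: 0.2 and cpu: 200m are equivalent notations; the values match the examples that follow):

resources:
  requests:
    cpu: 0.2        # equivalent to 200m; reserved for the Pod at scheduling time
    memory: 50Mi
  limits:
    cpu: 0.6        # equivalent to 600m; CPU usage is throttled above this
    memory: 800Mi   # exceeding this can get the container OOM-killed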

First we create a Pod without any resource constraints. vim pod-centos.yaml:

apiVersion: v1
kind: Pod
metadata:
   name: centos
spec:
   containers:
   - name: centos
     image: centos
     command: ["/bin/sh"]
     args: ["-c", "i=0; while true; do i=i+1; done"]

Create the Pod from the YAML:

[root@kubernetes-master01 k8s-yaml]# kubectl apply -f pod-centos.yaml
pod/centos created

Check resource usage: CPU usage climbs to a full core (1000m) because the Pod runs a busy while loop the whole time, and the single-threaded loop can saturate at most one CPU.

[root@kubernetes-master01 k8s-yaml]# kubectl top pod
NAME           CPU(cores)   MEMORY(bytes)
centos         1000m        0Mi

Next we add resource constraints to the Pod. vim pod-centos-limit.yaml:

apiVersion: v1
kind: Pod
metadata:
   name: centos-limit
spec:
   containers:
   - name: centos
     image: centos
     command: ["/bin/sh"]
     args: ["-c", "i=0; while true; do i=i+1; done"]
     resources:
       requests:
         cpu: 0.2
         memory: 50Mi
       limits:
         cpu: 0.6
         memory: 800Mi

Apply the manifest and verify that CPU usage stays within the limit. As shown below, CPU usage no longer rises above 600m:

[root@kubernetes-master01 k8s-yaml]# kubectl apply -f pod-centos-limit.yaml
pod/centos-limit created
[root@kubernetes-master01 k8s-yaml]# kubectl get pod
NAME           READY   STATUS    RESTARTS   AGE
centos-limit   1/1     Running   0          7s
[root@kubernetes-master01 k8s-yaml]# kubectl top pod
NAME           CPU(cores)   MEMORY(bytes)
centos-limit   600m         0Mi

The implication of requests is that the Pod is guaranteed at least that many resources to allocate, even though it may not actually use them; requests are a minimum guarantee, not actual consumption. To see this, we run a Pod named centos-request that only executes a sleep command and watch its resource usage.

The content of the pod-centos-request.yaml file is as follows:

apiVersion: v1
kind: Pod
metadata:
   name: centos-request
spec:
   containers:
   - name: centos
     image: centos
     command: ["/bin/sh"]
     args: ["-c", "echo Hello; sleep 3600;"]
     resources:
       requests:
         cpu: 0.2
         memory: 50Mi
       limits:
         cpu: 0.6
         memory: 800Mi

The specific operation steps are as follows:

[root@kubernetes-master01 k8s-yaml]# kubectl apply -f pod-centos-request.yaml
pod/centos-request created
[root@kubernetes-master01 k8s-yaml]# kubectl get pod
NAME             READY   STATUS    RESTARTS   AGE
centos-request   1/1     Running   0          21m
[root@kubernetes-master01 k8s-yaml]# kubectl top pod
NAME             CPU(cores)   MEMORY(bytes)
centos-request   0m           0Mi

As verified above, although the CPU request was 0.2, the actual CPU usage of Pod centos-request is 0m: the request reserves capacity on the Node but does not force the Pod to consume it.
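Even though actual usage is 0m, the 0.2-core request is still deducted from the Node's schedulable capacity. This can be confirmed from the Allocated resources section of the Node description (a sketch; substitute the Node the Pod was actually scheduled onto):

[root@kubernetes-master01 k8s-yaml]# kubectl describe node kubernetes-worker01 | grep -A 8 'Allocated resources'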

3. Summary

If you are interested in learning more about Kubernetes, you can read this column series: The Beautiful Kubernetes Column