Background

Last year I used a Raspberry Pi to build a K8s cluster. At the time I didn't know much about it, so I only put together a simple cluster, installed a network plugin and a basic dashboard, and never deployed any real application. Link to the previous article: Hand by hand to teach everyone to build A K8S cluster using Raspberry PI 4B

Recently, on a whim, I built a new case out of building blocks and set out to deploy a test application. This post summarizes the whole process and the problems I ran into, for your reference.

Fixing existing problems first

Replacing the Network Plugin

In my last article I wrote about installing the Calico network plugin. Recently I found that the Calico pods failed to start; after struggling with it for a long time, I had to reinstall the networking from scratch.

Deleting the Network Plugin

Remove the network plugin first:

kubectl delete -f calico.yaml

A residual tunl0 virtual network interface is left behind (you can check it with ifconfig), so it needs to be removed as well. Because I retried this many times, I combined several shell commands into a single line; adjust it to your own situation:

ifconfig tunl0 down; ip link delete tunl0; rm -f /etc/cni/net.d/*; kubectl delete -f calico.yaml; systemctl start kubelet; systemctl start docker

Cluster reset

Run the cluster reset command on all three machines:

kubeadm reset

Delete the configuration files on all three machines:

rm -rf $HOME/.kube; rm -rf /etc/cni/net.d

Restart Docker and kubelet and flush the iptables rules:

systemctl daemon-reload; systemctl stop kubelet; systemctl stop docker; iptables --flush; iptables -t nat --flush; systemctl start kubelet; systemctl start docker

Cluster installation

The master node is initialized the same way as in the previous article, so I won't describe it in detail here:

sudo kubeadm init --image-repository=registry.aliyuncs.com/google_containers --kubernetes-version=v1.20.0 --apiserver-advertise-address=192.168.2.181 --pod-network-cidr=192.168.0.0/16 --ignore-preflight-errors=all

Command for a Node to join the cluster:

kubeadm join 192.168.2.181:6443 --token jqll23.kc3nkji7vxkaefro --discovery-token-ca-cert-hash sha256:1b475725b680ed8111197eb8bfbfb69116b38a8d2960d51d17af69188b6badc2 --ignore-preflight-errors=all

Check the cluster state by listing all the pods:

kubectl get pods --all-namespaces

If you see the error "The connection to the server localhost:8080 was refused - did you specify the right host or port?", the cause is that kubectl on this machine is not pointed at the cluster: the admin kubeconfig was never bound locally after cluster initialization. Setting the environment variable fixes it:

echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> /etc/profile

source /etc/profile

You can make the fix permanent by putting source /etc/profile in a script that runs automatically at startup.
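
Alternatively, for a regular (non-root) user, the approach suggested by kubeadm itself is to copy the admin config into the user's home directory; a minimal sketch:

# Copy the admin kubeconfig to the current user's home directory
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config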

Pods failing to run: checking the logs

I tried many times, but the pods still did not run successfully after the network plugin was installed. You can use the following command to check the logs for the reason:

kubectl logs -f test-k8s-68bb74d654-9wwbt -n kube-system

test-k8s-68bb74d654-9wwbt is the name of the specific pod.
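
When a container never starts and therefore produces no logs, the pod's events are often more useful; a hedged example, reusing the pod name from above:

# Shows scheduling, image-pull and startup events for the pod
kubectl describe pod test-k8s-68bb74d654-9wwbt -n kube-system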

Installing the network plugin

This time I used the official Flannel YAML file:

curl -sSL https://raw.githubusercontent.com/coreos/flannel/v0.12.0/Documentation/kube-flannel.yml | kubectl apply -f -

At one point I hit the error "Failed to connect to 10.1244.***".

View all pods

kubectl get pods --all-namespaces

By this point my first application had already been installed: the pod whose name starts with test-k8s is the one I deployed, and its namespace is default, unlike the others.

Viewing all Nodes

kubectl get node --all-namespaces

Similar to the previous command, only the resource type changes.

Install the first application

Building the image

To install an application you need a YAML file and a usable image. I followed a Bilibili video tutorial (by Guangzhou Yunke), but his test application is built for the x86 platform, so running that image directly on the Raspberry Pi fails with an error.
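
The simplest fix is to rebuild the image for ARM, either natively on the Raspberry Pi (which is what I do below) or by cross-building on an x86 machine with Docker Buildx. A rough sketch of the cross-build, assuming Buildx is available and using the repository address that appears later in this post:

# Cross-build for the Pi's architecture (use linux/arm/v7 for a 32-bit OS)
docker buildx build --platform linux/arm64 \
  -t registry.cn-shenzhen.aliyuncs.com/koala9527/testapp:latest \
  --push .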

So clone all of the test-k8s code onto the Raspberry Pi machine and build there:

You also need an image registry that the cluster can pull from. For this test I used Alibaba Cloud Container Registry and created a public repository, so that anyone can pull the image.

The docker build command is used to package the image (to test the push again, I first deleted all existing images and containers). The steps mainly follow the Alibaba Cloud tutorial, as below.

Package & push to Aliyun

First, build the image locally. The image name is test-k8s; -t is short for tag, and multiple tags can be set in a single build:

docker build -t test-k8s .

Give the image a new tag:

docker tag test-k8s:latest registry.cn-shenzhen.aliyuncs.com/koala9527/testapp:latest

Push it:

docker push registry.cn-shenzhen.aliyuncs.com/koala9527/testapp:latest

The image now lives in Alibaba Cloud Container Registry; its address is registry.cn-shenzhen.aliyuncs.com/koala9527/testapp:latest.
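
If the push is rejected with an authentication error, you most likely need to log in to the registry first; a hedged reminder (the username is your own Alibaba Cloud account, not the placeholder shown here):

docker login --username=<your-aliyun-account> registry.cn-shenzhen.aliyuncs.com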

Writing the YAML file for the first application

The file is named testapp.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  # Name of the resource
  name: test-k8s
spec:
  # Initial number of pod replicas
  replicas: 3
  # Associate pods with this Deployment by label
  selector:
    matchLabels:
      app: test-k8s
  # Pod template
  template:
    metadata:
      labels:
        app: test-k8s
    spec:
      # Define containers; there can be more than one
      containers:
        - name: test-k8s
          # Image pull address
          image: registry.cn-shenzhen.aliyuncs.com/koala9527/testapp:v1
          resources:
            requests:
              cpu: 100m

K8s iterates quickly: different resource controllers use different API versions, and different cluster versions support different APIs, so the YAML file has to match the actual cluster environment. The resource controller type is specified by the kind field.

You can use kubectl api-versions to view the API versions supported by the cluster.
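
If you are unsure which apiVersion a particular resource type expects, kubectl explain reports it along with the field documentation; for example:

# Shows GROUP/VERSION (apps/v1 for Deployment) and the top-level fields
kubectl explain deployment
# Drill into a nested field, e.g. the container spec
kubectl explain deployment.spec.template.spec.containers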

Explanation of the fields:

  • kind: the controller type. Everything in the cluster is a highly abstracted resource; kind indicates the resource type, and Deployment is a stateless resource object that manages multiple replicas.
  • metadata.name: names this resource controller test-k8s.
  • replicas: the initial number of pods.
  • selector.matchLabels: the selector label; it associates this controller (and other resources) with the pods through the label value.
  • template: the pod data. app: test-k8s labels the pods, image is the pod's image pull address, and resources.requests.cpu is the requested CPU: 100m equals 0.1 CPU.

Deploying the application

kubectl apply -f testapp.yaml

Now there are three pods. You can use kubectl get pod -o wide to view the pod details, including their IPs:

 kubectl get pod -o wide

Log in to one pod and try to access another (here I enter the first pod and curl the second):

kubectl exec -it test-k8s-68b9f5c6c7-hn25x -- bash
curl 10.244.1.173:8080

The result: the correct pod name is printed.

Each pod behaves like a separate machine sharing one network. To access the application from outside the cluster, a new resource needs to be created.

Create a Service resource controller

The YAML file

Service features:

  • A Service is associated with Pods through labels
  • The Service's lifecycle is not bound to the Pods; its IP does not change when a Pod dies
  • It provides load balancing, automatically forwarding traffic to different pods
  • It provides an access port outside the cluster
  • The Service can be reached by its name from within the cluster

All resources are described by YAML files. Write a YAML file that describes the Service, named service.yaml:

apiVersion: v1
kind: Service
metadata:
  name: test-k8s
spec:
  selector:
    app: test-k8s
  type: NodePort
  ports:
    - port: 8080        # port of the test-k8s application
      nodePort: 31000   # port exposed on the nodes

The type here is NodePort. The default type is ClusterIP, which only allows access from inside the cluster. There is also the LoadBalancer type, which is generally provided by cloud vendors.

Note also that NodePort can only expose ports in a fixed range: 30000-32767.

Applying the Service:

Same as applying the Deployment:

kubectl apply -f service.yaml

View the Service resources in the cluster:

kubectl get svc

The test results

The internal IP of this machine is 192.168.2.187 and the port we just set is 31000, so the application can now be reached from outside the cluster at that address and port.
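
A quick way to verify both access paths from the command line; a hedged check, reusing the node IP and pod name from earlier:

# From outside the cluster, through the NodePort on any node
curl http://192.168.2.187:31000/
# From inside the cluster, through the Service name
kubectl exec -it test-k8s-68b9f5c6c7-hn25x -- curl http://test-k8s:8080/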

Dynamic scaling

Installing the resource metrics tool

Before using dynamic scaling, you need to install a tool that collects resource metrics, i.e. monitors the CPU and memory usage of the cluster's Node and Pod resources. The tool is called metrics-server and is not installed by default.

wget https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
kubectl apply -f components.yaml

Later there was an error and the related pod could not start; a section of the YAML file had to be replaced before applying it, although I'm not clear on the exact reason.
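
I can't confirm this is the exact change I made, but on home-built clusters the usual culprit is that metrics-server cannot verify the kubelets' self-signed certificates. A common workaround, offered here only as an assumption, is to add --kubelet-insecure-tls to the metrics-server container's args in components.yaml:

# Fragment of the metrics-server Deployment in components.yaml (assumed fix)
containers:
  - name: metrics-server
    args:
      - --cert-dir=/tmp
      - --secure-port=4443
      - --kubelet-insecure-tls              # skip kubelet TLS verification
      - --kubelet-preferred-address-types=InternalIP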

After that, you can use the top command to check the CPU and memory usage of pods and nodes.
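
For example (standard kubectl subcommands, shown here as a sketch):

kubectl top node
kubectl top pod --all-namespaces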

Installing the horizontal autoscaler

Dynamic scaling of pods is handled by another resource controller called HorizontalPodAutoscaler, which literally means horizontal pod autoscaler. Its YAML is as simple as service.yaml; name it hpa.yaml:

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  namespace: default
  name: test-k8s-scaler
  labels:
    app: test-k8s-scaler
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: test-k8s
  minReplicas: 2
  maxReplicas: 100
  targetCPUUtilizationPercentage: 45


scaleTargetRef in spec specifies which resource to monitor. minReplicas is the minimum number of replicas and maxReplicas the maximum; the minimum overrides the initial replica count defined in the Deployment. targetCPUUtilizationPercentage specifies the CPU utilization that triggers scaling; K8s uses a fairly involved algorithm (briefly described in the book "Kubernetes in Action") that observes pod resource usage at fixed intervals and adjusts the replica count automatically. As I understand it, once pod CPU usage exceeds 45% the scale-out policy is triggered. Other metrics can be monitored too, but CPU and memory are the usual ones.

Install the autoscaling controller with kubectl apply -f hpa.yaml.

You can also use kubectl get hpa to see the current state of the horizontal autoscaler; right now it shows 0% CPU usage.
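
If the TARGETS column shows <unknown> instead of a percentage, the metrics pipeline is usually the problem; a hedged way to dig in, using the resource names defined above:

# Events and current metrics of the autoscaler
kubectl describe hpa test-k8s-scaler
# Confirm metrics-server is actually serving pod metrics
kubectl top pod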

Triggering automatic scaling with an ab load test

Download ab (Apache Bench) on Windows, unzip it, go to the bin directory, and run the following command:

./ab.exe -n 10000 -c 100 http://192.168.2.181:31000/

That means 10,000 requests in total with 100 concurrent connections. While it runs, execute watch kubectl get hpa,pod to monitor the autoscaler and the number of pods in real time.

A few minutes after the requests finish, the number of pods drops back to 2, and the test is complete.

Conclusion

The whole process is not complicated and nothing here is particularly mind-bending. Horizontal scaling is the part of K8s that attracts me most, which is why it was the first feature I tried after getting an application deployed. Next I plan to combine this with GitLab for real CI/CD: instead of typing deployment commands by hand, pushing code and merging a branch will trigger the deployment. I may also install another application that provides a real service and put the cluster to genuine use. Thank you for reading this far.