Moment For Technology

CentOS 7.6: Set up a K8S cluster

Posted on Aug. 8, 2022, 2:27 p.m. by Devansh Subramaniam
Category: The back-end Tag: kubernetes

1 Server Configuration

Three servers are used. Configure the hostname, a fixed IP address, hosts, the firewall, SELinux, and swap in sequence.

Node type    IP               Hostname
master       (intranet IP)    k8smaster.geoscene.cd
node-1       (intranet IP)    k8snode1.geoscene.cd
node-2       (intranet IP)    k8snode2.geoscene.cd

1.1 Configuring the Hostname

  • master server: hostnamectl --static set-hostname k8smaster.geoscene.cd
  • node-1 server: hostnamectl --static set-hostname k8snode1.geoscene.cd
  • node-2 server: hostnamectl --static set-hostname k8snode2.geoscene.cd

1.2 Configuring fixed IP Addresses

Edit /etc/sysconfig/network-scripts/ifcfg-ens192: change BOOTPROTO to static and append the address settings to the end of the file.

Configure a fixed IP address for each of the three servers this way. After the configuration, restart the servers.
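The exact values depend on your network; a minimal sketch of the appended settings, with placeholder addresses, might look like:

```ini
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.1.10    # placeholder: this node's intranet IP
NETMASK=255.255.255.0  # placeholder netmask
GATEWAY=192.168.1.1    # placeholder gateway
DNS1=192.168.1.1       # placeholder DNS server
```

Each node gets its own IPADDR; the gateway and DNS values come from your intranet.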

Run the ip addr and ping commands to check whether the fixed IP address was configured successfully.

1.3 Configuring Hosts

Configure hosts entries so that the servers can access each other by domain name. Execute on each server, substituting each server's intranet IP address for the placeholders:

cat <<EOF >> /etc/hosts
<master-IP> k8smaster.geoscene.cd
<node-1-IP> k8snode1.geoscene.cd
<node-2-IP> k8snode2.geoscene.cd
EOF

After the configuration, run ping k8smaster.geoscene.cd to check whether the configuration succeeded.

1.4 Configuring the Firewall

Multiple ports need to be open in a K8S cluster. Because this cluster is deployed on the intranet, simply disable the firewall on every server:

systemctl stop firewalld
systemctl disable firewalld

1.5 Configuring SELinux

SELinux (Security-Enhanced Linux) is a security architecture for Linux systems. Since we are not professional Linux ops engineers, and to prevent deployment problems caused by SELinux later on, we turn SELinux off.

  • Check whether SELinux is enabled: run /usr/sbin/sestatus -v and see whether SELinux status is enabled
  • Disable SELinux: edit /etc/selinux/config with vi and set SELINUX to disabled
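The edit can also be scripted with sed; a minimal sketch, demonstrated here on a sample copy of the config (on a real node, point the sed at /etc/selinux/config and reboot for the change to take effect):

```shell
# Sample copy of /etc/selinux/config for demonstration only
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > /tmp/selinux-config.sample

# Flip SELINUX from enforcing to disabled
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /tmp/selinux-config.sample

cat /tmp/selinux-config.sample
```

The same sed pattern reappears later in this guide when kubeadm asks for permissive mode.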

1.6 Configuring Swap

The idea behind K8S is to pack instances onto nodes as tightly as possible, toward 100 percent utilization, and every deployment should be pinned with CPU and memory limits. So when the scheduler sends a Pod to a machine, swap should never be used. The K8S designers also keep swap disabled for performance reasons. If you do want to keep swap on, for example to save resources when running a large number of containers, add the kubelet parameter --fail-swap-on=false.

Steps to disable swap:

  • Run swapoff -a
  • Edit /etc/fstab and comment out the swap line
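The fstab edit can be scripted as well; a minimal sketch, demonstrated on a sample file (on a real node, run the same sed against /etc/fstab after swapoff -a):

```shell
# Build a sample fstab for demonstration only
printf '/dev/sda1 / xfs defaults 0 0\n/dev/mapper/centos-swap swap swap defaults 0 0\n' > /tmp/fstab.sample

# Comment out any uncommented line that mounts a swap device
sed -ri 's|^([^#].*[[:space:]]swap[[:space:]].*)$|#\1|' /tmp/fstab.sample

cat /tmp/fstab.sample
```

Commenting the line (rather than deleting it) keeps the original entry recoverable if swap is ever needed again.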

2 Deploy the Container Runtime

The container runtime here refers to the software responsible for running containers, such as containerd, Docker, and CRI-O. We choose Docker as the container runtime.

To configure a K8S cluster, you need to install a container runtime for each node to provide a running environment for pods.

2.1 Configuring Cgroups for Container Processes

Cgroups is a mechanism provided by the Linux kernel to limit the resources used by a single process or multiple processes. It can achieve fine control over resources such as CPU and memory. Docker uses Cgroups to control CPU, memory and other resources.

If a Linux system distribution uses Systemd to initialize the system, the initialization process generates and uses a root control group that acts as the CGroup manager. At this point systemd and Cgroups are tightly integrated and each Systemd unit will be assigned a Cgroup. More on Cgroup can be found here.

When installing the container runtime and the kubelet, you can specify a Cgroups manager for each. However, the K8S documentation recommends against mixing two different managers: a node with two Cgroups managers can become unstable under resource pressure.

The container runtime should therefore use the same Cgroups manager as the kubelet and Systemd. A single manager simplifies the view of resource allocation and, by default, gives a more consistent picture of available and in-use resources.

For the Docker container runtime, you can set the native.cgroupdriver=systemd option to achieve this.

2.2 Installing Docker

Docker needs to be installed on all server nodes.

  1. Install the required packages

sudo yum install -y yum-utils device-mapper-persistent-data lvm2
  2. Add the Docker repository

sudo yum-config-manager --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo
  3. Install Docker CE

sudo yum update -y && sudo yum install -y \
    containerd.io-1.2.13 \
    docker-ce-19.03.11 \
    docker-ce-cli-19.03.11
  4. Configure the Docker daemon

# Create the /etc/docker directory
sudo mkdir /etc/docker

# Set up the Docker daemon
cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF
  5. Restart Docker and enable it at boot

sudo mkdir -p /etc/systemd/system/docker.service.d
sudo systemctl daemon-reload
sudo systemctl restart docker
sudo systemctl enable docker

3 Install kubeadm, kubelet, and kubectl

3.1 Checking the Uniqueness of the MAC Address and product_uuid of a Node

  • Run ifconfig -a to view the network MAC address; the value after ether is the NIC's MAC address

$ ifconfig -a
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        ether 02:42:b1:ea:dc:9b  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

ens192: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::a3b1:f5:ed45:867d  prefixlen 64  scopeid 0x20<link>
        inet6 fe80::91b6:36c5:d7ee:7ae9  prefixlen 64  scopeid 0x20<link>
        inet6 fe80::60d1:71b4:d966:e79d  prefixlen 64  scopeid 0x20<link>
        ether 00:50:56:8c:00:10  txqueuelen 1000  (Ethernet)
        RX packets 193219  bytes 376026738 (358.6 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 179064  bytes 15122260 (14.4 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 64  bytes 5568 (5.4 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 64  bytes 5568 (5.4 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
  • Run cat /sys/class/dmi/id/product_uuid to check the product_uuid

Normally, both the network MAC address and the product_uuid of a hardware device are unique, but duplicate values can occur on virtual machines. If these two values are not unique, the kubeadm installation may fail.

3.2 Allowing iptables to Check Bridge Traffic

Run lsmod | grep br_netfilter to check whether the br_netfilter module is loaded. If it is not loaded, run sudo modprobe br_netfilter to load it.

Set net.bridge.bridge-nf-call-iptables to 1 in the sysctl configuration:

$ cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF
$ cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
$ sudo sysctl --system

3.3 Installing Kubeadm, kubelet, and kubectl

The following need to be installed on each server:

  • kubeadm, used to initialize the cluster
  • kubelet, used on every node in the cluster to start Pods, containers, and so on
  • kubectl, the command-line tool for communicating with the cluster

kubeadm, kubelet, and kubectl are installed separately, so make sure their versions are compatible. For the allowed version deviation, refer to the Kubernetes version skew policy.

$ cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=
EOF

# Set SELinux to permissive mode (effectively disabling it)
$ setenforce 0
$ sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

$ yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes

$ systemctl enable --now kubelet

3.4 Configuring the Cgroup Manager Used by the Kubelet on the Control-Plane Node (Primary Node)

If Docker is used as the container runtime, kubeadm automatically detects the appropriate cgroup driver for the kubelet and writes it to the /var/lib/kubelet/kubeadm-flags.env file at runtime.

If CRI-O is used as the container runtime, you need to pass a cgroupDriver value to kubeadm init.

Since we use Docker as the container runtime, this step can be skipped.
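For reference only: if a runtime such as CRI-O were in use, the driver could be passed to kubeadm init through a configuration file. A sketch, assuming this cluster's v1.20.4 and the systemd driver chosen above:

```yaml
# kubeadm-config.yaml (sketch; not needed when Docker is the runtime)
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.20.4
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
```

It would then be applied with kubeadm init --config kubeadm-config.yaml.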

4 Configure the master node

  1. Execute on the k8smaster.geoscene.cd server:

$ kubeadm init \
    --apiserver-advertise-address=<master-IP> \
    --image-repository <registry> \
    --service-cidr=<service-CIDR> \
    --pod-network-cidr=<pod-CIDR>

Among these options:

  • --apiserver-advertise-address: the IP address the API server advertises it is listening on; if unset, the default network interface is used
  • --image-repository: the container registry from which to pull images
  • --service-cidr: a separate IP address range for virtual service IPs; the default is 10.96.0.0/12
  • --pod-network-cidr: the IP address range the Pod network may use; when set, the control plane automatically assigns CIDRs to every node
  • Other initialization options are available here

After the command succeeds, record the kubeadm join command on the last line of the output: kubeadm join --token **** --discovery-token-ca-cert-hash sha256:dd74bd1b52313dd8664b8147cb6d18a6f8b25c6c5aa4debc3. This command is run on the worker nodes to join them to the master node.

  2. Set permissions for your user

$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
  3. Install the Pod network plugin

$ kubectl apply -f <pod-network-manifest>

5 Configure the node

  1. Execute on each node server the join command recorded earlier, which adds the node to the master node:

$ kubeadm join --token **** --discovery-token-ca-cert-hash sha256:dd74bd1b52313dd8664b8147cb6d18a6f8b25c6c5aa4debc3

  2. On the master node, run kubectl get nodes to check the state of the newly added nodes. A new node's STATUS is NotReady at first; it takes about 7 to 8 minutes to become Ready.

$ kubectl get nodes
NAME                    STATUS   ROLES                  AGE   VERSION
k8smaster.geoscene.cd   Ready    control-plane,master   93m   v1.20.4
k8snode1.geoscene.cd    Ready    <none>                 88m   v1.20.4
k8snode2.geoscene.cd    Ready    <none>                 88m   v1.20.4

6 Deploying Dashboard

Dashboard is a web-based K8S user interface that displays resource status information and all error messages in the K8S cluster. When deploying the K8S cluster, Dashboard is not installed by default. You need to install Dashboard separately.

Installing Dashboard is also simple, with the following command:

$ kubectl apply -f <kubernetes-dashboard-manifest>

By default, Dashboard is accessed through the API server, which is cumbersome: the URL to visit is a long string. You can change the access mode to NodePort, after which the Dashboard can be reached directly via IP:PORT.

Run kubectl --namespace=kubernetes-dashboard edit service kubernetes-dashboard and change type: ClusterIP to type: NodePort.

Run the kubectl --namespace=kubernetes-dashboard get service kubernetes-dashboard command to view the external dashboard mapping interface.

$ kubectl --namespace=kubernetes-dashboard get service kubernetes-dashboard
NAME                   TYPE       CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
kubernetes-dashboard   NodePort                <none>        443:32730/TCP   92m

In this way, you can access the Dashboard on the intranet by visiting https://<node-IP>:32730.

7 Create an administrator account

When you access the Dashboard, by default you are prompted to enter a token or provide a Kubeconfig file to log in. So you need to create an administrator account for the cluster and generate its token to log in to the Dashboard.

$ kubectl create serviceaccount dashboard-admin -n kube-system
$ kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
$ kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')

Record the token from the output, or rerun the last command before each login to obtain it.
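The two kubectl create commands above can equivalently be expressed as a declarative manifest; a sketch, with the resource names taken from those commands:

```yaml
# ServiceAccount + ClusterRoleBinding equivalent to the imperative commands above
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dashboard-admin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: dashboard-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: dashboard-admin
  namespace: kube-system
```

Applying it with kubectl apply -f keeps the admin account under version control instead of relying on one-off commands.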
