1. Prepare the host environment

1.1 Host Planning

The Kubernetes cluster in this article is built on three hosts created with VirtualBox, with roles and configurations as follows:

Hostname    OS           CPU    Memory    Role
master01    CentOS 7.7   2      2 GB      master
worker01    CentOS 7.7   2      2 GB      worker
worker02    CentOS 7.7   2      2 GB      worker

All three hosts run a minimal installation of CentOS. How to install a Linux virtual machine is not covered here; there are plenty of tutorials online.

The networks of the three hosts are in bridge mode, and static IP addresses are set later:

Hostname    IP
master01    192.168.0.114
worker01    192.168.0.115
worker02    192.168.0.116

Many commands need to be executed on all three hosts at the same time. For efficiency, I use iTerm2 and broadcast input to multiple terminals at once (toggle the feature with Command + Shift + I).

If you’re on Windows, Xshell has similar functionality.

1.2 Setting the host name

After the three hosts are installed, set the host name for each host:

#Execute on each host
[root@name ~]# hostnamectl set-hostname master01
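
The other two hosts get their names from the plan in Section 1.1 in the same way:

[root@name ~]# hostnamectl set-hostname worker01    #on the first worker
[root@name ~]# hostnamectl set-hostname worker02    #on the second worker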

1.3 Modifying Network Configurations

Edit /etc/sysconfig/network-scripts/ifcfg-enp0s3 to configure a static IP address. Change the IP address according to the host IP plan in Section 1.1.

All three hosts need to be modified.

TYPE="Ethernet"
PROXY_METHOD="none"
BROWSER_ONLY="no"
BOOTPROTO="none"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="no"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_FAILURE_FATAL="no"
IPV6_ADDR_GEN_MODE="stable-privacy"
NAME="enp0s3"
UUID="63e5deae-4f6c-4734-858c-38e2bf601c7f"
DEVICE="enp0s3"
ONBOOT="yes"
IPADDR="192.168.0.114"
PREFIX="24"
GATEWAY="192.168.0.1"
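
The listing above is for master01. On worker01 and worker02, IPADDR is the value to change according to the plan (the UUID in each host's own file stays as generated):

#worker01:
IPADDR="192.168.0.115"
#worker02:
IPADDR="192.168.0.116"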

After the modification, restart the NIC:

[root@master01 ~]# systemctl restart network

#View the IP address after the restart
[root@master01 ~]# ip a

1.4 Host Name Resolution

Configure the mapping between IP addresses and host names by adding the following entries to /etc/hosts on all three hosts:

192.168.0.114 master01
192.168.0.115 worker01
192.168.0.116 worker02
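
If your terminals are not broadcasting input, appending the entries with a heredoc on each host is equivalent to editing the file by hand:

[root@master01 ~]# cat >> /etc/hosts << EOF
> 192.168.0.114 master01
> 192.168.0.115 worker01
> 192.168.0.116 worker02
> EOF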

Ping the other two host names on each host to make sure they work:

[root@master01 ~]# ping worker01
[root@master01 ~]# ping worker02

1.5 Host Security Configuration

Perform this operation on all three hosts.

Disable firewalld:
#Check whether it is running
[root@master01 ~]# systemctl status firewalld
#Stop the firewall
[root@master01 ~]# systemctl stop firewalld
#Do not start automatically at boot
[root@master01 ~]# systemctl disable firewalld
#Verify the status
[root@master01 ~]# firewall-cmd --state
not running

Disable SELinux:
#Check whether it is enabled.
[root@master01 ~]# sestatus
SELinux status:                 enabled
SELinuxfs mount:                /sys/fs/selinux
SELinux root directory:         /etc/selinux
Loaded policy name:             targeted
Current mode:                   enforcing
Mode from config file:          enforcing
Policy MLS status:              enabled
Policy deny_unknown status:     allowed
Max kernel policy version:      31
#Or another way to check
[root@master01 ~]# getenforce
Enforcing

#Modify the configuration
[root@master01 ~]# sed -ri 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config

#restart
[root@master01 ~]# reboot
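
If you also want SELinux out of the way before rebooting, it can be switched to permissive mode on the fly. This is an optional extra step, not part of the original flow; the config change above is what makes the setting permanent:

[root@master01 ~]# setenforce 0
[root@master01 ~]# getenforce
Permissive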

1.6 Synchronizing host time

Perform this operation on all three hosts.

#Since this is a minimal installation, ntpdate has to be installed separately
[root@master01 ~]# yum install -y ntpdate

#Use Ali's clock source and synchronize every hour
[root@master01 ~]# crontab -e
#In the editor that opens, add: 0 */1 * * * ntpdate time1.aliyun.com
#validation
[root@master01 ~]# crontab -l
0 */1 * * * ntpdate time1.aliyun.com

#If this is the first run, you can trigger a synchronization manually
[root@master01 ~]# ntpdate time1.aliyun.com
22 Feb 22:24:18 ntpdate[1659]: adjust time server 203.107.6.88 offset -0.005327 sec

#Verify that the time on the three hosts is consistent
[root@master01 ~]# date
Sat Feb 22 22:24:18 CST 2020

1.7 Disabling a Swap Partition

If kubeadm is used to deploy the cluster, the Swap partition must be disabled. Perform this operation on all three hosts.

#Comment out the last line of the swap configuration in /etc/fstab
[root@master01 ~]# cat /etc/fstab

#
# /etc/fstab
# Created by anaconda on Sat Feb 22 20:20:59 2020
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/centos-root /                       xfs     defaults        0 0
UUID=808954aa-4397-4557-a08c-f8a66be7774f /boot                   xfs     defaults        0 0
#/dev/mapper/centos-swap swap swap defaults 0 0

#Check the current status before restarting (see Swap information)
[root@master01 ~]# free
              total        used        free      shared  buff/cache   available
Mem:        2047016       97400     1754632        8720      194984     1802148
Swap:       2097148           0     2097148

#restart
[root@worker01 ~]# reboot

#Check whether it takes effect (Swap is already 0)
[root@worker01 ~]# free
              total        used        free      shared  buff/cache   available
Mem:        2047016       95452     1855096        8748       96468     1823464
Swap:             0           0           0

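If you prefer not to edit /etc/fstab by hand, the same result can be reached with a sed one-liner plus swapoff, which also avoids the reboot. This is an equivalent shortcut rather than the exact steps used above:

#Comment out any swap entry in /etc/fstab
[root@master01 ~]# sed -ri '/\sswap\s/s/^/#/' /etc/fstab
#Turn swap off immediately for the running system
[root@master01 ~]# swapoff -a
#Verify
[root@master01 ~]# free | grep -i swap
Swap:             0           0           0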

1.8 Configuring Host Bridge Filtering

Perform this operation on all three hosts.

1.8.1 Adding Bridge Filtering

Add bridge filtering for the Kubernetes cluster so that bridged traffic passes through the kernel's iptables rules.

#Add bridge filtering and address forwarding
[root@worker01 ~]# cat > /etc/sysctl.d/k8s.conf << EOF
> net.bridge.bridge-nf-call-ip6tables = 1
> net.bridge.bridge-nf-call-iptables = 1
> net.ipv4.ip_forward = 1
> vm.swappiness = 0
> EOF

#Load the br_netfilter module
[root@worker01 ~]# modprobe br_netfilter

#Check whether the module is loaded
[root@worker01 ~]# lsmod | grep br_netfilter
br_netfilter           22256  0
bridge                151336  1 br_netfilter

#Load the bridge filtering configuration file
[root@worker01 ~]# sysctl -p /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
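
Note that modprobe only loads br_netfilter for the current boot. To have it loaded automatically after a reboot, one common option (an extra step not in the original article) is a modules-load.d entry:

[root@worker01 ~]# echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf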

1.8.2 Enabling IPVS

Kubernetes implements Services through kube-proxy using either iptables or IPVS. IPVS generally forwards traffic more efficiently than iptables, so we enable IPVS support here.

Install ipset and ipvsadm:

[root@master01 ~]# yum install -y ipset ipvsadm
#Add modules to load
[root@master01 ~]# cat > /etc/sysconfig/modules/ipvs.modules << EOF
> #!/bin/bash
> modprobe -- ip_vs
> modprobe -- ip_vs_rr
> modprobe -- ip_vs_wrr
> modprobe -- ip_vs_sh
> modprobe -- nf_conntrack_ipv4
> EOF

#Make the script executable, run it, and check whether the modules are loaded
[root@master01 ~]# chmod +x /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
nf_conntrack_ipv4      15053  0
nf_defrag_ipv4         12729  1 nf_conntrack_ipv4
ip_vs_sh               12688  0
ip_vs_wrr              12697  0
ip_vs_rr               12600  0
ip_vs                 145497  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack          139224  2 ip_vs,nf_conntrack_ipv4
libcrc32c              12644  3 xfs,ip_vs,nf_conntrack

2. Install Docker

Kubernetes uses Docker as the container runtime on the cluster nodes, so next we install Docker on all three hosts.

2.1 Installation Dependencies

[root@master01 ~]# yum install -y yum-utils device-mapper-persistent-data lvm2

2.2 Setting the Mirror Source

Downloading Docker from the official source is very slow in mainland China, so the Tsinghua (TUNA) mirror is used here:

#Download the repo file
[root@master01 ~]# wget -O /etc/yum.repos.d/docker-ce.repo https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/docker-ce.repo

#Replace the repository address with the TUNA mirror
[root@master01 ~]# sed -i 's+download.docker.com+mirrors.tuna.tsinghua.edu.cn/docker-ce+' /etc/yum.repos.d/docker-ce.repo

#Rebuild the yum cache
[root@master01 ~]# yum makecache fast

#View the Docker-CE version
[root@master01 ~]# yum list docker-ce.x86_64 --showduplicates | sort -r

2.3 Installing Docker

Here I use version 18.06.3.ce-3.el7:

[root@master01 ~]# yum install -y docker-ce-18.06.3.ce-3.el7

Then just wait for the installation to finish; thanks to the domestic mirror, it completes quickly.

2.4 Configuration and Verification

#Enable automatic start at boot
[root@master01 ~]# systemctl enable docker
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
#Start the docker
[root@master01 ~]# systemctl start docker

#Verify
[root@master01 ~]# docker version
Client:
 Version:           18.06.3-ce
 API version:       1.38
 Go version:        go1.10.3
 Git commit:        d7080c1
 Built:             Wed Feb 20 02:26:51 2019
 OS/Arch:           linux/amd64
 Experimental:      false

Server:
 Engine:
  Version:          18.06.3-ce
  API version:      1.38 (minimum version 1.12)
  Go version:       go1.10.3
  Git commit:       d7080c1
  Built:            Wed Feb 20 02:28:17 2019
  OS/Arch:          linux/amd64
  Experimental:     false

2.5 Modifying the Docker Configuration File

#Write the following to /etc/docker/daemon.json
[root@master01 ~]# cat > /etc/docker/daemon.json << EOF
> {
>     "exec-opts": ["native.cgroupdriver=systemd"]
> }
> EOF

#Restart Docker
[root@master01 ~]# systemctl restart docker
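
To confirm that the cgroup driver change took effect, a quick check (not shown in the original article):

[root@master01 ~]# docker info | grep -i cgroup
Cgroup Driver: systemd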

3. Cluster software installation and configuration

There are three pieces of software to install:

  • kubeadm: initializes and manages the cluster
  • kubelet: receives instructions from the API server and manages the Pod lifecycle on each node
  • kubectl: the command-line tool for managing the cluster

All three packages use version 1.17.3-0 here.

The next step is to install them on each of the three hosts.

3.1 Changing software Sources

The official Kubernetes yum source is packages.cloud.google.com, which is not reachable from mainland China, so we use the Alibaba Cloud yum mirror instead.

[root@master01 ~]# cat /etc/yum.repos.d/k8s.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
       
#Verify that kubeadm can be found
[root@master01 ~]# yum list | grep kubeadm
kubeadm.x86_64    1.17.3-0

3.2 Installing kubeadm kubelet kubectl

#List the available versions
[root@master01 ~]# yum list kubeadm.x86_64 --showduplicates | sort -r

#Install the specified versions
[root@master01 ~]# yum install -y kubeadm-1.17.3-0 kubelet-1.17.3-0 kubectl-1.17.3-0

3.3 Software Settings

Section 2.5 set Docker's cgroup driver to systemd. kubelet needs to use the same driver, so modify the following configuration:

[root@master01 ~]# cat /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--cgroup-driver=systemd"

Set kubelet to start at boot (there is no need to start it manually now); it will be started automatically when the cluster is initialized.

[root@master01 ~]# systemctl enable kubelet

4. Prepare Kubernetes cluster container image

To avoid slow image downloads during cluster initialization, download the required images in advance.

4.1 Master Host Images

#View the container images used by the cluster
[root@master01 ~]# kubeadm config images list
k8s.gcr.io/kube-apiserver:v1.17.3
k8s.gcr.io/kube-controller-manager:v1.17.3
k8s.gcr.io/kube-scheduler:v1.17.3
k8s.gcr.io/kube-proxy:v1.17.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.4.3-0
k8s.gcr.io/coredns:1.6.5

#You can download them manually, similar to:
[root@master01 ~]# docker pull k8s.gcr.io/kube-apiserver:v1.17.3
#Or write a shell script to download them in bulk (a sketch follows below)
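
A minimal sketch of such a bulk-download script, assuming a mirror registry that hosts the same images under the same names (the mirror prefix below is an example of my own, not the repository used in this article; substitute whatever you actually use). It pulls each required image from the mirror and retags it as k8s.gcr.io so kubeadm can find it:

#!/bin/bash
#Pull every image kubeadm needs via a mirror, then retag it as k8s.gcr.io
MIRROR=registry.aliyuncs.com/google_containers    #assumed mirror prefix, replace as needed
for image in $(kubeadm config images list); do
    name=${image#k8s.gcr.io/}                     #e.g. kube-apiserver:v1.17.3
    docker pull ${MIRROR}/${name}
    docker tag ${MIRROR}/${name} ${image}
    docker rmi ${MIRROR}/${name}
done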

Note that k8s.gcr.io cannot be reached from the domestic network, so these images cannot be pulled directly. I used Alibaba Cloud's container image service to build my own copies of them.

The cluster in this article was built successfully using images prepared in that way.

I have packed all the Docker images needed to initialize the cluster into a Baidu network disk. If you don't want to build them yourself, you can download the files and import them directly with docker load -i (see the import method in Section 4.2).

Network disk link: pan.baidu.com/s/1UZ8P4N_Q… Extraction code: GXAH

If the link is invalid, please leave a message on the "non-famous developers" public account and I will update it as soon as I see it.

#After the images are downloaded
[root@master01 ~]# docker images
REPOSITORY                           TAG       IMAGE ID       CREATED          SIZE
k8s.gcr.io/kube-apiserver            v1.17.3   9f9ca8dae837   8 minutes ago    171MB
k8s.gcr.io/coredns                   1.6.5     0014af41d6b1   10 minutes ago   41.6MB
k8s.gcr.io/pause                     3.1       98c2afe8d9e1   11 minutes ago   742kB
k8s.gcr.io/kube-proxy                v1.17.3   25618ead4414   12 minutes ago   116MB
k8s.gcr.io/kube-scheduler            v1.17.3   98b01691becf   13 minutes ago   93.4MB
k8s.gcr.io/kube-controller-manager   v1.17.3   5632f27ef964   14 minutes ago   93.4MB
k8s.gcr.io/etcd                      3.4.3-0   7bbadee3c825   33 minutes ago   288MB

4.2 Worker Host Images

The Docker images are already on the master host, so you can export the required ones and copy them to the worker hosts.

The worker nodes only need the pause and kube-proxy images:

#Export the Docker images to files on the master
[root@master01 ~]# docker save -o kube-proxy_1.17.3.tar k8s.gcr.io/kube-proxy:v1.17.3
[root@master01 ~]# docker save -o pause_3.1.tar k8s.gcr.io/pause:3.1

#Copy the image files to the worker host
[root@master01 ~]# scp kube-proxy_1.17.3.tar pause_3.1.tar worker01:/root
#Import docker image file on worker host
[root@worker01 ~]# docker load -i kube-proxy_1.17.3.tar
[root@worker01 ~]# docker load -i pause_3.1.tar
#Verify
[root@worker01 ~]# docker images
REPOSITORY              TAG       IMAGE ID       CREATED          SIZE
k8s.gcr.io/pause        3.1       98c2afe8d9e1   26 minutes ago   742kB
k8s.gcr.io/kube-proxy   v1.17.3   25618ead4414   27 minutes ago   116MB

In the same way, import the two Docker images on the worker02 host.

5. Kubernetes cluster initialization

We will use kubeadm to initialize the cluster.

5.1 Initialization

Operate on the master node.

[root@master01 ~]# kubeadm init --kubernetes-version=v1.17.3 --pod-network-cidr=172.16.0.0/16 --apiserver-advertise-address=192.168.0.114

#It will take a while, but we've already downloaded the images in advance, so it won't take too long
#If the following information is displayed, the initialization succeeded
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.0.114:6443 --token 2xjokn.mxcdy3sv2teg5qtr \
    --discovery-token-ca-cert-hash sha256:e0289d4f18dd7f1530bad0863492546fd36d6a43617d7e8388b289f18b41a357

You are advised to save the preceding information for subsequent operations.

5.2 Copying a Configuration File

Operate on the master node.

[root@master01 ~]# mkdir -p $HOME/.kube
[root@master01 ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
#There is no need to change the owner and owner group. We can directly operate as root user

5.3 Configuring the Network

Here we use Calico as the network plugin; for details, refer to this document: docs.projectcalico.org/getting-sta…

Operate on the master node.

[root@master01 ~]# kubectl apply -f https://docs.projectcalico.org/v3.9/manifests/calico.yaml

This step pulls a few more Docker images, which again need to be prepared in advance because of the network problems. (The images below are also included in the network disk given in Section 4.1.)

5.3.1 Downloading the Calico Docker Images
#Download calico.yaml
[root@master01 ~]# wget https://docs.projectcalico.org/v3.9/manifests/calico.yaml

#See which Docker images are required
[root@master01 ~]# cat calico.yaml | grep image
          image: calico/cni:v3.9.5
          image: calico/cni:v3.9.5
          image: calico/pod2daemon-flexvol:v3.9.5
          image: calico/node:v3.9.5
          image: calico/kube-controllers:v3.9.5
#Download the Docker images as described in Section 4.1

#After the download completes, check
[root@master01 ~]# docker images | grep calico
calico/kube-controllers                                         v3.9.5   8b9346e36939   3 minutes ago   56MB
calico/pod2daemon-flexvol                                       v3.9.5   3f5469b53c71   4 minutes ago   3.78MB
calico/node                                                     v3.9.5   81bcacea423a   5 minutes ago   195MB
registry.cn-hangzhou.aliyuncs.com/heqingbao-docker/calico_cni   latest   c9be39188163   7 minutes ago   167MB
calico/cni                                                      v3.9.5   c9be39188163   7 minutes ago   167MB
5.3.2 Modifying the Calico Resource Manifest file
# Calico's automatic interface detection can pick the wrong NIC (which can break the network components after a cluster restart), so change the detection method by adding lines 607 and 608:
604             # Auto-detect the BGP IP address.
605             - name: IP
606               value: "autodetect"
607             - name: IP_AUTODETECTION_METHOD
608               value: "interface=enp0s3.*"

Note the NIC name after interface=. I created the VMs with VirtualBox, where the NIC is named enp0s3; adjust it to match your environment.

Also set CALICO_IPV4POOL_CIDR to the pod-network-cidr that was specified during kubeadm initialization:

621             - name: CALICO_IPV4POOL_CIDR
622               value: "172.16.0.0/16"
5.3.3 Applying the Calico Resource Manifest file
[root@master01 ~]# kubectl apply -f calico.yaml

#After executing, let's take a look at the cluster node status
[root@master01 ~]# kubectl get nodes
NAME       STATUS   ROLES    AGE   VERSION
master01   Ready    master   65m   v1.17.3

#The master01 node is Ready

5.4 Adding Worker Nodes to the Cluster

Remember the kubeadm join command printed at the end of kubeadm init?

(The token is only valid for a limited time, 24 hours by default; after it expires you need to create a new one with kubeadm, as shown below.)
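
If the token has already expired, a new join command can be printed on the master at any time:

[root@master01 ~]# kubeadm token create --print-join-command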

Run the join command on worker01:

[root@worker01 ~]# kubeadm join 192.168.0.114:6443 --token 2xjokn.mxcdy3sv2teg5qtr \
    --discovery-token-ca-cert-hash sha256:e0289d4f18dd7f1530bad0863492546fd36d6a43617d7e8388b289f18b41a357

#Wait a moment and see the following output indicating success
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Again, go to the worker02 node and perform the same operation.

Check the cluster node status at master01.

[root@master01 ~]# kubectl get nodes
NAME       STATUS   ROLES    AGE     VERSION
master01   Ready    master   76m     v1.17.3
worker01   Ready    <none>   7m55s   v1.17.3
worker02   Ready    <none>   9m10s   v1.17.3

OK, all three nodes are in the Ready state; the three-node cluster is up.

5.5 Verifying K8s Cluster Availability

#View node status
[root@master01 ~]# kubectl get nodes
NAME       STATUS   ROLES    AGE   VERSION
master01   Ready    master   10h   v1.17.3
worker01   Ready    <none>   8h    v1.17.3
worker02   Ready    <none>   23m   v1.17.3
#Check the cluster health status
[root@master01 ~]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true"}

#Or view cluster information
[root@master01 ~]# kubectl cluster-info
Kubernetes master is running at https://192.168.0.114:6443
KubeDNS is running at https://192.168.0.114:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

#Or view the status of all pods in the kube-system namespace
[root@master01 ~]# kubectl get pod --namespace kube-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-6b9d4c8765-t2rw8   1/1     Running   1          9h
calico-node-4f5sr                          1/1     Running   1          9h
calico-node-688ct                          1/1     Running   1          26m
calico-node-dhf5t                          1/1     Running   2          8h
coredns-6955765f44-jr5mk                   1/1     Running   1          10h
coredns-6955765f44-tw67g                   1/1     Running   1          10h
etcd-master01                              1/1     Running   1          10h
kube-apiserver-master01                    1/1     Running   1          10h
kube-controller-manager-master01           1/1     Running   1          10h
kube-proxy-9k8tk                           1/1     Running   1          10h
kube-proxy-wkr9x                           1/1     Running   2          8h
kube-proxy-wxp9d                           1/1     Running   1          26m
kube-scheduler-master01                    1/1     Running   1          10h
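
As an optional extra check (not part of the original steps), you can schedule a test workload and confirm it runs on the worker nodes; the deployment name and image below are just examples:

#Create a test deployment and see which node the pod lands on
[root@master01 ~]# kubectl create deployment nginx-test --image=nginx
[root@master01 ~]# kubectl get pods -o wide
#Clean up afterwards
[root@master01 ~]# kubectl delete deployment nginx-test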

You can see that the cluster is already available.


Welcome to follow the public account "non-famous developers" for more content.