This article explains how to quickly deploy K8S V1.18 using kubeadm.



For image downloads, domain name resolution, and time synchronization, see the Alibaba open source mirror site.

1. Basic environment

At least 1 Master node and 1 Worker node.

2. Basic configuration

1) Enable hardware virtualization support on the server.
2) Use CentOS 7.5 or later, Minimal install; it can be updated to the latest version.
3) Disable the SELinux and firewalld services:
#sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux
#setenforce 0
#systemctl disable firewalld
#systemctl stop firewalld
4) Set the hostname and configure local resolution in /etc/hosts.
5) Disable swap:
#swapoff -a
#sed -i '/swap/d' /etc/fstab
6) Modify sysctl.conf:
#echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
#echo "net.bridge.bridge-nf-call-ip6tables = 1" >> /etc/sysctl.conf
#echo "net.bridge.bridge-nf-call-iptables = 1" >> /etc/sysctl.conf
#sysctl -p
If sysctl -p reports "Cannot stat /proc/sys/net/bridge/bridge-nf-call-ip6tables: No such file or directory", load the bridge netfilter module and apply again:
#modprobe br_netfilter
#sysctl -p
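These prerequisite changes are easy to miss on one of the nodes. Below is a hedged sketch of a verification helper (the `check` function and the exact set of checks are this article's own additions, not part of kubeadm) that reads the kernel interfaces directly, so it works before any Kubernetes tooling is installed:

```shell
# check: print OK when a value matches the expectation, FAIL otherwise.
check() {
  local desc="$1" actual="$2" expected="$3"
  if [ "$actual" = "$expected" ]; then
    echo "OK   $desc"
  else
    echo "FAIL $desc (got '$actual', want '$expected')"
  fi
}

# SELinux: getenforce prints Enforcing/Permissive/Disabled when present.
selinux=$(getenforce 2>/dev/null || echo "Disabled")
if [ "$selinux" = "Enforcing" ]; then selinux_state=on; else selinux_state=off; fi
check "SELinux not enforcing" "$selinux_state" "off"

# Swap: /proc/swaps has only its header line when swap is fully off.
swap_lines=$(tail -n +2 /proc/swaps 2>/dev/null | wc -l)
check "swap disabled" "$swap_lines" "0"

# IP forwarding must be enabled for kube-proxy / CNI routing.
check "net.ipv4.ip_forward = 1" "$(cat /proc/sys/net/ipv4/ip_forward)" "1"

# br_netfilter exposes these files once the module is loaded.
for f in bridge-nf-call-iptables bridge-nf-call-ip6tables; do
  if [ -f "/proc/sys/net/bridge/$f" ]; then
    check "net.bridge.$f = 1" "$(cat /proc/sys/net/bridge/$f)" "1"
  else
    echo "FAIL net.bridge.$f (br_netfilter module not loaded)"
  fi
done
```

Run it on every node after step 6; any FAIL line points at the step that still needs to be applied on that machine.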

3. Install the Docker service on all nodes

1) If an old version has been installed, remove it first:
#yum -y remove docker-client docker-client-latest docker-common docker-latest docker-logrotate docker-latest-logrotate \
docker-selinux docker-engine
2) Set up the Alibaba Cloud Docker repository and install the Docker service:
#yum -y install yum-utils lvm2 device-mapper-persistent-data nfs-utils xfsprogs wget
#yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
#yum -y install docker-ce docker-ce-cli containerd.io
#systemctl enable docker
#systemctl start docker

4. Install the K8S service on all nodes

1) If an old version has been installed, remove it first:
#yum -y remove kubelet kubeadm kubectl
2) Set up the Alibaba Cloud repository and install the new version:
#cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
#yum -y install kubelet kubeadm kubectl
3) Change the Docker cgroup driver to systemd and configure local registry mirrors. If the value is not changed, an error such as 'detected "cgroupfs" as the Docker cgroup driver ...' may be reported when Worker nodes are joined:
#cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m" },
  "storage-driver": "overlay2",
  "registry-mirrors": [
    "https://kfwkfulq.mirror.aliyuncs.com",
    "https://2lqq34jg.mirror.aliyuncs.com",
    "https://pee6w651.mirror.aliyuncs.com",
    "http://hub-mirror.c.163.com",
    "https://docker.mirrors.ustc.edu.cn",
    "https://registry.docker-cn.com"
  ]
}
EOF
4) Reload the daemon configuration, restart Docker, and enable the kubelet:
#systemctl daemon-reload
#systemctl restart docker
#systemctl enable kubelet
#systemctl start kubelet
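Hand-typed JSON in /etc/docker/daemon.json is a common reason Docker fails to restart in step 4. Below is a hedged sketch that writes the file to a scratch location and validates it before it ever touches /etc/docker (the use of python3 for JSON parsing is this article's assumption; any JSON validator works):

```shell
# Write the daemon.json from this section to a temporary directory and
# check that it is well-formed JSON before installing it for Docker.
tmp=$(mktemp -d)
cat > "$tmp/daemon.json" <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m" },
  "storage-driver": "overlay2",
  "registry-mirrors": [
    "https://kfwkfulq.mirror.aliyuncs.com",
    "https://docker.mirrors.ustc.edu.cn"
  ]
}
EOF

if python3 -m json.tool "$tmp/daemon.json" > /dev/null 2>&1; then
  echo "daemon.json is valid JSON"
  # cp "$tmp/daemon.json" /etc/docker/daemon.json  # then: systemctl restart docker
else
  echo "daemon.json is NOT valid JSON; fix it before restarting docker" >&2
fi
```

A trailing comma or a missing quote makes `dockerd` refuse to start, so validating first costs nothing and saves a confusing outage.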

5. Deploy Master nodes

1) If the Master node needs to be re-initialized, run #kubeadm reset first.
2) Configure environment variables:
#echo "export APISERVER_NAME=master1.lab.com" >> k8s.env.sh
#sh k8s.env.sh
3) Initialize the Master node:
#kubeadm init \
  --apiserver-advertise-address 0.0.0.0 \
  --apiserver-bind-port 6443 \
  --cert-dir /etc/kubernetes/pki \
  --control-plane-endpoint master1.lab.com \
  --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers \
  --kubernetes-version 1.18.2 \
  --pod-network-cidr 10.10.0.0/16 \
  --service-cidr 10.20.0.0/16 \
  --service-dns-domain cluster.local \
  --upload-certs
The parameters used to initialize the control-plane/Master node are described below:

kubeadm init \

--apiserver-advertise-address 0.0.0.0 \ # IP address the API server advertises; 0.0.0.0 means the default network interface. Use this option to bind a specific network adapter.
--apiserver-bind-port 6443 \ # Port the API server binds to; default 6443.
--cert-dir /etc/kubernetes/pki \ # Path where certificates are saved and stored; default "/etc/kubernetes/pki".
--control-plane-endpoint kuber4s.api \ # A stable IP address or DNS name for the control plane. Here kuber4s.api has already been resolved to the local IP in /etc/hosts.
--image-repository registry.cn-hangzhou.aliyuncs.com/google_containers \ # Container registry used to pull the control-plane images; default "k8s.gcr.io".
--kubernetes-version 1.17.3 \ # Specific Kubernetes version for the control plane; default "stable-1".
--node-name master01 \ # Name of the node.
--pod-network-cidr 10.10.0.0/16 \ # CIDR of the Pod network.
--service-cidr 10.20.0.0/16 \ # CIDR of the Service network; default "10.96.0.0/12".
--service-dns-domain cluster.local \ # Cluster DNS domain; default "cluster.local".
--upload-certs # Upload control-plane certificates to the kubeadm-certs Secret.

After initialization, configure kubectl for the current user:
#mkdir -p $HOME/.kube
#cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
#chown $(id -u):$(id -g) $HOME/.kube/config # the same steps can be used to grant kubectl access to an ordinary user
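The --pod-network-cidr and --service-cidr ranges must not overlap each other (or the node network), and kubeadm behaves badly if they do. As a hedged sketch, the overlap rule can be checked in plain shell arithmetic before running kubeadm init (the `ip2int` and `cidr_overlap` helpers are this article's own illustration, not kubeadm tooling):

```shell
# Convert a dotted-quad IPv4 address to a 32-bit integer.
ip2int() {
  local IFS=.
  set -- $1
  echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
}

# cidr_overlap A.B.C.D/len W.X.Y.Z/len -> prints "overlap" or "disjoint".
# Two CIDRs overlap exactly when, under the shorter of the two prefix
# masks, both network addresses fall in the same block.
cidr_overlap() {
  local net1=${1%/*} len1=${1#*/} net2=${2%/*} len2=${2#*/}
  local min mask
  min=$(( len1 < len2 ? len1 : len2 ))
  mask=$(( (0xFFFFFFFF << (32 - min)) & 0xFFFFFFFF ))
  if [ $(( $(ip2int "$net1") & mask )) -eq $(( $(ip2int "$net2") & mask )) ]; then
    echo overlap
  else
    echo disjoint
  fi
}

# The values used by kubeadm init in this article:
cidr_overlap 10.10.0.0/16 10.20.0.0/16   # prints "disjoint" - safe to use together
```

The same check is worth running against the subnet the nodes themselves sit on, since a Pod CIDR that shadows the node network breaks routing in ways that only show up after Calico is installed.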

6. Install the Calico network plug-in

A network plug-in must be installed in the cluster so that Pods can communicate with each other. You only need to operate on the Master node; the other nodes will create the related Pods automatically.
#wget https://docs.projectcalico.org/v3.8/manifests/calico.yaml
The default configuration file uses 192.168.0.0/16 as the Pod IP range; change it to the value used when the cluster was initialized, 10.10.0.0/16 in this example:
#sed -i "s#192.168.0.0/16#10.10.0.0/16#" calico.yaml
#kubectl apply -f calico.yaml
Watch the Pods until they are all Running, then check the node status:
#watch -n 2 kubectl get pods -n kube-system -o wide
#kubectl get nodes -o wide
Generate the script that Worker nodes will use to join the cluster:
#kubeadm token create --print-join-command > node.join.sh
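The sed command above uses `#` as its delimiter because the pattern itself contains `/`, which would otherwise need escaping. A small stand-alone demonstration (cidr-demo.yaml is an invented stand-in for the relevant fragment of the much larger calico.yaml):

```shell
# Create a stand-in for the CALICO_IPV4POOL_CIDR section of calico.yaml.
cat > cidr-demo.yaml <<'EOF'
- name: CALICO_IPV4POOL_CIDR
  value: "192.168.0.0/16"
EOF

# Using '#' as the sed delimiter avoids escaping every '/' in the CIDRs.
sed -i "s#192.168.0.0/16#10.10.0.0/16#" cidr-demo.yaml
cat cidr-demo.yaml   # the value line now reads "10.10.0.0/16"
```

The same substitution applied to the real calico.yaml is what keeps Calico's IP pool in sync with the --pod-network-cidr passed to kubeadm init.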

7. Worker node deployment

1) If the Worker node needs to be re-initialized, run #kubeadm reset first.
2) Copy the environment variable and cluster-join scripts from the Master and run them:
#scp master:/root/k8s.env.sh master:/root/node.join.sh .
#sh k8s.env.sh
#sh node.join.sh
Alternatively, run the join command directly:
#kubeadm join master1.lab.com:6443 --token e1xszv.7fa46uw7intwcbwi \
--discovery-token-ca-cert-hash sha256:2637022ef0928d0b390bf10b246ccf20e00f73966667bc711d683a8d71492e5a
3) Check the Worker status on the Master node:
#kubectl get nodes -o wide
4) Delete a Worker node:
#kubectl delete node <Worker node name>
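The long sha256:... value in the join command can be recomputed at any time from the cluster CA certificate, which helps when the printed join command has been lost. The openssl pipeline below is the one documented by kubeadm; the `ca_cert_hash` function wrapper and the argument-based path are this article's own convenience:

```shell
# Recompute the --discovery-token-ca-cert-hash value from a CA certificate.
# On a real Master the certificate is /etc/kubernetes/pki/ca.crt; the path
# is taken as an argument so the function can be tried anywhere.
ca_cert_hash() {
  echo "sha256:$(openssl x509 -pubkey -in "$1" \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex \
    | sed 's/^.* //')"
}
# On the Master: ca_cert_hash /etc/kubernetes/pki/ca.crt
```

Combined with #kubeadm token create --print-join-command (which regenerates the token as well), this means losing the original kubeadm init output is never a problem.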

8. Add a Master node

1) On the Master node to be added, run:
#kubeadm join master1.lab.com:6443 --token e1xszv.7fa46uw7intwcbwi \
--discovery-token-ca-cert-hash sha256:2637022ef0928d0b390bf10b246ccf20e00f73966667bc711d683a8d71492e5a \
--control-plane --certificate-key 5253fc7e9a4e6204d0683ed2d60db336b3ff64ddad30ba59b4c0bf40d8ccadcd

9. Install the Ingress Controller

1) Run the following on the Master node; for details, refer to https://github.com/nginxinc/kubernetes-ingress/blob/v1.5.3/docs/installation.md
#kubectl apply -f https://kuboard.cn/install-script/v1.16.0/nginx-ingress.yaml
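Once the controller Pods are running, traffic is routed by Ingress resources. A minimal hedged example follows (the host demo.lab.com, the Service name demo-svc, and port 80 are invented placeholders for illustration; networking.k8s.io/v1beta1 is the Ingress API version current for Kubernetes 1.18):

```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: demo-ingress
spec:
  rules:
  - host: demo.lab.com           # placeholder hostname
    http:
      paths:
      - path: /
        backend:
          serviceName: demo-svc  # placeholder Service in the same namespace
          servicePort: 80
```

Save it to a file and apply it with #kubectl apply -f; the controller then forwards requests for that host to the named Service.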

10. Install the Kuboard graphical management interface

1) On the Master node, run:
#kubectl apply -f https://kuboard.cn/install-script/kuboard.yaml
Check the running status; it may take a few minutes to reach the Running state:
#kubectl get pods -l k8s.eip.work/name=kuboard -n kube-system
2) Obtain the tokens used to log in to the interface:
#kubectl -n kube-system get secret $(kubectl -n kube-system get secret | grep kuboard-user | awk '{print $1}') -o go-template='{{.data.token}}' | base64 -d > admin-token.txt
#kubectl -n kube-system get secret $(kubectl -n kube-system get secret | grep kuboard-viewer | awk '{print $1}') -o go-template='{{.data.token}}' | base64 -d > read-only-token.txt
3) Access the management interface at http://<IP address of any Worker node>:32567
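The `base64 -d` step works because Kubernetes stores Secret data base64-encoded, so what the go-template prints is the encoded form. A quick stand-alone illustration of the round trip (the token value is a mock, not a real Kuboard token):

```shell
# Kubernetes Secrets hold values base64-encoded; decoding recovers the
# original token that the Kuboard login page expects.
mock_token="eyJhbGciOiJSUzI1NiJ9.mock-payload"   # invented example value
encoded=$(printf '%s' "$mock_token" | base64 -w0)
decoded=$(printf '%s' "$encoded" | base64 -d)
echo "$decoded"
```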

Reference link

This article was adapted from: Using kubeadm to rapidly deploy K8S v1.18, Alibaba Cloud developer community.