1. Prerequisites for installation

1.1. Disable firewalld

 systemctl stop firewalld; systemctl disable firewalld 

1.2. Disable SELinux

 setenforce 0; sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config 

1.3. Disable swap

swapoff -a
sed -i 's|^/dev/mapper/centos-swap|#/dev/mapper/centos-swap|' /etc/fstab

Alternatively, you can skip the swap check instead of disabling swap: add KUBELET_EXTRA_ARGS="--fail-swap-on=false" to /etc/sysconfig/kubelet (vim /etc/sysconfig/kubelet), and pass --ignore-preflight-errors=swap to kubeadm at initialization.

1.4. Use Aliyun YUM source:

 wget -O /etc/yum.repos.d/CentOS7-Aliyun.repo http://mirrors.aliyun.com/repo/Centos-7.repo  

1.5. Modify kernel parameters

# Add the following configuration
[root@master ~]# vim /etc/sysctl.conf
net.ipv6.conf.all.disable_ipv6 = 0
net.ipv6.conf.default.disable_ipv6 = 0
net.ipv6.conf.lo.disable_ipv6 = 0
net.ipv6.conf.all.forwarding = 1

# Apply the changes
[root@master ~]# sysctl -p

# Enable IPv6 networking
[root@master ~]# vim /etc/sysconfig/network
NETWORKING_IPV6=yes

[root@master ~]# vim /etc/sysconfig/network-scripts/ifcfg-eth0
IPV6INIT=yes
IPV6_AUTOCONF=yes

1.6. Configure local resolution

Map the node's local IPv6 address to its hostname:

vim /etc/hosts
2003:ac18::30a:1  master

2. Install Docker
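This section is left empty in the original. As a sketch, Docker can be installed on CentOS 7 from the Aliyun docker-ce mirror roughly as follows (the repo URL and the choice of the latest docker-ce package are assumptions; adjust to your environment):

```shell
# Install yum-utils for yum-config-manager, then add the Aliyun docker-ce repo
yum install -y yum-utils
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

# Install Docker and enable it at boot
yum install -y docker-ce
systemctl enable docker
systemctl start docker
```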

3. Kubernetes cluster installation

3.1 Use the Aliyun Kubernetes YUM source

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

3.2 Installing kubeadm, kubelet, and kubectl

yum install -y kubelet kubeadm kubectl

3.3 Starting kubelet

systemctl enable kubelet; systemctl start kubelet

3.4 Downloading the Images

[root@node1 ~]# kubeadm config images list
k8s.gcr.io/kube-apiserver:v1.20.4
k8s.gcr.io/kube-controller-manager:v1.20.4
k8s.gcr.io/kube-scheduler:v1.20.4
k8s.gcr.io/kube-proxy:v1.20.4
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns:1.7.0

Pull the images from an accessible mirror, then retag them to the k8s.gcr.io names:

set -o nounset
KUBE_VERSION=v1.20.4
KUBE_PAUSE_VERSION=3.2
ETCD_VERSION=3.4.13-0
DNS_VERSION=1.7.0
GCR_URL=k8s.gcr.io
# Mirror hosting the Kubernetes images; adjust to one reachable from your network
DOCKERHUB_URL=registry.aliyuncs.com/google_containers
images=(kube-proxy:${KUBE_VERSION}
kube-scheduler:${KUBE_VERSION}
kube-controller-manager:${KUBE_VERSION}
kube-apiserver:${KUBE_VERSION}
pause:${KUBE_PAUSE_VERSION}
etcd:${ETCD_VERSION}
coredns:${DNS_VERSION})
for imageName in ${images[@]} ; do
  docker pull $DOCKERHUB_URL/$imageName
  docker tag $DOCKERHUB_URL/$imageName $GCR_URL/$imageName
  docker rmi $DOCKERHUB_URL/$imageName
done

3.5 Modifying the Initial Configuration File

[root@node1 ~]# kubeadm config print init-defaults
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 1.2.3.4
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: node1
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.20.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
scheduler: {}
The configuration modified for IPv6 (saved as init-config-ipv6.yaml):

apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 2003:ac18::30a:1
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: node1
  kubeletExtraArgs:
    node-ip: 2003:ac18::30a:1
---
apiServer:
  extraArgs:
    advertise-address: 2003:ac18::30a:1
    bind-address: '::'
    etcd-servers: https://[2003:ac18::30a:1]:2379
    service-cluster-ip-range: fd00:10:96::/112
apiVersion: kubeadm.k8s.io/v1beta2
controllerManager:
  extraArgs:
    allocate-node-cidrs: 'true'
    bind-address: '::'
    cluster-cidr: fd00:10:16::/64
    node-cidr-mask-size: '64'
    service-cluster-ip-range: fd00:10:96::/112
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
    extraArgs:
      name: master
      advertise-client-urls: https://[2003:ac18::30a:1]:2379
      initial-advertise-peer-urls: https://[2003:ac18::30a:1]:2380
      initial-cluster: master=https://[2003:ac18::30a:1]:2380
      listen-client-urls: https://[2003:ac18::30a:1]:2379
      listen-peer-urls: https://[2003:ac18::30a:1]:2380
kind: ClusterConfiguration
networking:
  dnsDomain: cluster.local
  serviceSubnet: fd00:10:96::/112
scheduler:
  extraArgs:
    bind-address: '::'
kubernetesVersion: v1.20.0
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
failSwapOn: false
address: '::'
clusterDNS:
- fd00:10:96::a
healthzBindAddress: '::1'
healthzPort: 10248
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: fd00:10:16::/64
mode: ipvs
ipvs:
  minSyncPeriod: 5s
  syncPeriod: 5s
  scheduler: "wrr"

3.6 Creating a Cluster

kubeadm init --config=init-config-ipv6.yaml

After initialization completes, check whether the pods and services were created successfully:
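After a successful init, kubeadm prints follow-up instructions; the usual step to make kubectl work against the new cluster (taken from kubeadm's standard output) is:

```shell
# Copy the admin kubeconfig so kubectl can reach the new cluster
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
```

Note that the coredns pods stay unready (0/1) until a CNI plug-in is installed in the next section.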

NAMESPACE     NAME                                   READY   STATUS    RESTARTS   AGE     IP                 NODE    
kube-system   coredns-74ff55c5b-54vlp                0/1     Running   0          2m41s   fd00:10:16::6      node1   
kube-system   coredns-74ff55c5b-csw9f                0/1     Running   0          2m41s   fd00:10:16::5      node1  
kube-system   etcd-node1                             1/1     Running   0          2m41s   2003:ac18::30a:1   node1   
kube-system   kube-apiserver-node1                   1/1     Running   0          2m41s   2003:ac18::30a:1   node1   
kube-system   kube-controller-manager-node1          1/1     Running   0          2m41s   2003:ac18::30a:1   node1       
kube-system   kube-proxy-9fbb6                       1/1     Running   0          2m41s   2003:ac18::30a:1   node1   
kube-system   kube-scheduler-node1                   1/1     Running   0          2m41s   2003:ac18::30a:1   node1   

3.7 Installing the CNI Plug-in (kube-ovn)

Edit install.sh to enable IPv6:

vim install.sh
IPv6=${IPv6:-true}

Run the installation script with sh install.sh, then check the installation status with kubectl get pod -A -owide:

NAMESPACE     NAME                                   READY   STATUS    RESTARTS   AGE     IP                 NODE    NOMINATED NODE   READINESS GATES
default       hello-deployment-6d48f47cf9-rrt8w      1/1     Running   0          3h23m   fd00:10:16::8      node1   <none>           <none>
default       hello-deployment-6d48f47cf9-v4rsz      1/1     Running   0          3h23m   fd00:10:16::9      node1   <none>           <none>
kube-system   coredns-74ff55c5b-54vlp                1/1     Running   0          3h25m   fd00:10:16::6      node1   <none>           <none>
kube-system   coredns-74ff55c5b-csw9f                1/1     Running   0          3h25m   fd00:10:16::5      node1   <none>           <none>
kube-system   etcd-node1                             1/1     Running   0          3h29m   2003:ac18::30a:1   node1   <none>           <none>
kube-system   kube-apiserver-node1                   1/1     Running   0          3h29m   2003:ac18::30a:1   node1   <none>           <none>
kube-system   kube-controller-manager-node1          1/1     Running   0          3h29m   2003:ac18::30a:1   node1   <none>           <none>
kube-system   kube-ovn-cni-jbb7f                     1/1     Running   0          3h26m   2003:ac18::30a:1   node1   <none>           <none>
kube-system   kube-ovn-controller-6554f7b67d-jfqrv   1/1     Running   0          3h26m   2003:ac18::30a:1   node1   <none>           <none>
kube-system   kube-ovn-pinger-wfx8j                  1/1     Running   0          3h25m   fd00:10:16::7      node1   <none>           <none>
kube-system   kube-proxy-9fbb6                       1/1     Running   0          3h29m   2003:ac18::30a:1   node1   <none>           <none>
kube-system   kube-scheduler-node1                   1/1     Running   0          3h29m   2003:ac18::30a:1   node1   <none>           <none>
kube-system   ovn-central-55f5b55587-jqqbt           2/2     Running   0          3h26m   2003:ac18::30a:1   node1   <none>           <none>
kube-system   ovs-ovn-rqnpj                          1/1     Running   0          3h26m   2003:ac18::30a:1   node1   <none>           <none>

4. Problems that may occur during installation

!!!!! Kubelet does not start properly. Solution:
1. Run systemctl status kubelet to check whether kubelet is running. If it is not, inspect the logs with journalctl -xeu kubelet; a common cause is that kubelet and Docker are configured with different cgroup drivers (cgroupfs vs. systemd).
2. If kubelet starts normally, check whether the etcd container started properly with docker ps -a | grep etcd. If it did not, verify the etcd extraArgs in the initial configuration: name must be master (the hostname), and initial-cluster must be master=https://[2003:ac18::30a:1]:2380.
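For the cgroup driver mismatch in point 1, one common fix is to align Docker with the driver kubelet uses. The sketch below assumes you standardize on systemd; you could equally align kubelet to cgroupfs instead:

```shell
# Check which cgroup driver Docker currently uses
docker info | grep -i cgroup

# Switch Docker to the systemd driver (assumption: kubelet is also set to systemd)
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl restart docker
systemctl restart kubelet
```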

!!!!! Kube-ovn installed but all services cannot be accessed. View the kube-proxy logs:

[root@node1 ~]# kubectl logs -n kube-system kube-proxy-9fbb6
I0225 01:42:06.301616       1 server_others.go:139] kube-proxy node IP is an IPv6 address (2004:ac18::30a:1), assume IPv6 operation
W0225 01:42:06.316024       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
I0225 01:42:06.316110       1 server_others.go:185] Using iptables Proxier.
W0225 01:42:06.316125       1 server_others.go:455] detect-local-mode set to ClusterCIDR, but no cluster CIDR defined
I0225 01:42:06.316129       1 server_others.go:466] detect-local-mode: ClusterCIDR, defaulting to no-op detect-local
I0225 01:42:06.317685       1 server.go:650] Version: v1.20.4
I0225 01:42:06.318164       1 conntrack.go:52] Setting nf_conntrack_max to 262144
I0225 01:42:06.318451       1 config.go:150] Starting service config controller
I0225 01:42:06.318469       1 shared_informer.go:150] Waiting for caches to sync for service config
I0225 01:42:06.318506       1 config.go:224] Starting endpoint slice config controller
I0225 01:42:06.318539       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I0225 01:42:06.418631       1 shared_informer.go:247] Caches are synced for endpoint slice config
I0225 01:42:06.418647       1 shared_informer.go:247] Caches are synced for service config

The warning "detect-local-mode set to ClusterCIDR, but no cluster CIDR defined" means that clusterCIDR is not set in the kube-proxy configuration: the default configuration printed by kubeadm contains no kube-proxy section at all.

Solution: Add the following configuration to the initial configuration

apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: fd00:10:16::/64
mode: ipvs
ipvs:
  minSyncPeriod: 5s
  syncPeriod: 5s
  scheduler: "wrr"

Then execute kubeadm init --config again. The following configuration types can be included in the file:

  • InitConfiguration
  • ClusterConfiguration
  • KubeProxyConfiguration
  • KubeletConfiguration
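A single file passed to kubeadm init --config can carry all four kinds, separated by ---. A minimal skeleton (values here are placeholders taken from this article's setup) might look like:

```yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 2003:ac18::30a:1
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.20.0
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
failSwapOn: false
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: fd00:10:16::/64
```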