Machines to prepare

Three machines, one as master and two as slaves, each with 16 GB of memory and 16 cores. Docker version: 18.06.01. K8s version: 1.15.0. Ansible is a powerful SSH-based automated O&M tool and makes this kind of multi-node setup very convenient.

1. Environment preparation (run on all three nodes)

1. Run the following command so that the three servers can SSH to each other without a password:

ssh-copy-id -i ~/.ssh/id_rsa.pub root@<hostname> -p 22022

Note: Ansible does not need to be installed on all three nodes. It can run on the master or on a machine outside the cluster, as long as the host mappings are configured in the /etc/hosts file.

2. Install Ansible and configure the inventory (by default /etc/ansible/hosts):

yum install -y ansible

[k8s-master]
172.18.0.171 ansible_ssh_user=root ansible_ssh_port=22022    # the default SSH port is 22
[k8s-slave]
172.18.0.172 ansible_ssh_user=root ansible_ssh_port=22022
172.18.0.173 ansible_ssh_user=root ansible_ssh_port=22022

An example command looks like this:

ansible all -m shell -a "systemctl start docker"

To run it only against the master group:

ansible k8s-master -m shell -a "systemctl start docker"

3. Disable the firewall and SELinux:

systemctl stop firewalld && systemctl disable firewalld
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config && setenforce 0

4. Close the swap partition:

swapoff -a    # temporary
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab    # permanent

5. Change the hostnames:

hostnamectl set-hostname master
hostnamectl set-hostname slave1
hostnamectl set-hostname slave2

6. Change the hosts file:

172.18.0.171 master
172.18.0.172 slave1
172.18.0.173 slave2

7. Kernel tuning: pass bridged IPv4 traffic to the iptables chains:

cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system

8. Synchronize the system time:

yum install -y ntpdate
ntpdate time.windows.com
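Steps 3 through 8 are identical on every node, so they can be bundled into one script and pushed out with Ansible's script module. A minimal sketch, assuming the inventory above (the file name k8s-prep.sh is just an example):

#!/bin/bash
# k8s-prep.sh -- example name; push to all nodes with:
#   ansible all -m script -a "./k8s-prep.sh"
set -euo pipefail

# disable firewall and SELinux (step 3)
systemctl stop firewalld && systemctl disable firewalld
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
setenforce 0 || true    # ignore the error if SELinux is already disabled

# turn off swap now and across reboots (step 4)
swapoff -a
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab

# pass bridged IPv4 traffic to the iptables chains (step 7)
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system

# synchronize the time (step 8)
yum install -y ntpdate
ntpdate time.windows.com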

2. Docker installation (run on all three nodes)

Install the prerequisite packages and add the Docker yum repository:

yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

Check the available Docker versions:

yum list docker-ce --showduplicates | sort -r

Install a specific Docker version:

yum install -y docker-ce-18.09.6 docker-ce-cli-18.09.6 containerd.io

or install the latest one:

yum install -y docker-ce docker-ce-cli containerd.io

Start Docker:

systemctl start docker
systemctl enable docker

Install command completion:

ansible all -m shell -a "yum -y install bash-completion"
ansible all -m shell -a "source /etc/profile.d/bash_completion.sh"
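A quick way to confirm that every node ended up with the same Docker version and a running daemon; a small sketch using the Ansible inventory from section 1:

ansible all -m shell -a "docker --version"    # the same version should print on every host
ansible all -m shell -a "systemctl is-active docker && systemctl is-enabled docker"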

3. Install K8s (run on all three nodes)

1. Apply for an Aliyun Docker accelerator at https://cr.console.aliyun.com/cn-hangzhou/instances/mirrors and configure it:

sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://wv8lwzcp.mirror.aliyuncs.com"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker

2. Add the K8s yum source:

cat << EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

Update the yum cache:

yum clean all && yum -y makecache

3. Install kubeadm, kubelet and kubectl:

yum install -y kubelet-1.15.0 kubeadm-1.15.0 kubectl-1.15.0
systemctl enable kubelet
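Before initializing anything, it is worth checking that all three nodes picked up matching component versions; a hedged spot check over Ansible:

ansible all -m shell -a "kubeadm version -o short"       # expect v1.15.0 everywhere
ansible all -m shell -a "kubelet --version"
ansible all -m shell -a "kubectl version --client --short"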

4. Configure the master node.

Initialize the control plane:

kubeadm init --apiserver-advertise-address=172.18.0.171 --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.15.0 --service-cidr=10.1.0.0/16 --pod-network-cidr=10.244.0.0/16

If a timeout is reported, the swap partition may still be on. The following cleanup script resets the node so that kubeadm init can be retried:

#!/bin/bash
rm -rf /etc/kubernetes/*
rm -rf ~/.kube/*
rm -rf /var/lib/etcd/*
lsof -i :6443|grep -v "PID"|awk '{print "kill -9",$2}'|sh
lsof -i :10251|grep -v "PID"|awk '{print "kill -9",$2}'|sh
lsof -i :10252|grep -v "PID"|awk '{print "kill -9",$2}'|sh
lsof -i :10250|grep -v "PID"|awk '{print "kill -9",$2}'|sh
lsof -i :2379|grep -v "PID"|awk '{print "kill -9",$2}'|sh
lsof -i :2380|grep -v "PID"|awk '{print "kill -9",$2}'|sh
swapoff -a && kubeadm reset && systemctl daemon-reload && systemctl restart kubelet && iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X

On success, kubeadm prints:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.18.0.171:6443 --token mi2ip9.629f41c46tvh79g1 \
    --discovery-token-ca-cert-hash sha256:649afe0a5c0f9599a0b4a6e4baa6aac3e3e6007adf98d215f495182d31d2dfac

Follow those instructions as required:

[root@master ~]# cat kube_preinstall.sh
#!/bin/bash
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
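Once the kubeconfig is in place, a quick sanity check that the control plane is answering (kubectl get cs is the short form of componentstatuses):

kubectl cluster-info
kubectl get cs    # scheduler, controller-manager and etcd-0 should all report Healthy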

By default the master node carries a taint and does not participate in scheduling. To remove the taint from the master node, do the following:

[root@master grafana]# kubectl describe node master|grep -i taints
Taints:             node-role.kubernetes.io/master:NoSchedule
[root@master grafana]# kubectl taint nodes master node-role.kubernetes.io/master-
node/master untainted
[root@master grafana]# kubectl describe node master|grep -i taints
Taints:             <none>
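If the master should stop scheduling ordinary workloads again later, the taint can simply be re-applied:

kubectl taint nodes master node-role.kubernetes.io/master=:NoSchedule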

Add the child nodes to the master (run on both slave nodes)

kubeadm join 172.18.0.171:6443 --token mi2ip9.629f41c46tvh79g1 \
    --discovery-token-ca-cert-hash sha256:649afe0a5c0f9599a0b4a6e4baa6aac3e3e6007adf98d215f495182d31d2dfac

If the token above has been forgotten, run the following commands to list the tokens and regenerate the SHA256 hash of the CA certificate:

[root@master ~]# kubeadm token list
TOKEN                     TTL         EXPIRES                     USAGES                   DESCRIPTION                                                EXTRA GROUPS
zxjr2d.ecnowzegec34w8vj   <invalid>   2021-02-04T13:37:09+08:00   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token

[root@master ~]# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
d7d8d27c50c1ef63cd56e8894e154d6e2861693b8f554460df4eb6fc14ce84aa
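Alternatively, kubeadm can print a complete ready-to-paste join command, which saves assembling the token and hash by hand:

kubeadm token create --print-join-command    # outputs a full "kubeadm join ..." line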

5. Install the flannel network (master node only)

wget https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml

A pitfall: raw.githubusercontent.com may fail to open, probably because DNS resolution of the domain fails. Go to www.ipaddress.com/, enter the domain raw.githubusercontent.com to resolve its IP address, and add the result to the hosts file:

[root@master ~]# cat /etc/hosts
127.0.0.1      localhost localhost.localdomain localhost4 localhost4.localdomain4
::1            localhost localhost.localdomain localhost6 localhost6.localdomain6
172.18.0.171   master
172.18.0.172   slave1
172.18.0.173   slave2
199.232.96.133 raw.githubusercontent.com
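Before retrying the wget, it is worth verifying that the static entry took effect; two hedged checks (the pinned IP is the one resolved above):

getent hosts raw.githubusercontent.com    # should print the pinned address
curl -I --resolve raw.githubusercontent.com:443:199.232.96.133 https://raw.githubusercontent.com/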

After the YAML file is downloaded, modify the image it references

The default image is hosted on quay.io and may fail to pull. If you can make sure quay.io is reachable, there is no need to change the registry; otherwise, modify the following lines of kube-flannel.yml:

169       serviceAccountName: flannel
170       initContainers:
171       - name: install-cni
172         image: easzlab/flannel:v0.11.0-amd64
173         command:
174         - cp
175         args:
176         - -f
177         - /etc/kube-flannel/cni-conf.json
178         - /etc/cni/net.d/10-flannel.conflist
179         volumeMounts:
180         - name: cni
181           mountPath: /etc/cni/net.d
182         - name: flannel-cfg
183           mountPath: /etc/kube-flannel/
184       containers:
185       - name: kube-flannel
186         image: easzlab/flannel:v0.11.0-amd64
187         command:
188         - /opt/bin/flanneld

After the modification is finished, apply the manifest to bring up flannel:

kubectl apply -f kube-flannel.yml

Check whether flannel is running:

ps -ef | grep flannel
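The same check is cleaner from the Kubernetes side; a hedged pair of commands (the app=flannel label and the DaemonSet names come from the stock kube-flannel.yml):

kubectl get ds -n kube-system | grep flannel              # one DaemonSet per architecture
kubectl get pods -n kube-system -l app=flannel -o wide    # expect one Running pod per node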

6. Check the cluster status. If the output looks like the following, all nodes are Ready:

[root@master ~]# kubectl get nodes
NAME     STATUS   ROLES    AGE   VERSION
master   Ready    master   25h   v1.15.0
slave1   Ready    <none>   25h   v1.15.0
slave2   Ready    <none>   25h   v1.15.0

[root@master ~]# kubectl get pod -n kube-system
NAME                                READY   STATUS    RESTARTS   AGE
coredns-bccdc95cf-cpc96             1/1     Running   0          25h
coredns-bccdc95cf-d5fs2             1/1     Running   0          25h
etcd-master                         1/1     Running   0          25h
kube-apiserver-master               1/1     Running   0          25h
kube-controller-manager-master      1/1     Running   0          25h
kube-flannel-ds-amd64-25ztw         1/1     Running   0          25h
kube-flannel-ds-amd64-cqmx8         1/1     Running   0          25h
kube-flannel-ds-amd64-f6mxw         1/1     Running   0          25h
kube-proxy-mz2rb                    1/1     Running   0          25h
kube-proxy-nd9zp                    1/1     Running   0          25h
kube-proxy-s4xfh                    1/1     Running   0          25h
kube-scheduler-master               1/1     Running   0          25h
kubernetes-dashboard-79ddd5-nchbb   1/1     Running   0          21h

7. Functional test

Create a pod and expose its port to verify that it is accessible:

kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort

[root@master ~]# kubectl get pods,svc
NAME                         READY   STATUS    RESTARTS   AGE
pod/nginx-554b9c67f9-gsgsm   1/1     Running   0          25h
pod/redis-686d55dddd-lhhl8   1/1     Running   0          25h

NAME                 TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)          AGE
service/kubernetes   ClusterIP   10.1.0.1      <none>        443/TCP          25h
service/nginx        NodePort    10.1.6.189    <none>        80:30551/TCP     25h
service/redis        NodePort    10.1.228.85   <none>        2379:30642/TCP   25h
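The NodePort (30551 in the output above) should answer on any node's address; a quick smoke test from the master:

curl -s http://172.18.0.171:30551 | grep -i "welcome to nginx"    # the nginx welcome page is expected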

8. Configure kubernetes-dashboard

Download the dashboard manifest:

wget https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml

Modify the YAML file:

[root@k8s-master ~]# vim kubernetes-dashboard.yaml

109 spec:
110   containers:
111   - name: kubernetes-dashboard
112     image: easzlab/kubernetes-dashboard-amd64:v1.10.1    # modify this line
......
157 spec:
158   type: NodePort          # add this line
159   ports:
160     - port: 443
161       targetPort: 8443
162       nodePort: 30001     # add this line
163   selector:
164     k8s-app: kubernetes-dashboard

[root@k8s-master ~]# kubectl apply -f kubernetes-dashboard.yaml

Access the page at https://172.18.0.171:30001. It may be inaccessible because the certificate in the original YAML file is problematic; in this case, you need to generate a certificate manually. Go to the directory:

cd /etc/kubernetes/pki/

1. Create a private key:

[root@master pki]# (umask 077; openssl genrsa -out dashboard.key 2048)

2. Create a certificate signing request:

openssl req -new -key dashboard.key -out dashboard.csr -subj "/O=zkxy/CN=kubernetes-dashboard"

3. Sign the certificate with the cluster CA:

openssl x509 -req -in dashboard.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out dashboard.crt -days 5000

4. Hand the new certificate to the cluster. First delete the dashboard completely:

sudo kubectl -n kube-system delete $(sudo kubectl -n kube-system get pod -o name | grep dashboard)

then create the certificate as a secret for K8s to use:

kubectl create secret generic kubernetes-dashboard-certs --from-file=dashboard.crt=./dashboard.crt --from-file=dashboard.key=./dashboard.key -n kube-system

5. Comment out the Secret section in kubernetes-dashboard.yaml, since the secret is now created manually:

#apiVersion: v1
#kind: Secret
#metadata:
#  labels:
#    k8s-app: kubernetes-dashboard
#  name: kubernetes-dashboard-certs
#  namespace: kube-system
#type: Opaque

6. Recreate the dashboard:

kubectl create -f kubernetes-dashboard.yaml

7. To log in, you need to create a default cluster administrator user:

kubectl create serviceaccount zkxy-admin -n kube-system
kubectl create clusterrolebinding zkxy-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:zkxy-admin

Retrieve the token secret from the namespace:

[root@master ~]# kubectl get secret -n kube-system | grep zkxy-admin
zkxy-admin-token-4dpbz   kubernetes.io/service-account-token   3      22h

Describe the secret to read the token:

[root@master ~]# kubectl describe secret zkxy-admin-token-4dpbz -n kube-system
Name:         zkxy-admin-token-4dpbz
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: zkxy-admin
              kubernetes.io/service-account.uid: 3a169baf-55f9-4cc4-abb5-950962b2315c

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1025 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJ6a3h5LWFkbWluLXRva2VuLTRkcGJ6Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6InpreHktYWRtaW4iLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiIzYTE2OWJhZi01NWY5LTRjYzQtYWJiNS05NTA5NjJiMjMxNWMiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06emt4eS1hZG1pbiJ9.am-UOQPoSEYWWp-CKLu5k9q3Ysh5GksRQBG9zOqNsJ2O_5zWUChdKrPPTlSTGJfz1ZiHtYRuKeRloSKem65IbuSHSfKfI_bKqTioqpzfDQSBMh2Hz4gvmiyJw3sk2g2DRCynjFjjSWB0QDgVemMn7vEPdcnPD0AwFxW0pwSPJI--hkdSbCTfm5ZXtHsvDt4avQGP1BAVw1IWeke9XsRouHurJU9I19-14LXzUWmY7nBceMCf7pWiho68gyea3kIar0JmCMtRJHAWOyWOxojocsfIb2iDsq9eK6SqhgJjXCrDMABUMErjZ-ACIA94e3Q1gbwfpbgihexrdfupk1z-dq

Copy the token above into the login page and you can access the cluster.
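Looking up the secret name by hand can be skipped; a hedged one-liner that prints only the token (assuming the zkxy-admin service account created above):

kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep zkxy-admin | awk '{print $1}') | awk '/^token:/ {print $2}'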

9. Summary

At this point, a K8s cluster with one master and two slaves has been set up. There were many pitfalls along the way; when you run into errors, don't panic, just trace the root cause and solve them. The next article will detail how to migrate an existing microservices architecture (Spring Cloud) to the container cloud.