Install K8s using kubeadm in a macOS environment.

· Install Docker CE
· Set up the prerequisite environment for K8s
· Install the K8s v1.19.4 master node
· Install the K8s v1.19.4 worker node

My local environment:

· Operating system: macOS
· VM: VirtualBox

Create two CentOS 7 VMs in VirtualBox, using the CentOS-7-x86_64-DVD-2009.iso image.

Give each VM 2 CPU cores and more than 2 GB of memory, set the network to bridged mode, and keep the defaults for everything else.

Master node IP address: 192.168.31.9 (this IP is needed later in the configuration commands)
Worker node IP address: 192.168.31.99

2. Kubernetes pre-environment preparation

```shell
# Disable the firewall
systemctl disable firewalld
systemctl stop firewalld

# Disable SELinux
# Temporarily disable SELinux
setenforce 0
# Permanently disable SELinux
sed -i 's/SELINUX=permissive/SELINUX=disabled/' /etc/sysconfig/selinux
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config

# Disable the swap partition temporarily
swapoff -a
# Disable the swap partition permanently: comment out the swap line in /etc/fstab
sed -i 's/.*swap.*/#&/' /etc/fstab

# Adjust kernel parameters
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
```
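The permanent swap and SELinux changes above are plain `sed` rewrites. A minimal sketch that exercises the same edits against throwaway copies (the temp files are stand-ins for the real /etc files; GNU sed, as on the CentOS guests, is assumed):

```shell
# Work on throwaway copies instead of the real /etc files
fstab=$(mktemp)
cat > "$fstab" <<'EOF'
/dev/mapper/centos-root /    xfs   defaults 0 0
/dev/mapper/centos-swap swap swap  defaults 0 0
EOF

selinux=$(mktemp)
echo 'SELINUX=enforcing' > "$selinux"

# Same edits as in the guide: comment out the swap line, disable SELinux
sed -i 's/.*swap.*/#&/' "$fstab"
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' "$selinux"

grep '^#' "$fstab"    # the swap line is now commented out
cat "$selinux"        # SELINUX=disabled
```

The `#&` replacement keeps the matched line intact and just prefixes it with `#`, so the original swap entry is preserved as a comment and can be restored later.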

Note: these operations must be performed on both the master and the worker nodes.

3. Docker installation

```shell
# Download the Aliyun Docker CE repo file
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
# Install this specific version of Docker CE
yum install docker-ce-19.03.13 -y
# Start Docker and enable it at boot
systemctl enable docker && systemctl start docker
```

4. Install the K8s v1.19.4 master node

Make sure steps 2 and 3 above have been completed.

Install kubeadm, kubelet, and kubectl

```shell
# Configure the Aliyun K8s yum repo
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
# Install kubeadm, kubectl, and kubelet
yum install -y kubectl-1.19.4 kubeadm-1.19.4 kubelet-1.19.4
```

Verify that kubeadm, kubelet, and kubectl were installed:

```shell
yum list installed | grep kube
```

Start the kubelet service

```shell
systemctl enable kubelet && systemctl start kubelet
```
1. Run kubeadm init, changing --apiserver-advertise-address to the IP address of your master node, then wait patiently.
```shell
kubeadm init --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.19.4 \
  --apiserver-advertise-address 192.168.31.9 \
  --service-cidr=10.96.0.0/12 \
  --pod-network-cidr=10.244.0.0/16
```
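The two CIDR flags must not overlap: services get virtual IPs from 10.96.0.0/12 while pods get addresses from 10.244.0.0/16. A quick sanity-check sketch (the `in_cidr16` helper is hypothetical, written here just for illustration and limited to /16 networks):

```shell
# Hypothetical helper: test whether an IPv4 address sits inside a /16 network
in_cidr16() {
  local ip=$1 net=$2          # e.g. in_cidr16 10.244.3.7 10.244.0.0/16
  # Compare the first two octets of the address against the network prefix
  [ "${ip%.*.*}" = "${net%.0.0/16}" ]
}

in_cidr16 10.244.3.7 10.244.0.0/16 && echo "pod IP is inside the pod network"
in_cidr16 10.96.0.1  10.244.0.0/16 || echo "service IP is outside the pod network"
```

If the two ranges overlapped, pod and service traffic could be misrouted, which is why kubeadm asks for them separately.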

After kubeadm init completes, it prints follow-up commands; copy and paste them as prompted.

K8s will prompt you to run the following commands:

```shell
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
1. Note: kubeadm init also prints the command for joining worker nodes to the cluster. Its token is valid for 24 hours; if you lose it, run the following command to generate a new one.
```shell
kubeadm token create --print-join-command
```
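Bootstrap tokens have a fixed shape: a 6-character ID, a dot, and a 16-character secret, all lowercase alphanumeric. That makes it easy to spot a truncated copy/paste. A quick format check, using the token that appears later in this guide as sample input:

```shell
# Bootstrap token format: [a-z0-9]{6}.[a-z0-9]{16}
token="sdr6ls.nfgsrbwwjc8pisc0"
if echo "$token" | grep -Eq '^[a-z0-9]{6}\.[a-z0-9]{16}$'; then
  echo "token looks well-formed"
else
  echo "token is malformed or truncated"
fi
```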
  1. Check that the master components are all up
```shell
kubectl get cs
```

At first only etcd may report Healthy while the scheduler and controller-manager do not. For a fix, see: blog.csdn.net/qq_38359135…
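A commonly reported cause on v1.19 is the `--port=0` flag in the scheduler and controller-manager static pod manifests, which disables the insecure health port that `kubectl get cs` probes. The usual fix is to comment that flag out and let kubelet recreate the pods. A sketch of that edit, run here against a throwaway copy rather than the real /etc/kubernetes/manifests files:

```shell
# Stand-in for /etc/kubernetes/manifests/kube-scheduler.yaml
m=$(mktemp)
cat > "$m" <<'EOF'
    - --leader-elect=true
    - --port=0
EOF

# Comment out the --port=0 line; on a real master, kubelet then
# notices the manifest change and restarts the static pod
sed -i 's/- --port=0/#&/' "$m"
grep 'port=0' "$m"
```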

1. After the master node is installed, run kubectl get nodes to check the node status; the master will show NotReady until a network plugin is installed.

5. Install the K8s v1.19.4 worker node

  1. Install kubeadm and kubelet
```shell
# Configure the Aliyun K8s yum repo
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
# Install kubeadm and kubelet
yum install -y kubeadm-1.19.4 kubelet-1.19.4
# Start kubelet
systemctl enable kubelet && systemctl start kubelet
```
1. To join the cluster, run the join command printed by kubeadm init on the master. The token and hash differ for every cluster. Wait a few minutes for the node to come up.
```shell
kubeadm join 192.168.31.9:6443 --token sdr6ls.nfgsrbwwjc8pisc0 \
  --discovery-token-ca-cert-hash sha256:d1f2ee5a5a14de1fb0a1a5ebe62efad8d0b541831cfb4fa86420519ce0217c78
```
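The `--discovery-token-ca-cert-hash` value is the SHA-256 of the cluster CA's public key. On the master it can be recomputed from /etc/kubernetes/pki/ca.crt with the pipeline kubeadm documents; the sketch below runs that same pipeline against a throwaway self-signed certificate so it works anywhere:

```shell
# Throwaway cert as a stand-in for /etc/kubernetes/pki/ca.crt
crt=$(mktemp); key=$(mktemp)
openssl req -x509 -newkey rsa:2048 -nodes -keyout "$key" -out "$crt" \
  -subj "/CN=demo-ca" -days 1 2>/dev/null

# Extract the public key, DER-encode it, and hash it
hash=$(openssl x509 -pubkey -in "$crt" \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "sha256:$hash"
```

On a real master, point `-in` at /etc/kubernetes/pki/ca.crt and the output matches the hash in the join command.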

6. Install Flannel (master node)

  1. Download the kube-flannel.yml file
```shell
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
```

Because this URL may be unreachable, the file's contents are included in the appendix; you can copy and paste them into kube-flannel.yml instead.

1. Install Flannel

In the directory containing kube-flannel.yml, run:

```shell
kubectl apply -f kube-flannel.yml
```

7. Done

Appendix

kube-flannel.yml file contents

```yaml
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
    - configMap
    - secret
    - emptyDir
    - hostPath
  allowedHostPaths:
    - pathPrefix: "/etc/cni/net.d"
    - pathPrefix: "/etc/kube-flannel"
    - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups: ['extensions']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.13.0
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.13.0
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
```