This article describes how to deploy a Kubernetes 1.18.0 cluster with kubeadm on two cloud hosts running CentOS 7 64-bit with dual-core CPUs. The network plugin is Flannel v0.12.0 and the image source is Aliyun. It should serve as a practical reference.

1. Environment

Cloud host, CentOS 7 64-bit, kernel 3.10.0, 8 GB memory, dual-core CPU. Environment requirements and settings: the project directory is $HOME/k8s, and all operations are performed with root permission. Note that k8s requires at least a dual-core CPU. The k8s version deployed in this article is 1.18.0. The deployment images and their versions are as follows:

k8s.gcr.io/kube-apiserver:v1.18.0
k8s.gcr.io/kube-controller-manager:v1.18.0
k8s.gcr.io/kube-scheduler:v1.18.0
k8s.gcr.io/kube-proxy:v1.18.0
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.3-0
k8s.gcr.io/coredns:1.6.7
quay.io/coreos/flannel:v0.12.0-amd64

Note 1: k8s.gcr.io can be replaced with the Aliyun mirror address registry.aliyuncs.com/google_containers. Note 2: different deployment dates may use different k8s versions, with correspondingly different component versions, so the images may need to be downloaded again.
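For example, taking kube-apiserver alone (a minimal sketch of the prefix replacement; the full script appears in section 3.5):

# pull from the Aliyun mirror, then retag to the name kubeadm expects
docker pull registry.aliyuncs.com/google_containers/kube-apiserver:v1.18.0
docker tag registry.aliyuncs.com/google_containers/kube-apiserver:v1.18.0 k8s.gcr.io/kube-apiserver:v1.18.0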

2. Install Docker

Install the system tools:

yum install -y yum-utils device-mapper-persistent-data lvm2

Add a domestic source (Aliyun):

yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

Build cache:

yum makecache 

Installation:

yum install docker

Docker version 1.13.1 was installed for this article. Run the following command to create the /etc/docker/daemon.json file:

cat > /etc/docker/daemon.json <<-EOF
{
  "registry-mirrors": [
    "https://a8qh6yqv.mirror.aliyuncs.com",
    "http://hub-mirror.c.163.com"
  ]
}
EOF


Note: registry-mirrors specifies the image accelerator (registry mirror) addresses.
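A minimal check (not part of the original steps) to make the new daemon.json take effect and confirm the mirrors are picked up:

systemctl daemon-reload
systemctl restart docker
docker info | grep -A 3 -i "registry mirrors"   # should list the two accelerator addresses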

Start docker and view cgroup:

# systemctl start docker
# docker info | grep -i cgroup
Cgroup Driver: systemd

The default cgroup driver is systemd, the same as the one k8s uses.
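If the driver were reported as cgroupfs instead, a common fix (a sketch, merging exec-opts into the daemon.json created above) is:

cat > /etc/docker/daemon.json <<-EOF
{
  "registry-mirrors": [
    "https://a8qh6yqv.mirror.aliyuncs.com",
    "http://hub-mirror.c.163.com"
  ],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl restart docker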

3. Deploy the k8s master host

The cluster consists of one master host and one node. This section covers the master host.

3.1 Disable swap

Edit the /etc/fstab file and comment out the swap mount line, for example:

# swap was on /dev/sda5 during installation
UUID=aaa38da3-6e60-4e9d-bfc6-7128fd05f1c7 none            swap    sw              0       0

Then run:

# swapoff -a
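Optionally, verify that swap is fully disabled (an extra check, not in the original steps):

swapon -s        # should print nothing
free -m          # the Swap line should show 0 total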

3.2 Add a domestic k8s source

Aliyun is selected here:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
        https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

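Optionally, verify the new repo is usable before installing (an extra check, not in the original steps):

yum makecache fast
yum list kubeadm --showduplicates | tail -n 5   # available kubeadm versions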

3.3 Install the tools

Install kubeadm, kubectl, kubelet, kubernetes-cni and other tools.

# yum install kubeadm kubectl kubelet kubernetes-cni

The prompt message is as follows:

Dependencies Resolved
================================================================================
 Package                   Arch      Version              Repository       Size
================================================================================
Installing:
 kubeadm                   x86_64    1.18.0-0             kubernetes      8.8 M
 kubectl                   x86_64    1.18.0-0             kubernetes      9.5 M
 kubelet                   x86_64    1.18.0-0             kubernetes       21 M
 kubernetes-cni            x86_64    0.7.5-0              kubernetes       10 M
Installing for dependencies:
 conntrack-tools           x86_64    1.4.4-5.el7_7.2      updates         187 k
 cri-tools                 x86_64    1.13.0-0             kubernetes      5.1 M
 libnetfilter_cthelper     x86_64    1.0.0-10.el7_7.1     updates          18 k
 libnetfilter_cttimeout    x86_64    1.0.0-6.el7_7.1      updates          18 k
 libnetfilter_queue        x86_64    1.0.2-2.el7_2        base             23 k

Enter y to confirm.

Note: from the above output, the installed version is 1.18.0, with kubernetes-cni 0.7.5.
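If exactly this version must be reproduced later (the repo will eventually carry newer releases), the packages can be pinned explicitly, and kubelet enabled as the init warnings below suggest; a sketch:

yum install -y kubelet-1.18.0-0 kubeadm-1.18.0-0 kubectl-1.18.0-0 kubernetes-cni-0.7.5-0
systemctl enable kubelet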

3.4 Obtain the image versions required for deployment

# kubeadm config images list

The output is as follows:

W0327 16:16:50.268440    3424 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
k8s.gcr.io/kube-apiserver:v1.18.0
k8s.gcr.io/kube-controller-manager:v1.18.0
k8s.gcr.io/kube-scheduler:v1.18.0
k8s.gcr.io/kube-proxy:v1.18.0
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.3-0
k8s.gcr.io/coredns:1.6.7

The warning message can be ignored. These are the image versions matched by this kubeadm release; using different component versions may cause compatibility problems.
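The same command can also print the list with the Aliyun prefix, matching what kubeadm init pulls when --image-repository is set; an optional variant:

kubeadm config images list \
    --kubernetes-version v1.18.0 \
    --image-repository registry.aliyuncs.com/google_containers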

3.5 Pull the image files

In general, the k8s.gcr.io images cannot be downloaded directly from inside China. There are two methods. 1. When initializing k8s, use the Aliyun mirror address so the images download smoothly; see the initialization command below. Or download in advance, replacing the image prefix from the previous section with registry.cn-hangzhou.aliyuncs.com/google_containers.

2. Download the images yourself with the following script, pullk8s.sh (note that the script must be given the executable bit):

#!/bin/bash
# The images below have the "k8s.gcr.io/" prefix removed
images=(
    kube-apiserver:v1.18.0
    kube-controller-manager:v1.18.0
    kube-scheduler:v1.18.0
    kube-proxy:v1.18.0
    pause:3.2
    etcd:3.4.3-0
    coredns:1.6.7
)

for imageName in ${images[@]}; do
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
    docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName k8s.gcr.io/$imageName
    docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
done

Run it:

chmod +x pullk8s.sh
bash pullk8s.sh    # or ./pullk8s.sh
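Optionally, confirm the retagged images afterwards:

docker images | grep k8s.gcr.io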

3.6 Network configuration

Set network configuration:

cat > /etc/cni/net.d/10-mynet.conf <<-EOF
{
    "cniVersion": "0.3.0",
    "name": "mynet",
    "type": "bridge",
    "bridge": "cni0",
    "isGateway": true,
    "ipMasq": true,
    "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16",
        "routes": [
            {"dst": "0.0.0.0/0"}
        ]
    }
}
EOF
cat > /etc/cni/net.d/99-loopback.conf <<-EOF
{
    "cniVersion": "0.3.0",
    "type": "loopback"
}
EOF

In practice, this step can be skipped.

3.7 Download the flannel image

docker pull quay.io/coreos/flannel:v0.12.0-amd64

Note: if the download fails, another method is needed. Flannel image information:

# docker images | grep flannel
quay.io/coreos/flannel   v0.12.0-amd64   4e9f801d2217   2 weeks ago   52.8 MB

Note that the flannel image is downloaded in advance here. Its version is determined by the official YAML file; the address is given below.

3.8 Initialization

Version 1:

kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-bind-port=10010 \
    --image-repository registry.aliyuncs.com/google_containers

--pod-network-cidr specifies the pod network segment that the network plugin will use later. --image-repository specifies the image repository address; the default is k8s.gcr.io, set here to the Aliyun mirror registry.aliyuncs.com/google_containers. --apiserver-bind-port specifies the API server port; the default is 6443, which is occupied by another program on this cloud host, so it is changed. All other parameters keep their defaults.

The preceding command is equivalent to the following command:

kubeadm init \
    --apiserver-advertise-address=192.168.0.102 \
    --apiserver-bind-port=10010 \
    --image-repository registry.aliyuncs.com/google_containers \
    --kubernetes-version v1.18.0 \
    --service-cidr=10.1.0.0/16 \
    --pod-network-cidr=10.244.0.0/16

Version 2, used when the images were pulled in advance with the script above:

kubeadm init --pod-network-cidr=10.244.0.0/16

This article uses version 1 for deployment.

The following information is displayed during initialization:

W0327 16:35:43.258829    4726 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.0
[preflight] Running pre-flight checks
        [WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [izwz9hs1zswgl6frxwsnhhz kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 119.23.174.153]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [izwz9hs1zswgl6frxwsnhhz localhost] and IPs [119.23.174.153 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [izwz9hs1zswgl6frxwsnhhz localhost] and IPs [119.23.174.153 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0327 16:36:10.648368    4726 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0327 16:36:10.649340    4726 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 22.002445 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node izwz9hs1zswgl6frxwsnhhz as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node izwz9hs1zswgl6frxwsnhhz as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 5nx6xk.ufqgazdygjbo31k1
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 119.911.109.901:10010 --token 5nx6xk.ufqgazdygjbo31k1 \
    --discovery-token-ca-cert-hash sha256:fb2b5d905f931b999df435b6c2079fdc5d42959b6b5fb7e2f609b34c1b571a97 

The k8s version is confirmed first; then the configuration files and certificates are created; then the static pods are created; finally, the command to join the cluster is printed. (Studying the k8s concepts in depth is not required during deployment.) When the kubeadm join line appears, initialization has succeeded.

If the join command is forgotten, it can be regenerated by running kubeadm token create --print-join-command on the master:

W0327 16:41:28.351647    6107 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
kubeadm join 123.231.312.123:10010 --token x04h7k.rvx3xeyc0us0aop2     --discovery-token-ca-cert-hash sha256:fb2b5d905f931b999df435b6c2079fdc5d42959b6b5fb7e2f609b34c1b571a97

Note: the token value is different each time, but the hash value stays the same.
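If the whole join command is lost, the token and the CA cert hash can also be obtained separately; a sketch following the standard kubeadm procedure:

kubeadm token list                              # existing bootstrap tokens
# recompute the discovery-token-ca-cert-hash from the cluster CA
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex | sed 's/^.* //'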

As prompted, copy the admin.conf file into the current user's directory. admin.conf will be used again later (it is also copied to the node).

# mkdir -p $HOME/.kube
# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
# sudo chown $(id -u):$(id -g) $HOME/.kube/config

Note: if you switch to root from a normal user, $HOME is still that user's home directory, and the config directory must belong to that user. For example, when the latelee user runs commands with root permission, admin.conf must go under /home/latelee/.kube rather than /root/.kube. If this step is skipped, kubectl commands will report:

The connection to the server localhost:8080 was refused - did you specify the right host or port?
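Alternatively, when working as root, the kubeconfig can simply be pointed at via an environment variable (this only lasts for the current shell):

export KUBECONFIG=/etc/kubernetes/admin.conf
kubectl get nodes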

During initialization, any missing images are downloaded automatically. After initialization, the images are as follows:

# docker images
REPOSITORY                                                        TAG             IMAGE ID       CREATED        SIZE
registry.aliyuncs.com/google_containers/kube-proxy                v1.18.0         43940c34f24f   41 hours ago   117 MB
registry.aliyuncs.com/google_containers/kube-scheduler            v1.18.0         a31f78c7c8ce   41 hours ago   95.3 MB
registry.aliyuncs.com/google_containers/kube-apiserver            v1.18.0         74060cea7f70   41 hours ago   173 MB
registry.aliyuncs.com/google_containers/kube-controller-manager   v1.18.0         d3e55153f52f   41 hours ago   162 MB
quay.io/coreos/flannel                                            v0.12.0-amd64   4e9f801d2217   2 weeks ago    52.8 MB
registry.aliyuncs.com/google_containers/pause                     3.2             80d28bedfe5d   weeks ago      683 kB
registry.aliyuncs.com/google_containers/coredns                   1.6.7           67da37a9a360   8 weeks ago    43.8 MB
registry.aliyuncs.com/google_containers/etcd                      3.4.3-0         303ce5db0e90   5 months ago   288 MB

The POD status is as follows:

# kubectl get pods -n kube-system
NAME                                              READY   STATUS    RESTARTS   AGE
coredns-7ff77c879f-mjbm9                          0/1     Pending   0          6m1s
coredns-7ff77c879f-x7jjn                          0/1     Pending   0          6m1s
etcd-izwz9hs1zswgl6frxwsnhhz                      1/1     Running   0          6m10s
kube-apiserver-izwz9hs1zswgl6frxwsnhhz            1/1     Running   0          6m10s
kube-controller-manager-izwz9hs1zswgl6frxwsnhhz   1/1     Running   0          6m10s
kube-proxy-2mxmx                                  1/1     Running   0          6m1s
kube-scheduler-izwz9hs1zswgl6frxwsnhhz            1/1     Running   0          6m10s

All pods are running except coredns, which is Pending because the network plugin has not yet been deployed. Flannel is used in this article.

3.9 Deploy flannel

Run the following command to deploy flannel:

# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Note: this deploys flannel using the kube-flannel.yml file from the flannel repository; detailed information, such as the image version used, comes from that file. If the URL cannot be reached, manually download github.com/coreos/flan… to the current directory and run kubectl apply -f kube-flannel.yml.
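When the file has been downloaded locally, the image version it references can be checked before applying (optional):

grep -n "image:" kube-flannel.yml   # expected to reference quay.io/coreos/flannel:v0.12.0-amd64 among the arch variants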

# kubectl apply -f kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created

If the flannel image does not exist when flannel is deployed, it is downloaded automatically. During startup, the flannel pod status changes as follows:

kube-flannel-ds-amd64-zk6np  0/1  Init:0/1   0  3s
kube-flannel-ds-amd64-zk6np  1/1  Running    0  9s

This step creates the cni0 and flannel.1 network devices. After flannel is deployed, check the pods:

# kubectl get pod -n kube-system
NAME                                              READY   STATUS    RESTARTS   AGE
coredns-7ff77c879f-mjbm9                          1/1     Running   0          8m46s
coredns-7ff77c879f-x7jjn                          1/1     Running   0          8m46s
etcd-izwz9hs1zswgl6frxwsnhhz                      1/1     Running   0          8m55s
kube-apiserver-izwz9hs1zswgl6frxwsnhhz            1/1     Running   0          8m55s
kube-controller-manager-izwz9hs1zswgl6frxwsnhhz   1/1     Running   0          8m55s
kube-flannel-ds-amd64-zk6np                       1/1     Running   0          2m18s
kube-proxy-2mxmx                                  1/1     Running   0          8m46s
kube-scheduler-izwz9hs1zswgl6frxwsnhhz            1/1     Running   0          8m55s

All pods are now running. Note: slightly different from deploying on a local Ubuntu system, coredns behaves normally here, probably because of the cloud host environment.

The master node is deployed successfully.
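A quick confirmation (optional):

kubectl get nodes   # the master (izwz9hs1zswgl6frxwsnhhz in the logs above) should show STATUS Ready and VERSION v1.18.0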

Viewing flannel network information:

# cat /run/flannel/subnet.env 
FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.0.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true

View the flannel network configuration:

# cat /etc/cni/net.d/10-flannel.conflist
{
  "name": "cbr0",
  "cniVersion": "0.3.1",
  "plugins": [
    {
      "type": "flannel",
      "delegate": {
        "hairpinMode": true,
        "isDefaultGateway": true
      }
    },
    {
      "type": "portmap",
      "capabilities": {
        "portMappings": true
      }
    }
  ]
}

4. Node deployment

The cluster consists of one master host and one node. Deployment of the node is the same as in the previous article and is omitted here.
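In outline (a sketch of the node-side steps, not covered in detail here): install Docker, kubeadm and kubelet as above, disable swap, then run the join command printed by the master, and verify from the master:

# on the node host
kubeadm join 119.911.109.901:10010 --token 5nx6xk.ufqgazdygjbo31k1 \
    --discovery-token-ca-cert-hash sha256:fb2b5d905f931b999df435b6c2079fdc5d42959b6b5fb7e2f609b34c1b571a97
# then, back on the master
kubectl get nodes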

References

During deployment, refer to the following articles and adjust according to the actual situation:

  • juejin.cn/post/684490…
  • zhuanlan.zhihu.com/p/46341911
  • kubernetes.io/docs/setup/… (official)
  • Calico and Canal related: github.com/projectcali…