Kubeadm is the tool officially provided by Kubernetes for quickly installing a Kubernetes cluster. It is released in step with each Kubernetes version, and with every release kubeadm adjusts some of its cluster-configuration practices, so experimenting with kubeadm is a good way to learn the latest official best practices for cluster configuration.

Kubeadm's features are currently in beta and are expected to reach GA status in 2018; kubeadm is getting ever closer to being usable in a production environment.

Our existing Kubernetes clusters are highly available clusters deployed from binaries with Ansible. Here we try out kubeadm on Kubernetes 1.12 to follow the official best practices for cluster initialization and configuration, and to further refine our Ansible deployment scripts.

1. Prepare

1.1 System Configuration

Before installing, make the following preparations. The two CentOS 7.4 hosts used here are:


     
    cat /etc/hosts
    192.168.61.11 node1
    192.168.61.12 node2

If the firewall is enabled on the hosts, you need to open the ports required by the Kubernetes components; see the "Check required ports" section of the Installing kubeadm documentation. A simple way out is to disable the firewall on each node (a sketch for opening only the required ports instead follows the commands below):


     
    systemctl stop firewalld
    systemctl disable firewalld
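If you would rather keep firewalld running, a sketch along the following lines opens just the documented ports (the port list follows the "Check required ports" table in the kubeadm installation docs; treat it as an assumption and verify it against your version):

    # control-plane (master) node
    firewall-cmd --permanent --add-port=6443/tcp        # Kubernetes API server
    firewall-cmd --permanent --add-port=2379-2380/tcp   # etcd server client API
    firewall-cmd --permanent --add-port=10250/tcp       # kubelet API
    firewall-cmd --permanent --add-port=10251/tcp       # kube-scheduler
    firewall-cmd --permanent --add-port=10252/tcp       # kube-controller-manager
    # worker nodes additionally need the NodePort services range
    firewall-cmd --permanent --add-port=30000-32767/tcp
    firewall-cmd --reload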

Disable SELinux:


     
    setenforce 0

     
    vi /etc/selinux/config
    SELINUX=disabled

Create the /etc/sysctl.d/k8s.conf file and add the following information:


     
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    net.ipv4.ip_forward = 1

Run the following command to make the modification take effect.


     
    modprobe br_netfilter
    sysctl -p /etc/sysctl.d/k8s.conf
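Note that modprobe only loads br_netfilter for the current boot. If you want the module loaded automatically after a reboot, one option (an assumption on my part, not part of the original steps) is a modules-load.d entry:

    # load br_netfilter automatically at boot (hypothetical convenience, adjust as needed)
    cat <<EOF > /etc/modules-load.d/k8s.conf
    br_netfilter
    EOF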

1.2 Installing Docker

Kubernetes has used the Container Runtime Interface (CRI) since 1.6. The default container runtime is still Docker, via the dockershim CRI implementation built into the kubelet.

Add the Docker yum repository:


     
    yum install -y yum-utils device-mapper-persistent-data lvm2
    yum-config-manager \
        --add-repo \
        https://download.docker.com/linux/centos/docker-ce.repo

Check the available Docker versions:


     
    yum list docker-ce.x86_64 --showduplicates | sort -r
    docker-ce.x86_64  18.06.1.ce-3.el7         docker-ce-stable
    docker-ce.x86_64  18.06.0.ce-3.el7         docker-ce-stable
    docker-ce.x86_64  18.03.1.ce-1.el7.centos  docker-ce-stable
    docker-ce.x86_64  18.03.0.ce-1.el7.centos  docker-ce-stable
    docker-ce.x86_64  17.12.1.ce-1.el7.centos  docker-ce-stable
    docker-ce.x86_64  17.12.0.ce-1.el7.centos  docker-ce-stable
    docker-ce.x86_64  17.09.1.ce-1.el7.centos  docker-ce-stable
    docker-ce.x86_64  17.09.0.ce-1.el7.centos  docker-ce-stable
    docker-ce.x86_64  17.06.2.ce-1.el7.centos  docker-ce-stable
    docker-ce.x86_64  17.06.1.ce-1.el7.centos  docker-ce-stable
    docker-ce.x86_64  17.06.0.ce-1.el7.centos  docker-ce-stable
    docker-ce.x86_64  17.03.3.ce-1.el7         docker-ce-stable
    docker-ce.x86_64  17.03.2.ce-1.el7.centos  docker-ce-stable
    docker-ce.x86_64  17.03.1.ce-1.el7.centos  docker-ce-stable
    docker-ce.x86_64  17.03.0.ce-1.el7.centos  docker-ce-stable

Kubernetes 1.12 has been verified against Docker versions 1.11.1, 1.12.1, 1.13.1, 17.03, 17.06, 17.09, 18.06, etc. Note that Kubernetes 1.12 supports a minimum Docker version of 1.11.1. Here we install docker version 18.06.1 on each node.


     
    yum makecache fast
    yum install -y --setopt=obsoletes=0 \
      docker-ce-18.06.1.ce-3.el7
    systemctl start docker
    systemctl enable docker

Confirm that the default policy of the FORWARD chain in the iptables filter table is ACCEPT:


     
    iptables -nvL
    Chain INPUT (policy ACCEPT 263 packets, 19209 bytes)
     pkts bytes target     prot opt in     out     source               destination

    Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
     pkts bytes target     prot opt in     out     source               destination
        0     0 DOCKER-USER  all  --  *      *       0.0.0.0/0            0.0.0.0/0
        0     0 DOCKER-ISOLATION-STAGE-1  all  --  *      *       0.0.0.0/0            0.0.0.0/0
        0     0 ACCEPT     all  --  *      docker0  0.0.0.0/0            0.0.0.0/0            ctstate RELATED,ESTABLISHED
        0     0 DOCKER     all  --  *      docker0  0.0.0.0/0            0.0.0.0/0
        0     0 ACCEPT     all  --  docker0 !docker0  0.0.0.0/0            0.0.0.0/0
        0     0 ACCEPT     all  --  docker0 docker0   0.0.0.0/0            0.0.0.0/0

Since version 1.13, Docker has adjusted its default firewall rules and sets the FORWARD chain of the iptables filter table to DROP, which breaks Pod communication across nodes in a Kubernetes cluster. However, after installing Docker 18.06 here, the default policy turned out to be ACCEPT again. It is not clear in which release this was changed back; our online environment running 17.06 still needs this policy adjusted manually.
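For reference, on hosts where the FORWARD policy does come up as DROP, a workaround sketch (my own note, not part of this walkthrough) is to switch the policy back and, if desired, reapply it whenever the docker service starts via a systemd drop-in:

    iptables -P FORWARD ACCEPT

    # optional: reapply the policy every time the docker service starts
    mkdir -p /etc/systemd/system/docker.service.d
    cat <<EOF > /etc/systemd/system/docker.service.d/10-forward-accept.conf
    [Service]
    ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT
    EOF
    systemctl daemon-reload
    systemctl restart docker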

2. Deploy Kubernetes using kubeadm

2.1 Installing kubeadm and kubelet

Install kubeadm and kubelet on each node:


     
    cat <<EOF > /etc/yum.repos.d/kubernetes.repo
    [kubernetes]
    name=Kubernetes
    baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
    enabled=1
    gpgcheck=1
    repo_gpgcheck=1
    gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
           https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
    EOF

Test whether the address https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64 is reachable; from mainland China it is not accessible without a proxy.


     
    curl https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64

     
    yum makecache fast
    yum install -y kubelet kubeadm kubectl
    ...
    Installed:
      kubeadm.x86_64 0:1.12.0-0      kubectl.x86_64 0:1.12.0-0      kubelet.x86_64 0:1.12.0-0
    Dependency Installed:
      cri-tools.x86_64 0:1.11.1-0    kubernetes-cni.x86_64 0:0.6.0-0    socat.x86_64 0:1.7.3.2-2.el7

From the installation output we can see that the dependencies cri-tools, kubernetes-cni and socat were pulled in as well:

  • The official CNI dependency was upgraded to version 0.6.0 in Kubernetes 1.9 and is still at that version in 1.12.

  • socat is a dependency of kubelet.

  • cri-tools provides command-line tools for working with the Container Runtime Interface (CRI).

Running kubelet --help shows that most of kubelet's command-line flags have been DEPRECATED, for example:


     
    ...
    --address 0.0.0.0   The IP address for the Kubelet to serve on (set to 0.0.0.0 for all IPv4 interfaces and `::` for all IPv6 interfaces) (default 0.0.0.0) (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
    ...

The recommendation is to use --config to point at a configuration file and set these former flags there; see Set Kubelet parameters via a config file and Reconfigure a Node's Kubelet in a Live Cluster for details.

The kubelet configuration file must be in JSON or YAML format and contain the parameters being set; a minimal sketch follows.
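A minimal sketch of such a file, assuming the KubeletConfiguration type from kubelet.config.k8s.io/v1beta1 (the fields shown are only illustrative):

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    address: 0.0.0.0      # replaces the deprecated --address flag
    failSwapOn: false     # replaces the deprecated --fail-swap-on flag (used later in this post)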

Since Kubernetes 1.8, swap must be disabled on the system; otherwise kubelet will not start with its default configuration.

You can disable the Swap function as follows:

  • Run swapoff -a to turn swap off, and comment out the swap entry in /etc/fstab so it is not mounted again after a reboot. Use free -m to confirm that swap is off.

  • Add vm.swappiness = 0 to /etc/sysctl.d/k8s.conf and run sysctl -p /etc/sysctl.d/k8s.conf to make the change take effect.

The corresponding commands are collected after this list.
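Collected as commands (the sed pattern for /etc/fstab is an illustrative assumption; adjust it to your actual fstab layout):

    swapoff -a
    sed -i '/ swap / s/^/#/' /etc/fstab                 # comment out the swap line (verify against your fstab)
    free -m                                             # the Swap line should now show 0
    echo "vm.swappiness = 0" >> /etc/sysctl.d/k8s.conf
    sysctl -p /etc/sysctl.d/k8s.conf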

Because the two hosts here also run other services for testing, turning off swap might affect them, so instead we modify kubelet's configuration to remove this restriction. In earlier Kubernetes versions we used kubelet's --fail-swap-on=false flag for this; as analyzed above, Kubernetes no longer recommends startup flags but a configuration file, so let's try moving this setting into the config file.

Looking at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, we see the following:


     
    # Note: This dropin only works with kubeadm and kubelet v1.11+
    [Service]
    Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
    Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
    # This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
    EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
    # This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
    # the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
    EnvironmentFile=-/etc/sysconfig/kubelet
    ExecStart=
    ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS

As shown above, the kubelet deployed by kubeadm is started with --config=/var/lib/kubelet/config.yaml, but checking /var/lib/kubelet shows that this config.yaml has not been created yet. We can assume the file is generated automatically when kubeadm initializes the cluster, and that the first cluster initialization would fail anyway if swap were still enabled.

So for now we fall back to kubelet's --fail-swap-on=false flag to lift the restriction that swap must be disabled, by editing /etc/sysconfig/kubelet:


     
    KUBELET_EXTRA_ARGS=--fail-swap-on=false

2.2 Using kubeadm init to Initialize a cluster

Start kubelet service on each node:


     
    systemctl enable kubelet.service

Next use kubeadm to initialize the cluster, select node1 as the Master Node, and execute the following command on node1:


     
    kubeadm init \
      --kubernetes-version=v1.12.0 \
      --pod-network-cidr=10.244.0.0/16 \
      --apiserver-advertise-address=192.168.61.11

Because we chose flannel as the Pod network plug-in, the command above specifies --pod-network-cidr=10.244.0.0/16. The run reported the following error:


     
    Using Kubernetes version: v1.12.0
    [preflight] running pre-flight checks
    [preflight] Some fatal errors occurred:
            [ERROR Swap]: running with swap on is not supported. Please disable swap
    [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`

The error says "running with swap on is not supported. Please disable swap". Since we have decided to keep swap enabled (fail-swap-on=false), we add the --ignore-preflight-errors=Swap argument to ignore this check and run the command again.


     
    kubeadm init \
      --kubernetes-version=v1.12.0 \
      --pod-network-cidr=10.244.0.0/16 \
      --apiserver-advertise-address=192.168.61.11 \
      --ignore-preflight-errors=Swap

    Using Kubernetes version: v1.12.0
    [preflight] running pre-flight checks
            [WARNING Swap]: running with swap on is not supported. Please disable swap
    [preflight/images] Pulling images required for setting up a Kubernetes cluster
    [preflight/images] This might take a minute or two, depending on the speed of your internet connection
    [preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
    [kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
    [kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [preflight] Activating the kubelet service
    [certificates] Generated etcd/ca certificate and key.
    [certificates] Generated etcd/peer certificate and key.
    [certificates] etcd/peer serving cert is signed for DNS names [node1 localhost] and IPs [192.168.61.11 127.0.0.1 ::1]
    [certificates] Generated apiserver-etcd-client certificate and key.
    [certificates] Generated etcd/server certificate and key.
    [certificates] etcd/server serving cert is signed for DNS names [node1 localhost] and IPs [127.0.0.1 ::1]
    [certificates] Generated etcd/healthcheck-client certificate and key.
    [certificates] Generated ca certificate and key.
    [certificates] Generated apiserver certificate and key.
    [certificates] apiserver serving cert is signed for DNS names [node1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.61.11]
    [certificates] Generated apiserver-kubelet-client certificate and key.
    [certificates] Generated front-proxy-ca certificate and key.
    [certificates] Generated front-proxy-client certificate and key.
    [certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
    [certificates] Generated sa key and public key.
    [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
    [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
    [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
    [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
    [controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
    [controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
    [controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
    [etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
    [init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
    [init] this might take a minute or longer if the control plane images have to be pulled
    [apiclient] All control plane components are healthy after 26.503672 seconds
    [uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
    [kubelet] Creating a ConfigMap "kubelet-config-1.12" in namespace kube-system with the configuration for the kubelets in the cluster
    [markmaster] Marking the node node1 as master by adding the label "node-role.kubernetes.io/master=''"
    [markmaster] Marking the node node1 as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
    [patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "node1" as an annotation
    [bootstraptoken] using token: zalj3i.q831ehufqb98d1ic
    [bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
    [bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
    [bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
    [bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
    [addons] Applied essential addon: CoreDNS
    [addons] Applied essential addon: kube-proxy

    Your Kubernetes master has initialized successfully!

    To start using your cluster, you need to run the following as a regular user:

      mkdir -p $HOME/.kube
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config

    You should now deploy a pod network to the cluster.
    Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
      https://kubernetes.io/docs/concepts/cluster-administration/addons/

    You can now join any number of machines by running the following on each node
    as root:

      kubeadm join 192.168.61.11:6443 --token zalj3i.q831ehufqb98d1ic --discovery-token-ca-cert-hash sha256:6ee48b19ba61a2dda77f6b60687c5fd11072ab898cfdfef32a68821d1dbe8efa

The completed initialization output is recorded above and basically shows the key steps required to manually initialize and install a Kubernetes cluster.

Among them are the following key elements:

  • [kubelet] generates the kubelet configuration file /var/lib/kubelet/config.yaml.

  • [certificates] generates the various certificates used by the cluster.

  • [kubeconfig] generates the kubeconfig files.

  • The following commands configure how a regular user uses kubectl to access the cluster:

        mkdir -p $HOME/.kube
        sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
        sudo chown $(id -u):$(id -g) $HOME/.kube/config

  • The last line is the command for joining additional nodes to the cluster:

        kubeadm join 192.168.61.11:6443 --token zalj3i.q831ehufqb98d1ic --discovery-token-ca-cert-hash sha256:6ee48b19ba61a2dda77f6b60687c5fd11072ab898cfdfef32a68821d1dbe8efa

Check the cluster status:


     
    kubectl get cs
    NAME                 STATUS    MESSAGE              ERROR
    controller-manager   Healthy   ok
    scheduler            Healthy   ok
    etcd-0               Healthy   {"health": "true"}

Verify that all components are in a healthy state.

If you encounter problems with cluster initialization, you can use the following command to clean up the problems:


     
    kubeadm reset
    ifconfig cni0 down
    ip link delete cni0
    ifconfig flannel.1 down
    ip link delete flannel.1
    rm -rf /var/lib/cni/

2.3 Installing a Pod Network

Next, install the flannel network add-on:


     
    mkdir -p ~/k8s/
    cd ~/k8s
    wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
    kubectl apply -f kube-flannel.yml

    clusterrole.rbac.authorization.k8s.io/flannel created
    clusterrolebinding.rbac.authorization.k8s.io/flannel created
    serviceaccount/flannel created
    configmap/kube-flannel-cfg created
    daemonset.extensions/kube-flannel-ds-amd64 created
    daemonset.extensions/kube-flannel-ds-arm64 created
    daemonset.extensions/kube-flannel-ds-arm created
    daemonset.extensions/kube-flannel-ds-ppc64le created
    daemonset.extensions/kube-flannel-ds-s390x created

Note that this version of kube-flannel.yml uses the flannel 0.10.0 image quay.io/coreos/flannel:v0.10.0-amd64.

If a node has multiple network interfaces, use the --iface parameter in kube-flannel.yml to specify which interface on the cluster hosts flannel should use; otherwise in-cluster DNS resolution may fail. Download kube-flannel.yml locally and add --iface=<interface name> to the flanneld startup arguments:


     
    ...
    containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.10.0-amd64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        - --iface=eth1
    ...

Check the flannel DaemonSets:


     
    kubectl get ds -l app=flannel -n kube-system
    NAME                      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                     AGE
    kube-flannel-ds-amd64     0         0         0       0            0           beta.kubernetes.io/arch=amd64     17s
    kube-flannel-ds-arm       0         0         0       0            0           beta.kubernetes.io/arch=arm       17s
    kube-flannel-ds-arm64     0         0         0       0            0           beta.kubernetes.io/arch=arm64     17s
    kube-flannel-ds-ppc64le   0         0         0       0            0           beta.kubernetes.io/arch=ppc64le   17s
    kube-flannel-ds-s390x     0         0         0       0            0           beta.kubernetes.io/arch=s390x     17s

Looking at kube-flannel.yml, flannel's official deployment manifest creates five DaemonSets in the cluster, one per platform, and uses the node label beta.kubernetes.io/arch to start the flannel container on nodes of the matching platform. The current node1 has beta.kubernetes.io/arch=amd64, so the DESIRED count of the kube-flannel-ds-amd64 DaemonSet should be 1, yet it shows 0. Let's look at the kube-flannel-ds-amd64 section of kube-flannel.yml:

                                                        
     
    spec:
      template:
        metadata:
          labels:
            tier: node
            app: flannel
        spec:
          hostNetwork: true
          nodeSelector:
            beta.kubernetes.io/arch: amd64
          tolerations:
          - key: node-role.kubernetes.io/master
            operator: Exists
            effect: NoSchedule

The nodeSelector and tolerations that affect scheduling of kube-flannel-ds-amd64 are configured correctly in kube-flannel.yml: this DaemonSet's Pods are scheduled to nodes labeled beta.kubernetes.io/arch: amd64 and tolerate the node-role.kubernetes.io/master:NoSchedule taint. Based on previous deployment experience the current master node node1 should satisfy this, but does it? Let's look at node1's basic information:

                                                            
     
    kubectl describe node node1

    Name:               node1
    Roles:              master
    Labels:             beta.kubernetes.io/arch=amd64
                        beta.kubernetes.io/os=linux
                        kubernetes.io/hostname=node1
                        node-role.kubernetes.io/master=
    Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                        node.alpha.kubernetes.io/ttl: 0
                        volumes.kubernetes.io/controller-managed-attach-detach: true
    CreationTimestamp:  Wed, 03 Oct 2018 09:03:04 +0800
    Taints:             node-role.kubernetes.io/master:NoSchedule
                        node.kubernetes.io/not-ready:NoSchedule
    Unschedulable:      false

Node1 carries an additional taint, node.kubernetes.io/not-ready:NoSchedule: a node that is not yet Ready does not accept scheduling, and a node will not become Ready until a network plugin has been deployed. This taint is what keeps the flannel Pod off node1, so modify kube-flannel.yml to also tolerate node.kubernetes.io/not-ready:NoSchedule:

                                                                
     
    tolerations:
    - key: node-role.kubernetes.io/master
      operator: Exists
      effect: NoSchedule
    - key: node.kubernetes.io/not-ready
      operator: Exists
      effect: NoSchedule

After modifying the file, run kubectl apply -f kube-flannel.yml again.

Use kubectl get pod --all-namespaces -o wide to confirm that all Pods are Running:


     
    kubectl get pod --all-namespaces -o wide
    NAMESPACE     NAME                            READY   STATUS    RESTARTS   AGE   IP              NODE    NOMINATED NODE
    kube-system   coredns-576cbf47c7-njt7l        1/1     Running   0          12m   10.244.0.3      node1   <none>
    kube-system   coredns-576cbf47c7-vg2gd        1/1     Running   0          12m   10.244.0.2      node1   <none>
    kube-system   etcd-node1                      1/1     Running   0          12m   192.168.61.11   node1   <none>
    kube-system   kube-apiserver-node1            1/1     Running   0          12m   192.168.61.11   node1   <none>
    kube-system   kube-controller-manager-node1   1/1     Running   0          12m   192.168.61.11   node1   <none>
    kube-system   kube-flannel-ds-amd64-bxtqh     1/1     Running   0          2m    192.168.61.11   node1   <none>
    kube-system   kube-proxy-fb542                1/1     Running   0          12m   192.168.61.11   node1   <none>
    kube-system   kube-scheduler-node1            1/1     Running   0          12m   192.168.61.11   node1   <none>

The missing toleration for node.kubernetes.io/not-ready:NoSchedule will presumably be fixed in the official flannel manifest before long; see https://github.com/coreos/flannel/issues/1044.

2.4 Master Node participating in workload

In a cluster initialized with kubeadm, Pods are not scheduled onto the master node for security reasons, i.e. the master node does not take part in the workload. This is because the master node node1 carries the node-role.kubernetes.io/master:NoSchedule taint:


     
    kubectl describe node node1 | grep Taint
    Taints:             node-role.kubernetes.io/master:NoSchedule

Since this is a test environment, remove this taint so that node1 can take part in the workload:


     
    kubectl taint nodes node1 node-role.kubernetes.io/master-
    node "node1" untainted
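If you later want to restore the default behaviour, the taint can be re-added (a usage note of mine, not part of the original walkthrough):

    kubectl taint nodes node1 node-role.kubernetes.io/master=:NoSchedule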

2.5 Testing DNS

                                                                        
     
    kubectl run curl --image=radial/busyboxplus:curl -it
    kubectl run --generator=deployment/apps.v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl create instead.
    If you don't see a command prompt, try pressing enter.
    [ root@curl-5cc7b478b6-r997p:/ ]$

Run nslookup kubernetes.default to check that the resolution is normal.


     
    nslookup kubernetes.default
    Server:    10.96.0.10
    Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

    Name:      kubernetes.default
    Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local

2.6 Adding Nodes to the Kubernetes Cluster

Next we add node2 to the Kubernetes cluster. Because we also lifted the swap restriction in kubelet's startup parameters on node2, we again need --ignore-preflight-errors=Swap. Execute on node2:


     
    kubeadm join 192.168.61.11:6443 --token zalj3i.q831ehufqb98d1ic --discovery-token-ca-cert-hash sha256:6ee48b19ba61a2dda77f6b60687c5fd11072ab898cfdfef32a68821d1dbe8efa \
      --ignore-preflight-errors=Swap

    [preflight] running pre-flight checks
            [WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs_rr ip_vs_wrr ip_vs_sh ip_vs] or no builtin kernel ipvs support: map[ip_vs:{} ip_vs_rr:{} ip_vs_wrr:{} ip_vs_sh:{} nf_conntrack_ipv4:{}]
    you can solve this problem with following methods:
     1. Run 'modprobe -- ' to load missing kernel modules;
     2. Provide the missing builtin kernel ipvs support
            [WARNING Swap]: running with swap on is not supported. Please disable swap
    [discovery] Trying to connect to API Server "192.168.61.11:6443"
    [discovery] Created cluster-info discovery client, requesting info from "https://192.168.61.11:6443"
    [discovery] Requesting info from "https://192.168.61.11:6443" again to validate TLS against the pinned public key
    [discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.61.11:6443"
    [discovery] Successfully established connection with API Server "192.168.61.11:6443"
    [kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.12" ConfigMap in the kube-system namespace
    [kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
    [preflight] Activating the kubelet service
    [tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
    [patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "node2" as an annotation

    This node has joined the cluster:
    * Certificate signing request was sent to apiserver and a response was received.
    * The Kubelet was informed of the new secure connection details.

    Run 'kubectl get nodes' on the master to see this node join the cluster.

Node2 is successfully added to the cluster. Run the following command on the master node to view the nodes in the cluster:


     
    kubectl get nodes
    NAME      STATUS    ROLES     AGE       VERSION
    node1     Ready     master    26m       v1.12.0
    node2     Ready     <none>    2m        v1.12.0

To remove node2 from the cluster, run the following command:

Execute on the master node:


     
    kubectl drain node2 --delete-local-data --force --ignore-daemonsets
    kubectl delete node node2

Execute on node2:


     
    kubeadm reset
    ifconfig cni0 down
    ip link delete cni0
    ifconfig flannel.1 down
    ip link delete flannel.1
    rm -rf /var/lib/cni/

Execute on node1:


     
    kubectl delete node node2

3. Deploying common components in Kubernetes

More and more companies and teams are using Helm as a package manager for Kubernetes, and we will also use Helm to install common components of Kubernetes.

3.1 Installing the Helm

Helm consists of the helm client command-line tool and the server-side tiller. Installing the client is simple: download it and copy it to /usr/local/bin on the master node node1:


     
    wget https://storage.googleapis.com/kubernetes-helm/helm-v2.11.0-linux-amd64.tar.gz
    tar -zxvf helm-v2.11.0-linux-amd64.tar.gz
    cd linux-amd64/
    cp helm /usr/local/bin/

To install the server-side tiller, the machine also needs kubectl and a kubeconfig file so that kubectl can reach the apiserver and work correctly; node1 already has kubectl configured.

Because the Kubernetes apiserver has RBAC access control enabled, you need to create a service account named tiller for tiller and assign it appropriate roles; see Role-based Access Control in the Helm documentation for details. For simplicity we bind it directly to the cluster's built-in cluster-admin ClusterRole. Create the rbac-config.yaml file:

                                                                                        
     
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: tiller
      namespace: kube-system
    ---
    apiVersion: rbac.authorization.k8s.io/v1beta1
    kind: ClusterRoleBinding
    metadata:
      name: tiller
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: cluster-admin
    subjects:
      - kind: ServiceAccount
        name: tiller
        namespace: kube-system

     
    kubectl create -f rbac-config.yaml
    serviceaccount/tiller created
    clusterrolebinding.rbac.authorization.k8s.io/tiller created

Next deploy tiller using helm:


     
    helm init --service-account tiller --skip-refresh

    Creating /root/.helm
    Creating /root/.helm/repository
    Creating /root/.helm/repository/cache
    Creating /root/.helm/repository/local
    Creating /root/.helm/plugins
    Creating /root/.helm/starters
    Creating /root/.helm/cache/archive
    Creating /root/.helm/repository/repositories.yaml
    Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
    Adding local repo with URL: http://127.0.0.1:8879/charts
    $HELM_HOME has been configured at /root/.helm.

    Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

    Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
    To prevent this, run `helm init` with the --tiller-tls-verify flag.
    For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
    Happy Helming!

Tiller is deployed under the namespace kube-system in the K8S cluster by default:


     
    kubectl get pod -n kube-system -l app=helm
    NAME                             READY   STATUS    RESTARTS   AGE
    tiller-deploy-6f6fd74b68-kk2z9   1/1     Running   0          3m17s

    helm version
    Client: &version.Version{SemVer:"v2.11.0", GitCommit:"2e55dbe1fdb5fdb96b75ff144a339489417b146b", GitTreeState:"clean"}
    Server: &version.Version{SemVer:"v2.11.0", GitCommit:"2e55dbe1fdb5fdb96b75ff144a339489417b146b", GitTreeState:"clean"}

Note that this step needs network access to gcr.io and kubernetes-charts.storage.googleapis.com. If that is not possible, you can point tiller at an image in a private registry, e.g. helm init --service-account tiller --tiller-image <your-registry>/tiller:v2.11.0 --skip-refresh.

3.2 Deploying Nginx Ingress using Helm

To make it easy to expose services in the cluster and access them from outside, we next use Helm to deploy Nginx Ingress on Kubernetes. The Nginx Ingress Controller runs on the cluster's edge nodes; for details on edge-node high availability, see the earlier article on highly available edge nodes for Kubernetes Ingress on bare metal. For simplicity we use only a single edge node here.

We will use node1(192.168.61.11) as the edge node and Label it:


     
    kubectl label node node1 node-role.kubernetes.io/edge=
    node/node1 labeled

    kubectl get node
    NAME    STATUS   ROLES         AGE     VERSION
    node1   Ready    edge,master   46m     v1.12.0
    node2   Ready    <none>        22m     v1.12.0

Create the values file ingress-nginx.yaml for the stable/nginx-ingress chart:


     
    controller:
      service:
        externalIPs:
          - 192.168.61.11
      nodeSelector:
        node-role.kubernetes.io/edge: ''
      tolerations:
        - key: node-role.kubernetes.io/master
          operator: Exists
          effect: NoSchedule
    defaultBackend:
      nodeSelector:
        node-role.kubernetes.io/edge: ''
      tolerations:
        - key: node-role.kubernetes.io/master
          operator: Exists
          effect: NoSchedule

     
    helm repo update
    helm install stable/nginx-ingress \
      -n nginx-ingress \
      --namespace ingress-nginx \
      -f ingress-nginx.yaml

     
    kubectl get pod -n ingress-nginx -o wide
    NAME                                              READY   STATUS    RESTARTS   AGE     IP            NODE    NOMINATED NODE
    nginx-ingress-controller-7577b57874-m4zkv         1/1     Running   0          9m13s   10.244.0.10   node1   <none>
    nginx-ingress-default-backend-684f76869d-9jgtl    1/1     Running   0          9m13s   10.244.0.9    node1   <none>

If accessing http://192.168.61.11 returns the default backend, the deployment is complete:


     
    curl http://192.168.61.11/
    default backend - 404

3.3 Configuring a TLS Certificate in Kubernetes

HTTPS certificates are required when using Ingress to expose HTTPS services outside the cluster. Here, the certificate and key of *.frognew.com are configured into Kubernetes.

This certificate will be used by the dashboard deployed later into the kube-system namespace, so first create a secret for the certificate in kube-system:


     
    kubectl create secret tls frognew-com-tls-secret --cert=fullchain.pem --key=privkey.pem -n kube-system
    secret/frognew-com-tls-secret created

3.4 Deploying Dashboard using Helm

Create the values file kubernetes-dashboard.yaml:


     
    ingress:
      enabled: true
      hosts:
        - k8s.frognew.com
      annotations:
        nginx.ingress.kubernetes.io/ssl-redirect: "true"
        nginx.ingress.kubernetes.io/secure-backends: "true"
      tls:
        - secretName: frognew-com-tls-secret
          hosts:
          - k8s.frognew.com
    rbac:
      clusterAdminRole: true

     
    helm install stable/kubernetes-dashboard \
      -n kubernetes-dashboard \
      --namespace kube-system \
      -f kubernetes-dashboard.yaml

     
    kubectl -n kube-system get secret | grep kubernetes-dashboard-token
    kubernetes-dashboard-token-tjj25      kubernetes.io/service-account-token   3         37s

    kubectl describe -n kube-system secret/kubernetes-dashboard-token-tjj25
    Name:         kubernetes-dashboard-token-tjj25
    Namespace:    kube-system
    Labels:       <none>
    Annotations:  kubernetes.io/service-account.name=kubernetes-dashboard
                  kubernetes.io/service-account.uid=d19029f0-9cac-11e8-8d94-080027db403a

    Type:  kubernetes.io/service-account-token

    Data
    ====
    namespace:  11 bytes
    token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC10b2tlbi10amoyNSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImQxOTAyOWYwLTljYWMtMTFlOC04ZDk0LTA4MDAyN2RiNDAzYSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTprdWJlcm5ldGVzLWRhc2hib2FyZCJ9.w1HZrtBOhANdqSRLNs22z8dQWd5IOCpEl9VyWQ6DUwhHfgpAlgdhEjTqH8TT0f4ftu_eSPnnUXWbsqTNDobnlxet6zVvZv1K-YmIO-o87yn2PGIrcRYWkb-ADWD6xUWzb0xOxu2834BFVC6T5p5_cKlyo5dwerdXGEMoz9OW0kYvRpKnx7E61lQmmacEeizq7hlIk9edP-ot5tCuIO_gxpf3ZaEHnspulceIRO_ltjxb8SvqnMglLfq6Bt54RpkUOFD1EKkgWuhlXJ8c9wJt_biHdglJWpu57tvOasXtNWaIzTfBaTiJ3AJdMB_n0bQt5CKAUnKBhK09NP3R0Qtqog

Use the preceding token to log in to the dashboard in the login window.
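As a convenience (my own shorthand, not from the original), the token can also be extracted in one line:

    kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | awk '/kubernetes-dashboard-token/{print $1}') | awk '/^token:/{print $2}'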


3.5 Deploying metrics-server using Helm

As can be seen on Heapster's GitHub page (https://github.com/kubernetes/heapster), Heapster has been DEPRECATED. According to the Heapster deprecation timeline, Heapster is removed from the various Kubernetes installation scripts starting with Kubernetes 1.12.

Kubernetes now recommends metrics-server (https://github.com/kubernetes-incubator/metrics-server) instead. We again use Helm to deploy metrics-server here.

metrics-server.yaml:


     
    args:
    - --logtostderr
    - --kubelet-insecure-tls

     
    helm install stable/metrics-server \
      -n metrics-server \
      --namespace kube-system \
      -f metrics-server.yaml

After deployment, the metrics-server log shows the following error:


     
    E1003 05:46:13.757009  1 manager.go:102] unable to fully collect metrics: [unable to fully scrape metrics from source kubelet_summary:node1: unable to fetch metrics from Kubelet node1 (node1): Get https://node1:10250/stats/summary/: dial tcp: lookup node1 on 10.96.0.10:53: no such host, unable to fully scrape metrics from source kubelet_summary:node2: unable to fetch metrics from Kubelet node2 (node2): Get https://node2:10250/stats/summary/: dial tcp: lookup node2 on 10.96.0.10:53: read udp 10.244.1.6:45288->10.96.0.10:53: i/o timeout]

Node1 and node2 form a stand-alone demo environment: their host names are resolved only through each node's /etc/hosts file and there is no DNS server on the internal network, so metrics-server cannot resolve the names node1 and node2. Since the cluster uses CoreDNS, we can work around this by adding the host entries to the Corefile in the CoreDNS ConfigMap with the hosts plugin, which lets every Pod in the cluster resolve the node names through CoreDNS.

                                                                                                                
     
    kubectl edit configmap coredns -n kube-system

    apiVersion: v1
    data:
      Corefile: |
        .:53 {
            errors
            health
            hosts {
                192.168.61.11 node1
                192.168.61.12 node2
                fallthrough
            }
            kubernetes cluster.local in-addr.arpa ip6.arpa {
                pods insecure
                upstream
                fallthrough in-addr.arpa ip6.arpa
            }
            prometheus :9153
            proxy . /etc/resolv.conf
            cache 30
            loop
            reload
            loadbalance
        }
    kind: ConfigMap

After modifying the configuration, restart CoreDNS and metrics-server in the cluster and confirm that the metrics-server error no longer appears in the logs. You can then fetch basic metrics for the cluster nodes with the following command:


     
    kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes"
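Once the metrics API is serving data, kubectl top should work as well (a quick check I'd add here, not shown in the original):

    kubectl top nodes
    kubectl top pods --all-namespaces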

Unfortunately, metrics-server is not yet supported by the Kubernetes Dashboard, so when metrics-server replaces Heapster the dashboard cannot graph Pod memory and CPU usage. (This matters little to us, since we already monitor the Pods in our Kubernetes clusters with Prometheus and Grafana, so seeing Pod memory and CPU in the dashboard is not important.) There is plenty of discussion about this on the Dashboard GitHub, for example https://github.com/kubernetes/dashboard/issues/3217 and https://github.com/kubernetes/dashboard/issues/3270; Dashboard plans to support metrics-server at some point in the future. Since metrics-server and the metrics pipeline are clearly Kubernetes' future direction for monitoring, we have decisively switched to metrics-server in each of our environments.

4. Summary

Docker images involved in this installation:


     
    # kubernetes
    k8s.gcr.io/kube-apiserver:v1.12.0
    k8s.gcr.io/kube-controller-manager:v1.12.0
    k8s.gcr.io/kube-scheduler:v1.12.0
    k8s.gcr.io/kube-proxy:v1.12.0
    k8s.gcr.io/etcd:3.2.24
    k8s.gcr.io/pause:3.1

    # network and dns
    quay.io/coreos/flannel:v0.10.0-amd64
    k8s.gcr.io/coredns:1.2.2

    # helm and tiller
    gcr.io/kubernetes-helm/tiller:v2.11.0

    # nginx ingress
    quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.19.0
    k8s.gcr.io/defaultbackend:1.4

    # dashboard and metrics-server
    k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0
    gcr.io/google_containers/metrics-server-amd64:v0.3.0

References

  • Installing kubeadm: https://kubernetes.io/docs/setup/independent/install-kubeadm/

  • Using kubeadm to Create a Cluster: https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/

  • Get Docker CE for CentOS: https://docs.docker.com/engine/installation/linux/docker-ce/centos/

Original article

  • Author: Frog white

  • https://blog.frognew.com/2018/10/kubeadm-install-kubernetes-1.12.html