Installation instructions

  • CentOS Linux release 7.7.1908 (Core)
  • Latest version of Docker CE
  • Kubernetes v1.16.2

Uninstall the original Docker version

Skip it if you haven’t installed it before.

# yum remove docker docker-client docker-client-latest docker-common docker-latest docker-latest-logrotate docker-logrotate docker-engine

By default, Docker stores images, containers, volumes and networks under /var/lib/docker. The contents of this directory are not deleted when an old Docker version is uninstalled.
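If a completely clean reinstall is wanted, the old data directory can be removed as well. This is my own addition rather than part of the original steps, and it deletes all existing images, containers and volumes, so only run it if nothing needs to be kept:

# rm -rf /var/lib/docker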

Install Docker

This tutorial installs Docker either from Docker’s online repository or by manually downloading the RPM package of a specific version.

Add Docker’s online repository

  1. Install the required dependency packages

# yum install -y yum-utils device-mapper-persistent-data lvm2

  2. Add the stable repository

# yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

Install the Docker Engine Community version

  1. Install the latest docker-ce, docker-ce-cli, and containerd.io

# yum install -y docker-ce docker-ce-cli containerd.io

  2. Install a specific version of docker-ce, docker-ce-cli, and containerd.io

First run yum list docker-ce --showduplicates | sort -r to list the installable stable versions, then install the chosen version.

# yum list docker-ce --showduplicates | sort -r

# yum install docker-ce-<VERSION_STRING> docker-ce-cli-<VERSION_STRING> containerd.io
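For example (the version string below is only an illustration; use one of the versions printed by the previous command):

# yum install -y docker-ce-19.03.4 docker-ce-cli-19.03.4 containerd.io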

  3. Start Docker

# systemctl start docker
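Optionally (my own addition, not part of the original steps), enable the Docker service at boot and check that it is running:

# systemctl enable docker
# docker info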

Install the K8S master node

The cluster is installed with kubeadm. Some images cannot be downloaded directly during installation and need to be handled separately.

Master node environment configuration and dependencies

  1. Disable swap on all nodes that need to run Kubelet

# swapoff -a

Kubernetes requires swap to be disabled for the kubelet to work properly; otherwise kubeadm reports an error mentioning swap and stops execution. Note that swapoff only lasts until the next reboot, so it has to be executed again after each restart. TODO: find a way to automate this.
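A common way to make this persistent, added here as my own suggestion rather than part of the original steps, is to comment out the swap entry in /etc/fstab so swap is not mounted again at boot:

# sed -i '/ swap / s/^/#/' /etc/fstab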

  2. Disable the firewall

# systemctl disable firewalld && systemctl stop firewalld

  3. Disable SELinux

# setenforce 0
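Like swapoff, setenforce 0 only lasts until the next reboot. To make the change persistent (again my own addition), SELinux can also be set to permissive in /etc/selinux/config:

# sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config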

Install kubeadm, kubelet, kubectl on master

kubeadm helps automate the installation of Kubernetes, but it does not install kubectl or kubelet itself; both have to be installed manually.

  1. Copy the kubeadm, kubelet, and kubectl binaries into /usr/bin

The server binary package is the compiled release of Kubernetes published on GitHub and contains the binaries and images of each component. Go to the Kubernetes releases page and open a release changelog (such as Changelog-1.10.8.md) to download the server binary package.
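As a sketch (the file name below assumes the v1.16.2 server tarball; adjust it to the version actually downloaded), extracting and copying the binaries looks roughly like this:

# tar xzvf kubernetes-server-linux-amd64.tar.gz
# cp kubernetes/server/bin/{kubeadm,kubelet,kubectl} /usr/bin/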

  2. Download the kubelet systemd unit definition file. The RELEASE variable needs to be exported in advance, such as v1.16.2

    # export RELEASE=v1.16.2

    # curl -sSL "raw.githubusercontent.com/kubernetes/…" > /etc/systemd/system/kubelet.service

  3. Download the kubeadm configuration file

    # mkdir -p /etc/systemd/system/kubelet.service.d

    # curl -sSL "raw.githubusercontent.com/kubernetes/…" > /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

  4. Set kubelet to start automatically on boot

    # systemctl enable kubelet

The other Kubernetes control plane components are started by the kubelet, so as long as the kubelet is configured to start on boot, it will read the static pod configuration directory /etc/kubernetes/manifests, and the four components etcd, kube-apiserver, kube-controller-manager, and kube-scheduler do not need to be started separately. The manual installation guide in the official kubeadm documentation also recommends installing the CNI plugins and crictl, but I did not install them and did not notice any problem.

Configure the Docker cgroup driver to the Kubernetes-recommended systemd

By default Docker uses the cgroupfs driver, but the recommended driver is systemd. Configure Docker to use systemd as its cgroup driver.

cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF

Restart docker.

# systemctl daemon-reload && systemctl restart docker
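To confirm the new driver is in effect (an optional check of my own), inspect docker info:

# docker info | grep -i cgroup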

After the above steps are completed, the kubelet will keep restarting. This is normal; do not worry. Next, install the Kubernetes control plane.

The official documentation explains the kubelet reboot: The kubelet is now restarting every few seconds, as it waits in a crashloop for kubeadm to tell it what to do.

In the steps above, kubelet and the other server binaries from the changelog were installed manually. It is even easier to use the distribution installation packages.

Install components on the master node

  1. Load the control plane images in advance

This is the "running kubeadm without an Internet connection" case: kubeadm needs to start the control plane during init, so the control plane images of the matching version must be prepared before running init.

1. Prepare the component images of kube-apiserver, kube-controller-manager, kube-scheduler, and kube-proxy. Load the images from the server binary package onto the master node, for example:

# docker load -i kube-scheduler.tar

For Kubernetes 1.16.2 the image names loaded from the tar files can be used directly, but for Kubernetes 1.10.8 the loaded image name differs from the name that is expected (k8s.gcr.io/kube-proxy:v1.10.8 versus k8s.gcr.io/kube-proxy-amd64:v1.10.8), so the images need to be retagged. In the repo I wrote a shell script, ldrnk8simgs.sh, to load and rename the images; a rough sketch of the idea follows below.
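The script itself is not reproduced here. As a sketch only (the tar file names, image names, and the direction of the rename are assumptions that depend on the Kubernetes version), it could look something like this:

#!/bin/bash
# Load every control plane image shipped in the server binary package
# and retag it to the name expected by this Kubernetes version.
VERSION=v1.10.8
for component in kube-apiserver kube-controller-manager kube-scheduler kube-proxy; do
    docker load -i ${component}.tar
    # assumed rename: arch-suffixed name in the package -> plain name
    docker tag k8s.gcr.io/${component}-amd64:${VERSION} k8s.gcr.io/${component}:${VERSION}
done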

kube-proxy is started as a DaemonSet rather than a static pod, but the same image naming convention applies.

2. Prepare the etcd image. kubeadm writes the static pod manifests to /etc/kubernetes/manifests/ before downloading the images, so run

# kubeadm init --kubernetes-version=v1.16.2 --pod-network-cidr=10.244.0.0/16

and wait for it to fail while pulling images; the error then shows the correct etcd image name, for example: failed to pull image k8s.gcr.io/etcd:3.3.10. One way to download the image is through a proxy: find where the docker.service file is located (for Docker 18.09 the default is /lib/systemd/system/docker.service), create the directory /lib/systemd/system/docker.service.d with mkdir, and create an http-proxy.conf file in it with the following content:

[Service]
Environment="HTTP_PROXY=http://127.0.0.1:1080/"

Note: the configuration file was removed again after the images had been downloaded.
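For completeness (these commands are my own addition), the proxy drop-in only takes effect after a daemon reload and a Docker restart, and the same two commands need to be run again after the file is removed:

# systemctl daemon-reload && systemctl restart docker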

3. The pause image, from the error: failed to pull image k8s.gcr.io/pause:3.1
4. The coredns image, from the error: failed to pull image k8s.gcr.io/coredns:1.3.1
  2. kubeadm init

# kubeadm init --kubernetes-version=v1.16.2 --pod-network-cidr=10.244.0.0/16

Where:

--pod-network-cidr=192.168.0.0/16 corresponds to the Calico network scheme below; if flannel is installed instead, it should be --pod-network-cidr=10.244.0.0/16

If all images are ready, the kubeadm init step takes less than a minute to execute. If the installation hits an error and needs to be retried, run kubeadm reset before retrying.

  3. Configure kubectl

The following configuration needs to be done before the pod network can be installed with kubectl:

- If the subsequent steps are executed as root, run # export KUBECONFIG=/etc/kubernetes/admin.conf (for convenience this can also be written into the shell profile under <home>).
- If the subsequent steps are executed as a non-root user, run:

    # mkdir -p $HOME/.kube
    # cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    # chown $(id -u):$(id -g) $HOME/.kube/config
  4. Install a pod network

Calico is chosen here, following the official Kubernetes installation documentation:

# kubectl apply -f https://docs.projectcalico.org/v3.7/manifests/calico.yaml
  5. Allow pods to be scheduled to the master, otherwise the master will stay not ready (optional; perform this step for a single-node cluster)

The kubeadm init process taints the master node by default. Pods can be allowed to schedule onto the master node by executing the following command:

# kubectl taint nodes --all node-role.kubernetes.io/master-
  6. At this point we have a single-node Kubernetes cluster that can run pods. If more nodes are needed, other nodes can now be joined to the cluster.

  7. To operate the Kubernetes cluster with kubectl from a machine other than the master, prepare kubectl and admin.conf on the remote machine: # scp root@<master-ip>:/etc/kubernetes/admin.conf . To verify that kubectl can connect to the apiserver, run kubectl --kubeconfig ./admin.conf get nodes (note the kubeconfig flag).

Install components on the worker node

  1. swapoff -a

  2. Install Docker

  3. Install kubeadm, kubelet, kubectl

    1. Copy the kubeadm, kubelet, and kubectl binaries into /usr/bin

    In the changelog’s node binary package, kube-proxy is a binary, not a Docker image.

    2. Download the kubelet systemd service definition file. The RELEASE variable is the version number, such as v1.16.2

    # export RELEASE=v1.16.2

    # curl -sSL "raw.githubusercontent.com/kubernetes/…" > /etc/systemd/system/kubelet.service

    (kubeadm join starts kubelet from this systemd unit file.)

    3. Download the kubeadm configuration file

    Testing showed this is necessary; otherwise kubeadm join reports success, but kubectl get nodes cannot see the node that just joined. (It seems the kubelet could not find bootstrap-kubelet.conf, whose directory is specified in 10-kubeadm.conf.)

    # mkdir -p /etc/systemd/system/kubelet.service.d

    # curl -sSL "raw.githubusercontent.com/kubernetes/…" > /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

    4. Set kubelet to start automatically on boot and start it:

    # systemctl enable kubelet && systemctl start kubelet

  4. Prepare the images

Any images that may be scheduled onto the node later should be prepared; a sketch of copying them over follows the list below:

1. kube-proxy is scheduled onto nodes as a DaemonSet.
2. Calico is scheduled onto nodes as a DaemonSet.
3. etcd is started on the master via a static pod and will definitely not be scheduled onto a node.
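On the master, an image the node will need can be exported, copied to the node, and loaded there. The image tag below is an assumption; check the actual names with docker images on the master:

# docker save -o kube-proxy.tar k8s.gcr.io/kube-proxy:v1.16.2
# docker load -i kube-proxy.tar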
  5. kubeadm join

Execute on the node:

# kubeadm join <master-ip>:<master-port> --token <token> --discovery-token-ca-cert-hash sha256:<hash>

This command is printed on the console at the end of kubeadm init on the master.

Where:

1. <token> can be viewed on the master with kubeadm token list. A token is valid for 24 hours; if all tokens have expired, a new one can be generated with kubeadm token create.
2. <master-ip>:<master-port> is the IP address and port of the master; the default port is 6443 (the Kubernetes secure port).
3. <hash> can be obtained by running the following command on the master:

# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | \
openssl dgst -sha256 -hex | sed 's/^.* //'

The hash looks like 8cb2de97839780a412b93877f8507ad6c94f73add17d5d7058e91741c9d5ec78.

At this point the join is complete. A few seconds later you can run kubectl get nodes on the master to see the nodes in the current cluster; the role of the newly added node shows as none. There is no need to install CNI on the node separately. The preflight check during join reported that ebtables was missing.