Author: Shi Yifeng

Source: Hang Seng LIGHT Cloud Community

K8s installation, configuration, and running

Installation Environment

The K8s system is composed of a set of executable programs. Pre-compiled binary packages can be downloaded from the Kubernetes project page on GitHub, or you can download the source code and compile it yourself.

The installation environment requirements can be found on the Kubernetes official website. The recommendations are as follows:

  • A compatible Linux host. The Kubernetes project provides generic instructions for Debian- and Red Hat-based Linux distributions, as well as for some distributions that do not provide a package manager
  • 2 GB or more of RAM per machine (less than this leaves little room for your applications)
  • 2 CPU cores or more
  • Full network connectivity between all machines in the cluster (a public or internal network is fine)
  • Unique hostname, MAC address, and product_uuid for every node. See the checks below for more details.
  • Certain ports open on the machines. See the port tables below for more details.
  • Swap disabled. You must disable swap for the kubelet to work properly.
Hardware/Software  Recommended configuration
CPU/Memory         Master: at least 2 cores and 2 GB RAM
                   Node: sized according to the number of containers to run
Linux OS           RedHat 7+ / CentOS 7+
K8s                1.18+ (download address and notes: github.com/kubernetes/…)
Docker             1.13+ (download address and notes: www.docker.com)
etcd               3+ (download address and notes: github.com/coreos/etcd…)

Version skew policy

kube-apiserver

In an HA cluster, the minor versions of the kube-apiserver instances may differ by at most 1.

kubelet

The kubelet version cannot be newer than kube-apiserver, and it may be up to two minor versions older. For example, if kube-apiserver is at 1.18, a kubelet may run 1.18, 1.17, or 1.16.

Note: If the kube-apiserver instances in an HA cluster do not all run the same version, the permissible range of kubelet versions shrinks accordingly.

kube-controller-manager, kube-scheduler, and cloud-controller-manager

The versions of kube-controller-manager, kube-scheduler, and cloud-controller-manager cannot be newer than kube-apiserver. Ideally they match the kube-apiserver version, but they may be up to one minor version older (to support live upgrades).

Note: If the kube-apiserver instances in an HA cluster run different versions, and these components can communicate with any kube-apiserver instance (for example, through a load balancer), the permissible range of kube-controller-manager, kube-scheduler, and cloud-controller-manager versions shrinks accordingly.

kubectl

kubectl may be one minor version newer or older than kube-apiserver.

Note: If the kube-apiserver instances in an HA cluster run different versions, the permissible range of kubectl versions shrinks accordingly.

K8s installation

Ensure that the MAC address and product_uUID are unique on each node

  • You can use the commands ip link or ifconfig -a to obtain the MAC addresses of the network interfaces
  • You can run sudo cat /sys/class/dmi/id/product_uuid to check the product_uuid

Generally, hardware devices have unique addresses, but some virtual machines may have duplicate addresses. Kubernetes uses these values to uniquely identify the nodes in the cluster. If these values are not unique on each node, the installation may fail.
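As a quick pre-installation check, the two commands above can be run on every machine and the outputs compared across nodes:

# Run on every node; look for duplicate values across machines.
ip link                                  # lists interfaces with their MAC addresses
sudo cat /sys/class/dmi/id/product_uuid  # prints this machine's product_uuid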

Checking network adapters

If you have more than one network adapter and your Kubernetes components are not reachable via the default route, we recommend adding IP routes in advance so that the Kubernetes cluster traffic goes through the appropriate adapter.
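As a minimal sketch, assuming a hypothetical second adapter eth1 and the pod subnet used later in this article (10.252.0.0/16), such a rule could look like:

# Hypothetical example: send cluster traffic for 10.252.0.0/16 out via eth1
sudo ip route add 10.252.0.0/16 dev eth1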

Allow iptables to see bridged traffic

Ensure that the br_netfilter module is loaded. You can check this by running lsmod | grep br_netfilter. To load the module explicitly, run sudo modprobe br_netfilter.

For iptables on your Linux nodes to see bridged traffic correctly, make sure net.bridge.bridge-nf-call-iptables is set to 1 in your sysctl configuration. For example:

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system
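To verify that the module is loaded and the setting took effect:

lsmod | grep br_netfilter                  # the module should be listed
sysctl net.bridge.bridge-nf-call-iptables  # should print "= 1"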

Checking required ports

Control plane node

Protocol  Direction  Port range  Purpose                  Used by
TCP       Inbound    6443        Kubernetes API server    All components
TCP       Inbound    2379-2380   etcd server client API   kube-apiserver, etcd
TCP       Inbound    10250       Kubelet API              kubelet itself, control plane components
TCP       Inbound    10251       kube-scheduler           kube-scheduler itself
TCP       Inbound    10252       kube-controller-manager  kube-controller-manager itself

Worker nodes

Protocol  Direction  Port range   Purpose              Used by
TCP       Inbound    10250        Kubelet API          kubelet itself, control plane components
TCP       Inbound    30000-32767  NodePort Services†   All components

† This is the default port range for NodePort Services. If you customize the range, make sure the custom ports are open.

Although etcd ports are listed under the control plane node, you can also host an external etcd cluster or use custom etcd ports.

The Pod network plug-in you use may also require certain ports to be open. Since each plug-in is different, see its documentation for the port requirements.
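A quick reachability test for any of these ports can be done with netcat; for example, against the API server port (the host name is a placeholder):

# Should report success once kube-apiserver is listening on the control plane node
nc -vz <control-plane-host> 6443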

Installing a runtime: containerd/Docker

Install containerd

Prerequisites for installation and configuration:

cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter

# Set the required sysctl parameters; these persist across reboots.
cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

# Apply sysctl parameters without rebooting
sudo sysctl --system

Installation:

  1. Install the containerd.io package from the official Docker repositories. Instructions for setting up the Docker repository and installing the containerd.io package for each Linux distribution can be found in "Install Docker Engine" (a CentOS sketch follows this list).

  2. Configuration containerd:

    sudo mkdir -p /etc/containerd
    containerd config default | sudo tee /etc/containerd/config.toml
  3. Restart Containerd:

    sudo systemctl restart containerd
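As a minimal sketch of step 1 for CentOS/RHEL (commands follow the Docker documentation; adapt them to your distribution):

# Add the Docker repository and install containerd.io (CentOS/RHEL example)
sudo yum install -y yum-utils
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo yum install -y containerd.io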

Install Docker

  1. On each node, install Docker for your Linux distribution according to "Install Docker Engine". The latest verified Docker versions are listed in the Kubernetes dependencies: github.com/kubernetes/…

  2. Configure the Docker daemon, in particular to let systemd manage the container cgroups (a verification sketch follows these steps).

    sudo mkdir -p /etc/docker
    cat <<EOF | sudo tee /etc/docker/daemon.json
    {
      "exec-opts": ["native.cgroupdriver=systemd"],
      "log-driver": "json-file",
      "log-opts": {
        "max-size": "100m"
      },
      "storage-driver": "overlay2"
    }
    EOF

    Note: overlay2 is the preferred storage driver for systems running Linux kernel version 4.0 or later, or RHEL/CentOS systems with kernel 3.10.0-514 or later.

  3. Restart Docker and enable it on startup:

    sudo systemctl enable docker
    sudo systemctl daemon-reload
    sudo systemctl restart docker
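To verify that Docker is running and using the systemd cgroup driver configured above:

sudo docker info | grep -i 'cgroup driver'   # should report: Cgroup Driver: systemd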

Install kubeadm, kubelet, kubectl

  • kubeadm: the command used to bootstrap the cluster.
  • kubelet: the component that runs on every node in the cluster and starts pods, containers, and so on.
  • kubectl: the command line tool used to communicate with the cluster.

# Add the Kubernetes yum repository (Aliyun mirror)
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/…
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/… https://mirrors.aliyun.com/kubernetes/…
EOF

# Install the components
yum install -y kubelet kubeadm kubectl
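The kubeadm guide also enables the kubelet service so that it starts at boot (the kubelet then restarts in a loop until kubeadm tells it what to do):

systemctl enable --now kubelet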

Install the K8S cluster

Initialize the control plane node

The control plane node is the machine that runs the control plane components, including etcd (the cluster database) and the API server (which the kubectl command line tool communicates with).

  1. (Recommended) If you plan to upgrade a single-control-plane kubeadm cluster to high availability, you should specify --control-plane-endpoint to set a shared endpoint for all control plane nodes. The endpoint can be a DNS name or an IP address of a load balancer.
  2. Choose a Pod network plug-in and check whether it requires any arguments to be passed to kubeadm init. Depending on the third-party plug-in you choose, you may need to set the value of --pod-network-cidr.

To initialize the control plane node, run:

kubeadm init <args>

Example reference (the --image-repository flag here is an assumption, pointing at DaoCloud's public mirror):

kubeadm init --image-repository daocloud.io/daocloud --kubernetes-version=v1.17.4 --pod-network-cidr=10.252.0.0/16 --upload-certs

Configure kubectl

To let a non-root user run kubectl, run the following commands, which are also part of the kubeadm init output:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Or, if you are root, you can run:

export KUBECONFIG=/etc/kubernetes/admin.conf
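Once kubectl is configured, a quick sanity check is to list the nodes; the control plane node should appear, and it becomes Ready after a Pod network plug-in is installed:

kubectl get nodes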

Join node

Nodes are where your workloads (containers, pods, etc.) run. To add a new node to the cluster, do the following for each computer:

  • SSH into the machine
  • Become root (for examplesudo su -)
  • Run the kubeadm join command from the kubeadm init output. For example:
kubeadm join --token <token> <control-plane-host>:<control-plane-port> --discovery-token-ca-cert-hash sha256:<hash>

If no token is available, you can obtain the token by running the following command on the control plane node:

kubeadm token list

The output looks like the following:

TOKEN                    TTL  EXPIRES              USAGES           DESCRIPTION            EXTRA GROUPS
8ewj1p.9r9hcjoqgajrj4gi  23h  2018-06-12T02:51:28Z authentication,  The default bootstrap  system:
                                                   signing          token generated by     bootstrappers:
                                                                    'kubeadm init'.        kubeadm:
                                                                                           default-node-token

By default, tokens expire after 24 hours. If you want to add the node to the cluster after the current token expires, you can create a new token by running the following command on the control plane node:

kubeadm token create

The output looks like the following:

5didvk.d09sbcov8ph2amjw

If you don't have the value of --discovery-token-ca-cert-hash, you can get it by running the following command chain on the control plane node:

openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | \
   openssl dgst -sha256 -hex | sed 's/^.* //'

The output looks like the following:

8cb2de97839780a412b93877f8507ad6c94f73add17d5d7058e91741c9d5ec78
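Alternatively, kubeadm can print the complete join command, including a fresh token and the CA certificate hash, in one step:

kubeadm token create --print-join-command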

Install the CNI

Calico is used here:

kubectl apply -f https://docs.projectcalico.org/v3.14/manif…
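After applying the manifest, you can watch the Calico pods come up and the nodes become Ready:

kubectl get pods -n kube-system   # the calico pods should reach Running
kubectl get nodes                 # nodes report Ready once the network is up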

Run the first Hello World

Any small image will do for a first test; busybox is used here as an assumed example image:

kubectl run hello -it --rm --image=busybox --restart=Never -- echo "hello world"

If the cluster is healthy, the command prints hello world and then deletes the pod (because of --rm).