This is the fifth day of my participation in the More Text Challenge. For details, see: More Text Challenge.

The road to Kubernetes operations begins with a cluster environment. This article documents the entire process of building a Kubernetes cluster on a single machine using VirtualBox + Ubuntu 16.04, including some problems encountered and their solutions.

About Kubernetes

Here’s an explanation from Wikipedia about Kubernetes:

Kubernetes (often called K8s for short) is an open-source system for automating the deployment, scaling, and management of containerized applications. The system was designed by Google and donated to the Cloud Native Computing Foundation (part of the Linux Foundation).

It is designed to provide “a platform for automated deployment, scaling, and running application containers across host clusters.” It supports a range of container tools, including Docker.

Kubernetes provides service discovery and load balancing, storage orchestration, automated rollouts and rollbacks, automatic bin packing, self-healing, and secret and configuration management.

Basic Environment Preparation

Install VirtualBox

VirtualBox is a powerful virtualization tool that is open source and free. It can be downloaded from the official website, and installation is straightforward, so I won’t go into detail here.

Download the Ubuntu 16 OS image

Here I chose Ubuntu 16.04 as the system image; of course, you can also use other distributions, such as CentOS. The Ubuntu 16.04 image can be downloaded from the official Ubuntu website.

Create three virtual machines

After installing VirtualBox and downloading the Ubuntu 16.04 image, we first need to create three Ubuntu 16.04 virtual machines. The creation wizard is straightforward; just follow it step by step. After the VMs are created, log in to and operate each of them as the root user.

Virtual machine IP addresses

Since we are using virtual machines, we need to configure a network adapter for each one so that every VM can access the Internet. There are two ways:

  1. Use a bridged network adapter: each VM gets an IP address in the host's network segment and can access the Internet.
  2. Use a NAT network plus port forwarding: the network segment can be chosen freely, and the VMs can still access the Internet.

Either method works for giving the VMs Internet access; the adapters can be configured in the VirtualBox UI or from the host command line, as sketched below.
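A rough VBoxManage sketch of both options (the VM name "master" and the host interface name eth0 are assumptions, so adjust them to your environment; repeat for node1 and node2):

# Option 1: bridged adapter -- the VM gets an IP in the host's network segment
VBoxManage modifyvm "master" --nic1 bridged --bridgeadapter1 eth0

# Option 2: a shared NAT network with a segment of your own choosing
VBoxManage natnetwork add --netname k8snet --network "10.0.2.0/24" --enable --dhcp on
VBoxManage modifyvm "master" --nic1 natnetwork --nat-network1 k8snet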

Note that after the cluster is set up, the IP address of each node must remain unchanged; otherwise, the node has to be re-joined to the cluster.

A simple way to ensure this is to save the VM state (suspend) instead of shutting it down, and simply resume it next time.

Within the cluster we use the internal addresses. You can find the internal address of each VM with ifconfig or ip addr:

> ifconfig
enp0s3    Link encap:Ethernet  HWaddr 08:00:27:6f:23:2a
          inet addr:10.0.2.4  Bcast:10.0.2.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fe6f:232a/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:3277016 errors:0 dropped:0 overruns:0 frame:0
          TX packets:3385793 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:1084480916 (1.0 GB)  TX bytes:2079122979 (2.0 GB)

The address of the virtual machine (master) is 10.0.2.4.

Configuring the host name

The Kubernetes node name is determined by the host name, so set the host names of the three VMs to master, node1, and node2 respectively (a host name change only takes effect after the VM is restarted). Also add the name-to-IP mappings to the /etc/hosts file on each VM:

# /etc/hosts
10.0.2.4 master
10.0.2.5 node1
10.0.2.6 node2
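The host name itself can be changed with hostnamectl (a minimal sketch, assuming systemd as on Ubuntu 16.04; run it on each VM with its own name and then reboot):

# On the master VM; use node1 / node2 on the other two machines
hostnamectl set-hostname master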

Passwordless SSH connections

Once the virtual machines are up and running, the first thing we need to do is connect the three of them by configuring passwordless SSH.

First generate SSH public and private keys on one of the virtual machines:

ssh-keygen -t rsa -C '[email protected]' -f ~/.ssh/id_rsa -q -N ''

Parameter description for ssh-keygen: -t rsa selects the key type, -C adds a comment, -f ~/.ssh/id_rsa specifies where the key pair is generated, -q enables quiet mode, and -N '' sets an empty passphrase on the private key.

Distribute the key pair to the other two VMs, append the public key (~/.ssh/id_rsa.pub) to the ~/.ssh/authorized_keys file on all three VMs, and set the permissions of ~/.ssh/authorized_keys to 400:

cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 400 ~/.ssh/authorized_keys
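If password login is still available, one way to distribute the key pair from the master is with scp and ssh-copy-id (a sketch, assuming root password login is permitted on node1 and node2):

# Share the same key pair with node1 (repeat for node2)
ssh root@node1 'mkdir -p ~/.ssh'
scp ~/.ssh/id_rsa ~/.ssh/id_rsa.pub root@node1:~/.ssh/
# Append the public key to authorized_keys on node1
ssh-copy-id -i ~/.ssh/id_rsa.pub root@node1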

After the configuration is complete, you can connect from one VM to another as follows:

# on the master node
ssh root@node1

Kubernetes cluster setup

With the three virtual machines in place, we are ready to start building a three-node Kubernetes cluster.

Install Docker

apt-get update -y
apt-get install -y \
  apt-transport-https \
  ca-certificates \
  curl \
  gnupg \
  lsb-release
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo \
  "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

# INSTALL DOCKER ENGINE
apt-get update -y
apt-get install -y docker-ce docker-ce-cli containerd.io

# Configure Docker to start on boot
systemctl enable docker.service
systemctl enable containerd.service

# Start Docker
systemctl start docker
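To confirm that Docker was installed and is running, you can check, for example:

# Show client/server versions and run a throwaway test container
docker version
docker run --rm hello-world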

Install Kubeadm, Kubelet, and Kubectl

The Aliyun (Alibaba Cloud) mirror source is used here:

# Update the apt package index and install the packages needed to use the Kubernetes apt repository
apt-get update -y
apt-get install -y apt-transport-https ca-certificates curl

# Download the Google Cloud public signing key
# curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -

# Add Kubernetes apt repository
# echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
echo "deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list

# Update the apt index, install kubelet, kubeadm and kubectl, and pin their versions
apt-get update -y
apt-get install -y kubelet kubeadm kubectl
apt-mark hold kubelet kubeadm kubectl
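You can verify the installed versions before moving on, for example:

kubeadm version
kubelet --version
kubectl version --client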

Disable swap

Edit the /etc/fstab file and comment out the swap configuration:

#/dev/mapper/master--vg-swap_1 none            swap    sw              0       0
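Commenting out the fstab entry only takes effect after a reboot; to turn swap off immediately you can also run:

# Disable swap for the current boot and confirm none is active
swapoff -a
free -m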

Pre-download the images

Get the list of images required by kubeadm init:

> kubeadm config images list
k8s.gcr.io/kube-apiserver:v1.21.1
k8s.gcr.io/kube-controller-manager:v1.21.1
k8s.gcr.io/kube-scheduler:v1.21.1
k8s.gcr.io/kube-proxy:v1.21.1
k8s.gcr.io/pause:3.4.1
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns/coredns:v1.8.0

The default registry k8s.gcr.io is not reachable for users in mainland China, but we can first mirror the images to a domestic registry, or to any registry we can reach, such as Aliyun's Container Registry (ACR) or Docker's official registry Docker Hub.

We can create a GitHub repository with a single Dockerfile that looks like this:

FROM k8s.gcr.io/kube-apiserver:v1.21.1

Then create an image repository in Aliyun Container Registry (ACR) and associate it with this GitHub repository; the image built there is the k8s image we want, such as k8s.gcr.io/kube-apiserver:v1.21.1. However, the image needs to be re-tagged before use.

Once all the required images have been built in ACR, use the following script to quickly pull and re-tag them:

# Pull images from aliyun registry
kubeadm config images list | sed -e 's/^/docker pull /g' -e 's#k8s.gcr.io#registry.cn-shenzhen.aliyuncs.com/k8scat#g' -e 's#/coredns/coredns#/coredns#g' | sh -x

# Tag images
docker images | grep k8scat | awk '{print "docker tag",$1":"$2,$1":"$2}' | sed -e 's#registry.cn-shenzhen.aliyuncs.com/k8scat#k8s.gcr.io#2' | sh -x
docker tag k8s.gcr.io/coredns:v1.8.0 k8s.gcr.io/coredns/coredns:v1.8.0

# Remove images
docker images | grep k8scat | awk '{print "docker rmi",$1":"$2}' | sh -x
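After the script finishes, you can check that every image kubeadm needs is now present locally, for example:

# Compare the required list against the local images
kubeadm config images list
docker images | grep k8s.gcr.io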

Initialize the master node

10.0.2.4 is the master IP address; set the pod network CIDR to 192.168.16.0/20:

> kubeadm init --apiserver-advertise-address=10.0.2.4 --pod-network-cidr=192.168.16.0/20
...
kubeadm join 10.0.2.4:6443 --token ioshf8.40n8i0rjsehpigcl \
    --discovery-token-ca-cert-hash sha256:085d36848b2ee8ae9032d27a444795bc0e459f54ba043500d19d2c6fb044b065
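On the master itself, kubeadm init also prints the usual steps for making kubectl work for the current user; since we operate as root, they boil down to:

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config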

Adding a node

# Run on node1 and node2, using the join command printed by kubeadm init
kubeadm join 10.0.2.4:6443 --token ioshf8.40n8i0rjsehpigcl \
    --discovery-token-ca-cert-hash sha256:085d36848b2ee8ae9032d27a444795bc0e459f54ba043500d19d2c6fb044b065
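If the token has already expired when you join a node (tokens are valid for 24 hours by default), you can generate a fresh join command on the master:

kubeadm token create --print-join-command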

Distribute the kubectl configuration file

# On node1 and node2: copy the kubeconfig from the master and point kubectl at it
scp master:/etc/kubernetes/admin.conf /etc/kubernetes/admin.conf
echo 'export KUBECONFIG="/etc/kubernetes/admin.conf"' >> /etc/profile
source /etc/profile
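With the kubeconfig in place, verify that all three nodes have registered (they will usually show NotReady until the network plugin below is installed):

kubectl get nodes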

Install the network plugin

Here we use Weave Net:

# curl -L "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')" > weave-net.yaml

# With IPALLOC_RANGE
kubectl apply -f https://gist.githubusercontent.com/k8scat/c6a1aa5a1bdcb8c220368dd2db69bedf/raw/da1410eea6771c56e93f191df82206be8e722112/k8s-weave-net.yaml
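Once the Weave Net pods are running, the nodes should switch to Ready; you can check with, for example:

kubectl get pods -n kube-system -o wide
kubectl get nodes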