Hello everyone, welcome to Xiaocai's self-study classroom. Here, knowledge is free to absorb, no holding back! Following is free too, so go for it ~ and remember to give me a like, a bookmark, and a follow!

This paper mainly introduces the construction of Kubernetes cluster

Refer to it if necessary

If it helps, don't forget to give it a like ~

The WeChat official account "Xiaocai Liangji" is now live; if you haven't followed it yet, remember to follow!

Reading this article requires a basic understanding of Docker! Docker: I’m good again!

I believe you have heard of K8s, and perhaps even used it. If the title made you curious, don't scroll past, because I really do urge you to learn Kubernetes (K8s). The title is not clickbait either: K8s clusters broadly fall into two categories:

  • One master, multiple slaves: one Master node and multiple Node nodes. Easy to set up, but the single master is a single point of failure
  • Multiple masters, multiple slaves: multiple Master nodes and multiple Node nodes. Harder to set up, but more reliable

Whether you go with one master and many slaves or many masters and many slaves, you need at least three servers, and each should have at least 2 GB of memory and 2 CPUs. However, not everyone wants to spend money on servers just for everyday practice. Hence the title ~ below, Xiaocai will show you a more economical way to learn how to build a K8s cluster!

I urge you to learn K8s, and that is not empty talk. Cloud native is no longer a new term; it marks out a new development path: agile, scalable, replicable, and Kubernetes has become practically synonymous with that path! This article is not only about building Kubernetes: readers who already know Kubernetes can jump straight to the cluster-building part, while those who don't are advised to read the first half for an overview first.

Kubernetes

I. Getting to know K8s

Some of you might be a little puzzled: why do I say both Kubernetes and K8s? Are they the same thing? The answer is yes.

Kubernetes is abbreviated as K8s: the 8 stands for the eight letters "ubernete" between the K and the s.

It is designed to manage containerized applications on multiple hosts in the cloud platform. Its purpose is to make the deployment of containerized applications simple and efficient. It provides a mechanism for application deployment, planning, updating, and maintenance.

Let’s start with the iterative process of deploying the application:

  • Traditional deployment: Deploy the application directly on a physical machine

  • Virtualization deployment: multiple VMs can run on one physical machine, and each VM is an independent environment

  • Containerized deployment: Similar to VMS, but with a shared operating system

When it comes to container deployment, anyone who has learned Docker will think of Docker first. Docker's containerized deployment does bring us a lot of convenience, but it also has problems, and sometimes we deliberately look away from them. Docker is so pleasant to use that it is hard to question it, but we still have to face:

  • If one container fails and stops, how do we ensure high availability by immediately starting another container to replace it?

  • When concurrent traffic rises, can capacity scale out automatically, and when traffic falls, can it scale back in automatically?

  • ...

These container-management problems are collectively called container orchestration problems. The problems we can think of have naturally been solved by someone: Docker launched the Docker Swarm orchestration tool, Apache delivered the Mesos unified resource management tool, and Google released the Kubernetes container orchestration tool, which is what we are talking about today!

1) Advantages of K8s
  • Self-healing: once a container crashes, a new container can be started quickly, in about one second
  • Elastic scaling: The number of running containers in a cluster can be automatically adjusted as required
  • Service discovery: A service can find the services it depends on through automatic discovery
  • Load balancing: If a service starts multiple containers, the load balancing of requests can be implemented automatically
  • Version rollback: If problems are found with a newly released program version, you can immediately roll back to the original version
  • Storage orchestration: You can automatically create storage volumes based on container requirements
2) Components of K8s

A complete Kubernetes cluster is composed of Master (control) nodes and Node (worker) nodes, which is why the cluster topology is divided into one-master-many-slaves and many-masters-many-slaves. Different components are installed on each type of node to provide its services.

1. Master

The control plane of the cluster, responsible for making decisions about (managing) the cluster. It contains the following components:

  • ApiServer: the single entry point for resource operations. It receives user commands and provides authentication, authorization, API registration, and discovery mechanisms
  • Scheduler: responsible for scheduling cluster resources; it places Pods onto the appropriate Node according to the configured scheduling policies
  • ControllerManager: maintains the cluster state, e.g. application deployment, fault detection, automatic scaling, and rolling updates
  • Etcd: stores information about the resource objects in the cluster
2. Node

The data plane of the cluster, which provides the runtime environment for containers (does the work). It contains the following components (see the sketch after this list):

  • Kubelet: Is responsible for maintaining the container lifecycle, i.e. creating, updating, and destroying containers by controlling Docker
  • KubeProxy: provides service discovery and load balancing within the cluster
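Once the cluster in the next section is up, you can actually see most of these components running. A minimal sketch of how to check (pod names and counts depend on your setup; kubelet runs as a systemd service, not a pod):

#List the control-plane and node components that run as pods
[root@master ~]# kubectl get pods -n kube-system
#kubelet itself runs as a systemd service on every node
[root@master ~]# systemctl status kubelet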

After reading the above introduction, we will start to build the K8S cluster next!

II. K8s cluster construction

1) CentOS 7 installation

First we need software:

  • VMware Workstation Pro
  • CentOS 7 ISO image

You can search for and download the virtual machine software yourself; if you have trouble finding it, contact Xiaocai and I will provide it.

The image can be downloaded from Alibaba Cloud's mirror site, as shown below: download address

After the virtual machine software is installed, we can install CentOS 7 inside VMware

  • We choose to create a new virtual machine

  • Select custom installation

Typical installation: VMware will apply the mainstream configuration to the vm operating system, which is very friendly for beginners.

Custom installation: Custom installation can strengthen some resources and remove unnecessary resources. Avoid wasting resources.

  • Hardware compatibility: the default is fine; newer versions are generally backward compatible

  • Select the CentOS image we downloaded

  • Assign a name and installation location to each virtual machine. We need three of them, so I named them master, node01, and node02.

  • Allocate resources to the VM; the minimum requirement is 2 CPU cores and 2 GB of memory

  • The NAT network type is used here

Bridged: the VM and the host sit on the same network segment, as if plugged into the same switch.

NAT: the VM reaches the outside world through the host, sharing the host's connection.

Host-only: the VM can communicate only with the host.

  • Go to the next step and click Finish

  • After the installation, the page looks as follows. Click to start the virtual machine:

  • Select install CentOS7

  • Then you can see the installation process:

  • In a moment, you’ll see a screen that lets you select a language. Here, select Chinese and continue

  • For Software Selection, choose Infrastructure Server; for Installation Destination, choose automatic partitioning

  • Then we need to click on network and host name to go to network configuration

  • In VMware's menu bar, click Edit -> Virtual Network Editor to view the virtual machine's subnet IP

  • Here we add the IPv4 address manually; for the DNS server you can use Alibaba Cloud's public DNS

We need to avoid .255 and .2, which are the broadcast address and the gateway address respectively.

Here’s how I configured it:

Master node: 192.168.108.100

Node01 node: 192.168.108.101

Node02 node: 192.168.108.102

  • After the configuration, select Save and click Finish, then set the host name

Here’s how I configured it:

Master node: master

Node01 node: node01

Node02 node: node02

  • The configuration is as follows

  • After clicking Start Installation, we are taken to the following page, where we configure two items: the root password and (optionally) a user

After the above configuration is complete, restart the server. The other two nodes use the same configuration, so you can simply clone this VM, but remember to change the network configuration and the host name (a hedged example follows the table below) ~ we then have three servers configured as follows:

Host name | IP              | Configuration
master    | 192.168.108.100 | 2 cores, 2 GB memory, 30 GB hard disk
node01    | 192.168.108.101 | 2 cores, 2 GB memory, 30 GB hard disk
node02    | 192.168.108.102 | 2 cores, 2 GB memory, 30 GB hard disk
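If you clone the master VM instead of installing the other two from scratch, here is a minimal sketch of the per-clone changes. The interface name ens33 and the config file path are assumptions; check yours with ip addr first:

#Change the host name on the clone (use node02 on the other clone)
[root@node01 ~]# hostnamectl set-hostname node01
#Change IPADDR in the network config file (interface name ens33 is an assumption)
[root@node01 ~]# vim /etc/sysconfig/network-scripts/ifcfg-ens33
#Restart networking so the new address takes effect
[root@node01 ~]# systemctl restart network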
2) Environment configuration

After completing the server setup above, we can connect with an SSH client and start building the K8s environment

  • Host name resolution

To let cluster nodes call each other by name, we need to configure host name resolution by editing /etc/hosts on each of the three servers, as sketched below
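A minimal sketch of the entries to append, using the IPs and host names chosen above:

#Append to /etc/hosts on master, node01, and node02
192.168.108.100 master
192.168.108.101 node01
192.168.108.102 node02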

  • Time synchronization

The time on all cluster nodes must be accurate and consistent. We can use the chronyd service to synchronize time from the network; all three servers need the same operation, for example:
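A minimal sketch, assuming chrony is already installed (it ships with CentOS 7 by default):

#Start chronyd and enable it at boot
[root@master ~]# systemctl start chronyd
[root@master ~]# systemctl enable chronyd
#Verify that the date is now correct
[root@master ~]# date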

  • Disable the iptables and firewalld services

Kubernetes and Docker generate a large number of iptables rules at runtime. To keep those rules from getting tangled up with the system's own, we simply turn the system firewall services off. Perform the same operation on all three VMs:

#1 Disable the Firewalld service
[root@master ~]# systemctl stop firewalld
[root@master ~]# systemctl disable firewalld
#2 Disable the iptables service
[root@master ~]# systemctl stop iptables
[root@master ~]# systemctl disable iptables
  • Disable SELinux

SELinux is a security service on Linux; if it is not turned off, it can cause all kinds of strange problems during cluster installation

#Permanently disable
[root@master ~]# sed -i 's/enforcing/disabled/' /etc/selinux/config
#Temporarily disable
[root@master ~]# setenforce 0
  • Disable the swap partition

A swap partition is virtual memory: once physical memory is used up, disk space is used as memory instead. Enabling swap can have a negative impact on system performance, so Kubernetes requires every node to disable it. If for some reason you really cannot disable swap, you need to pass extra parameters during cluster installation

#Temporarily disable
[root@master ~]# swapoff -a
#Permanently disable: edit /etc/fstab
[root@master ~]# vim /etc/fstab

Comment out the swap line
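As a hedged illustration, the commented-out line typically looks like this; the exact device path will differ on your system:

#/dev/mapper/centos-swap swap                    swap    defaults        0 0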

  • Modify Linux kernel parameters

We need to modify Linux kernel parameters to add bridge filtering and address forwarding. Edit the /etc/sysctl.d/kubernetes.conf file and add the following configuration:

net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1

After adding, do the following:

#Load the bridge filter module first, otherwise the net.bridge.* keys cannot be applied
[root@master ~]# modprobe br_netfilter
#Reload the configuration we just added
[root@master ~]# sysctl -p /etc/sysctl.d/kubernetes.conf
#Check whether the bridge filter module is loaded successfully
[root@master ~]# lsmod | grep br_netfilter

Perform the operation on all three servers. If the last command prints a line containing br_netfilter, the module was loaded successfully.

  • Configure the IPVS function

Kubernetes services have two proxy modes, one based on iptables and one based on IPVS. IPVS performs significantly better than iptables, but to use it you need to load the IPVS kernel modules manually

#Install ipset and ipvsadm
[root@master ~]# yum install ipset ipvsadm -y

#Write the modules that need to be loaded into a script file
[root@master ~]# cat <<EOF > /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
#Add execute permission to the script file
[root@master ~]# chmod +x /etc/sysconfig/modules/ipvs.modules
#Execute script file
[root@master ~]# /bin/bash /etc/sysconfig/modules/ipvs.modules
#Check whether the corresponding module is successfully loaded
[root@master ~]# lsmod | grep -e ip_vs -e nf_conntrack_ipv4

  • Restart the server
[root@master ~]# reboot
3) Docker installation

Step 1:

#Obtaining the Mirror Source
[root@master ~]# wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo

Step 2:

#Install a specific version of Docker-CE
#Must specify --setopt=obsoletes=0, otherwise yum will automatically install a higher version
[root@master ~]# yum install --setopt=obsoletes=0 docker-ce-18.06.3.ce-3.el7 -y

Step 3:

#Add a configuration file
#The default Cgroup Driver Docker uses is cgroupfs, while Kubernetes recommends using systemd instead of cgroupfs
[root@master ~]# mkdir /etc/docker

Step 4:

#Configure the image accelerator (copy the address from your own Alibaba Cloud container registry console) and, as recommended above, switch the cgroup driver to systemd
[root@master ~]# cat <<EOF > /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors": ["https://xxxx.mirror.aliyuncs.com"]
}
EOF

Step 5:

#Start the docker
[root@master ~]# systemctl enable docker && systemctl start docker

After completing the five steps above, the Docker installation is done, and we are one step closer to success.
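A quick sanity check before moving on (the exact version string will differ):

#Confirm Docker is installed and running
[root@master ~]# docker --version
[root@master ~]# systemctl status docker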

4) Cluster initialization

1. Because the Kubernetes yum repository is hosted abroad and downloads are slow, we switch to a domestic mirror

#Edit /etc/yum.repos.d/kubernetes.repo and add the following configuration
[root@master ~]# vim /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

2. Install kubeadm, kubelet, and kubectl

[root@master ~]# yum install --setopt=obsoletes=0 kubeadm-1.17.4-0 kubelet-1.17.4-0 kubectl-1.17.4-0 -y

3. Configure kubelet's cgroup driver and proxy mode

#Edit /etc/sysconfig/kubelet and add the following configuration
KUBELET_CGROUP_ARGS="--cgroup-driver=systemd"
KUBE_PROXY_MODE="ipvs"
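The original steps do not mention it, but it is common to also enable kubelet at boot; a hedged extra step:

#Enable kubelet to start on boot (kubeadm will start it during init/join)
[root@master ~]# systemctl enable kubelet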

4. This step initializes the cluster, so it only needs to be performed on the master server. All of the steps above must be performed on every server!

#Create the cluster
#Because the default image registry k8s.gcr.io cannot be reached from China, the Alibaba Cloud image repository is specified here
[root@master ~]# kubeadm init \
  --apiserver-advertise-address=192.168.108.100 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version=v1.17.4 \
  --pod-network-cidr=10.244.0.0/16 \
  --service-cidr=10.96.0.0/12
#Use the Kubectl tool
[root@master ~]# mkdir -p $HOME/.kube
[root@master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config

Then we need to join the Node machines to the cluster. On each node server, run the kubeadm join command that kubeadm init printed at the end of its output (the command shown in the red box of the screenshot above):

[root@node01 ~]# kubeadm join 192.168.108.100:6443 --token XXX \
    --discovery-token-ca-cert-hash sha256:XXX

Then you can check the node information from the master node:

However, the nodes show a NotReady status, because the network plugin has not been installed yet
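The original screenshot is missing; a minimal check looks like this, with each node's STATUS column still reading NotReady at this point:

#Run on the master
[root@master ~]# kubectl get nodes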

5. Install the network plug-in

Kubernetes supports a variety of network plugins, such as Flannel, Calico, and Canal. Flannel is chosen here

flanneld-v0.13.0-amd64.docker

After the download is complete, upload it to the master server and run the following command

[root@master ~]# docker load < flanneld-v0.13.0-amd64.docker

After the load completes, you can see the flannel image in the local image list:
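A quick way to confirm (the tag should match the file you loaded):

#The loaded flannel image should now appear in the list
[root@master ~]# docker images | grep flannel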

Then we need to obtain the Flannel configuration file to deploy the Flannel service

[root@master ~]# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

#Start Flannel using the configuration file
[root@master ~]# kubectl apply -f kube-flannel.yml

#Check the cluster node status again
[root@master ~]# kubectl get nodes

At this point all nodes are in the Ready state; our K8s cluster build is complete!

5) Cluster function verification

Now it is verification time. When learning Docker we often started an Nginx container to test that everything worked; in K8s we likewise deploy an Nginx to test that the service is available ~

(The following is just a test example. If you are not clear about what each command does, don't worry; we will publish a K8s tutorial later that explains how to use it!)

  • First we create a Deployment
[root@master ~]# kubectl create deployment nginx --image=nginx:1.14-alpine
deployment.apps/nginx created

[root@master ~]# kubectl get deploy
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   1/1     1            1           31s
  • Then create a service to allow the outside world to access our nginx service
[root@master ~]# kubectl expose deploy nginx --port=80 --target-port=80 --type=NodePort
service/nginx exposed

[root@master ~]# kubectl get svc 
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
nginx        NodePort    10.110.224.214   <none>        80:31771/TCP   5s

Then we access our nginx service using the node IP and the nodePort exposed by the service:
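The original showed a browser screenshot; an equivalent check from any machine that can reach the nodes. The port 31771 comes from the service output above and will differ on your cluster:

#Access nginx through any node's IP plus the NodePort
[root@master ~]# curl http://192.168.108.100:31771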

We can also access our service directly from the cluster using the service IP and the mapped port:
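A hedged equivalent from a cluster node, using the ClusterIP shown above:

#Access nginx through the service's ClusterIP (reachable only from within the cluster)
[root@master ~]# curl http://10.110.224.214:80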

At this point, the nginx service has been deployed successfully.


So why can we access nginx? Let's combine the K8s components introduced above and trace how they call one another:

  1. After kubernetes is started, both the master node and the Node store their information in the ETCD database

  2. To create an Nginx service, the installation request is first sent to the apiServer component on the master node

  3. The apiServer component calls the Scheduler component to decide which Node the service should be installed on. This is where the etcd database comes in: the Scheduler reads each Node's information from etcd, selects one according to a scheduling algorithm, and reports the result back to apiServer

  4. apiServer then calls the controllerManager to have the chosen Node install the Nginx service

  5. When the Kubelet component on the node receives the instruction, it notifies docker, which then launches an Nginx pod

    A Pod is the smallest unit of operation in Kubernetes; containers run inside Pods

  6. Once the above steps are complete, the nginx service is up and running. If you need to access Nginx, you need to use kube-proxy to generate a proxy for pod access, so that external users can access the nginx service
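If you want to see steps 3-5 reflected on a real cluster, the pod's event log records the scheduling and kubelet actions. A hedged way to look; the pod name is whatever kubectl get pods shows for your nginx deployment:

#Find the generated nginx pod name
[root@master ~]# kubectl get pods
#The Events section at the bottom shows the Scheduled, Pulled, Created, and Started events
[root@master ~]# kubectl describe pod <nginx-pod-name>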

That is the whole process of running a service. I wonder whether you feel a sense of awe after reading it; the design really is clever. That's it for now; get ready for the next article on how to use K8s! If you plan to keep watching, give me a follow oh!

END

The above is the whole K8s cluster build process. With a K8s environment of your own, there is nothing to fear in learning how to use K8s! Tinker as much as you like in your own virtual machines; if something breaks, restoring a snapshot fixes it ~ I am Xiaocai; the road is long, and I will walk it together with you!

Work a little harder today, and you will have one less favor to ask for tomorrow!

I am Xiaocai, a guy learning alongside you. 💋

The WeChat official account "Xiaocai Liangji" is live; if you haven't followed it yet, remember to follow!