If you want to deploy large applications on Docker, the first problem to solve is networking, followed by a series of other complex problems such as cluster management and load balancing. Container orchestration and deployment tools are needed to address these issues.

I. Container orchestration and deployment tools

Container management tools cover basic container management, but containers are not only for simple application deployments. When containers are used for enterprise-level deployments involving many application systems, more sophisticated tools are needed to orchestrate how the containers run. These are container orchestration and deployment tools.

Common container orchestration and deployment tools are:

(1) The Docker "three musketeers": Docker Machine, Docker Compose, and Docker Swarm

(2) Mesos + Marathon

(3) Kubernetes

II. Introduction to Kubernetes

Kubernetes, K8s for short (because there are eight letters between the "K" and the "s"), is the open-source version of Google's internal Borg system and is currently the mainstream container orchestration and deployment tool.

K8s is a portable and extensible open-source platform for managing containerized applications across multiple hosts. It makes deploying containerized applications simple and efficient, provides mechanisms for application deployment, planning, updating, and maintenance, and can automate the deployment and scaling of applications.
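
As a minimal illustration (a sketch only, assuming a working cluster with kubectl already configured; the name demo-nginx and the nginx image are illustrative and not part of the original article), deploying and exposing an application can look like this:

# Create a Deployment from a public image
kubectl create deployment demo-nginx --image=nginx:1.17

# Expose it inside the cluster as a Service on port 80
kubectl expose deployment demo-nginx --port=80

# Check that the Pods are running
kubectl get pods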

A K8s cluster follows the Master and Worker model.

The Master node runs kube-apiserver, kube-controller-manager, kube-scheduler, etcd, and other components. Together they receive client requests, act as the control center for the cluster and its resource objects, monitor container health, schedule resources, and store resource object data.

The Worker nodes run kubelet, kube-proxy, and Docker. The kubelet manages Pods and their containers and regularly reports node resource information to the Master; kube-proxy provides transparent proxying and load balancing for Services; Docker runs the containers.

III. Main functions of K8s

(1) Automatic bin packing

Application containers are deployed automatically based on their resource requirements and the constraints of the runtime environment.

(2) Self-healing

When a container fails, it is restarted.

When a node goes down, its containers are rescheduled and redeployed on other nodes.

If a container fails a health check, it is shut down and receives no traffic until it is running properly again.

(3) Horizontal scaling

Application containers can be scaled up or down with a simple command, through the user interface (UI), or automatically based on CPU usage (a command sketch follows at the end of this feature list).

(4) Service discovery

No additional service discovery mechanism is needed; K8s provides service discovery and load balancing out of the box.

(5) Rolling update

Applications running in containers can be updated all at once or in batches as the application changes.

(6) Version rollback

Applications running in containers can be rolled back to a historical version based on the deployment history.

(7) Secret and configuration management

Secrets and application configuration can be deployed and updated without rebuilding the image, similar to hot deployment.

(8) Storage orchestration

The storage system is mounted and attached automatically, which is especially useful for stateful applications that need data persistence.

The storage system can be a local directory, network storage, or public cloud storage service.
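
To make functions (3), (5), and (6) concrete, here is a hedged sketch of the corresponding kubectl commands, continuing the hypothetical demo-nginx Deployment from the introduction (the names and image tags are assumptions, not from the original article):

# Horizontal scaling: grow or shrink the number of replicas
kubectl scale deployment demo-nginx --replicas=5

# Or scale automatically based on CPU usage
kubectl autoscale deployment demo-nginx --min=2 --max=10 --cpu-percent=80

# Rolling update: change the container image and watch the rollout
kubectl set image deployment/demo-nginx nginx=nginx:1.18
kubectl rollout status deployment/demo-nginx

# Version rollback: return to the previous revision
kubectl rollout undo deployment/demo-nginx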

IV. Quick installation

A. Preparation

1. Disable SELinux so that containers can read the host file system. The change takes effect after a restart.

vim /etc/selinux/config
# Change SELINUX=enforcing to SELINUX=disabled

Note: this change must be made on all machines.
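
If you prefer not to edit the file by hand, the same change can be scripted; a sketch, assuming the stock /etc/selinux/config layout on CentOS 7:

# Switch SELinux to permissive mode immediately (lasts until the next reboot)
setenforce 0
# Persist the change so it survives reboots
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config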

2. Disable the swap partition

The swap partition must be disabled for a Kubernetes cluster deployment, otherwise an error will be reported. The change takes effect after a restart (getting this right cost me two crashed VMs).

vim /etc/fstab

Comment out the line containing swap.

Run free -m to check whether swap is disabled; a value of 0 in the Swap row means it is disabled.

Note: this change must be made on all machines.
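
For reference, swap can also be turned off without rebooting; a sketch, assuming every swap entry in /etc/fstab contains the word "swap":

# Turn off swap immediately
swapoff -a
# Comment out the swap lines so swap stays off after a reboot
sed -i '/swap/s/^/#/' /etc/fstab
# Verify: the Swap row should show 0
free -m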

3. Add bridge filtering

(1) Add bridge filtering and address forwarding

vim /etc/sysctl.d/k8s.conf

net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1

(2) Run the following command to load the br_netfilter module

modprobe br_netfilter

(3) Run the following command to check whether the module is loaded

lsmod | grep br_netfilter

(4) Use the following command to apply the bridge filtering configuration

sysctl -p /etc/sysctl.d/k8s.conf

Note: this change must be made on all machines.
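
Note that modprobe only loads the module until the next reboot. On a systemd-based system such as CentOS 7 the module can also be loaded automatically at boot; an optional sketch, not part of the original steps:

# systemd-modules-load reads this directory at boot and loads the listed modules
echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf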

4. Enable IPVS

(1) Install ipset and ipvsadm

yum -y install ipset ipvsadm

(2) Add modules to be loaded

vim /etc/sysconfig/modules/ipvs.modules

#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4

(3) Run the above script

sh /etc/sysconfig/modules/ipvs.modules

Check whether it is loaded:

lsmod | grep ip_vs

Note: this change must be made on all machines.
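
Many CentOS-oriented guides also make the module script executable and run it in one go; whether it is re-run automatically at boot depends on the distribution, so treat this as an optional convenience:

# Make the script executable, run it, and confirm the modules are present
chmod 755 /etc/sysconfig/modules/ipvs.modules
bash /etc/sysconfig/modules/ipvs.modules
lsmod | grep -e ip_vs -e nf_conntrack_ipv4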

5. Install Docker

--> See the earlier article on Docker installation.

After installation, you need to modify the Docker configuration file:

vim /etc/docker/daemon.json

# Add the following configuration
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}

Note: this change must be made on all machines.
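
After changing daemon.json, Docker has to be restarted for the new cgroup driver to take effect; a quick sketch of the usual commands:

# Restart Docker so the new daemon.json is picked up
systemctl restart docker
# The reported cgroup driver should now be systemd
docker info | grep -i cgroup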

B. Install the K8s cluster

1. Components to install

Three components need to be installed: kubeadm, which initializes and manages the cluster; kubelet, which runs on every node, receives instructions from the API server, and manages the Pod lifecycle; and kubectl, the command-line tool for managing the cluster.

2. Set the Aliyun YUM source

vim /etc/yum.repos.d/k8s.repo

[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

Check the latest available version of kubeadm:

yum list | grep kubeadm

Note: this change must be made on all machines.

3. Install components

Version 1.16.0 is installed here; change the version as required.

yum -y install kubeadm-1.16.0-0 kubelet-1.16.0-0 kubectl-1.16.0-0

Note: the components must be installed on all machines.

4. Configure Kubelet

vim /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--cgroup-driver=systemd"

Configure kubelet to start automatically at boot:

systemctl enable kubelet

Note: this change must be made on all machines.

5. View the images to be downloaded

The images required on the Master machine can be listed with the following command:

kubeadm config images list

They can be downloaded using the following script:

#!/bin/bash
img_list='gotok8s/kube-apiserver:v1.16.0
gotok8s/kube-controller-manager:v1.16.0
gotok8s/kube-scheduler:v1.16.0
gotok8s/kube-proxy:v1.16.0
gotok8s/pause:3.1
gotok8s/etcd:3.3.15-0
gotok8s/coredns:1.6.2'

# Pull the images
for img in ${img_list}
do
       docker pull $img
done

# Re-tag them as k8s.gcr.io images
for img in ${img_list}
do
       docker tag $img k8s.gcr.io${img:7}
done

# Delete the images that are no longer needed
for img in ${img_list}
do
        docker rmi $img
done

Because downloading from the k8s.gcr.io registry requires getting around the firewall, the gotok8s repository is used instead and the images are then re-tagged.

The following images are required on the Worker nodes:

k8s.gcr.io/kube-proxy:v1.16.0
k8s.gcr.io/pause:3.1

The Docker image export/import commands are:

# Export
docker save -o kube-proxy.tar k8s.gcr.io/kube-proxy:v1.16.0
# Import
docker load -i kube-proxy.tar

6. Initialize the cluster

Run the following command on the Master node, specifying the Kubernetes version and the current host's IP address:

kubeadm init --kubernetes-version=v1.16.0 --apiserver-advertise-address=192.168.197.100

Cluster initialization and certificate creation are performed.

If the following information is displayed, the initialization succeeded. The output includes the commands for copying the configuration file and for adding nodes to the cluster.

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.197.100:6443 --token cmx34b.juizw6tp9ptlgg9i \
    --discovery-token-ca-cert-hash sha256:77661093886eb76ffa7595e200a4ce2a5b20f02c164f4946956dff16d941a1e7

(1) Create the ".kube" folder in the root user's home directory and copy the admin configuration into it:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

(2) Run the following command to install the Weave network plug-in

kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"

(3) Add worker nodes to the cluster

Run the following command on each Worker node:

kubeadm join 192.168.197.100:6443 --token cmx34b.juizw6tp9ptlgg9i \
    --discovery-token-ca-cert-hash sha256:77661093886eb76ffa7595e200a4ce2a5b20f02c164f4946956dff16d941a1e7
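
The token printed by kubeadm init expires after 24 hours by default. If it has expired, a fresh join command can be generated on the Master; this is standard kubeadm behavior rather than part of the original steps:

# Print a new, ready-to-paste join command (run on the Master)
kubeadm token create --print-join-command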

7. Verify that the cluster is available

# Get the cluster nodes
kubectl get nodes

# View the cluster health status
kubectl cluster-info
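
If the nodes stay in NotReady, it is usually worth checking that the network plug-in and the other system Pods came up; a small sketch:

# All Pods in the kube-system namespace should eventually be Running
kubectl get pods -n kube-system -o wide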

There is another installation method: installing the K8s cluster from binary files. It is more troublesome; I will look into it when I have time.

= = = = = = = = = = = = = = = = = = = = = = = = = = = = = = =

I’m Liusy, a programmer who likes to work out.

Welcome to follow my WeChat official account [Liusy01] to talk about Java and fitness and to get more advanced Java material. Let's become Java experts together.

That's all for now. Stay tuned.