This article was first published in the WeChat public account “Beauty of Operation and Maintenance” (public account ID: Hi-Linux).

“Beauty of Operation and Maintenance” is a public account with feeling and attitude, dedicated to sharing Linux operation and maintenance related technical articles and publishing cutting-edge technology news for operation and maintenance workers. The core concept of the account is sharing: we believe that only by sharing can our community grow stronger. If you want to be the first to get the latest technical articles, please follow us!

Mike, the author of the public account, is a self-described handyman earning 3,000 yuan a month. He has worked in IT-related fields for 15+ years, is keen on the Internet technology field, identifies with open source culture, and has his own insights into operation and maintenance technology. He is willing to share his accumulated experience and skills with you, so don’t miss the practical content. If you want to contact him, follow the public account for details.

In our article “Using Kind to Quickly Deploy a Kubernetes High Availability Cluster in 5 Minutes”, we described how to use Kind to quickly deploy an out-of-the-box Kubernetes high availability cluster. I believe this artifact has greatly reduced the difficulty and improved the speed of Kubernetes cluster deployment for many readers. Unfortunately, Kind currently only supports quickly building a development or test environment locally, and does not yet support deploying Kubernetes high availability clusters in production environments.

Today, we are going to introduce Sealos, another great tool for deploying Kubernetes high availability clusters in a production environment.

What is Sealos?

Sealos is a simple, clean and lightweight Kubernetes cluster deployment tool written in Go. Sealos supports deploying highly available Kubernetes clusters in production environments.

Sealos architecture diagram

Sealos features and Benefits

  1. Offline installation is supported, and tools and deployment resource packages are separated, facilitating rapid upgrade between different versions.
  2. The certificate validity period is extended to 99 years by default.
  3. The tool is very simple to use.
  4. Supports customized configuration files to flexibly customize cluster environments.
  5. Using the kernel for local load is extremely stable and troubleshooting is extremely simple.

Sealos design principles and working principles

1. Why not implement it with Ansible

Sealos 1.0 was implemented using Ansible, which required installation of Ansible and some Python dependencies and necessary environment configurations.

To address this issue, new versions of Sealos are currently available as binaries. The new version Sealos has no dependencies, right out of the box.

File distribution and remote commands are implemented by calling the corresponding SDK, independent of any other environment.

2. Why not use KeepAlived and HAProxy to implement cluster high availability

Achieving cluster high availability via Keepalived or HAProxy has the following disadvantages.

  1. Inconsistent software sources may cause software versions installed in containers to be inconsistent, which may cause faults such as invalid check scripts.
  2. The installation may not be complete under certain circumstances because of system-dependent library issues.
  3. Relying only on detecting whether the HAProxy process is alive cannot guarantee cluster high availability. The correct approach is to check the healthz status of the ApiServer.
  4. Keepalived may run into situations where it occupies excessive CPU.
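The healthz-based check described in point 3 can be sketched in a few lines of shell. The `parse_healthz` helper and the probe address are illustrative assumptions, not Sealos's own code:

```shell
# Sketch of a healthz-based master check: trust the ApiServer's own
# health endpoint rather than the liveness of a proxy process.
parse_healthz() {
  # a healthy apiserver answers GET /healthz with the literal body "ok"
  [ "$1" = "ok" ]
}

# In a real probe the body would come from something like:
#   body=$(curl -sk https://10.103.97.200:6443/healthz)
body="ok"
if parse_healthz "$body"; then
  echo "apiserver healthy"
else
  echo "apiserver unhealthy"
fi
```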

3. Why not implement local load balancing with Envoy or Nginx

Sealos achieves high availability through local load balancing. Local load balancing can be implemented in various ways, such as IPVS, Envoy and Nginx; Sealos uses kernel-level IPVS.

Local load balancing: a load balancer runs on each Node and forwards to the multiple Master nodes in the cluster.

Sealos chose to be implemented through kernel IPVS for several reasons:

  • Solutions such as Envoy require running a process on every node, consuming more resources. Although IPVS also adds one extra process, lvscare, lvscare is only responsible for maintaining IPVS rules, similar in principle to Kube-Proxy. The real traffic is forwarded directly at the kernel level, without packets first being processed in user space.
  • Envoy has startup-ordering problems; for example, Kubelet fails to start when joining a cluster if the load balancer is not yet up. IPVS has no such problem, because the forwarding rules can be established before joining the cluster.
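As a rough sketch of what rule maintenance looks like, the script below only prints the ipvsadm commands that would map a local virtual server to every master (actually applying them needs root and the ipvsadm tool). The VIP uses the documented --vip default, and the master addresses match the example cluster in the diagram below; both are illustrative.

```shell
# Print (rather than apply) the IPVS rules a lvscare-like daemon would
# maintain: one virtual service, one real server per master.
VIP="10.103.97.2:6443"
MASTERS="10.103.97.200 10.103.97.201 10.103.97.202"

print_ipvs_rules() {
  # -A: add the virtual service, round-robin scheduling
  echo "ipvsadm -A -t $VIP -s rr"
  for m in $MASTERS; do
    # -a: add a real server behind the virtual service
    echo "ipvsadm -a -t $VIP -r $m:6443 -m"
  done
}

print_ipvs_rules
```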

3.1 Working Principles of local kernel load

Sealos implements per-node load balancing access to all Master nodes through local kernel load, as shown in the figure below.

+----------+                    +------------+  virtual server: 127.0.0.1:6443
| master0  |<-------------------| ipvs nodes |  real servers:
+----------+                    +------------+    10.103.97.200:6443
                                |                 10.103.97.201:6443
+----------+                    |                 10.103.97.202:6443
| master1  |<-------------------+
+----------+                    |
                                |
+----------+                    |
| master2  |<-------------------+
+----------+

The IPVS rules are guarded by a static Pod containing lvscare that runs on every Node. If lvscare detects that an ApiServer has become unavailable, it automatically removes the corresponding IPVS forwarding rule for that Master node from all Node nodes, and re-creates the rule once the Master recovers. To support this, the following is added on each Node node.

# a lvscare static Pod was added
$ cat /etc/kubernetes/manifests
# some IPVS rules were created automatically
$ ipvsadm -Ln
# a resolution entry for the virtual IP was added
$ cat /etc/hosts

4. Why customize Kubeadm

  • The default certificate validity period is only one year.
  • More convenient implementation of local load.
  • With the core functionality integrated into Kubeadm, Sealos itself stays lightweight, handling only file distribution and the execution of upper-level commands.

5. Sealos implementation process

  1. Copy the offline installation package to the target machines, including all Master and Node nodes, via SFTP or the wget command.
  2. Execute the kubeadm init command on the Master 0 node.
  3. Execute the kubeadm join command on the other Master nodes to set up the control plane. During this process, the Etcd instances on the multiple Master nodes automatically form an Etcd cluster, and the corresponding control components are started.
  4. Join all Node nodes to the cluster, and set up the IPVS forwarding rules and the /etc/hosts configuration on them.

A Node accesses the ApiServer through a domain name. Because Node nodes need to connect to the multiple Masters through virtual IP addresses, the ApiServer address that each Node's Kubelet and Kube-Proxy use is different. A domain name is therefore used, and each node resolves it to its own ApiServer IP address.
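A minimal sketch of that per-node hosts record follows; the domain name is an assumed placeholder, and a temp file stands in for /etc/hosts so the snippet is safe to run:

```shell
# Maintain a single hosts record pointing a fixed apiserver domain at
# this node's virtual IP. Domain and VIP are illustrative assumptions.
HOSTS_FILE="$(mktemp)"                       # stands in for /etc/hosts
APISERVER_DOMAIN="apiserver.cluster.local"
VIP="10.103.97.2"

add_apiserver_record() {
  # drop any stale record for the domain, then append the current one
  grep -v " ${APISERVER_DOMAIN}\$" "$HOSTS_FILE" > "${HOSTS_FILE}.tmp" || true
  echo "$VIP $APISERVER_DOMAIN" >> "${HOSTS_FILE}.tmp"
  mv "${HOSTS_FILE}.tmp" "$HOSTS_FILE"
}

echo "127.0.0.1 localhost" > "$HOSTS_FILE"
add_apiserver_record
add_apiserver_record   # idempotent: still exactly one record
grep "$APISERVER_DOMAIN" "$HOSTS_FILE"
```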

Deploy the highly available Kubernetes cluster using Sealos

1. Install environment dependencies

For Kubernetes cluster deployment with Sealos, you need to have the following environment ready.

  1. Install and start Docker on all machines to be deployed.
  2. Download the Kubernetes offline installation package.
  3. Download the latest version of Sealos.
  4. Synchronize the time on all servers.
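Before running Sealos you can sanity-check the machines with a tiny preflight script. This is our own sketch, not part of Sealos; clock comparison across servers is left manual here:

```shell
# Minimal preflight sketch: verify required commands exist and print
# the UTC time so clocks can be compared across servers by hand.
preflight() {
  for cmd in "$@"; do
    if command -v "$cmd" >/dev/null 2>&1; then
      echo "ok: $cmd found"
    else
      echo "missing: $cmd"
    fi
  done
}

preflight docker
date -u +"%Y-%m-%dT%H:%M:%SZ"
```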

Sealos project address: https://github.com/fanux/sealos/releases

Kubernetes offline installation package: https://github.com/sealstore/cloud-kernel/releases/

2. Deploy the highly available Kubernetes cluster via Sealos

Sealos now supports the latest version of Kubernetes 1.16.0 for high availability cluster installations.

2.1 Common Sealos Parameters

--master   list of Master server addresses
--node     list of Node server addresses
--user     SSH user name for the servers
--passwd   SSH password for the servers
--pkg-url  location of the offline package; can be a local path or a remote URL
--pk       location of the SSH private key (default "/root/.ssh/id_rsa")

Other flags:
--kubeadm-config string   kubeadm-config.yaml, used to specify a custom kubeadm configuration file
--vip string              virtual IP under local load (default "10.103.97.2"); changing it is not recommended

2.2 Deploying a Kubernetes cluster with a single primary node

Deploying a Kubernetes cluster through Sealos is very simple and usually requires only the following two instructions to complete the installation.

$ wget https://github.com/fanux/sealos/releases/download/v2.0.7/sealos && \
    chmod +x sealos && mv sealos /usr/bin

$ sealos init --passwd YOUR_SERVER_PASSWD \
    --master 192.168.0.2 --master 192.168.0.3 --master 192.168.0.4 \
    --node 192.168.0.5 \
    --pkg-url https://sealyun.oss-cn-beijing.aliyuncs.com/cf6bece970f6dab3d8dc8bc5b588cc18-1.16.0/kube1.16.0.tar.gz \
    --version v1.16.0

If your server has been configured with SSH password-free login, you can use the corresponding key to deploy it.

$ sealos init --master 192.168.0.2 --node 192.168.0.3 \
    --pkg-url https://YOUR_HTTP_SERVER/kube1.15.0.tar.gz \
    --pk /root/kubernetes.pem \
    --version v1.16.0

If you need offline packages for other Kubernetes versions, you can download them from the Sealos official store at http://store.lameleg.com/.

2.3 Deploy a highly available Kubernetes cluster with multiple primary nodes

$ sealos init --master 192.168.0.2 --master 192.168.0.3 --master 192.168.0.4 \
    --node 192.168.0.5 \
    --user root \
    --passwd your-server-password \
    --version v1.16.0 \
    --pkg-url /root/kube1.16.0.tar.gz

2.4 Verifying the deployment

$ kubectl get node
NAME                      STATUS   ROLES    AGE     VERSION
izj6cdqfqw4o4o9tc0q44rz   Ready    master   2m25s   v1.16.0
izj6cdqfqw4o4o9tc0q44sz   Ready    master   119s    v1.16.0
izj6cdqfqw4o4o9tc0q44tz   Ready    master   63s     v1.16.0
izj6cdqfqw4o4o9tc0q44uz   Ready    <none>   38s     v1.16.0

$ kubectl get pod --all-namespaces
NAMESPACE     NAME                                              READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-5cbcccc885-9n2p8          1/1     Running   0          3m1s
kube-system   calico-node-656zn                                 1/1     Running   0          93s
kube-system   calico-node-bv5hn                                 1/1     Running   0          2m54s
kube-system   calico-node-f2vmd                                 1/1     Running   0          3m1s
kube-system   calico-node-tbd5l                                 1/1     Running   0          118s
kube-system   coredns-fb8b8dccf-8bnkv                           1/1     Running   0          3m1s
kube-system   coredns-fb8b8dccf-spq7r                           1/1     Running   0          3m1s
kube-system   etcd-izj6cdqfqw4o4o9tc0q44rz                      1/1     Running   0          2m25s
kube-system   etcd-izj6cdqfqw4o4o9tc0q44sz                      1/1     Running   0          2m53s
kube-system   etcd-izj6cdqfqw4o4o9tc0q44tz                      1/1     Running   0          118s
kube-system   kube-apiserver-izj6cdqfqw4o4o9tc0q44rz            1/1     Running   0          2m15s
kube-system   kube-apiserver-izj6cdqfqw4o4o9tc0q44sz            1/1     Running   0          2m54s
kube-system   kube-apiserver-izj6cdqfqw4o4o9tc0q44tz            1/1     Running   1          47s
kube-system   kube-controller-manager-izj6cdqfqw4o4o9tc0q44rz   1/1     Running   1          2m43s
kube-system   kube-controller-manager-izj6cdqfqw4o4o9tc0q44sz   1/1     Running   0          2m54s
kube-system   kube-controller-manager-izj6cdqfqw4o4o9tc0q44tz   1/1     Running   0          63s
kube-system   kube-proxy-b9b9z                                  1/1     Running   0          2m54s
kube-system   kube-proxy-nf66n                                  1/1     Running   0          3m1s
kube-system   kube-proxy-q2bqp                                  1/1     Running   0          118s
kube-system   kube-proxy-s5g2k                                  1/1     Running   0          93s
kube-system   kube-scheduler-izj6cdqfqw4o4o9tc0q44rz            1/1     Running   1          2m43s
kube-system   kube-scheduler-izj6cdqfqw4o4o9tc0q44sz            1/1     Running   0          2m54s
kube-system   kube-scheduler-izj6cdqfqw4o4o9tc0q44tz            1/1     Running   0          61s
kube-system   kube-sealyun-lvscare-izj6cdqfqw4o4o9tc0q44uz      1/1     Running   0          86s
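The manual check above can also be scripted. The helper below fails if any line of `kubectl get nodes --no-headers` reports a status other than Ready; canned output is piped in here so the snippet is self-contained:

```shell
# Exit non-zero if any node's STATUS column is not "Ready".
all_nodes_ready() {
  awk '$2 != "Ready" { bad = 1 } END { exit bad }'
}

# In a live cluster: kubectl get nodes --no-headers | all_nodes_ready
printf '%s\n' \
  "izj6cdqfqw4o4o9tc0q44rz Ready master 2m25s v1.16.0" \
  "izj6cdqfqw4o4o9tc0q44uz Ready <none> 38s v1.16.0" |
  all_nodes_ready && echo "all nodes Ready"
```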

2.5 The Simplest and Most Direct Video Tutorial

If you don’t find the tutorial above intuitive enough, here is an easier way to learn: click the video here and get started!

2.6 Upgrading the Kubernetes Cluster Version

Kubernetes is currently iterating rapidly, and each new release brings many new features, so upgrading a Kubernetes cluster is routine work. Sealos provides very convenient functionality to help you complete the upgrade quickly. A Kubernetes cluster upgrade generally requires the following steps:

  1. Upgrade Kubeadm on all nodes and import the new images.
  2. Upgrade Kubelet on the Master nodes.
  3. Upgrade the other Master nodes.
  4. Upgrade the Node nodes.
  5. Verify cluster status.

2.6.1 Upgrading Kubeadm

This step is mainly used to update binary files such as Kubeadm, Kubectl, and Kubelet, and import the image of the new version. The upgrade is as simple as copying the offline package to all nodes and executing the following command.

$ cd kube/shell && sh init.sh

2.6.2 Upgrading Kubelet on the Master Node

Upgrading Kubelet is easy, just copy the new version of Kubelet to /usr/bin to replace the old version and restart the Kubelet service.
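The binary swap can be sketched as below. `install_binary` is our own helper, and the systemctl steps are shown as comments because they need a live node:

```shell
# Copy a new binary into place with executable permissions.
# In practice, stop kubelet before the copy and restart it after:
#   systemctl stop kubelet
#   install_binary ./kube/bin/kubelet /usr/bin
#   systemctl restart kubelet
install_binary() {
  src="$1"
  dst_dir="$2"
  cp "$src" "$dst_dir/$(basename "$src")"
  chmod 0755 "$dst_dir/$(basename "$src")"
}
```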

$ kubeadm upgrade plan
$ kubeadm upgrade apply v1.16.0

The most important kubeadm upgrade apply command does the following.

  • Verify that the cluster can be upgraded and execute the version upgrade policy.
  • Check whether the image in the offline package is available.
  • Upgrade the container of the control component and roll back on failure.
  • Upgrade Kube-DNS and Kube-Proxy.
  • Create a new certificate file and back up the old certificate file.

2.6.3 Upgrading Other Master Nodes

$ kubeadm upgrade apply

2.6.4 Upgrading a Node

Before upgrading a Node, drain it to evict its Pods.

$ kubectl drain $NODE --ignore-daemonsets

Next, update the Kubelet configuration file and upgrade the Kubelet of the Node.

$ kubeadm upgrade node config --kubelet-version v1.16.0

Finally, restore the Node to a schedulable state.

$ kubectl uncordon $NODE

2.6.5 Verifying the Cluster Upgrade

$ kubectl get nodes

If the output node version information is the same as the upgraded version, everything is done!

3. Clear the cluster

If you need to quickly clean up your deployed Kubernetes cluster environment, you can use the following command to do it quickly.

$ sealos clean \
    --master 192.168.0.2 \
    --master 192.168.0.3 \
    --master 192.168.0.4 \
    --node 192.168.0.5 \
    --user root \
    --passwd your-server-password

This concludes the basic approach to quickly deploy a production-grade Kubernetes high availability cluster using Sealos. If you are interested in Sealos, you can also explore more advanced features on the Sealos website.

What other, more efficient ways do you know to rapidly deploy highly available Kubernetes clusters in a production environment? Welcome to discuss in the comments!