Kubernetes, the most disruptive container orchestration technology of recent years, is widely used in enterprise production environments. Compared with the Docker Swarm approach of previous years, Kubernetes manages containers from a higher-level perspective, which makes projects more portable and the architecture easier to scale.

A production environment pays much more attention to the high availability of the cluster. Unlike a test environment with a single master node, a production environment needs at least two master nodes and two worker nodes, so that when one master fails, the kubelet on each worker can still reach the apiserver and the other components on the remaining master.

The K8s cluster built on this basis is as follows:

k8s-master1   192.168.175.128

k8s-master2   192.168.175.148 (newly added)

k8s-node1     192.168.175.130

k8s-node2     192.168.175.131

1. High availability principle

Configure a new master node, then install nginx on each worker node. Nginx uses its built-in load balancing to reverse-proxy requests destined for the kube-apiserver across the two k8s-master nodes, which makes the apiserver highly available: when either master goes down, requests are still routed to the other master through the nginx load balancer. High availability for kube-scheduler and kube-controller-manager is achieved by setting the --leader-elect parameter in the configuration files on both masters.

The Kubernetes control-plane services include kube-scheduler and kube-controller-manager. Both use a one-active, multiple-standby high-availability scheme, allowing only one instance of each service to perform work at any given time. Kubernetes implements a simple leader-election mechanism that relies on etcd for the scheduler and controller-manager. If they are started with the --leader-elect parameter, each instance first tries to acquire leadership after startup, and only the instance that becomes leader executes the actual business logic. They create kube-scheduler and kube-controller-manager endpoints respectively (backed by etcd), which record the current leader and the time of the last update. The leader periodically refreshes the endpoint information to keep its leadership. Each standby instance periodically checks the endpoint information, and if it has not been updated within the expected window, it tries to promote itself to leader. The scheduler instances and the controller-manager instances do not communicate with each other; the strong consistency of etcd guarantees that the leader is globally unique even under distributed, highly concurrent conditions.
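For example, the instance currently holding leadership can be inspected through the corresponding endpoints object; the annotation name below is the one used by the endpoints-based election of this generation of Kubernetes and is shown for illustration:

# kubectl get endpoints kube-scheduler -n kube-system -o yaml
# kubectl get endpoints kube-controller-manager -n kube-system -o yaml

The control-plane.alpha.kubernetes.io/leader annotation on each endpoints object records the current holderIdentity and the renewTime of the lease.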

2. Initialization

Host: k8s-master2

Disable the firewall and SELinux, and create the working directories:

# iptables -F
# setenforce 0
# mkdir -pv /opt/kubernetes/{ssl,cfg,bin}

When a new master node is added to the K8s cluster, its IP address must be included in the server certificate request (the hosts/SAN list) so that cluster communication covers this address.
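If the certificates were generated with cfssl (an assumption; the tooling and the file name server-csr.json are not shown above), re-issuing the server certificate on k8s-master1 looks roughly like this:

# vi server-csr.json    # add "192.168.175.148" to the "hosts" array
# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json \
    -profile=kubernetes server-csr.json | cfssljson -bare server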

After the new certificates are issued, distribute them to the existing cluster members (k8s-master1, k8s-node1, and k8s-node2) and restart the relevant services on each node so that every node picks up the new certificates.

Then copy the cluster certificates generated on k8s-master1 to /opt/kubernetes/ssl on k8s-master2.
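For example, from k8s-master1 (assuming the certificates also sit in /opt/kubernetes/ssl there):

# scp /opt/kubernetes/ssl/*.pem root@192.168.175.148:/opt/kubernetes/ssl/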

3. Configure the master components

kube-apiserver, kube-controller-manager, and kube-scheduler are configured in /opt/kubernetes/cfg/. The corresponding systemd unit files and startup options are the same as on k8s-master1.

[root@k8s-master2 ~]# cat /opt/kubernetes/cfg/kube-apiserver
KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://192.168.175.128:2379,https://192.168.175.130:2379,https://192.168.175.131:2379 \
--insecure-bind-address=127.0.0.1 \
--bind-address=192.168.175.148 \
--insecure-port=8080 \
--secure-port=6443 \
--advertise-address=192.168.175.148 \
--allow-privileged=true \
--service-cluster-ip-range=10.10.10.0/24 \
--admission-control=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--kubelet-https=true \
--enable-bootstrap-token-auth \
--token-auth-file=/opt/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/opt/kubernetes/ssl/server.pem \
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/opt/kubernetes/ssl/ca.pem \
--etcd-certfile=/opt/kubernetes/ssl/server.pem \
--etcd-keyfile=/opt/kubernetes/ssl/server-key.pem"

[root@k8s-master2 ~]# cat /opt/kubernetes/cfg/kube-controller-manager
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect=true \
--address=127.0.0.1 \
--service-cluster-ip-range=10.10.10.0/24 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \
--root-ca-file=/opt/kubernetes/ssl/ca.pem"

[root@k8s-master2 ~]# cat /opt/kubernetes/cfg/kube-scheduler
KUBE_SCHEDULER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect"
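The systemd unit files are copied unchanged from k8s-master1. For reference, a minimal kube-apiserver unit matching this layout might look like the sketch below (the binary path /opt/kubernetes/bin is an assumption that matches the directories created earlier); the controller-manager and scheduler units follow the same pattern:

[Unit]
Description=Kubernetes API Server
After=network.target

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-apiserver
ExecStart=/opt/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target

Run systemctl daemon-reload after copying the unit files.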

Start the three services: kube-apiserver must be started first; the other two can be started in either order. At this point the new master still cannot obtain the status of the back-end worker nodes.
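Assuming the unit names match those on k8s-master1, starting the services and running a quick health check looks like this:

# systemctl start kube-apiserver              # must be running before the other two
# systemctl start kube-controller-manager
# systemctl start kube-scheduler
# /opt/kubernetes/bin/kubectl get cs          # componentstatuses: scheduler, controller-manager, etcd

kubectl here talks to the local insecure port 127.0.0.1:8080 configured above, so no kubeconfig is needed on the master itself.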

4. Install and configure nginx

Configure the nginx yum repository and install nginx:

# cat > /etc/yum.repos.d/nginx.repo << 'EOF'
[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/centos/7/$basearch/
gpgcheck=0
EOF
# yum install nginx -y

Nginx is configured for layer-4 load balancing and listens on 127.0.0.1:6443 on each node.
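A minimal sketch of such a stream (layer-4) configuration, assuming the default /etc/nginx/nginx.conf and an upstream name chosen here only for illustration:

stream {
    upstream k8s-apiserver {
        server 192.168.175.128:6443;    # k8s-master1
        server 192.168.175.148:6443;    # k8s-master2
    }
    server {
        listen 127.0.0.1:6443;
        proxy_pass k8s-apiserver;
    }
}

The stream block sits at the top level of nginx.conf, alongside (not inside) the http block; the nginx.org packages ship with the stream module built in. Reload nginx after editing (# nginx -s reload).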

On each worker node, change the server address in the kubeconfig files to point at 127.0.0.1:6443. The kubelet then authenticates against 127.0.0.1:6443 on its own host while communicating with the master, and nginx catches the request and load-balances it across the two master nodes.
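A sketch of the change on each worker node, assuming the kubeconfig files live in /opt/kubernetes/cfg and previously pointed directly at master1's address:

# cd /opt/kubernetes/cfg
# sed -i "s#https://192.168.175.128:6443#https://127.0.0.1:6443#g" *.kubeconfig
# systemctl restart kubelet kube-proxy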

5. Cluster testing

① Verify that the cluster can be accessed normally through the master2 node.

② Manually shut down master1 and check whether the cluster can still be accessed, as sketched below.
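A sketch of both checks (the kubectl path follows the layout used above):

# On k8s-master2 the cluster state comes from the shared etcd, so the
# worker nodes should be listed:
[root@k8s-master2 ~]# /opt/kubernetes/bin/kubectl get nodes

# Power off k8s-master1 and repeat the check; if the nodes stay Ready,
# the kubelets are still reaching an apiserver through nginx:
[root@k8s-master2 ~]# /opt/kubernetes/bin/kubectl get nodes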

Access is still normal, so the high-availability cluster configuration is successful!