1. Background

In real production environments it is common to maintain multiple K8S clusters. Constantly switching between environments and nodes hurts efficiency and runs counter to the DevOps idea, so this article shows how to manage multiple K8S clusters from a single node.

2. Prerequisites

  • Know what a K8S context is
  • Understand the K8S kubeconfig file (a minimal skeleton follows this list)
  • At least two K8S clusters
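
A kubeconfig file ties these concepts together: clusters (API server endpoints), users (credentials), and contexts (a named cluster/user pairing). A minimal skeleton, with all names as placeholders, looks like this:

apiVersion: v1
kind: Config
clusters:                      # API server endpoints
- cluster:
    server: https://<api-server>:6443
  name: my-cluster
users:                         # credentials used to authenticate
- name: my-user
  user:
    token: <token>
contexts:                      # bind a cluster to a user (optionally plus a namespace)
- context:
    cluster: my-cluster
    user: my-user
  name: my-context
current-context: my-context    # the context kubectl uses by default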

3. Experiment

3.1 K8S clusters

  • The t34 cluster
[root@t34 ~]# kubectl get nodes
NAME   STATUS   ROLES                      AGE    VERSION
t31    Ready    worker                     156d   v1.14.3
t32    Ready    worker                     70d    v1.14.3
t34    Ready    controlplane,etcd,worker   199d   v1.14.3
t90    Ready    worker                     156d   v1.14.3
t91    Ready    worker                     169d   v1.14.3
  • The node43 cluster
[root@node43 ~]# kubectl  get nodes 
NAME     STATUS   ROLES                      AGE    VERSION
node43   Ready    controlplane,etcd,worker   121d   v1.14.3

3.2 kubeconfig file

By default, kubectl reads its configuration from the kubeconfig file at /root/.kube/config (i.e. ~/.kube/config for root).
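
If the file lives elsewhere, kubectl can be pointed at it explicitly; both of the following are standard kubectl behavior (the file name other-config is hypothetical):

# use a non-default kubeconfig for a single command
kubectl --kubeconfig=/root/.kube/other-config get nodes

# or export KUBECONFIG; multiple files separated by ':' are merged
export KUBECONFIG=/root/.kube/config:/root/.kube/other-config
kubectl config view   # shows the merged result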

  • Node43 cluster
[root@node43 ~]# kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://192.168.5.43/k8s/clusters/c-mg6wm
  name: test
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://192.168.5.43:6443
  name: test-node43
contexts:
- context:
    cluster: test
    user: user-twwt4
  name: test
- context:
    cluster: test-node43
    user: user-twwt4
  name: test-node43
current-context: test
kind: Config
preferences: {}
users:
- name: user-twwt4
  user:
    token: kubeconfig-user-twwt4.c-mg6wm:r7bk54gw2h5vpx6wqwbqrldzhp2nz5lppvf5cfgbgnwffsj7rfkjdp
  • The t34 cluster
[root@t34 canary]# kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://192.168.4.34/k8s/clusters/c-6qgsl
  name: test
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://192.168.4.34:6443
  name: test-t34
contexts:
- context:
    cluster: test
    user: user-czbv6
  name: test
- context:
    cluster: test-t34
    user: user-czbv6
  name: test-t34
current-context: test
kind: Config
preferences: {}
users:
- name: user-czbv6
  user:
    token: kubeconfig-user-czbv6.c-6qgsl:tznvpqkdw7mz6r8276h8zs5hbl45h2bv2g8jwfjqc8qckhgfwwz9rd

3.3 configuration

Configure the cluster, user, and context for node43 on t34.

  • Add the cluster
[root@t34 canary]# kubectl config set-cluster node43 --server=https://192.168.5.43:6443 --insecure-skip-tls-verify=true
Cluster "node43" set.
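
--insecure-skip-tls-verify=true is convenient but disables server certificate checking. If the node43 cluster's CA certificate can be copied to t34, the entry can keep TLS verification on instead; a sketch, with a hypothetical CA path:

kubectl config set-cluster node43 \
  --server=https://192.168.5.43:6443 \
  --certificate-authority=/root/.kube/node43-ca.crt \
  --embed-certs=true
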
  • Add user
[root@t34 canary]# kubectl config set-credentials node43-user --token=kubeconfig-user-twwt4.c-mg6wm:r7bk54gw2h5vpx6wqwbqrldzhp2nz5lppvf5cfgbgnwffsj7rfkjdp
User "node43-user" set.
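The token is the user token copied from node43's own kubeconfig (section 3.2). For clusters that authenticate with client certificates instead, set-credentials accepts those too; a sketch with hypothetical paths:

kubectl config set-credentials node43-user \
  --client-certificate=/root/.kube/node43.crt \
  --client-key=/root/.kube/node43.key \
  --embed-certs=true
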
  • Add the context
[root@t34 canary]# kubectl config set-context node43-context --cluster=node43 --user=node43-user
Context "node43-context" created.
  • View the result
[root@t34 canary]# kubectl config view
apiVersion: v1
clusters:
- cluster:
    insecure-skip-tls-verify: true
    server: https://192.168.5.43:6443
  name: node43
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://192.168.4.34/k8s/clusters/c-6qgsl
  name: test
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://192.168.4.34:6443
  name: test-t34
contexts:
- context:
    cluster: node43
    user: node43-user
  name: node43-context
- context:
    cluster: test
    user: user-czbv6
  name: test
- context:
    cluster: test-t34
    user: user-czbv6
  name: test-t34
current-context: test
kind: Config
preferences: {}
users:
- name: node43-user
  user:
    token: kubeconfig-user-twwt4.c-mg6wm:r7bk54gw2h5vpx6wqwbqrldzhp2nz5lppvf5cfgbgnwffsj7rfkjdp
- name: user-czbv6
  user:
    token: kubeconfig-user-czbv6.c-6qgsl:tznvpqkdw7mz6r8276h8zs5hbl45h2bv2g8jwfjqc8qckhgfwwz9rd
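
kubectl config get-contexts gives a more compact summary; with the config above, its output should look roughly like this (the asterisk marks the current context):

[root@t34 canary]# kubectl config get-contexts
CURRENT   NAME             CLUSTER       AUTHINFO      NAMESPACE
          node43-context   node43        node43-user
*         test             test          user-czbv6
          test-t34         test-t34      user-czbv6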

3.4 test

The current context is test, which points at the cluster test (the t34 cluster) as the user user-czbv6:

[root@t34 canary]# kubectl config current-context
test
[root@t34 canary]# kubectl get nodes
NAME   STATUS   ROLES                      AGE    VERSION
t31    Ready    worker                     156d   v1.14.3
t32    Ready    worker                     70d    v1.14.3
t34    Ready    controlplane,etcd,worker   199d   v1.14.3
t90    Ready    worker                     169d   v1.14.3
t91    Ready    worker                     169d   v1.14.3

Switch to the context node43-context, which points at the cluster node43 (the node43 cluster) as the user node43-user:

[root@t34 canary]# kubectl config use-context node43-context
Switched to context "node43-context".
[root@t34 canary]# kubectl config current-context
node43-context
[root@t34 canary]# kubectl get nodes
NAME     STATUS   ROLES                      AGE    VERSION
node43   Ready    controlplane,etcd,worker   121d   v1.14.3

At this point, two K8S clusters are managed from the t34 node. More K8S clusters can be added in the same way and selected by switching contexts.
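
Switching is not even required for one-off commands: any kubectl invocation can target a context directly with the --context flag, leaving the current context untouched:

# query the node43 cluster without switching
kubectl --context=node43-context get nodes

# still operate on the t34 cluster in the same shell
kubectl --context=test get pods --all-namespaces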

BTW: within a single cluster, contexts can also be used to separate the production and development environments.
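
A minimal sketch of that idea, assuming prod and dev namespaces already exist in the t34 cluster (the context names here are made up):

# one context per environment, differing only in the default namespace
kubectl config set-context t34-prod --cluster=test-t34 --user=user-czbv6 --namespace=prod
kubectl config set-context t34-dev  --cluster=test-t34 --user=user-czbv6 --namespace=dev

# after "kubectl config use-context t34-dev", commands default to the dev namespace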