1. Resource preparation

System: CentOS 7.9.2009

Host name     IP               Components
k8s-master1   192.168.219.161  kube-apiserver, kube-controller-manager, kube-scheduler, etcd
k8s-node1     192.168.219.162  kubelet, kube-proxy, docker, etcd
k8s-node2     192.168.219.163  kubelet, kube-proxy, docker, etcd

2. Software information

Software     Version
docker       19.03.11
kubernetes   1.18.18

3. Docker deployment

Reference: https://segmentfault.com/a/11… Note: remove the line "exec-opts": ["native.cgroupdriver=systemd"], from daemon.json, since the kubelet configuration used later in this guide keeps the cgroupfs driver.

cat > /etc/docker/daemon.json << EOF
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ],
  "registry-mirrors": ["https://n0k07cz2.mirror.aliyuncs.com"]
}
EOF

4. System configuration (master, node)

Reference: https://segmentfault.com/a/11…

5. Deploy the ETCD cluster

etcd is a distributed key-value store. Kubernetes uses etcd to hold all of its cluster data, so prepare an etcd database first. To avoid a single point of failure, etcd should be deployed as a cluster: the three-node cluster used here tolerates one machine failure, while a five-node cluster would tolerate two.

Node name   IP
etcd-1      192.168.219.161
etcd-2      192.168.219.162
etcd-3      192.168.219.163

Note: To save machines, etcd is co-located with the K8s node machines here. It can also be deployed independently of the K8s cluster, as long as the apiserver can reach it.

5.1 Prepare the CFSSL certificate generation tool

CFSSL is an open source certificate management tool that uses JSON files to generate certificates, making it easier to use than OpenSSL. Any server can be used for this step; the Master node is used here.

wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo

5.2 Generate ETCD certificate

5.2.1 Create a working directory

mkdir -p ~/TLS/etcd
cd ~/TLS/etcd

5.2.2 Create a self-signed certificate authority (CA)

Create the self-signed CA configuration:

cat > ca-config.json << EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "www": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
EOF
 
cat > ca-csr.json << EOF
{
    "CN": "etcd CA",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing"
        }
    ]
}
EOF

Generate certificate:

# Generate the certificate
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

# Check the certificates
ls *pem
ca-key.pem  ca.pem

5.2.3 Use self-signed CA to issue ETCD HTTPS certificate

Create certificate application file:

cat > server-csr.json << EOF
{
    "CN": "etcd",
    "hosts": [
        "192.168.219.161",
        "192.168.219.162",
        "192.168.219.163"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing"
        }
    ]
}
EOF

Note: The IPs in the hosts field above are the cluster-internal communication IPs of all etcd nodes; none of them may be omitted. To make later expansion easier, you can also add a few reserved IPs.

Generate certificate:

# Generate the certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server

# Check the certificates
ls server*pem
server-key.pem  server.pem

5.3. Download binaries from GitHub

Download: https://github.com/etcd-io/et…

5.4 Deploy the ETCD cluster

The following is done on node 1. To simplify the operation, all files generated by node 1 will be copied to nodes 2 and 3 later

5.4.1 Create a working directory and extract the binary package

mkdir -p /opt/etcd/{bin,cfg,ssl}
tar zxvf etcd-v3.4.9-linux-amd64.tar.gz
mv etcd-v3.4.9-linux-amd64/{etcd,etcdctl} /opt/etcd/bin/

5.4.2 Create the ETCD configuration file

cat > /opt/etcd/cfg/etcd.conf << EOF
#[Member]
# Use etcd-2 on node 2 and etcd-3 on node 3
ETCD_NAME="etcd-1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
# Change the following URLs to the current server's IP
ETCD_LISTEN_PEER_URLS="https://192.168.219.161:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.219.161:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.219.161:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.219.161:2379"
ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.219.161:2380,etcd-2=https://192.168.219.162:2380,etcd-3=https://192.168.219.163:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF

Description:

  • ETCD_NAME: Node name, unique in the cluster
  • ETCD_DATA_DIR: The data directory
  • ETCD_LISTEN_PEER_URLS: Cluster communication listening address
  • ETCD_LISTEN_CLIENT_URLS: The client access listening address
  • ETCD_INITIAL_ADVERTISE_PEER_URLS: Cluster advertise address
  • ETCD_ADVERTISE_CLIENT_URLS: Client advertise address
  • ETCD_INITIAL_CLUSTER: The address of the cluster node
  • ETCD_INITIAL_CLUSTER_TOKEN: Cluster Token
  • ETCD_INITIAL_CLUSTER_STATE: The state when joining the cluster; new creates a new cluster, existing joins an already running cluster (see the sketch below)
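
For later reference, a hypothetical sketch of how the "existing" state is used when growing the cluster, assuming a new node etcd-4 at 192.168.219.164 that is not part of this deployment (its IP would also need to be in the hosts list of the etcd server certificate):

# Register the new member from any existing node
ETCDCTL_API=3 /opt/etcd/bin/etcdctl \
  --cacert=/opt/etcd/ssl/ca.pem \
  --cert=/opt/etcd/ssl/server.pem \
  --key=/opt/etcd/ssl/server-key.pem \
  --endpoints="https://192.168.219.161:2379" \
  member add etcd-4 --peer-urls=https://192.168.219.164:2380

# On the new node, list all four members in ETCD_INITIAL_CLUSTER in /opt/etcd/cfg/etcd.conf
# and set ETCD_INITIAL_CLUSTER_STATE="existing" before starting etcd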

5.4.3 Systemd manages ETCD

cat > /usr/lib/systemd/system/etcd.service << EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
 
[Service]
Type=notify
EnvironmentFile=/opt/etcd/cfg/etcd.conf
ExecStart=/opt/etcd/bin/etcd \
--cert-file=/opt/etcd/ssl/server.pem \
--key-file=/opt/etcd/ssl/server-key.pem \
--peer-cert-file=/opt/etcd/ssl/server.pem \
--peer-key-file=/opt/etcd/ssl/server-key.pem \
--trusted-ca-file=/opt/etcd/ssl/ca.pem \
--peer-trusted-ca-file=/opt/etcd/ssl/ca.pem \
--logger=zap
Restart=on-failure
LimitNOFILE=65536
 
[Install]
WantedBy=multi-user.target
EOF

5.4.4 Copy the certificate just generated

Copy the certificate you just generated to the path in the configuration file

cp ~/TLS/etcd/ca*pem ~/TLS/etcd/server*pem /opt/etcd/ssl/

5.4.5 Start etcd and enable it at boot

systemctl daemon-reload
systemctl start etcd
systemctl enable etcd

5.4.6. Copy all generated files from node 1 above to nodes 2 and 3

scp -r /opt/etcd/ [email protected]:/opt/
scp /usr/lib/systemd/system/etcd.service [email protected]:/usr/lib/systemd/system/
scp -r /opt/etcd/ [email protected]:/opt/
scp /usr/lib/systemd/system/etcd.service [email protected]:/usr/lib/systemd/system/

Then, on node 2 and node 3, change the node name and the current server IP in the /opt/etcd/cfg/etcd.conf configuration file. Finally, start etcd and enable it at boot, as above.
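
As a possible shortcut, the edit on node 2 (192.168.219.162) can also be done non-interactively; the sed expressions below are only a sketch, so verify the resulting file afterwards:

# Rename the node and point the listen/advertise URLs at this server's IP
sed -i \
  -e 's/ETCD_NAME="etcd-1"/ETCD_NAME="etcd-2"/' \
  -e '/_URLS=/s/192.168.219.161/192.168.219.162/' \
  /opt/etcd/cfg/etcd.conf
# ETCD_INITIAL_CLUSTER is left untouched on purpose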

5.5. Check the cluster status

ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.219.161:2379,https://192.168.219.162:2379,https://192.168.219.163:2379" endpoint health

Output:

https://192.168.219.161:2379 is healthy: successfully committed proposal: took = 8.154404ms
https://192.168.219.163:2379 is healthy: successfully committed proposal: took = 9.044117ms
https://192.168.219.162:2379 is healthy: successfully committed proposal: took = 10.000825ms

If the above information is printed, the cluster deployment is successful. If there is a problem, the first step is to look at the logs: /var/log/messages or journalctl -u etcd

6. Deploy the Master Node

6.1 Create a working directory

mkdir -p ~/TLS/k8s
cd ~/TLS/k8s

6.2 Generate the Kube-Apiserver certificate

6.2.1 Create a self-signed certificate authority (CA)

Create the self-signed CA configuration:

cat > ca-config.json << EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
EOF

cat > ca-csr.json << EOF
{
    "CN": "kubernetes",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF

Generate certificate:

# Generate the certificate
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

# Check the certificates
ls *pem
ca-key.pem  ca.pem

6.2.2 Use a self-signed CA to issue a Kube-Apiserver HTTPS certificate

Create certificate application file:

cat > server-csr.json << EOF
{
    "CN": "kubernetes",
    "hosts": [
      "10.0.0.1",
      "127.0.0.1",
      "192.168.219.161",
      "192.168.219.162",
      "192.168.219.163",
      "192.168.219.164",
      "192.168.219.181",
      "192.168.219.182",
      "192.168.219.188",
      "kubernetes",
      "kubernetes.default",
      "kubernetes.default.svc",
      "kubernetes.default.svc.cluster",
      "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF

Note: The IPs in the hosts field above are all the Master/LB/VIP IPs; none of them may be omitted. To make later expansion easier, you can also add a few reserved IPs.

6.2.3 Generate certificates

# Generate the certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server

# Check the certificates
ls server*pem
server-key.pem  server.pem

6.4 Download binaries from GitHub

6.4.1. Download

Download address: https://github.com/kubernetes…

Note: Open the link and you will find many packages. Downloading the server package alone is enough, as it contains the binaries for both the Master and the Worker Node.

6.4.2 Unpack the binary package

mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs} 
tar zxvf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin
cp kube-apiserver kube-scheduler kube-controller-manager /opt/kubernetes/bin
cp kubectl /usr/bin/

6.4.3 Deploy Kube-Apiserver

cat > /opt/kubernetes/cfg/kube-apiserver.conf << EOF
KUBE_APISERVER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--etcd-servers=https://192.168.219.161:2379,https://192.168.219.162:2379,https://192.168.219.163:2379 \\
--bind-address=192.168.219.161 \\
--secure-port=6443 \\
--advertise-address=192.168.219.161 \\
--allow-privileged=true \\
--service-cluster-ip-range=10.0.0.0/24 \\
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \\
--authorization-mode=RBAC,Node \\
--enable-bootstrap-token-auth=true \\
--token-auth-file=/opt/kubernetes/cfg/token.csv \\
--service-node-port-range=30000-32767 \\
--kubelet-client-certificate=/opt/kubernetes/ssl/server.pem \\
--kubelet-client-key=/opt/kubernetes/ssl/server-key.pem \\
--tls-cert-file=/opt/kubernetes/ssl/server.pem \\
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \\
--client-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--etcd-cafile=/opt/etcd/ssl/ca.pem \\
--etcd-certfile=/opt/etcd/ssl/server.pem \\
--etcd-keyfile=/opt/etcd/ssl/server-key.pem \\
--audit-log-maxage=30 \\
--audit-log-maxbackup=3 \\
--audit-log-maxsize=100 \\
--audit-log-path=/opt/kubernetes/logs/k8s-audit.log"
EOF

Note: In the \\ sequences above, the first backslash is an escape character and the second is the line-continuation backslash; the escape is needed so that the heredoc (EOF) keeps the backslash, and with it the line breaks, in the written file.
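
A minimal illustration of this behaviour, outside the deployment itself (the path /tmp/demo.conf is just an example): with an unquoted EOF delimiter, each \\ collapses to a single \ in the written file, and systemd's EnvironmentFile then treats that trailing \ as a line continuation.

cat > /tmp/demo.conf << EOF
OPTS="--v=2 \\
--log-dir=/tmp"
EOF
cat /tmp/demo.conf
# OPTS="--v=2 \
# --log-dir=/tmp"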

Description:

  • --logtostderr: enable logging
  • --v: log level
  • --log-dir: log directory
  • --etcd-servers: etcd cluster addresses
  • --bind-address: listen address
  • --secure-port: HTTPS secure port
  • --advertise-address: cluster advertise address
  • --allow-privileged: allow privileged containers
  • --service-cluster-ip-range: Service virtual IP address range
  • --enable-admission-plugins: admission control plugins
  • --authorization-mode: authentication and authorization, enabling RBAC authorization and node self-management
  • --enable-bootstrap-token-auth: enable the TLS bootstrap mechanism
  • --token-auth-file: bootstrap token file
  • --service-node-port-range: default port range allocated to NodePort Services
  • --kubelet-client-xxx: client certificate used by the apiserver to access the kubelet
  • --tls-xxx-file: apiserver HTTPS certificate
  • --etcd-xxxfile: certificates for connecting to the etcd cluster
  • --audit-log-xxx: audit log settings

6.4.4 Copy the newly generated certificate

Copy the certificates you just generated to the path referenced in the configuration file:

cp ~/TLS/k8s/ca*pem ~/TLS/k8s/server*pem /opt/kubernetes/ssl/

Enable TLS Bootstrapping. After TLS authentication is enabled on the Master's apiserver, the kubelet and kube-proxy on each Node must use valid CA-issued certificates to communicate with kube-apiserver. When there are many Nodes, issuing these client certificates by hand is a lot of work and also complicates cluster scaling. To simplify this, Kubernetes introduced the TLS Bootstrapping mechanism to issue client certificates automatically: the kubelet requests a certificate from the apiserver as a low-privileged user, and the kubelet's certificate is signed dynamically by the apiserver. This method is therefore strongly recommended on Nodes. It is currently used mainly for the kubelet; for kube-proxy we still issue a certificate ourselves.

TLS Bootstrapping workflow:

Create the token file referenced in the configuration above:

cat > /opt/kubernetes/cfg/token.csv << EOF
c47ffb939f5ca36231d9e3121a252940,kubelet-bootstrap,10001,"system:node-bootstrapper"
EOF

Format: token, username, UID, user group

You can also generate a token yourself and substitute it:

head -c 16 /dev/urandom | od -An -t x | tr -d ' '
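
If you do generate a new token, it must appear both here in token.csv and later as the TOKEN variable when creating bootstrap.kubeconfig (section 7.2.3). A small illustrative helper, assuming token.csv already exists:

NEW_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
# Replace the first (token) column of token.csv in place
sed -i "s/^[^,]*/${NEW_TOKEN}/" /opt/kubernetes/cfg/token.csv
echo ${NEW_TOKEN}   # use this value as TOKEN in section 7.2.3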

6.5. Systemd manages APIServer

6.5.1 Create a Service

cat > /usr/lib/systemd/system/kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-apiserver.conf
ExecStart=/opt/kubernetes/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

6.5.2 Start kube-apiserver and enable it at boot

systemctl daemon-reload
systemctl start kube-apiserver
systemctl enable kube-apiserver

Authorize the kubelet-bootstrap user to request certificates:

kubectl create clusterrolebinding kubelet-bootstrap \
--clusterrole=system:node-bootstrapper \
--user=kubelet-bootstrap

6.6 Deploy kube-controller-manager

6.6.1 Create a configuration file

cat > /opt/kubernetes/cfg/kube-controller-manager.conf << EOF
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--leader-elect=true \\
--master=127.0.0.1:8080 \\
--bind-address=127.0.0.1 \\
--allocate-node-cidrs=true \\
--cluster-cidr=10.244.0.0/16 \\
--service-cluster-ip-range=10.0.0.0/24 \\
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \\
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--root-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--experimental-cluster-signing-duration=87600h0m0s"
EOF

Description:

  • --master: connect to the apiserver through the local insecure port 8080
  • --leader-elect: automatic leader election (HA) when this component runs in multiple instances
  • --cluster-signing-cert-file / --cluster-signing-key-file: the CA used to automatically issue certificates to the kubelet, consistent with the apiserver's CA

6.6.2 Systemd manages controller-manager

cat > /usr/lib/systemd/system/kube-controller-manager.service << EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
 
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-controller-manager.conf
ExecStart=/opt/kubernetes/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
 
[Install]
WantedBy=multi-user.target
EOF

6.6.3 Start kube-controller-manager and enable it at boot

systemctl daemon-reload
systemctl start kube-controller-manager
systemctl enable kube-controller-manager

6.7. Deploy the Kube-scheduler

6.7.1 Create a configuration file

cat > /opt/kubernetes/cfg/kube-scheduler.conf << EOF
KUBE_SCHEDULER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--leader-elect \\
--master=127.0.0.1:8080 \\
--bind-address=127.0.0.1"
EOF

Description:

  • --master: connect to the apiserver through the local insecure port 8080
  • --leader-elect: automatic leader election (HA) when this component runs in multiple instances

6.7.2 The systemd manages the kube-scheduler

cat > /usr/lib/systemd/system/kube-scheduler.service << EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-scheduler.conf
ExecStart=/opt/kubernetes/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

6.7.3 Start kube-scheduler and enable it at boot

systemctl daemon-reload
systemctl start kube-scheduler
systemctl enable kube-scheduler

Check the status of the cluster

All components have been started successfully. Check the current status of cluster components with the kubectl tool:

[root@localhost ~]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok                  
controller-manager   Healthy   ok                  
etcd-2               Healthy   {"health":"true"}   
etcd-1               Healthy   {"health":"true"}   
etcd-0               Healthy   {"health":"true"}  

The output above indicates that the Master node component is functioning properly.

7. Deploy the Worker Node

The following operations are still performed on the Master node, which also acts as a Worker node.

7.1. Create working directory and copy binaries

Create a working directory for all worker nodes:

mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs} 

Copy from master node:

cd kubernetes/server/bin
cp kubelet kube-proxy /opt/kubernetes/bin   # local copy

7.2. Deploy Kubelet

7.2.1 Create a configuration file

cat > /opt/kubernetes/cfg/kubelet.conf << EOF
KUBELET_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--hostname-override=k8s-master1 \\
--network-plugin=cni \\
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \\
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \\
--config=/opt/kubernetes/cfg/kubelet-config.yml \\
--cert-dir=/opt/kubernetes/ssl \\
--pod-infra-container-image=lizhenliang/pause-amd64:3.0"
EOF

Description:

  • --hostname-override: display name, unique within the cluster
  • --network-plugin: enable CNI
  • --kubeconfig: empty path; the file is generated automatically and later used to connect to the apiserver
  • --bootstrap-kubeconfig: used to request a certificate from the apiserver on first start
  • --config: configuration parameter file
  • --cert-dir: directory where kubelet certificates are generated
  • --pod-infra-container-image: image of the container that manages the Pod network

7.2.2. Configuration parameter file

cat > /opt/kubernetes/cfg/kubelet-config.yml << EOF
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS:
- 10.0.0.2
clusterDomain: cluster.local
failSwapOn: false
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /opt/kubernetes/ssl/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
maxOpenFiles: 1000000
maxPods: 110
EOF

7.2.3 Generate the bootstrap.kubeconfig file

KUBE_APISERVER="https://192.168.219.161:6443"   # apiserver IP:PORT
TOKEN="c47ffb939f5ca36231d9e3121a252940"        # must match token.csv

# Create the kubelet bootstrap kubeconfig
kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig
kubectl config set-credentials "kubelet-bootstrap" \
  --token=${TOKEN} \
  --kubeconfig=bootstrap.kubeconfig
kubectl config set-context default \
  --cluster=kubernetes \
  --user="kubelet-bootstrap" \
  --kubeconfig=bootstrap.kubeconfig
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

7.2.4 Copy to configuration file path

cp bootstrap.kubeconfig /opt/kubernetes/cfg

7.2.5. Systemd manages Kubelet

cat > /usr/lib/systemd/system/kubelet.service << EOF
[Unit]
Description=Kubernetes Kubelet
After=docker.service
 
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kubelet.conf
ExecStart=/opt/kubernetes/bin/kubelet \$KUBELET_OPTS
Restart=on-failure
LimitNOFILE=65536
 
[Install]
WantedBy=multi-user.target
EOF

7.2.6 Start kubelet and enable it at boot

systemctl daemon-reload
systemctl start kubelet
systemctl enable kubelet

Approve the Kubelet certificate application and join the cluster

# Check the pending certificate request
kubectl get csr
NAME                                                   AGE   SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-uCEGPOIiDdlLODKts8J658HrFq9CZ--K6M4G7bjhk8A   6m3s  kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending

# Approve the request
kubectl certificate approve node-csr-uCEGPOIiDdlLODKts8J658HrFq9CZ--K6M4G7bjhk8A

# Check the nodes
kubectl get node
NAME          STATUS     ROLES    AGE   VERSION
k8s-master1   NotReady   <none>   7s    v1.18.18

Note: The node stays NotReady because the network plugin has not been deployed yet.
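
To watch the node flip to Ready once the CNI plugin from section 7.5 is running, you can leave a watch open (stop it with Ctrl+C):

kubectl get node -w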

7.4. Deploy Kube-proxy

7.4.1 Create the configuration file

cat > /opt/kubernetes/cfg/kube-proxy.conf << EOF
KUBE_PROXY_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--config=/opt/kubernetes/cfg/kube-proxy-config.yml"
EOF

7.4.2 Configuration parameter file

cat > /opt/kubernetes/cfg/kube-proxy-config.yml << EOF
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
metricsBindAddress: 0.0.0.0:10249
clientConnection:
  kubeconfig: /opt/kubernetes/cfg/kube-proxy.kubeconfig
hostnameOverride: k8s-master1
clusterCIDR: 10.0.0.0/24
EOF

7.4.3 Generate the file of kube-proxy.kubeconfig

Switch working directory:

cd ~/TLS/k8s

Create a kube-proxy certificate request file:

cat > kube-proxy-csr.json << EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

Generate certificate:

# Generate the certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

# Check the certificates
ls kube-proxy*pem
kube-proxy-key.pem  kube-proxy.pem

Generate the kubeconfig file:

KUBE_APISERVER="https://192.168.219.161:6443"

kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials kube-proxy \
  --client-certificate=./kube-proxy.pem \
  --client-key=./kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

Copy to configuration file specified path:

cp kube-proxy.kubeconfig /opt/kubernetes/cfg/

7.4.4. Systemd manages Kube-proxy

cat > /usr/lib/systemd/system/kube-proxy.service << EOF
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-proxy.conf
ExecStart=/opt/kubernetes/bin/kube-proxy \$KUBE_PROXY_OPTS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

7.4.5 Start kube-proxy and enable it at boot

systemctl daemon-reload
systemctl start kube-proxy
systemctl enable kube-proxy

7.5. Deploy CNI network

7.5.1 Prepare CNI binary files first

Download address: https://github.com/containern…

7.5.2 Unzip the binary package and move it to the default working directory

mkdir -p /opt/cni/bin
tar zxvf cni-plugins-linux-amd64-v0.8.6.tgz -C /opt/cni/bin

7.5.3 Deploy CNI network

# Download kube-flannel.yml
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

# The default image registry may be unreachable; switch it to a Docker Hub mirror
sed -i -r "s#quay.io/coreos/flannel:.*-amd64#lizhenliang/flannel:v0.12.0-amd64#g" kube-flannel.yml

# Deploy flannel
kubectl apply -f kube-flannel.yml

# Check the pod (the flannel pod should be Running)
kubectl get pods -n kube-system

# Check the node
kubectl get node
NAME          STATUS   ROLES    AGE   VERSION
k8s-master1   Ready    <none>   41m   v1.18.18

7.6 Authorize Apiserver to access Kubelet

cat > apiserver-to-kubelet-rbac.yaml << EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
      - pods/log
    verbs:
      - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kubernetes
EOF
 
kubectl apply -f apiserver-to-kubelet-rbac.yaml

7.7. Added Worker Node

Copy the deployed Node files to the new Node

7.7.1 On the Master node, copy the Worker Node related files to the new nodes 192.168.219.162/163

scp -r /opt/kubernetes [email protected]:/opt/
scp -r /usr/lib/systemd/system/{kubelet,kube-proxy}.service [email protected]:/usr/lib/systemd/system
scp -r /opt/cni/ [email protected]:/opt/
scp /opt/kubernetes/ssl/ca.pem [email protected]:/opt/kubernetes/ssl

7.7.2. Delete the Kubelet certificate and the KubeconFig file

rm -rf /opt/kubernetes/cfg/kubelet.kubeconfig 
rm -rf /opt/kubernetes/ssl/kubelet*

Note: These files are generated automatically after the certificate request is approved, and each Node's files are different, so they must be deleted and regenerated.

7.7.3. Change the hostname

vi /opt/kubernetes/cfg/kubelet.conf
--hostname-override=k8s-node1
 
vi /opt/kubernetes/cfg/kube-proxy-config.yml
hostnameOverride: k8s-node1
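
The same change can be made non-interactively; a small sketch, assuming the new node should be named k8s-node1:

sed -i 's/--hostname-override=k8s-master1/--hostname-override=k8s-node1/' /opt/kubernetes/cfg/kubelet.conf
sed -i 's/hostnameOverride: k8s-master1/hostnameOverride: k8s-node1/' /opt/kubernetes/cfg/kube-proxy-config.yml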

7.7.4 Start the services and enable them at boot

systemctl daemon-reload
systemctl start kubelet
systemctl enable kubelet
systemctl start kube-proxy
systemctl enable kube-proxy

7.7.5 Approve the new Node's kubelet certificate request on the Master

# Check the pending request
kubectl get csr
NAME                                                   AGE   SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-4ztjsavsrhuyhigqsefxzvozdcnkei-aE2jyTP81Uro   89s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending

# Approve the request
kubectl certificate approve node-csr-4ztjsavsrhuyhigqsefxzvozdcnkei-aE2jyTP81Uro

# Check the node status
kubectl get node
NAME          STATUS   ROLES    AGE   VERSION
k8s-master1   Ready    <none>   65m   v1.18.18
k8s-node1     Ready    <none>   12m   v1.18.18
k8s-node2     Ready    <none>   81s   v1.18.18

Do the same for the k8s-node2 (192.168.219.163) node. Remember to change the hostname!

8. Deploy Dashboard

8.1. Deploy Dashboard

wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta8/aio/deploy/recommended.yaml

By default the Dashboard can only be accessed from within the cluster; change the Service type to NodePort to expose it externally:

vi recommended.yaml
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001
  type: NodePort
  selector:
    k8s-app: kubernetes-dashboard

# Deploy the Dashboard
kubectl apply -f recommended.yaml

To view:

kubectl get pods,svc -n kubernetes-dashboard
NAME                                             READY   STATUS    RESTARTS   AGE
pod/dashboard-metrics-scraper-694557449d-z8gfb   1/1     Running   0          2m18s
pod/kubernetes-dashboard-9774cc786-q2gsx         1/1     Running   0          2m19s

NAME                                TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
service/dashboard-metrics-scraper   ClusterIP   10.0.0.141   <none>        8000/TCP        2m19s
service/kubernetes-dashboard        NodePort    10.0.0.239   <none>        443:30001/TCP   2m19s

Create a Service Account and bind to the default cluster-admin cluster role:

kubectl create serviceaccount dashboard-admin -n kube-system
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')

Log in to the Dashboard with the token from the output above (the Service is exposed on NodePort 30001, i.e. https://NodeIP:30001).
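
If you only want the raw token value, for example to paste into the login form, a convenience one-liner (assuming the secret name contains "dashboard-admin", as created above):

kubectl -n kube-system get secret \
  $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}') \
  -o jsonpath='{.data.token}' | base64 -d; echo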



9. Deploy CoreDNs

9.1. Download coredns.yaml

The coredns.yaml file downloaded with the wget command has syntax problems, so a yaml file without syntax problems is provided here.

CoreDNS is used for Service name resolution inside the cluster. The extraction code of the coredns.yaml download link is: pm5t

9.2 Create CoreDNS

kubectl apply -f coredns.yaml

kubectl get pods -n kube-system
NAME                          READY   STATUS    RESTARTS   AGE
coredns-5ffbfd976d-j6shb      1/1     Running   0          32s
kube-flannel-ds-amd64-2pc95   1/1     Running   0          38m
kube-flannel-ds-amd64-7qhdx   1/1     Running   0          15m
kube-flannel-ds-amd64-99cr8   1/1     Running   0          26m

9.3 DNS resolution test

Run the resolution test:

kubectl run -it --rm dns-test --image=busybox:1.28.4 sh

Output:

If you don't see a command prompt, try pressing enter.
/ # nslookup kubernetes
Server:    10.0.0.2
Address 1: 10.0.0.2 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes
Address 1: 10.0.0.1 kubernetes.default.svc.cluster.local

Resolution works. At this point, the deployment of the single-Master cluster is complete; the next step is to extend this single-Master architecture to multiple Masters.