I used an OCP/OKD cluster, built on a PC server, in my previous experimental environment. Now I need to build a K8S cluster on my Windows workstation. The first thing that comes to mind is using CodeReady Containers (CRC) to build an OKD cluster. However, its resource requirements are really too high (4 CPUs and 9G of memory), and the virtualization software CRC requires, Hyper-V, conflicts with VirtualBox. Given this problem, and although Minikube is available, I decided to manually build a vanilla K8S cluster.

The experimental environment is a virtual machine installed in VirtualBox, with CentOS Linux 8 as the operating system, 4 CPUs, 4G of memory, and a 35G disk. It has two network cards: card 1 is attached to the NAT network so the guest can reach the Internet through the host, while card 2 is attached to the host-only network. The K8S API listens on this network, with the host name cts.zyl.io.

Host name: cts.zyl.io
Host resources: 4C4G, disk: 35G
Network card 1: NAT network, IP obtained via DHCP
Network card 2: host-only network, static IP 192.168.110.6

Install the container runtime

The author is really not interested in the Docker container runtime1, so he plans to install the lightweight CRI-O container runtime instead. As the figure below shows, since the author plans to install K8S 1.18, he chooses CRI-O 1.18.x.

For the CentOS Linux 8 operating system, install the CRI-O container runtime 2 by executing the following command:

sudo dnf -y install 'dnf-command(copr)'
sudo dnf -y copr enable rhcontainerbot/container-selinux
sudo curl -L -o /etc/yum.repos.d/devel:kubic:libcontainers:stable.repo \
    https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable/CentOS_8/devel:kubic:libcontainers:stable.repo
sudo curl -L -o /etc/yum.repos.d/devel:kubic:libcontainers:stable:cri-o:1.18.repo \
    https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable:cri-o:1.18/CentOS_8/devel:kubic:libcontainers:stable:cri-o:1.18.repo
sudo dnf -y install cri-o

Note: The author encountered a bug while installing CRI-O 1.18.1: the conmon path in /etc/crio/crio.conf is /usr/libexec/crio/conmon, but the binary is actually at /usr/bin/conmon, so fix the path and start the crio service with the following commands:

sed 's|/usr/libexec/crio/conmon|/usr/bin/conmon|' -i /etc/crio/crio.conf
systemctl start crio.service
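
Before touching the real config, the substitution can be sanity-checked on a scratch copy. This is an illustrative dry run, not part of the original procedure; the /tmp path and the printf fallback are assumptions for demonstration:

```shell
# Dry-run the conmon path fix on a scratch copy of crio.conf.
# If /etc/crio/crio.conf is absent, create a minimal stand-in line.
cp /etc/crio/crio.conf /tmp/crio.conf 2>/dev/null || \
  printf 'conmon = "/usr/libexec/crio/conmon"\n' > /tmp/crio.conf
# Same substitution as above, applied to the scratch copy.
sed -i 's|/usr/libexec/crio/conmon|/usr/bin/conmon|' /tmp/crio.conf
# Inspect the result; the conmon line should now reference /usr/bin/conmon.
grep conmon /tmp/crio.conf
```

Once the output looks right, apply the same sed to /etc/crio/crio.conf as shown above.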

We also install CRI-O's management tool crictl3, which kubeadm will use to pull images.

wget -O crictl.tgz https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.18.0/crictl-v1.18.0-linux-amd64.tar.gz
tar xf crictl.tgz
mv crictl /usr/local/bin

As shown below, we configure mirror repositories for docker.io and k8s.gcr.io. Note: the mirror configuration for k8s.gcr.io is very important: although we can pass --image-repository to tell kubeadm to download images from the Aliyun mirror repository, the k8s.gcr.io/pause image is still used when pods are deployed, so to avoid any error we configure the mirror repository here.

cat > /etc/containers/registries.conf <<EOF
unqualified-search-registries = ["docker.io", "k8s.gcr.io"]

[[registry]]
  location = "docker.io"
  insecure = false
  blocked = false
  mirror-by-digest-only = false
  prefix = "docker.io"

  [[registry.mirror]]
    location = "<your-docker.io-mirror>"
    insecure = false

[[registry]]
  location = "k8s.gcr.io"
  insecure = false
  blocked = false
  mirror-by-digest-only = false
  prefix = "k8s.gcr.io"

  [[registry.mirror]]
    location = "registry.aliyuncs.com/google_containers"
    insecure = false
EOF
Refer to Network Plugins: when no CNI network plugin is configured for kubelet, the noop plugin it uses relies on a Linux bridge to transfer traffic between containers; the default network plugins of Docker and CRI-O4 likewise rely on a bridge. In that case, load the br_netfilter module and set net.bridge.bridge-nf-call-iptables=1 so that iptables can recognize the traffic forwarded by the bridge. Note: if the SDN we choose later does not rely on a Linux bridge, this is actually negligible, but having it configured causes no problems.

# Load the required modules; these persist across reboots.
cat > /etc/modules-load.d/k8s.conf <<EOF
overlay
br_netfilter
EOF

modprobe overlay
modprobe br_netfilter

# Set up required sysctl params, these persist across reboots.
cat > /etc/sysctl.d/99-kubernetes-cri.conf <<EOF
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

sysctl --system

Install kubeadm

With the CRI-O container runtime ready from the last section, we'll install kubeadm in this section; but before that, let's go through the official Installing kubeadm documentation for some important prerequisites and the remaining steps.

  • While it doesn't make much sense for the single-node cluster built in this article, I'll note it here for completeness: check that each node of the cluster has a unique host name, MAC address and product_uuid (cat /sys/class/dmi/id/product_uuid);
  • Some ports need to be opened in the host firewall. For simplicity, disable the host firewall:
systemctl stop firewalld
systemctl disable firewalld
  • Disable system swap, otherwise kubeadm complains:
cat > /etc/sysctl.d/99-disable-swap.conf <<EOL
vm.swappiness=0
EOL
sysctl --system
swapoff -a
vi /etc/fstab  # comment out the swap line
  • Disable selinux or set it to permissive, otherwise kubeadm complains:
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config

Then, execute the following commands to install kubelet, kubeadm and kubectl, and set kubelet to start on boot.

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
exclude=kubelet kubeadm kubectl
EOF

yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes

systemctl enable --now kubelet


  • Due to network problems, we choose the Aliyun repository as the yum source here; and because there is currently no EL8 repository, we can only choose EL7;
  • At this point kubelet will keep restarting; that is normal behavior: it is waiting for us to run kubeadm to initialize or join an existing K8S cluster.

Currently kubeadm can only auto-detect the cgroup driver for the Docker container runtime; for the CRI-O runtime we configured, we need to configure the cgroup driver for kubelet manually.

When using Docker, kubeadm will automatically detect the cgroup driver for the kubelet and set it in the
/var/lib/kubelet/config.yaml file during runtime.


The automatic detection of cgroup driver for other container runtimes like CRI-O and containerd is work in progress.

We configure CRI-O's cgroup driver as systemd rather than cgroupfs. As the official documentation above indicates, we need to create the /var/lib/kubelet/config.yaml file and set cgroupDriver: systemd in it.

$ cat /etc/crio/crio.conf | grep systemd
cgroup_manager = "systemd"
$ mkdir -p /var/lib/kubelet
$ cat > /var/lib/kubelet/config.yaml <<EOL
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
EOL
$ systemctl daemon-reload
$ systemctl restart kubelet

However, there is a problem: the cgroupDriver value is not preserved after kubeadm init is executed, which causes kubelet's calls to the CRI-O container runtime to fail, as shown below.

% journalctl -u kubelet -f
... RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = cri-o configured with systemd cgroup manager, but did not receive slice as parent: /kubepods/besteffort/pod6407b05153e245d7313ea88bfb3be36a

For this reason, although configuring these parameters in /etc/sysconfig/kubelet, /etc/default/kubelet or /var/lib/kubelet/kubeadm-flags.env is not recommended, we do it anyway:

cat > /etc/sysconfig/kubelet <<EOF
KUBELET_EXTRA_ARGS=--cgroup-driver=systemd
EOF

Note: As mentioned in the "Install the container runtime" section, even if we pass the --image-repository parameter to kubeadm init to specify a mirror repository, the pause image is still pulled from k8s.gcr.io and causes an error. We configured a mirror repository for k8s.gcr.io at the container runtime level, but there is another option: adjust the kubelet configuration to specify the pause image.

# /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS=--pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.2

Create a single control-plane cluster

To create a single control-plane cluster with kubeadm, execute the following command. Note: the pod network segment set by --pod-network-cidr must not overlap with any existing segment, i.e. the segment must be currently idle. Also, because the virtual machine has two network cards and its default route is on the NAT segment, to prevent the cluster API server from listening on the NAT card, we specify the host-only address with --apiserver-advertise-address.

kubeadm init \
    --apiserver-advertise-address=192.168.110.6 \
    --pod-network-cidr=10.244.0.0/16   # any idle segment
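
The same flags can also be expressed as a kubeadm configuration file. This is a sketch under the assumption that the v1beta2 config API shipped with kubeadm 1.18 is used; the pod CIDR is whichever idle segment you picked:

```yaml
# kubeadm-config.yaml -- illustrative equivalent of the CLI flags
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.110.6
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
networking:
  podSubnet: 10.244.0.0/16
```

It would then be applied with kubeadm init --config kubeadm-config.yaml.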

If the command does not report an error, the following message will be displayed on success:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.110.6:6443 --token nh1erl.d8eh61epm8s4y8oj \
    --discovery-token-ca-cert-hash sha256:dce7e5ffc2d3d8662ab48bb1a3eae3fff8e0cbf65784295ac01cf631bbfe5ba1

We execute the following commands to configure a context for the kubectl client tool; the file /etc/kubernetes/admin.conf carries administrator privileges for the whole cluster.

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

At this point, we can check the status of the cluster through kubectl, as follows:

# The cluster creates four namespaces by default:
$ kubectl get namespaces
NAME              STATUS   AGE
default           Active   54m
kube-node-lease   Active   54m
kube-public       Active   54m
kube-system       Active   54m

$ kubectl get pod --all-namespaces
NAMESPACE     NAME                                 READY   STATUS    RESTARTS   AGE
kube-system   coredns-66bff467f8-4hfws             1/1     Running   0          55m
kube-system   coredns-66bff467f8-rm5lp             1/1     Running   0          55m
kube-system   etcd-cts.zyl.io                      1/1     Running   0          56m
kube-system   kube-apiserver-cts.zyl.io            1/1     Running   0          56m
kube-system   kube-controller-manager-cts.zyl.io   1/1     Running   0          56m
kube-system   kube-proxy-zcbjj                     1/1     Running   0          55m
kube-system   kube-scheduler-cts.zyl.io            1/1     Running   0          56m

Note: the etcd, kube-apiserver, kube-controller-manager and kube-scheduler components are deployed as static pods: their manifests are placed in the host directory /etc/kubernetes/manifests, and kubelet automatically loads this directory and starts the pods.

$ ls -l /etc/kubernetes/manifests/
-rw------- 1 root root 1858 Jun  8 20:33 etcd.yaml
-rw------- 1 root root 2709 Jun  8 20:33 kube-apiserver.yaml
-rw------- 1 root root 2565 Jun  8 20:33 kube-controller-manager.yaml
-rw------- 1 root root 1120 Jun  8 20:33 kube-scheduler.yaml
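
A static pod manifest has the same shape as any ordinary pod spec. The following skeleton is purely illustrative of what such a file looks like; kubeadm generates the real ones, and the image tag and flags here are assumptions:

```yaml
# /etc/kubernetes/manifests/<name>.yaml -- illustrative skeleton only
apiVersion: v1
kind: Pod
metadata:
  name: kube-scheduler
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-scheduler
    image: k8s.gcr.io/kube-scheduler:v1.18.3
    command:
    - kube-scheduler
    - --kubeconfig=/etc/kubernetes/scheduler.conf
```

kubelet watches the directory, so editing a file restarts the corresponding pod and deleting the file removes it.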

CoreDNS is deployed as a Deployment, while kube-proxy is deployed as a DaemonSet:

$ kubectl get ds,deploy -n kube-system
NAME                        DESIRED           NODE SELECTOR            AGE
daemonset.apps/kube-proxy   1         ...   60m

NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/coredns   2/2     2            2           60m

CoreDNS deploys two pods, which is unnecessary for our single-node test environment, so we scale it down to 1:

kubectl scale deployment/coredns --replicas=1 -n kube-system

The cluster still lacks an SDN network; we choose Calico, which not only performs well but also supports network policy. Refer to the document Quickstart for Calico on Kubernetes. When the system uses NetworkManager to manage the network, to avoid interference with Calico, create the following file to tell NM not to manage Calico's network interfaces.

cat > /etc/NetworkManager/conf.d/calico.conf <<'EOF'
[keyfile]
unmanaged-devices=interface-name:cali*;interface-name:tunl*
EOF

Next, we execute the following command to deploy Calico. Note: besides deploying Calico directly from the deployment manifest, we can also deploy it through an operator; see the project tigera/operator.

kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml

Wait for its POD to run properly by doing the following:

$ watch kubectl get pods -n kube-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-76d4774d89-bgt86   1/1     Running   0          15m
calico-node-7gjls                          1/1     Running   0          17m

By default the control-plane node is not allowed to schedule pods, but in our single-node cluster we must make it schedulable in order to test anything, so execute the following command to lift this restriction:

kubectl taint nodes --all node-role.kubernetes.io/master-

Deploy applications in the K8S cluster

We have built a minimal K8S cluster that currently contains only the K8S core control components and the Calico SDN network. Although the cluster has only a single node, we configured the master to schedule pods, so we can deploy a test application now.

We are currently in the default namespace; quickly deploy an nginx with the following command:

kubectl create deployment nginx --image=nginx

This NGINX deployment contains only one pod, as shown below:

$ kubectl get pod --show-labels -w
NAME                    READY   STATUS    RESTARTS   AGE   LABELS
nginx-f89759699-lrvzq   1/1     Running   1          15h   app=nginx

Get its IP address and access:

$ kubectl describe pod -l app=nginx | grep ^IP:
IP:           <pod-ip>
$ curl <pod-ip>
...
<title>Welcome to nginx!</title>
...

Create a service for deployment:

$ kubectl expose deploy nginx --port=80 --target-port=80
$ kubectl get svc
NAME    TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
nginx   ClusterIP   ...          <none>        80/TCP    34m


kube-proxy's IPVS mode went GA in K8S v1.11. IPVS has higher performance than iptables, but that does not make iptables useless; in fact, IPVS works in conjunction with iptables.

If we have not yet run kubeadm init to initialize the cluster, we can set kube-proxy to use IPVS mode as follows:

cat > config.yml <<'EOF'
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
kubeProxy:
  config:
    featureGates:
      SupportIPVSProxyMode: true
    mode: ipvs
EOF
kubeadm init --config config.yml
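
On kubeadm 1.18 the same intent is expressed through the KubeProxyConfiguration kind in the kubeadm config file. This is a minimal sketch, assuming the v1beta2 kubeadm API and the v1alpha1 kube-proxy config API:

```yaml
# config.yml -- sketch for newer kubeadm versions
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
```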

For a K8S cluster that is already in use, we execute the following commands on each node to load the IPVS modules. Note: I found these modules already loaded automatically in my experimental environment; if that is the case, nothing needs to be done.

modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
# Make sure they are loaded on startup:
cat > /etc/modules-load.d/ipvs.conf <<'EOF'
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
EOF

Update configMap to adjust mode to ipvs:

$ kubectl edit cm -n kube-system kube-proxy
...
    mode: "ipvs"   # the default "" (empty) means iptables mode
...

Then restart kube-proxy by executing the following command:

kubectl delete pod -n kube-system -l k8s-app=kube-proxy

Finally, we install the ipvsadm tool and verify that the service has been configured by ipvs.

$ yum -y install ipvsadm
$ ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.96.0.1:443 rr
  -> 192.168.110.6:6443           Masq    1      40         0
...

Adjust the cluster CoreDNS configuration

For production environments, the host names of K8S cluster nodes should be resolvable through the DNS system; otherwise applications such as metrics-server may misbehave because host names cannot be resolved. Likewise, the domain names assigned to created Ingresses should be resolvable by DNS; otherwise we would have to modify /etc/hosts manually for every statically configured Ingress.

For the test environment built in this article, we did not set up a separate DNS system; instead we adjusted the CoreDNS of the K8S cluster itself, as shown below.

Referring to the article Custom DNS Entries For Kubernetes, we adjust the coredns ConfigMap to add a zone file for zyl.io and a DNS wildcard for *.app.zyl.io.

$ kubectl -n kube-system edit cm coredns
...
  Corefile: |
    .:53 {
        ...
        file /etc/coredns/zone.zyl.io zyl.io
    }
  zone.zyl.io: |
    zyl.io.  IN SOA root.zyl.io. root.zyl.io. 2020061113 7200 3600 1w 1d
    cts               IN A  192.168.110.6
    *.app.zyl.io. 300 IN A  192.168.110.6

Then, execute the following command to adjust the coredns deployment and mount zone.zyl.io into the container.

$ kubectl -n kube-system edit deployment coredns
...
      volumes:
      - configMap:
          defaultMode: 420
          items:
          - key: Corefile
            path: Corefile
          - key: zone.zyl.io
            path: zone.zyl.io
          name: coredns
        name: config-volume

Next, launch a container containing the nslookup, host commands for testing:

$ kubectl run -it --rm --restart=Never --image-pull-policy='IfNotPresent' \
    --image=infoblox/dnstools:latest dnstools
dnstools# host cts
cts.zyl.io has address 192.168.110.6
dnstools# host x.app.zyl.io
x.app.zyl.io has address 192.168.110.6

To make this DNS system available outside the cluster, we can map its port out via hostNetwork:

$ kubectl -n kube-system edit deployment coredns
      hostNetwork: true
$ netstat -an|grep 53|grep udp
udp6       0      0 :::53                   :::*  

Install the add-on (Add-on)

To enrich the cluster's functionality, this chapter describes some basic add-ons that we can configure on demand, such as the Dashboard, which is not that useful for our testing, and Ingress and persistent storage, which are.

Make containers accessible outside the cluster through Ingress

With Ingress we can map container ports outside the cluster. Referring to the official documentation Ingress Controllers, here we select the Traefik controller.

Now we use Helm v3 to install Traefik, so we install Helm v3 first:

wget -O helm.tgz https://get.helm.sh/helm-v3.2.4-linux-amd64.tar.gz
tar -xf helm.tgz
mv linux-amd64/helm /usr/local/bin/
chmod 755 /usr/local/bin/helm

Refer to the project containous/traefik-helm-chart and install it into a separate traefik namespace by executing the following commands. Note: here we set hostNetwork=true to map the ports out of the cluster on the host network, and we set the --api.insecure argument so we can access the Dashboard (it is not enabled by default for security reasons).

helm repo add traefik https://containous.github.io/traefik-helm-chart
kubectl create namespace traefik
cat > /tmp/values.yaml <<EOF
dashboard:
  enabled: true
  ingressRoute: true

hostNetwork: true

service:
  type: ClusterIP

ports:
  traefik:
    expose: true

additionalArguments:
  - "--providers.kubernetesingress.ingressclass=traefik"
  - "--log.level=DEBUG"
  - "--api.insecure"
  - "--serverstransport.insecureskipverify=true"
EOF
helm -n traefik install -f /tmp/values.yaml traefik traefik/traefik

The command above installs the following objects in the traefik namespace; the ports are mapped outside the cluster via hostNetwork, where 9000 is the Dashboard port, 8000 the HTTP port, and 8443 the HTTPS port.

$ kubectl get pod,svc,ingressroute,deployment -n traefik
NAME                           READY   STATUS    RESTARTS   AGE
pod/traefik-7474bbc877-m9c52   1/1     Running   0          2m35s

NAME              TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                   ...
service/traefik   ClusterIP   ...          <none>        9000/TCP,80/TCP,443/TCP

NAME                                       AGE
ingressroute.traefik.containo.us/traefik   2m35s

NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/traefik   1/1     1            1           2m35s

From port 9000 on the host we can open the Dashboard console, a neat console for viewing status information.

Now we create an Ingress object for the nginx deployed in the default namespace, specifying through the kubernetes.io/ingress.class annotation that it is handled by traefik.

kubectl -n default apply -f - <<EOF
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: nginx.app.zyl.io
    http:
      paths:
      - path: /
        backend:
          serviceName: nginx
          servicePort: 80
EOF

Here we set the Ingress host name to nginx.app.zyl.io; we then add an entry in /etc/hosts pointing that name at the host IP address.

$ cat >> /etc/hosts <<EOF
192.168.110.6 nginx.app.zyl.io
EOF
$ curl nginx.app.zyl.io:8000
...
<title>Welcome to nginx!</title>
...
$ curl 192.168.110.6:8000
404 page not found

Quickly switch between cluster context and namespace

The -n parameter is required for every cross-namespace operation. For users accustomed to the oc command, this is rather inconvenient. The tool ahmetb/kubectx can be used to quickly switch the cluster context and namespace.

There are two ways to install it: method 1, through the kubectl plugin package manager krew; method 2, manual installation.

  • Method 1: install them as kubectl plugins through krew; see here. Let's install krew first:
# krew depends on git:
yum -y install git
# install krew:
(
  set -x; cd "$(mktemp -d)" &&
  curl -fsSLO "https://github.com/kubernetes-sigs/krew/releases/latest/download/krew.{tar.gz,yaml}" &&
  tar zxvf krew.tar.gz &&
  KREW=./krew-"$(uname | tr '[:upper:]' '[:lower:]')_amd64" &&
  "$KREW" install --manifest=krew.yaml --archive=krew.tar.gz &&
  "$KREW" update
)
# configure the environment variable:
cat >> ~/.bashrc <<'EOF'
export PATH="${KREW_ROOT:-$HOME/.krew}/bin:$PATH"
EOF

Then we execute Kubectl Krew Search to verify that Krew is running and to check which plugins are available to install, as shown below:

$ kubectl krew search
NAME                            DESCRIPTION                                         INSTALLED
access-matrix                   Show an RBAC access matrix for server resources     no
advise-psp                      Suggests PodSecurityPolicies for cluster.           no
apparmor-manager                Manage AppArmor profiles for cluster.               no

Next, we install the ctx (switch context) and ns (switch namespace) plugins with the following commands:

kubectl krew install ctx
kubectl krew install ns

After installation, we can run them as kubectl plugins; as shown below, we use ns to quickly switch namespaces.

$ kubectl ns                # list namespaces
default
kube-node-lease
kube-public
kube-system
traefik
$ kubectl ns traefik        # switch to the traefik namespace
Context "kubernetes-admin@kubernetes" modified.
Active namespace is "traefik".
$ kubectl get pod
NAME                       READY   STATUS    RESTARTS   AGE
traefik-7474bbc877-m9c52   1/1     Running   0          130m
$ kubectl ns -              # switch back to the previous namespace
  • Method 2: Install manually.
git clone https://github.com/ahmetb/kubectx ~/.kubectx
COMPDIR=$(pkg-config --variable=completionsdir bash-completion)
ln -sf ~/.kubectx/completion/kubens.bash $COMPDIR/kubens
ln -sf ~/.kubectx/completion/kubectx.bash $COMPDIR/kubectx
cat << FOE >> ~/.bashrc

#kubectx and kubens
export PATH=~/.kubectx:\$PATH
FOE

If we combine them with fzf, we can run the above commands in interactive mode; install fzf by executing the following commands.

# install fzf
git clone --depth 1 https://github.com/junegunn/fzf ~/.fzf
~/.fzf/install

Configure an NFS persistent storage system

In many cases, our applications require persistent data. For simplicity, we choose nfs-server-provisioner to build a persistent NFS storage system. This storage system is usually not production-ready, but it is sufficient for our testing.

With Helm v3 already installed, execute the following command to add the stable Helm repository.

$ helm repo add stable https://kubernetes-charts.storage.googleapis.com
"stable" has been added to your repositories

For the author's single-node K8S cluster, we execute the following command to obtain the node labels, which we will use below to set the nodeSelector when deploying the application.

$ kubectl get node --show-labels
NAME         STATUS   ROLES    AGE   VERSION   LABELS
cts.zyl.io   Ready    master   43h   v1.18.3   kubernetes.io/hostname=cts.zyl.io,...

Prepare a directory /app/nfs/0 for the NFS service built in this section; the directory prepared by the author is only about 10G in size.

mkdir -p /app/nfs/0

Create the following file, in which we schedule the pod to the cts.zyl.io host and set the replica count to 1. To ensure that the volumes provided by NFS are persistent, we enable persistence.enabled: true for the NFS server. Although the directory prepared above is only about 10G, the size here can still be set larger, to 200Gi. We also make the created storage class the default (storageClass.defaultClass: true), so any PVC that does not explicitly specify a storageClass will use this storage class.

cat > /tmp/values.yaml <<'EOF'
replicaCount: 1
nodeSelector:
  kubernetes.io/hostname: cts.zyl.io

persistence:
  enabled: true
  storageClass: "-"
  size: 200Gi

storageClass:
  defaultClass: true
EOF


Deploy NFS Server into a separate namespace, nfs-server-provisioner, by issuing the following command:

kubectl create namespace nfs-server-provisioner
helm -n nfs-server-provisioner install \
     -f /tmp/values.yaml nfs-server-provisioner stable/nfs-server-provisioner

As shown below, the persistent volume claim (PVC) required by the deployment manifest is in the Pending state. We now create a persistent volume (PV) backed by the host's /app/nfs/0 directory and bind it to the PVC.

$ oc get pvc
NAME                            STATUS    VOLUME   CAPACITY   ...
data-nfs-server-provisioner-0   Pending   

$ kubectl -n nfs-server-provisioner create -f - <<EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: data-nfs-server-provisioner-0
spec:
  capacity:
    storage: 200Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /app/nfs/0
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: data-nfs-server-provisioner-0
    namespace: nfs-server-provisioner
EOF

After the pod starts, we can see that a default storage class named nfs has been created in the cluster, and some files appear in the host directory /app/nfs/0.

$ oc get pod
NAME                       READY   STATUS    RESTARTS   AGE
nfs-server-provisioner-0   1/1     Running   0          3s
$ oc get storageclass
NAME            PROVISIONER                            RECLAIMPOLICY  ...
nfs (default)   cluster.local/nfs-server-provisioner   Delete         ...
$ ls /app/nfs/0/
ganesha.log  nfs-provisioner.identity  v4old  v4recov  vfs.conf

Finally, let's create a PVC to verify that this storage class automatically creates a persistent volume (PV) for us. In other words, storage is provisioned dynamically: we only declare how much storage we need, and the underlying volume is created automatically by the storage system.

$ kubectl -n default create -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
EOF

$ kubectl -n default get pvc
NAME       STATUS   VOLUME                                     CAPACITY   ...
test-pvc   Bound    pvc-ae8f8a8b-9700-494f-ad9f-259345918ec4   10Gi       ...

$ kubectl get pv
pvc-ae8f8a8b-...   10Gi       RWO     ... nfs

Next, we execute kubectl -n default delete pvc test-pvc, then observe the PV and find that it is automatically released.

Install the native K8S Dashboard console

This section installs the native K8S dashboard via Helm; the installation method is shown here. First, add the Helm repositories:

helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/
helm repo add stable https://kubernetes-charts.storage.googleapis.com

According to the Kubernetes monitoring architecture, to display pod CPU and memory information on the console we need to install metrics-server5; but if we choose coreos/kube-prometheus to monitor the cluster, we can disable metrics-server, because kube-prometheus already includes its functionality.

The kube-prometheus stack includes a resource metrics API server, so the metrics-server addon is not necessary. Ensure the metrics-server addon is disabled on minikube:

Here we choose to deploy metrics-server, which only collects core cluster data such as resource usage. Although its functionality is not as complete as coreos/kube-prometheus, the collected data can be used by Dashboard to present resource status, and also by CPU- and memory-based horizontal (HPA) and vertical (VPA) pod autoscaling.
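
As an illustration of the HPA use case just mentioned, a CPU-based autoscaler for the nginx deployment from earlier could look like the following sketch. It is not part of the original setup; the thresholds and replica bounds are arbitrary:

```yaml
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx
  minReplicas: 1
  maxReplicas: 3
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80
```

With metrics-server providing CPU usage, the controller scales the deployment between 1 and 3 replicas to hold average utilization near 80%.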

As shown below, before installing the Dashboard we install metrics-server into a separate kube-monitoring namespace. Since our CA certificate is self-signed, we need to start metrics-server with the --kubelet-insecure-tls flag.

kubectl create namespace kube-monitoring
cat > /tmp/values.yaml <<EOF
args:
  - --kubelet-insecure-tls
EOF
helm -n kube-monitoring -f /tmp/values.yaml \
     install metrics-server stable/metrics-server

After waiting for POD to start successfully, execute the following command to verify that data is available.

$ kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes"
$ kubectl top node
NAME         CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
cts.zyl.io   273m         6%     1518Mi          41%
$ kubectl top pod
NAME                              CPU(cores)   MEMORY(bytes)   
metrics-server-5b6896f6d7-snnbv   3m           13Mi   

To install the Dashboard, we enable metricsScraper, which fetches performance data from metrics-server to render on the console. Configuring Ingress with HTTPS is a bit tricky, so for convenience we use NodePort to map the port out of the cluster.

kubectl create namespace kube-dashboard
helm -n kube-dashboard install dash \
    --set metricsScraper.enabled=true \
    --set service.type=NodePort \
    kubernetes-dashboard/kubernetes-dashboard

As shown below, the host port assigned in NodePort mode is 30808, so we can access the console at https://192.168.110.6:30808.

$ oc get svc
NAME                        TYPE       CLUSTER-IP   EXTERNAL-IP   PORT(S)
dash-kubernetes-dashboard   NodePort   ...          <none>        443:30808/TCP

As shown below, there are two ways to log in to the console. We log in with a token; the steps to obtain one are described in the official document Creating sample user, and this article will not repeat them.

Here is an overview of a successful login:

  1. CRI-O: the default container runtime of OCP/OKD is CRI-O ↩
  2. Error: "The certificate of '…' is not trusted" ↩
  3. crictl: its commands mirror docker's, so one can alias docker=crictl ↩
  4. CRI-O: its default plugin configuration is stored in /etc/cni/net.d/100-crio-bridge.conf ↩
  5. metrics-server: its predecessor is Heapster, which has long been abandoned ↩