Preface

I used an OCP/OKD cluster in my previous experimental environment, which was built on a PC server. Now I need to build a K8S cluster on my Windows workstation. The first thing that comes to mind is using CodeReady Containers (CRC) to build an OKD cluster. However, its resource requirements are really too high (4 CPUs and 9G of memory), and the virtualization software CRC requires, Hyper-V, conflicts with VirtualBox. Given this problem, and although Minikube is available, I decided to manually build a vanilla K8S cluster.

The experimental environment is a virtual machine installed in VirtualBox, running CentOS Linux 8 with 4 CPUs, 4G of memory, and a 35G disk. It has two network cards: network card 1 is attached to a NAT network so the host network can be used to reach the Internet, while network card 2 is attached to a host-only network. The K8S API listens on this network, and the host name is cts.zyl.io.

Host name    Host resources     Network card 1                     Network card 2
cts.zyl.io   4C4G, disk: 35G    NAT network, IP assigned by DHCP   Host-only network, static IP 192.168.110.6
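
As a quick sanity check of this layout (a minimal sketch; the interface names will differ depending on the hypervisor), the host name and the two addresses can be verified with:

hostnamectl set-hostname cts.zyl.io   # the name the K8S API will be known by
ip -4 addr show                       # expect one DHCP address (NAT) and 192.168.110.6 (host-only)
ip route show default                 # the default route should point at the NAT network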

Install the container runtime

The author is really not interested in the Docker container runtime¹, so he plans to install the lightweight CRI-O container runtime instead. Referring to the figure below, since the author plans to install K8S 1.18, he chooses CRI-O 1.18.x.

For the CentOS Linux 8 operating system, install the CRI-O container runtime² by executing the following commands:

sudo dnf -y install 'dnf-command(copr)'
sudo dnf -y copr enable rhcontainerbot/container-selinux
sudo curl -L -o /etc/yum.repos.d/devel:kubic:libcontainers:stable.repo https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable/CentOS_8/devel:kubic:libcontainers:stable.repo
sudo curl -L -o /etc/yum.repos.d/devel:kubic:libcontainers:stable:cri-o:1.18.repo https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable:cri-o:1.18/CentOS_8/devel:kubic:libcontainers:stable:cri-o:1.18.repo
sudo dnf -y install cri-o

Note: The author hit a bug when installing CRI-O 1.18.1: the conmon path in /etc/crio/crio.conf is /usr/libexec/crio/conmon, but the binary is actually located at /usr/bin/conmon, so fix the path and then start the crio service with the following commands:

sed 's|/usr/libexec/crio/conmon|/usr/bin/conmon|' -i /etc/crio/crio.conf
systemctl start crio.service

Next we install CRI-O's management tool crictl³, which kubeadm will use to pull images.

wget -O crictl.tgz https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.18.0/crictl-v1.18.0-linux-amd64.tar.gz
tar xf crictl.tgz
mv crictl /usr/local/bin
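
Optionally, crictl can be pointed at the CRI-O socket and used to verify that the runtime answers. This is a small sketch; the socket path below is CRI-O's default:

cat > /etc/crictl.yaml <<EOF
runtime-endpoint: unix:///var/run/crio/crio.sock
image-endpoint: unix:///var/run/crio/crio.sock
EOF
crictl version        # prints the runtime name and version if CRI-O is reachable
crictl images         # lists images known to the runtime (empty at this point)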

As shown below, we configure mirror registries for docker.io and k8s.gcr.io. Note: the mirror for k8s.gcr.io is very important. Although we can pass --image-repository registry.aliyuncs.com/google_containers to tell kubeadm to download images from the Aliyun repository, the k8s.gcr.io/pause image is still used when Pods are deployed, so to avoid errors we configure the mirror registry here.

cat > /etc/containers/registries.conf <<EOF
unqualified-search-registries = ["docker.io", "quay.io"]

[[registry]]
  location = "docker.io"
  insecure = false
  blocked = false
  mirror-by-digest-only = false
  prefix = ""

  [[registry.mirror]]
    location = "docker.mirrors.ustc.edu.cn"
    insecure = false

[[registry]]
  location = "k8s.gcr.io"
  insecure = false
  blocked = false
  mirror-by-digest-only = false
  prefix = ""

  [[registry.mirror]]
    location = "registry.aliyuncs.com/google_containers"
    insecure = false
EOF
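
After restarting CRI-O so it picks up the new registry configuration, a quick pull through the mirror confirms it works (a small sketch; pause:3.2 is the tag K8S 1.18 uses):

systemctl restart crio
crictl pull k8s.gcr.io/pause:3.2    # should be served by the Aliyun mirror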

Refer to Network Plugins: when no CNI network plugin is configured for the kubelet, the noop plugin is used, which relies on the Linux bridge to transfer traffic between containers; the default network plugins of Docker and CRI-O⁴ likewise rely on a bridge. In that case, load the br_netfilter module and set net.bridge.bridge-nf-call-iptables=1 so that iptables can see the traffic forwarded by the bridge. Note: if the SDN we choose later does not rely on the Linux bridge to carry traffic, this step is actually unnecessary, but configuring it does no harm.

# Load required kernel modules at boot; these persist across reboots.
cat > /etc/modules-load.d/k8s.conf <<EOF
overlay
br_netfilter
EOF

modprobe overlay
modprobe br_netfilter

# Set up required sysctl params, these persist across reboots.
cat > /etc/sysctl.d/99-kubernetes-cri.conf <<EOF
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

sysctl --system
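
A quick check that the module is loaded and the sysctls took effect (a small sketch):

lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward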

Install kubeadm

With the CRI-O container runtime installed in the previous section, we will install kubeadm in this section. Before that, let's go through the official Installing kubeadm documentation for some important prerequisites and fill in the remaining steps.

  • Although it does not really matter for the single-node cluster built in this article, I note it here for completeness: verify that every node in the cluster has a unique host name, MAC address, and product_uuid (cat /sys/class/dmi/id/product_uuid);
  • The firewall must allow certain ports through. For simplicity, disable the host firewall:
systemctl stop firewalld
systemctl disable firewalld
  • Disable system swap, otherwise kubeadm complains:
swapoff -a                       # turn swap off immediately
cat > /etc/sysctl.d/99-disable-swap.conf <<EOL
vm.swappiness=0
EOL
sysctl --system
vi /etc/fstab                    # comment out the swap line
  • Disable SELinux or set it to permissive, otherwise kubeadm complains:
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config

Then execute the following commands to install kubelet, kubeadm, and kubectl, and set kubelet to start on boot.

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF

yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes

systemctl enable --now kubelet

Note:

  • Due to network restrictions, we use the Aliyun repository as the yum source here; since it currently has no EL8 repository, we can only use the EL7 one;
  • At this point kubelet keeps restarting, but that is normal behavior: it is waiting for kubeadm to initialize the node or join it to an existing K8S cluster (see the quick check below).
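
To see this behavior (a small sketch), check the service and its logs; kubelet exits and is restarted by systemd until kubeadm generates /var/lib/kubelet/config.yaml:

systemctl status kubelet                          # "activating (auto-restart)" is expected here
journalctl -u kubelet --no-pager | tail -n 20     # shows why the last start attempt failed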

Currently kubeadm can only automatically detect the cgroup driver for the Docker container runtime; for the CRI-O runtime we configured, we must set the cgroup driver manually.

When using Docker, kubeadm will automatically detect the cgroup driver for the kubelet and set it in the
/var/lib/kubelet/config.yaml file during runtime.


The automatic detection of cgroup driver for other container runtimes like CRI-O and containerd is work in progress.

Since CRI-O is configured to use the systemd cgroup driver rather than cgroupfs, as described in the official documentation we need to create the /var/lib/kubelet/config.yaml file and set cgroupDriver: systemd in it.

$ cat /etc/crio/crio.conf |grep systemd
cgroup_manager = "systemd"
$ mkdir -p /var/lib/kubelet
$ cat > /var/lib/kubelet/config.yaml <<EOL
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
EOL
$ systemctl daemon-reload
$ systemctl restart kubelet

However, there is a problem: the cgroupDriver value is not preserved after kubeadm init is executed, which makes kubelet fail when calling the CRI-O container runtime, as shown below.

% journalctl -u kubelet -f
... RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = cri-o configured with systemd cgroup manager, but did not receive slice as parent: /kubepods/besteffort/pod6407b05153e245d7313ea88bfb3be36a

For this reason, although configuring parameters in /etc/sysconfig/kubelet, /etc/default/kubelet, or /var/lib/kubelet/kubeadm-flags.env is not recommended, we do so anyway:

cat > /etc/sysconfig/kubelet <<EOF
KUBELET_EXTRA_ARGS=--cgroup-driver=systemd
EOF

Note: As mentioned in the "Install the container runtime" section, even if we pass the --image-repository parameter to kubeadm init to specify a mirror repository, the pause image is still pulled from k8s.gcr.io, which causes an error. We configured a mirror registry for k8s.gcr.io at the container-runtime level, but there is another option: adjust the kubelet configuration to specify the pause image.

# /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS=--pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.2

Create a single control-plane cluster

To create a single control-plane cluster with kubeadm, execute the following command. Note: the Pod network segment set by --pod-network-cidr must not overlap with any existing segment, i.e. it must be currently unused. Also, because the virtual machine has two network cards and its default route is on the NAT segment, the cluster API server would by default listen on the NAT network card; to avoid that, use --apiserver-advertise-address to make the apiserver listen on the host-only segment.

kubeadm init \
  --apiserver-advertise-address=192.168.110.6 \
  --pod-network-cidr=10.254.0.0/16

If the command does not report an error, the following message will be displayed on success:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.110.6:6443 --token nh1erl.d8eh61epm8s4y8oj \
    --discovery-token-ca-cert-hash sha256:dce7e5ffc2d3d8662ab48bb1a3eae3fff8e0cbf65784295ac01cf631bbfe5ba1

We execute the following commands to configure the context for the kubectl client tool; the file /etc/kubernetes/admin.conf carries full cluster administrator privileges.

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

At this point, we can check the status of the cluster through kubectl, as follows:

# Four namespaces are created by default:
$ kubectl get namespaces
NAME              STATUS   AGE
default           Active   54m
kube-node-lease   Active   54m
kube-public       Active   54m
kube-system       Active   54m

$ kubectl get pod --all-namespaces
NAMESPACE     NAME                                 READY   STATUS    RESTARTS   AGE
kube-system   coredns-66bff467f8-4hfws             1/1     Running   0          55m
kube-system   coredns-66bff467f8-rm5lp             1/1     Running   0          55m
kube-system   etcd-cts.zyl.io                      1/1     Running   0          56m
kube-system   kube-apiserver-cts.zyl.io            1/1     Running   0          56m
kube-system   kube-controller-manager-cts.zyl.io   1/1     Running   0          56m
kube-system   kube-proxy-zcbjj                     1/1     Running   0          55m
kube-system   kube-scheduler-cts.zyl.io            1/1     Running   0          56m

Note: The etcd, kube-apiserver, kube-controller-manager, and kube-scheduler components are deployed as static Pods; their manifests are placed in the host directory /etc/kubernetes/manifests, which kubelet automatically loads to start the Pods.

$ ls -l /etc/kubernetes/manifests/
-rw------- 1 root root 1858 Jun  8 20:33 etcd.yaml
-rw------- 1 root root 2709 Jun  8 20:33 kube-apiserver.yaml
-rw------- 1 root root 2565 Jun  8 20:33 kube-controller-manager.yaml
-rw------- 1 root root 1120 Jun  8 20:33 kube-scheduler.yaml

CoreDNS is deployed as a Deployment, while kube-proxy is deployed as a DaemonSet:

$ kubectl get ds,deploy -n kube-system
NAME                        DESIRED           NODE SELECTOR            AGE
daemonset.apps/kube-proxy   1         ...     kubernetes.io/os=linux   60m

NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/coredns   2/2     2            2           60m

CoreDNS deploys two Pods, which is unnecessary for our single-node experimental environment, so we scale it down to 1:

kubectl scale deployment/coredns --replicas=1 -n kube-system

The cluster still lacks an SDN network, so we choose Calico, which not only performs well but also supports network policy. Refer to the document Quickstart for Calico on Kubernetes. Since the system uses NetworkManager to manage the network, to avoid interfering with Calico we create the following file to tell NetworkManager not to manage Calico's network interfaces.

cat > /etc/NetworkManager/conf.d/calico.conf <<'EOF'
[keyfile]
unmanaged-devices=interface-name:cali*;interface-name:tunl*
EOF

Next, we execute the following command to deploy Calico. Note: In addition to deploying Calico directly using the deployment manifest, we can also deploy through operator, with the project address tigera/operator.

kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml

Wait for its POD to run properly by doing the following:

$ watch kubectl get pods -n kube-system
...
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-76d4774d89-bgt86   1/1     Running   0          15m
calico-node-7gjls                          1/1     Running   0          17m

By default the control node does not schedule Pods, but for our single-node cluster we must make it schedulable in order to test, so execute the following command to remove this restriction:

kubectl taint nodes --all node-role.kubernetes.io/master-
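
To confirm the taint is gone (a quick check using the node name of this environment):

kubectl describe node cts.zyl.io | grep -i taints    # should now show <none>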

Deploy applications in the K8S cluster

We have now built a minimal K8S cluster that contains only the core K8S control components and the Calico SDN network. Although the cluster has only a single node, we have made the master schedulable, so we can deploy a test application.

We are currently in the default namespace; quickly deploy an nginx with the following command:

kubectl create deployment nginx --image=nginx

This NGINX deployment contains only one pod, as shown below:

$ kubectl get pod --show-labels -w
NAME                    READY   STATUS    RESTARTS   AGE   LABELS
nginx-f89759699-lrvzq   1/1     Running   1          15h   app=nginx

Get its IP address and access it:

$ kubectl describe pod -l app=nginx | grep ^IP:
IP:           10.254.40.9
$ curl 10.254.40.9
...
<title>Welcome to nginx!</title>

Create a Service for the Deployment:

$ kubectl expose deploy nginx --port=80 --target-port=80
$ kubectl get svc
NAME    TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
nginx   ClusterIP   10.106.209.139   <none>        80/TCP    34m

Configure kube-proxy to use IPVS mode

kube-proxy's IPVS mode went GA in K8S v1.11. It performs better than iptables mode, but that does not mean iptables is useless; in fact, IPVS works in conjunction with iptables.

If we have not yet run kubeadm init to initialize the cluster, we can configure kube-proxy to use IPVS mode as follows:

cat > config.yml <<'EOF'
kubeProxy:
  config:
    featureGates:
      SupportIPVSProxyMode: true
    mode: ipvs
EOF
kubeadm init --config config.yml
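
For reference, the snippet above uses an older kubeadm configuration layout; a minimal sketch of the same setting with the v1beta2 kubeadm API shipped in K8S 1.18 (the kubernetesVersion value here is only an assumption) would be:

cat > config.yml <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.18.3
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
EOF
kubeadm init --config config.yml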

For a K8S cluster that is already running, execute the following commands on each node to load the IPVS modules. Note: the author found that these modules had already been loaded automatically in his experimental environment; if that is the case, nothing needs to be done.

modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack

# Make sure the modules are loaded on startup:
cat > /etc/modules-load.d/ipvs.conf <<'EOF'
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
EOF

Update the kube-proxy ConfigMap to set mode to ipvs:

$ kubectl edit cm -n kube-system kube-proxy
...
    mode: "ipvs"    # the default "" (empty) means iptables mode
...

Then restart kube-proxy by executing the following command:

kubectl delete pod -n kube-system -l k8s-app=kube-proxy

Finally, we install the ipvsadm tool and verify that the Services are now handled by IPVS.

$ yum -y install ipvsadm
$ ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.96.0.1:443 rr
  -> 192.168.110.6:6443           Masq    1      40         0
...

Adjust the cluster CoreDNS configuration

For production environments, the host names of the K8S cluster nodes should be resolvable through DNS, otherwise applications such as metrics-server may fail because a host name cannot be resolved. Similarly, the domain names assigned to the Ingresses we create should also be resolvable by the DNS system; otherwise we would have to manually modify /etc/hosts for every statically configured Ingress.

For the test environment built in this article, we did not set up a separate DNS system; instead we adjusted the CoreDNS of the K8S cluster itself, as shown below.

Referring to the article Custom DNS Entries For Kubernetes, we adjust the coredns ConfigMap to add a zone file for zyl.io and a DNS wildcard for *.app.zyl.io.

$ kubectl -n kube-system edit cm coredns
...
  Corefile: |
    .:53 {
        ...
        file /etc/coredns/zone.zyl.io zyl.io
    }
  zone.zyl.io: |
    zyl.io.         IN SOA root.zyl.io. root.zyl.io. 2020061113 7200 3600 1w 1d
    cts             IN A      192.168.110.6
    *.app.zyl.io.   IN 300 A  192.168.110.6

Then, execute the following command to edit the coredns Deployment and mount zone.zyl.io into the container.

$ kubectl -n kube-system edit deployment coredns
...
      volumes:
      - configMap:
          defaultMode: 420
          items:
          - key: Corefile
            path: Corefile
          - key: zone.zyl.io
            path: zone.zyl.io
          name: coredns
        name: config-volume

Next, launch a container containing the nslookup, host commands for testing:

$ kubectl run -it --rm --restart=Never --image-pull-policy='IfNotPresent' \
    --image=infoblox/dnstools:latest dnstools
dnstools# host cts
cts.zyl.io has address 192.168.110.6
dnstools# host z.app.zyl.io
z.app.zyl.io has address 192.168.110.6

To make this DNS service available outside the cluster, we can expose its port on the host via hostNetwork:

$ kubectl -n kube-system edit deployment coredns
...
      hostNetwork: true
...
$ netstat -an|grep 53|grep udp
udp6       0      0 :::53                   :::*  
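
From outside the cluster we can now query CoreDNS directly (a small sketch; dig comes from the bind-utils package on CentOS):

yum -y install bind-utils
dig @192.168.110.6 cts.zyl.io +short        # expect 192.168.110.6
dig @192.168.110.6 foo.app.zyl.io +short    # any name under the wildcard resolves to the same address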

Install add-ons (Add-ons)

To enhance the cluster's functionality, this chapter describes some basic add-ons that can be configured on demand, such as the Dashboard, which is not that useful for our testing, and Ingress and persistent storage, which are.

Make containers accessible outside the cluster through Ingress

With Ingress we can map container ports outside the cluster. Referring to the official documentation on Ingress Controllers, here we select the Traefik controller.

Now we use Helm v3 to install Traefik, so we install Helm v3 first:

wget -O helm.tgz https://get.helm.sh/helm-v3.2.3-linux-amd64.tar.gz
tar -xf helm.tgz
mv linux-amd64/helm /usr/local/bin/
chmod 755 /usr/local/bin/helm

Refer to the project containous/traefik-helm-chart and install it into a separate traefik namespace by executing the following commands. Note: here we set hostNetwork=true to expose the ports outside the cluster on the host network, and we set the --api.insecure parameter so that the Dashboard can be accessed (it is disabled by default for security reasons).

helm repo add traefik https://containous.github.io/traefik-helm-chart
cat > /tmp/values.yaml <<EOF
podDisruptionBudget:
  enabled: true

hostNetwork: true

service:
  type: ClusterIP

dashboard:
  ingressRoute: true

ports:
  traefik:
    expose: true

additionalArguments:
  - "--providers.kubernetesingress.ingressclass=traefik"
  - "--log.level=DEBUG"
  - "--api.insecure"
  - "--serverstransport.insecureskipverify=true"
EOF
helm -n traefik install -f /tmp/values.yaml traefik traefik/traefik

The commands above install the following objects in the traefik namespace, and the ports are mapped outside the cluster via hostNetwork: 9000 is the Dashboard port, 8000 the HTTP port, and 8443 the HTTPS port.

$ kubectl get pod,svc,ingressroute,deployment -n traefik
NAME                           READY   STATUS    RESTARTS   AGE
pod/traefik-7474bbc877-m9c52   1/1     Running   0          2m35s

NAME              TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                   ...
service/traefik   ClusterIP   10.111.109.186   <none>        9000/TCP,80/TCP,443/TCP

NAME                                                  AGE
ingressroute.traefik.containo.us/traefik-dashboard    2m35s

NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/traefik   1/1     1            1           2m35s

From port 9000 on the host we can open the Dashboard console, a neat console for viewing some status information.
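
Because --api.insecure is enabled, the same port also serves Traefik's API, so a quick check is possible without a browser (a sketch; the paths are those documented for the Traefik v2 API):

curl -s http://192.168.110.6:9000/api/overview                                    # JSON summary of routers/services
curl -s -o /dev/null -w '%{http_code}\n' http://192.168.110.6:9000/dashboard/     # expect 200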

Now we create an Ingress object for the nginx deployed in the default namespace, using the annotation kubernetes.io/ingress.class to specify that it should be handled by traefik.

kubectl -n default apply -f - <<EOF
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: nginx
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: nginx.app.zyl.io
    http:
      paths:
      - path: /
        backend:
          serviceName: nginx
          servicePort: 80
EOF

Here we set the Ingress host name to nginx.app.zyl.io, so we add a DNS entry for it to /etc/hosts on the host, pointing at the host IP address.

$ cat >> /etc/hosts <<EOF
192.168.110.6 nginx.app.zyl.io
EOF
$ curl nginx.app.zyl.io:8000
...
<title>Welcome to nginx!</title>
...
$ curl 192.168.110.6:8000
404 page not found

Quickly switch between cluster context and namespace

The -n parameter is required every time we switch namespaces; for users accustomed to the oc command this is rather inconvenient. The tool ahmetb/kubectx can be used to quickly switch between cluster contexts and namespaces.

There are two ways to install it: one is through krew, the kubectl plugin package manager; the other is to install it manually.

  • Method 1: install it as a kubectl plugin through krew (see here). First we install krew:
# install git
yum -y install git
# install krew
(
  set -x; cd "$(mktemp -d)" &&
  curl -fsSLO "https://github.com/kubernetes-sigs/krew/releases/latest/download/krew.{tar.gz,yaml}" &&
  tar zxvf krew.tar.gz &&
  KREW=./krew-"$(uname | tr '[:upper:]' '[:lower:]')_amd64" &&
  "$KREW" install --manifest=krew.yaml --archive=krew.tar.gz &&
  "$KREW" update
)
# configure the environment variable:
cat >> ~/.bashrc <<'EOF'
export PATH="${KREW_ROOT:-$HOME/.krew}/bin:$PATH"
EOF

Then we execute kubectl krew search to verify that krew works and to see which plugins are available to install, as shown below:

$ kubectl krew search
NAME                            DESCRIPTION                                         INSTALLED
access-matrix                   Show an RBAC access matrix for server resources     no
advise-psp                      Suggests PodSecurityPolicies for cluster.           no
apparmor-manager                Manage AppArmor profiles for cluster.               no
...

Next, we install ctx (switch context) and ns (switch namespace) with the following commands:

kubectl krew install ctx
kubectl krew install ns

After the installation is complete, we can run them as kubectl plugins; as shown below, we use ns to quickly switch namespaces.

$ kubectl ns                 # list namespaces
default
kube-node-lease
kube-public
kube-system
traefik
$ kubectl ns traefik         # change to the traefik namespace
Context "kubernetes-admin@kubernetes" modified.
Active namespace is "traefik".
$ kubectl ns -               # switch back to the previous namespace
$ kubectl get pod
NAME                    READY   STATUS    RESTARTS   AGE
nginx-f89759699-254tt   1/1     Running   0          130m
  • Method 2: Install manually.
git clone https://github.com/ahmetb/kubectx.git ~/.kubectx
COMPDIR=$(pkg-config --variable=completionsdir bash-completion)
ln -sf ~/.kubectx/completion/kubens.bash $COMPDIR/kubens
ln -sf ~/.kubectx/completion/kubectx.bash $COMPDIR/kubectx
cat << FOE >> ~/.bashrc


#kubectx and kubens
export PATH=~/.kubectx:\$PATH
FOE

If we combine them with fzf, we can run the above commands in interactive mode; install fzf by executing the following commands.

# install fzf
git clone --depth 1 https://github.com/junegunn/fzf.git ~/.fzf
~/.fzf/install

Configure an NFS persistent storage system

In many cases our applications require persistent data. For simplicity, we choose nfs-server-provisioner to build a persistent NFS storage system. Such a storage system is usually not production-ready, but it is sufficient for our testing.

First, with Helm v3 installed, execute the following command to add the Helm repository.

$ helm repo add stable https://kubernetes-charts.storage.googleapis.com
"stable" has been added to your repositories

For the author's single-node K8S cluster, we execute the following command to check the node's labels, which we will use later to set the nodeSelector when deploying the application.

% kubectl get node --show-labels
NAME         STATUS   ROLES    AGE   VERSION   LABELS
cts.zyl.io   Ready    master   43h   v1.18.3   kubernetes.io/hostname=cts.zyl.io,...

Prepare a directory, /app/nfs/0, for the NFS service built in this section; the directory prepared by the author only has about 10G of space available.

mkdir -p /app/nfs/0

Create the following file, in which we specify that the Pod should be scheduled to the cts.zyl.io host and that the number of replicas should be 1. To make the volumes provided by the NFS server persistent, we enable persistence.enabled: true. Although the directory prepared above is only about 10G, we can still declare a larger size here (200Gi). We also mark the created storage class as the default (storageClass.defaultClass: true), so any PVC that does not explicitly specify a storageClass will use this storage class.

cat > /tmp/values.yaml <<'EOF'
replicaCount: 1

persistence:
  enabled: true
  storageClass: "-"
  size: 200Gi

storageClass:
  defaultClass: true

nodeSelector:
  kubernetes.io/hostname: cts.zyl.io
EOF

Deploy NFS Server into a separate namespace, nfs-server-provisioner, by issuing the following command:

kubectl create namespace nfs-server-provisioner
helm -n nfs-server-provisioner install \
     -f /tmp/values.yaml nfs-server-provisioner stable/nfs-server-provisioner

As shown below, the persistent volume claim (PVC) required by the deployment manifest is in the Pending state. We therefore create a persistent volume (PV) backed by the host directory /app/nfs/0 and bind it to the PVC.

$ oc get pvc
NAME                            STATUS    VOLUME   CAPACITY   ...
data-nfs-server-provisioner-0   Pending   

$ kubectl -n nfs-server-provisioner create -f - <<EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: data-nfs-server-provisioner-0
spec:
  capacity:
    storage: 200Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /app/nfs/0
  claimRef:
    kind: PersistentVolumeClaim
    name: data-nfs-server-provisioner-0
    namespace: nfs-server-provisioner
EOF

After the Pod has started, we can see that a default storage class named nfs has been created in the cluster, and some files have appeared in the host directory /app/nfs/0.

$ oc get pod
NAME                       READY   STATUS    RESTARTS   AGE
nfs-server-provisioner-0   1/1     Running   0          3s
$ oc get storageclass
NAME            PROVISIONER                            RECLAIMPOLICY  ...
nfs (default)   cluster.local/nfs-server-provisioner   Delete         ...
$ ls /app/nfs/0/
ganesha.log  nfs-provisioner.identity  v4old  v4recov  vfs.conf

Finally, let's create a PVC to verify that this storage class automatically creates a persistent volume (PV) for us. In other words, storage is provisioned dynamically: we only declare how much storage we need, and the underlying volume is created automatically by the storage system.

$ kubectl -n default create -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
EOF

$ kubectl -n default get pvc
NAME       STATUS   VOLUME                                     CAPACITY   ...
test-pvc   Bound    pvc-ae8f8a8b-9700-494f-ad9f-259345918ec4   10Gi       ...

$ kubectl get pv
NAME               CAPACITY   ACCESS  ... STORAGECLASS
pvc-ae8f8a8b-...   10Gi       RWO     ... nfs

Next, we execute kubectl -n default delete pvc test-pvc, then check the PVs and find that the volume is automatically released.
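
For completeness, the two steps look like this (a small sketch; with the default Delete reclaim policy the dynamically created PV disappears shortly after the PVC is removed):

kubectl -n default delete pvc test-pvc
kubectl get pv        # the pvc-... volume should no longer be listed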

Install the native K8S Dashboard console

This section installs the native K8S Dashboard via Helm; the installation method is shown here. First, we add the Helm repositories:

helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/
helm repo add stable https://kubernetes-charts.storage.googleapis.com

According to the Kubernetes Monitoring Architecture, to display Pod CPU and memory information in the console we need to install metrics-server⁵; however, if we choose coreos/kube-prometheus to monitor the cluster, metrics-server can be disabled, because kube-prometheus already includes equivalent functionality.

The kube-prometheus stack includes a resource metrics API server, so the metrics-server addon is not necessary. Ensure the metrics-server addon is disabled on minikube:

Here we choose to deploy metrics-server, which only collects core cluster data such as resource usage. Although its functionality is not as complete as that of coreos/kube-prometheus, the data it collects can be used by the Dashboard to present resource status, and also for CPU- and memory-based horizontal Pod autoscaling (HPA) and vertical Pod autoscaling (VPA).
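
For example (a small sketch that reuses the nginx Deployment created earlier; the thresholds are arbitrary), once metrics-server is serving data an HPA can be created like this:

# note: the target Deployment must have CPU requests set for a percentage target to work
kubectl -n default autoscale deployment nginx --cpu-percent=50 --min=1 --max=3
kubectl -n default get hpa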

As shown below, before installing the Dashboard we install metrics-server into a separate kube-monitoring namespace. Since our CA certificate is self-signed, we need to start metrics-server with --kubelet-insecure-tls.

kubectl create namespace kube-monitoring
cat > /tmp/values.yaml <<EOF
args:
  - --kubelet-insecure-tls
EOF
helm -n kube-monitoring -f /tmp/values.yaml \
     install metrics-server stable/metrics-server

After waiting for POD to start successfully, execute the following command to verify that data is available.

$ kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes"
$ kubectl top node
NAME         CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
cts.zyl.io   273m         6%     1518Mi          41%      
$ kubectl top pod
NAME                              CPU(cores)   MEMORY(bytes)   
metrics-server-5b6896f6d7-snnbv   3m           13Mi   

To start installing the Dashboard, we enable metricsScraper, which fetches performance data from metrics-server for rendering in the console. Since configuring Ingress with HTTPS is a bit tricky, NodePort is used here to map the port out of the cluster for convenience.

kubectl create namespace kube-dashboard
helm -n kube-dashboard install dash \
    --set metricsScraper.enabled=true \
    --set service.type=NodePort  \
    kubernetes-dashboard/kubernetes-dashboard

As shown below, the NodePort assigned on the host is 30808, so we can access the console at https://192.168.110.6:30808.

$ oc get svc
NAME                        TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)
dash-kubernetes-dashboard   NodePort   10.108.149.116   <none>        443:30808/TCP

As shown below, there are two ways to log in to the console. We use a Token to log in; the steps to obtain the Token can be found in the official document Creating Sample User, and this article will not repeat them.

Here is an overview of a successful login:


  1. CRI-O: the default container runtime of OCP/OKD is CRI-O ↩
  2. Error encountered: the certificate of ‘download.opensuse.org’ is not trusted ↩
  3. crictl: the commands this tool provides can roughly be described as alias docker=crictl ↩
  4. CRI-O: its default CNI plugin configuration is stored in /etc/cni/net.d/100-crio-bridge.conf ↩
  5. metrics-server: its predecessor is Heapster, which has long since been abandoned ↩