During day-to-day Kubernetes operations, applications generate data, and mounting that data requires PV-backed storage. Creating every PV by hand, one at a time, quickly becomes unmanageable. StorageClass solves exactly this problem: based on the requirements declared in a PVC, it works together with a third-party provisioner to automatically create a suitable PV.

StorageClass

The PVs discussed so far are static: for every PVC you want to use, someone has to create a matching PV by hand. As noted before, this approach largely fails to meet real-world requirements. For example, one application may need storage that handles high concurrency, while another demands high read/write speed; for StatefulSet applications in particular, plain static PVs are a poor fit. In these cases we need dynamically provisioned PVs, which is what StorageClass provides.

Installation

To create PVs automatically with a StorageClass, you need to install the corresponding auto-provisioning program. This program, called nfs-client-provisioner, works together with the NFS server described earlier and creates PVs with two characteristics:

  • An automatically created PV is stored in the shared data directory on the NFS server as a directory named ${pvcName}-${pvName}
  • When the PV is reclaimed, the directory is renamed archived-${pvcName}-${pvName} and kept on the NFS server
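The two naming rules above can be sketched with the sample names used later in this article (the PVC and PV names here are illustrative values taken from the examples below):

```shell
# How the provisioner composes directory names on the NFS export (illustrative)
pvcName="test-pvc"
pvName="pvc-be0808c2-9957-11e9"
dir="${pvcName}-${pvName}"       # directory created while the PV is in use
archived="archived-${dir}"       # the directory is renamed to this when the PV is reclaimed
echo "$dir"
echo "$archived"
```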

Creating this auto-provisioner takes three steps:

1. Create ServiceAccount

Most Kubernetes clusters today use RBAC for access control, so first create a ServiceAccount and bind it to the set of permissions needed by the NFS provisioner you will deploy next.

nfs-rbac.yaml

kind: ServiceAccount
apiVersion: v1
metadata:
  name: nfs-client-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default            # replace with the namespace where the NFS provisioner is deployed
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default            # replace with the namespace where the NFS provisioner is deployed
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io

Create the RBAC

$ kubectl apply -f nfs-rbac.yaml

2. Deploy nfs-client-provisioner

nfs-provisioner-deploy.yaml

kind: Deployment
apiVersion: apps/v1               # extensions/v1beta1 is deprecated and removed in newer clusters
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate                # update policy: delete and recreate (default is rolling update)
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: icepear.cn/ifs        # provisioner name; must match the StorageClass "provisioner" field
            - name: NFS_SERVER
              value: 192.168.110.15        # NFS server address
            - name: NFS_PATH
              value: /data/nfs-share       # NFS server shared directory
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.110.15         # NFS server address
            path: /data/nfs-share          # NFS server shared directory

Create the NFS Provisioner

$ kubectl apply -f nfs-provisioner-deploy.yaml -n default

3. Create StorageClass

nfs-storage.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"   # make this the default StorageClass
provisioner: icepear.cn/ifs       # dynamic volume provisioner name; must match PROVISIONER_NAME set in the Deployment above
parameters:
  archiveOnDelete: "true"         # "false": data is deleted when the PVC is deleted; "true": data is archived and preserved
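The archiveOnDelete behavior can be simulated locally without a cluster. This is only a sketch of the semantics described in the comment above: on PVC deletion, the provisioner either removes the backing directory ("false") or renames it with an "archived-" prefix ("true"); the directory name here is a made-up example.

```shell
# Local simulation of what archiveOnDelete does to the PV's backing directory
archiveOnDelete="true"
root=$(mktemp -d)                 # stand-in for the NFS export root (/data/nfs-share)
dir="$root/test-pvc-pvc-0001"     # hypothetical ${pvcName}-${pvName} directory
mkdir -p "$dir"
if [ "$archiveOnDelete" = "true" ]; then
  mv "$dir" "$root/archived-$(basename "$dir")"   # archive: keep data under a new name
else
  rm -rf "$dir"                                   # delete: data is gone with the PVC
fi
ls "$root"
```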

4. Test the StorageClass

  1. Create a test PVC

test-pvc.yaml

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-pvc
spec:
  storageClassName: nfs-storage   # must match the name of the StorageClass created above
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Mi

Create a PVC

$ kubectl apply -f test-pvc.yaml -n default

Check whether the PVC has been bound to a PV: fetch the PVC resource with kubectl and verify that its STATUS column shows Bound.

$ kubectl get pvc test-pvc -n default

NAME       STATUS   VOLUME                 CAPACITY   ACCESS MODES   STORAGECLASS
test-pvc   Bound    pvc-be0808c2-9957-11e9 1Mi        RWO            nfs-storage
  2. Create a test Pod and bind the PVC

Create a Pod for testing, specify the PVC created above as its storage, create a file in the mounted directory, and then check on the NFS server whether the file ends up there.

test-pod.yaml

kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  containers:
    - name: test-pod
      image: busybox:latest
      command:
        - "/bin/sh"
      args:
        - "-c"
        - "touch /mnt/SUCCESS && exit 0 || exit 1"   # create a file named "SUCCESS"
      volumeMounts:
        - name: nfs-pvc
          mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: test-pvc

Create a Pod

$ kubectl apply -f test-pod.yaml -n default
Copy the code
  3. Check whether the file was created on the NFS server
$ cd /data/nfs-share
$ ls
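If everything worked, the listing will contain a directory for test-pvc with the SUCCESS file inside. Combining the naming rule from earlier with the sample kubectl output above, the expected path can be derived as follows (the values come from the examples in this article; your PV name will differ):

```shell
# Expected location of the test Pod's SUCCESS file on the NFS server
nfsPath="/data/nfs-share"         # NFS export root
pvcName="test-pvc"
pvName="pvc-be0808c2-9957-11e9"   # VOLUME column from `kubectl get pvc`
echo "${nfsPath}/${pvcName}-${pvName}/SUCCESS"
```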