
In a K8s environment, whether Pods are started by a Deployment, a DaemonSet, or another controller, a Pod generates temporary data while running, and that data is lost if the Pod restarts. This is acceptable for stateless applications, but for stateful Pods it can cause major failures, so persistent storage is critical. So how does storage work in K8s?

The main concepts

There are four important storage concepts in K8s: Volume, PersistentVolume (PV), PersistentVolumeClaim (PVC), and StorageClass

Architecture diagram

Volume

Volume is the most basic way to use storage; underneath it, many different storage backends can be plugged in. The types supported by default are listed below (and the list keeps growing):

awsElasticBlockStore
azureDisk
azureFile
cephfs
cinder
configMap
downwardAPI
emptyDir
fc (fibre channel)
flocker (deprecated)
gcePersistentDisk
gitRepo (deprecated)
glusterfs
hostPath
iscsi
local
nfs
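For instance, the simplest type, emptyDir, lives and dies with the Pod, which is exactly the temporary-data behavior described above. A minimal sketch (the Pod and container names are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: cache
          mountPath: /cache        # temporary data written here is gone when the Pod is deleted
  volumes:
    - name: cache
      emptyDir: {}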

PV persistent volume

A persistent volume is a piece of storage in the cluster. It can be provisioned in advance by an administrator or created dynamically through a StorageClass. A PV's lifecycle is independent of any Pod.

Reclaim policy:

  • Retain: reclaim manually
  • Delete: the associated storage asset, such as an AWS EBS, GCE PD, Azure Disk, or OpenStack Cinder volume, is deleted along with the PV

Currently only NFS and hostPath support the Recycle policy; AWS EBS, GCE PD, Azure Disk, and Cinder volumes support the Delete policy.

Status: a volume can be in one of the following states:

  • Available: a free resource that has not yet been bound by any claim
  • Bound: the volume has been bound by a claim
  • Released: the claim was deleted, but the resource has not yet been reclaimed by the cluster
  • Failed: automatic reclamation of the volume failed
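The reclaim policy of an existing PV can also be switched after the fact with a patch; for example, to make sure the data is kept when its claim is deleted (pv004 is the example PV name used below):

kubectl patch pv pv004 -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
kubectl get pv pv004    # the RECLAIM POLICY column should now show Retain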

PVC persistent volume claim

A PVC consumes PV resources: it requests a volume from a PV and is the unit of storage actually handed to a Pod

Example:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv004
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs1
  nfs:
    path: /nfsdata1
    server: 192.168.78.136
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc004
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs1
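Once the claim is bound, a Pod mounts it by claim name; a minimal sketch (the Pod name, image, and mount path are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: pvc-demo
spec:
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html   # the NFS-backed volume appears here
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: pvc004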

StorageClass storage class

A storage class provides a set of interfaces that can dynamically create PVs according to configured policies, backed by various underlying storage systems such as GlusterFS

Each StorageClass contains the provisioner, parameters, and reclaimPolicy fields, which are used when a PersistentVolume belonging to the class needs to be dynamically provisioned

Official reference: kubernetes.io/docs/concep…

Example:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterfs-storageclass
  namespace: default
parameters:
  resturl: "http://ip:8080"
  clusterid: "XXXXXXXXXX"
  restauthenabled: "true"
  secretNamespace: "default"
  volumetype: "replicate:2"
provisioner: kubernetes.io/glusterfs
reclaimPolicy: Delete
allowVolumeExpansion: true
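A PVC then triggers dynamic provisioning simply by naming the class; a sketch (the claim name and size are illustrative):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: glusterfs-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: glusterfs-storageclass   # the class defined above
  resources:
    requests:
      storage: 2Gi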

Hands-on deployment

NFS installation

On CentOS 7:

yum install nfs-utils rpcbind -y

Configuration

mkdir -p /data/nfs
chmod 755 /data/nfs
vi /etc/exports
# add: /data/nfs *(rw,sync,no_root_squash,no_all_squash)
systemctl enable rpcbind
systemctl enable nfs-server
systemctl start rpcbind
systemctl start nfs-server
showmount -e localhost
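To double-check the export from another machine, a quick sketch (replace ip with the NFS server's address):

yum install -y nfs-utils         # client package
showmount -e ip                  # list exports visible from the client
mount -t nfs ip:/data/nfs /mnt   # temporary test mount
touch /mnt/test-file && ls /mnt  # verify read/write
umount /mnt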

Export option definitions

ro: read-only access
rw: read/write access
sync: all data is written to the share when the request is made
async: NFS may respond to a request before the data is written to disk
insecure: allow NFS requests from TCP/IP ports above 1024
wdelay: group write operations together when multiple users write to the share (default)
no_wdelay: write immediately instead of grouping; not needed when async is used
hide: do not share subdirectories of an NFS shared directory
no_hide: share subdirectories of an NFS shared directory
subtree_check: if a subdirectory such as /usr/bin is shared, force NFS to check permissions on the parent directory
no_subtree_check: do not check permissions on the parent directory
all_squash: map the UID and GID of shared files to the anonymous user; suitable for public directories
no_all_squash: preserve the UID and GID of shared files (default)
root_squash: map all requests from root to the anonymous user (default)
no_root_squash: root gets full administrative access to the shared root directory
anonuid=xxx: the UID of the anonymous account in /etc/passwd on the NFS server
anongid=xxx: the GID of the anonymous account in /etc/group on the NFS server
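Putting a few of these options together, an illustrative /etc/exports (the extra hosts and paths are examples, not part of this deployment):

/data/nfs      *(rw,sync,no_root_squash,no_all_squash)    # the share used in this article
/data/public   *(ro,all_squash)                           # read-only public directory
/data/backup   192.168.78.0/24(rw,async,no_subtree_check) # writable only from one subnet

After editing the file, run exportfs -r to re-export without restarting the service.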

Deploy a ZooKeeper cluster using NFS as the underlying storage

Create PV and PVC

A PV storage resource defines the storage capacity, access mode, reclaim policy, and storage type:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv1
  labels:
    app: nfs
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  nfs:
   path: /data/nfs/pv1
   server: ip

Parameter Description:

  • capacity: describes the persistent volume's capacity; storage size is currently the only resource that can be set or requested.
  • accessModes: the access mode, one of ReadWriteOnce, ReadOnlyMany, or ReadWriteMany. ReadWriteOnce: read/write, but mountable by only a single node; ReadOnlyMany: read-only, mountable by many nodes; ReadWriteMany: read/write, mountable by many nodes.
  • persistentVolumeReclaimPolicy: the reclaim policy, i.e. what happens when the volume is released. Retain: keep the data and clean it up manually if needed (the default); Delete: delete the PV object from Kubernetes along with the associated external storage asset, such as an AWS EBS, GCE PD, Azure Disk, or Cinder volume; Recycle: scrub the data in the PV (equivalent to rm -rf /pv-volume/*).
  • nfs: the storage type; other common types such as Ceph and GlusterFS are also supported.

* Note: currently only NFS and hostPath support the Recycle policy. For persistent applications, Retain is the usual baseline

Create the PV

kubectl apply -f pv.yaml

PV status description:

  • Available: indicates that the status is Available and has not been bound by any PVC

  • Bound: Indicates that the PV has been bound by a PVC

  • Released: The PVC was deleted, but the resource had not been redeclared by the cluster

  • Failed: indicates that automatic reclamation of the PV Failed

A PVC claims a PV; it is the actual requester of the PV, and it also specifies an access mode, a resource request, a mount type, and so on

When a PVC is created, it is automatically bound to a matching PV, but only one PVC can bind this PV (because of the ReadWriteOnce access mode we defined), and the requested capacity cannot be larger than the PV's. PVs with the ReadOnlyMany or ReadWriteMany access modes can be mounted multiple times by claims with the same access mode

If there is a default storage class, the PVC will automatically bind through that class; in that case, use a selector to match labels so it binds to the intended PV

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc1
spec:
  selector:
    matchLabels:
      app: nfs
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

Parameter Description:

  • AccessModes: mainly defines the accessModes that volumes should have

  • Resources: Mainly defines the minimum resources that a volume should have

  • Selector: Defines the label query for the bound volume

  • DataSource: if the provisioner supports volume snapshots, the volume can be created from a snapshot and the data restored into it

  • StorageClassName: The name of the defined storageClass (if PV or SC has one)

  • VolumeMode: Defines the volume mode (Filesystem or Block)

  • VolumeName: the name of the PV to bind to directly (see the sketch after this list)
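For example, a claim that pins itself to a specific PV instead of relying on matching (the claim name is illustrative):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pinned-claim
spec:
  accessModes:
    - ReadWriteOnce
  volumeName: nfs-pv1        # bind this exact PV
  storageClassName: ""       # empty string disables dynamic provisioning for this claim
  resources:
    requests:
      storage: 1Gi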

Create a PVC

kubectl apply -f pvc.yaml
kubectl get pv,pvc

PV/PVC summary:

  • When a PVC is created, if no matching PV exists, K8s looks for the corresponding StorageClass, creates a PV from it, and then completes the binding with the PVC; the state becomes Bound. If neither exists, the PVC stays Pending
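When a claim does stay Pending, the events on the claim usually explain why (standard kubectl commands; the claim name is from the example above):

kubectl get pvc nfs-pvc1        # check the STATUS column
kubectl describe pvc nfs-pvc1   # the Events section shows why binding failed
kubectl get pv                  # confirm whether a matching PV exists at all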

Create StorageClass

The official SIGs organization has a repo dedicated to configuration and tutorials: github.com/kubernetes-…

There is also a curated repo on GitHub, although that SC provisions on top of the host; you can take a look as well: github.com/adonaicosta…

Next, deploy the SC

Install the NFS client on every K8s node (as with other storage services, the client must be installed on the K8s nodes if the storage is mounted directly):

yum -y install nfs-utils

Create the RBAC

apiVersion: v1
kind: Namespace
metadata:
  name: dmp-storage-class
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  namespace: dmp-storage-class
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: dmp-storage-class
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  namespace: dmp-storage-class
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  namespace: dmp-storage-class
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: dmp-storage-class
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io

Create the NFS client provisioner

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  namespace: dmp-storage-class
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          imagePullPolicy: IfNotPresent
          # default image: quay.io/external_storage/nfs-client-provisioner:latest (replaced with a private registry copy here)
          image: harbor-test.libaogo.cn/dev/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes/
          env:
            - name: PROVISIONER_NAME
              value: go.libaigo.com/nfs
            - name: NFS_SERVER
              value: ip
            - name: NFS_PATH
              value: /data/nfs/pv4
      volumes:
        - name: nfs-client-root
          nfs:
            server: ip
            path: /data/nfs/pv4
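Before moving on, it is worth checking that the provisioner came up (standard kubectl commands, namespace from the manifests above):

kubectl -n dmp-storage-class get deploy,pods
kubectl -n dmp-storage-class logs deploy/nfs-client-provisioner   # the log should show the provisioner starting up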

Create the SC

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: dmp-nfs-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: go.libaigo.com/nfs # or choose another name; it must match the Deployment's PROVISIONER_NAME env variable
reclaimPolicy: Retain
parameters:
  archiveOnDelete: "false"
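To confirm the class was registered and marked as the default:

kubectl get storageclass   # dmp-nfs-storage should appear with "(default)" next to its name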

Create a PVC

If the storage class matches, the PV is automatically created

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: sc-test
  labels:
    app: sc-test
spec:
  accessModes: [ "ReadWriteOnce" ]
  storageClassName: dmp-nfs-storage
  resources:
    requests:
      storage: "5Gi"

Execute the following commands, or run kubectl apply -f . in the directory:

kubectl apply -f rbac.yaml
kubectl apply -f deployment-nfs-client.yaml
kubectl apply -f StorageClass-nfs.yaml
kubectl apply -f pvc.yaml

View the results

Nfs-client deployed in deployment mode

PV in the Rancher storage class
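With the default class in place, a stateful workload such as the ZooKeeper cluster mentioned earlier can simply declare a claim template, and each replica gets its own dynamically provisioned PV. A minimal sketch, not a full ZooKeeper deployment (the image and names are illustrative, and the matching headless Service is omitted):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: zk
spec:
  serviceName: zk
  replicas: 3
  selector:
    matchLabels:
      app: zk
  template:
    metadata:
      labels:
        app: zk
    spec:
      containers:
        - name: zookeeper
          image: zookeeper:3.6
          volumeMounts:
            - name: data
              mountPath: /data              # ZooKeeper's data directory
  volumeClaimTemplates:                     # one PVC (and thus one PV) per replica
    - metadata:
        name: data
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: dmp-nfs-storage
        resources:
          requests:
            storage: 5Gi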

Conclusion:

  • StorageClass supports many underlying storage types

  • StorageClass creates PVs dynamically per type, enabling dynamic capacity provisioning

  • StorageClass is very flexible, offering different capabilities depending on the type

  • Data availability and redundancy depend on the underlying type behind the StorageClass