1. What is persistent storage? Consider a real example: during a system migration we move our services, including a MySQL database, into K8S. K8S runs everything as Pods and manages them automatically: if a Pod dies, another is pulled up immediately. So if the Pod running MySQL dies, will the data stored in the original Pod still exist? Will the newly created Pod recover the data? The answer is: NO! Without persistent storage, brother, you really have gone straight from "deleted the database" to "ran away"! This real-world scenario shows how important persistent storage is in K8S; you could even say that without persistence, K8S would have no future! Today I'll go in depth with you through several ways to do persistent storage in K8S, hands-on, so you can really play with it. No more words, roll up your sleeves, and let's get it done!

PV:

A PersistentVolume (PV) is a pluggable K8S resource that supports many kinds of storage backends. A PV lets us persist our data to a storage server outside the K8S cluster. The following figure shows the storage service types supported by PV.
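For instance, besides network backends like NFS, a PV can be backed by a node-local hostPath, which is handy for quick experiments (but not for production, since the data lives on one node). A minimal sketch, with an illustrative path:

```yaml
# illustrative hostPath PV -- fine for local experiments only
apiVersion: v1
kind: PersistentVolume
metadata:
  name: demo-hostpath-pv
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 1Gi
  hostPath:
    path: /tmp/demo-pv
```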

As you can see, PV supports quite a few storage services. Let's put it into practice and choose NFS as our data storage service.

2. NFS server setup

What is NFS? Network File System (NFS) is a distributed file system protocol that allows computers on a network to share files with each other over TCP/IP.

1) On a CentOS 7 machine, create the following script to set up the NFS server: vim install_nfs.sh

```shell
#!/bin/bash
# install nfs
yum install -y nfs-utils
# create the shared directories
mkdir -p /nfs/data
mkdir -p /nfs/data/mysql
mkdir -p /nfs/data/nginx
# edit the exports file
cat > /etc/exports << EOF
/nfs/data *(rw,no_root_squash,sync)
EOF
# make the configuration take effect
exportfs -r
# start the rpcbind and nfs services
systemctl restart rpcbind && systemctl enable rpcbind
systemctl restart nfs && systemctl enable nfs
# check the nfs service
rpcinfo -p localhost
# show the exported mounts (replace IP with the server's address)
showmount -e IP
```

Make the script executable and run it:

```shell
chmod +x install_nfs.sh && ./install_nfs.sh
```
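A note on the exports line in that script: `/nfs/data *(rw,no_root_squash,sync)` means any client (`*`) may mount /nfs/data read-write (`rw`), root on a client keeps root privileges on the share (`no_root_squash`), and writes are committed synchronously (`sync`). If you want to assemble such a line yourself, you can sanity-check it in a temp file before touching /etc/exports (a quick sketch; the variables are illustrative):

```shell
# build the exports entry in a temp file first, then inspect it
SHARE_DIR=/nfs/data
EXPORT_OPTS='*(rw,no_root_squash,sync)'
TMP_EXPORTS=$(mktemp)
printf '%s %s\n' "$SHARE_DIR" "$EXPORT_OPTS" > "$TMP_EXPORTS"
cat "$TMP_EXPORTS"   # prints: /nfs/data *(rw,no_root_squash,sync)
```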

2) Install NFS clients on all nodes in the K8S cluster

```shell
yum -y install nfs-utils
systemctl start nfs && systemctl enable nfs
```

Now that we have an NFS server, how do we associate a PV with NFS? Look at the following definition:

```yaml
# nginx PV
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nginx-pv
  labels:
    app: nginx001
  namespace: nginx-pv-pvc
spec:
  accessModes:
    - ReadWriteMany
  capacity:
    storage: 2Gi
  nfs:
    path: /nfs/data/nginx
    server: 192.168.29.104
```

A PV is itself a K8S resource, managed by K8S, so it is also defined in YAML. In the definition above, the PV is named nginx-pv, it points at the /nfs/data/nginx directory exported by the NFS server, and the NFS server IP is 192.168.29.104. OK, so we have defined a PV. Can we just define it and use it directly? Let's consider a question first: as a developer, would you really want to create a PV yourself every time you need an external storage service? A company has not only developers but also operations. If we need external storage, we can simply tell ops and have them prepare a batch of PVs in K8S for us to pick from and use. Wouldn't that be convenient? So the designers of K8S were thoughtful here: a PV connects to the external server, but a Pod cannot use a PV directly. We need another resource, PersistentVolumeClaim (PVC for short), to connect to the PV; the Pod then connects to the PVC and can use the storage. It is a producer-consumer relationship, much like message-oriented middleware: the PV is the producer, the PVC is the consumer.
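Following that ops-prepares-PVs idea, the MySQL migration from the intro could get its own pre-provisioned PV on the same NFS server. A hypothetical sketch (name, access mode and size here are illustrative, the path matches the directory created by the install script):

```yaml
# hypothetical PV for MySQL data, backed by the same NFS server
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv
  labels:
    app: mysql001
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 5Gi
  nfs:
    path: /nfs/data/mysql
    server: 192.168.29.104
```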

3) PersistentVolumeClaim

PVC definition:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-pvc
  labels:
    app: nginx001
  namespace: nginx-pv-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
```

We have now defined a PVC named nginx-pvc. Then let's think about a question: both PV and PVC are defined, yet the PVC definition contains no configuration that refers to a PV, so how do they get bound? Internally, the pairing works like this: using the accessModes declared in the PVC and the 2Gi of storage it requests, the PVC automatically finds and binds a PV that satisfies those requirements. Note that a PV bound by one PVC cannot be bound by another PVC.
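If you do not want to rely on automatic matching, the PVC API also lets you pin a claim to one specific PV by name via spec.volumeName. A minimal sketch (the claim name here is hypothetical):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-pvc-pinned   # hypothetical name
  namespace: nginx-pv-pvc
spec:
  volumeName: nginx-pv     # bind this claim to that exact PV
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
```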

4) PV and PVC in practice

Now let's define an Nginx Pod that uses PV, PVC, and NFS for persistent storage. I'll put the PV and PVC defined above into one YAML file together with the Deployment:

vim nginx-pv-pvc.yaml

```yaml
# nginx PV
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nginx-pv
  labels:
    app: nginx001
  namespace: nginx-pv-pvc
spec:
  accessModes:
    - ReadWriteMany
  capacity:
    storage: 2Gi
  nfs:
    path: /nfs/data/nginx
    server: 192.168.29.104
---
# nginx PVC
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-pvc
  labels:
    app: nginx001
  namespace: nginx-pv-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
---
# define the nginx pod, bound to the PVC (nginx-pvc)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-wyn001
  namespace: nginx-pv-pvc
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx001
  template:
    metadata:
      name: nginx-wyn001
      labels:
        app: nginx001
    spec:
      containers:
        - name: nginx-wyn001
          image: nginx
          ports:
            - containerPort: 80
          volumeMounts:
            - name: nginx-data-volume
              mountPath: /usr/share/nginx/html
          resources:
            requests:
              cpu: 1000m
              memory: 200M
            limits:
              cpu: 2000m
              memory: 300M
      volumes:
        - name: nginx-data-volume
          persistentVolumeClaim:
            claimName: nginx-pvc
```

Create the resources:

```shell
kubectl apply -f nginx-pv-pvc.yaml
```

As you can see, persistentVolumeClaim: claimName: nginx-pvc declares that the Pod wants to use the PVC named nginx-pvc. Through that PVC and its PV, the nginx data ultimately lands in the /nfs/data/nginx directory on the NFS server, which we created earlier in the install script. Next, run the following on the master node.

```shell
kubectl get po,pv,pvc -n nginx-pv-pvc
```

The current data directory relationship looks like this:

The container data directory /usr/share/nginx/html is mounted through the PVC, which in turn maps through the PV to the /nfs/data/nginx directory on the NFS server.

Now let's create a few files in the /nfs/data/nginx directory on the NFS server, then check whether they show up inside the Nginx Pod:

```shell
ls /nfs/data/nginx/
```

Check the pod:

```shell
kubectl exec -it -n nginx-pv-pvc nginx-wyn001-5fdbd66695-xs6p2 -- ls /usr/share/nginx/html
```

I then deleted the Pod; the rebuilt replacement was scheduled onto node2. Checking the /usr/share/nginx/html directory of the new Pod's container, I found the data was automatically mounted again — persistence works.

Of course, you can also use a StorageClass to achieve data persistence with dynamic provisioning. Since this article is already long, I'll cover that in an article in the next issue.
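As a small teaser, a StorageClass is just another K8S object. A minimal hypothetical sketch (the provisioner name here is illustrative — a real value must match an external provisioner you have actually deployed):

```yaml
# hypothetical StorageClass; provisioner must match a deployed provisioner
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storage
provisioner: example.com/external-nfs
reclaimPolicy: Retain
```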