Hello everyone, I'm Xiao Cai, a "little rookie" in the Internet industry who hopes to outgrow the name. Soft on the outside, firm on the inside; kind words are welcome, and so are freeloaders! Remember to give me a like, a favorite, and a follow!

This article mainly introduces data storage in K8s

Refer to it if necessary

If it is helpful, don't forget to like and share!

My WeChat official account, Xiao Cai Liang, is now live. If you haven't followed it yet, remember to do so!

We have already covered the Namespace, Pod, and Pod-controller resources, so we are past most of the basics. In this article we continue with how data is stored in K8s!

A Pod is the smallest control unit in Kubernetes: containers run inside Pods, and a Pod can hold one or more containers.

Pod controllers create and destroy Pods frequently whenever a Pod goes wrong, and each Pod is independent, so any data stored inside its containers is wiped out with it. For stateful workloads that is a fatal blow. Fortunately, K8s not only supports mounting data, its storage features are also quite powerful. Without further ado, let's enter the world of data!

Data storage

K8s has the concept of a Volume. A Volume is a shared directory in a Pod that multiple containers can access. A Volume is defined at the Pod level and is then mounted into specific directories by the containers in that Pod. K8s uses Volumes to share data between the containers of a Pod and to persist it: a Volume's life cycle is not tied to any single container's, so data in the Volume is not lost when a container terminates or restarts.

Volume supports many types. The common ones covered here are basic storage (EmptyDir, HostPath, NFS), advanced storage (PV and PVC), and configuration storage (ConfigMap and Secret).

Besides the types listed above, there are also gcePersistentDisk, awsElasticBlockStore, azureFileVolume, and azureDisk, but since they are rarely used we won't go into them. Let's take a closer look at how each of the common types is used!

1. Basic storage

1) EmptyDir

This is the most basic Volume type: an EmptyDir is simply an empty directory on the host.

Concept:

It is created when the Pod is assigned to a Node. Its initial content is empty, and there is no need to specify a host directory: K8s automatically allocates one.

Note:

When the Pod is destroyed, the data in the EmptyDir is permanently deleted with it!

Use cases:

  1. Temporary space, such as a temporary directory for Web servers to write logs or TMP files.
  2. Used as a shared directory between containers (a directory where one container needs to fetch data from another container)
Hands-on:

Let's take nginx as an example and prepare a resource manifest:

apiVersion: v1
kind: Pod
metadata:
  name: nginx
  namespace: cbuc-test
  labels:
    app: nginx-pod
spec:
  containers:
  - name: nginx
    image: nginx:1.14-alpine
    ports:
    - containerPort: 80
    volumeMounts:   # mount nginx-log to /var/log/nginx in the nginx container
    - name: nginx-log
      mountPath: /var/log/nginx
  volumes:          # declare the volume here
  - name: nginx-log
    emptyDir: {}

We can then look at where the emptyDir volume lives on the host after the Pod is created. By default, a volume's directory is /var/lib/kubelet/pods/<pod uid>/volumes/kubernetes.io~<volume type>/<volume name>.
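
A quick way to confirm that, sketched below. It assumes the manifest above was saved as emptydir-pod.yaml and that the cbuc-test namespace already exists:

kubectl create -f emptydir-pod.yaml
# Grab the Pod's UID
kubectl get pod nginx -n cbuc-test -o jsonpath='{.metadata.uid}'
# On the Node running the Pod, substitute that UID into the path:
ls /var/lib/kubelet/pods/<pod-uid>/volumes/kubernetes.io~empty-dir/nginx-log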

2) HostPath

Concept:

HostPath mounts an actual directory on the Node into the Pod for containers to use. The advantage is that data in that directory still exists after the Pod is destroyed.

Hands-on:

Let's use nginx as an example again and prepare a resource manifest:

apiVersion: v1
kind: Pod
metadata:
  name: nginx
  namespace: cbuc-test
  labels:
    app: nginx-pod
spec:
  containers:
  - name: nginx
    image: nginx:1.14-alpine
    ports:
    - containerPort: 80
    volumeMounts:
    - name: nginx-log
      mountPath: /var/log/nginx
  volumes:
  - name: nginx-log
    hostPath:                   # the host directory is /data/nginx/log
      path: /data/nginx/log
      type: DirectoryOrCreate   # creation type

The supported values of spec.volumes.hostPath.type (the creation type) are:

  • DirectoryOrCreate: If the directory exists, use it; if it does not, create it and use it
  • Directory: The Directory must exist
  • FileOrCreate: If a file exists, it is used; if a file does not exist, it is created and used
  • File: The File must exist
  • Socket: A Unix Socket must exist
  • CharDevice: The character device must exist
  • BlockDevice: BlockDevice must exist

After the Pod is created, we can see the log files appear in the host directory /data/nginx/log.
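
A minimal sketch of verifying that persistence, assuming the manifest above was saved as hostpath-pod.yaml (nginx writes access.log and error.log under /var/log/nginx):

kubectl create -f hostpath-pod.yaml
# On the Node where the Pod was scheduled:
ls /data/nginx/log    # access.log and error.log appear here
kubectl delete -f hostpath-pod.yaml
ls /data/nginx/log    # the files survive the Pod's deletion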

3) NFS

EmptyDir and HostPath cover the basic scenarios, but each has a limit: EmptyDir belongs to the Pod, so its data is lost when the Pod is destroyed, while HostPath moves the storage to the Node, so the data is still lost if the Node goes down. For real durability we need an independent network storage system; NFS and CIFS are the common choices.

Concept:

NFS is a network storage system. We can set up an NFS server and connect the Pod's storage directly to it. Then no matter how the Pod moves between Nodes, as long as the Node can reach the NFS server, the data is fine.

Hands-on:

We need an NFS server, and here we will build one ourselves on the master node:

# Install the NFS server
yum install -y nfs-utils
# Prepare a shared directory
mkdir -p /data/nfs/nginx
# Expose the shared directory, read-write, to all hosts in the 192.168.108.0/24 segment
vim /etc/exports
# Add the following line:
# /data/nfs/nginx 192.168.108.0/24(rw,no_root_squash)
# Start the NFS server
systemctl start nfs

Then we install the NFS client tools on every Node so that the Nodes can mount NFS volumes:

yum install -y nfs-utils

With the above preparations in place we can prepare the resource manifest file:

apiVersion: v1
kind: Pod
metadata:
  name: nginx
  namespace: cbuc-test
  labels:
    app: nginx
spec:
  containers:
  - name: nginx-pod
    image: nginx:1.14-alpine
    ports:
    - containerPort: 80
    volumeMounts:
    - name: nginx-log
      mountPath: /var/log/nginx
  volumes:
  - name: nginx-log
    nfs:
      server: 192.168.108.100   # NFS server address (the master's IP)
      path: /data/nfs/nginx     # shared directory path

After creating the Pod, we can go to the /data/nfs/nginx directory on the NFS server and view the two log files.
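
A sketch of that verification, assuming the manifest above was saved as nfs-pod.yaml (showmount ships with nfs-utils):

# From any Node, confirm the export is visible
showmount -e 192.168.108.100
kubectl create -f nfs-pod.yaml
# Back on the master (the NFS server), nginx's logs land in the share:
ls /data/nfs/nginx    # access.log  error.log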

2. Advanced storage

Managing storage is a distinct problem from managing compute, and it calls for abstracting the details of how storage is provided from how it is consumed. For this, K8s introduces two API resources: PersistentVolume and PersistentVolumeClaim.

A PersistentVolume (PV) is an abstraction over the underlying shared storage. PVs are typically created and configured by the K8s administrator, and they connect to the underlying shared storage technology through plug-ins.

A PersistentVolumeClaim (PVC) is a user's request for storage. It declares the storage a Pod needs without the user having to care about the underlying details.

1) PV

A PV is a piece of network storage configured by the administrator in the cluster; it is itself a cluster-level resource. The manifest template looks like this:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv
spec:
  nfs:                              # storage type, e.g. NFS, CIFS, GlusterFS
  capacity:                         # storage capacity settings
    storage: 2Gi
  accessModes:                      # access modes
  storageClassName:                 # storage class
  persistentVolumeReclaimPolicy:    # reclaim policy

There are quite a few attributes and they are hard to remember, so let's first look at what each one means:

  • Storage type

The type of the underlying storage. K8s supports many storage types, each with its own configuration.

  • Storage capacity

Currently, only storage space can be configured. In the future, IOPS and throughput parameters may be added

  • accessModes

This parameter describes an application's access rights to the storage resource. The options are:

  1. ReadWriteOnce (RWO): read-write, but mountable by a single node only
  2. ReadOnlyMany (ROX): read-only, mountable by many nodes
  3. ReadWriteMany (RWX): read-write, mountable by many nodes
  • Storage class

A PV can specify its storage class via the storageClassName parameter:

  1. A PV with a specific class can only be bound to PVCs that request that class
  2. A PV without a class can only be bound to PVCs that request no class
  • Reclaim policy (persistentVolumeReclaimPolicy)

When a PV is no longer in use, it has to be dealt with (different storage types support different policies). The available reclaim policies are:

  1. Retain: keep the data; an administrator must clean it up manually

  2. Recycle: delete the data in the PV, equivalent in effect to rm -rf

  3. Delete: delete the volume on the backing storage device; common with cloud providers' storage services
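
As an aside, the reclaim policy of an existing PV can be changed in place. A minimal sketch, assuming a PV named pv01 like the one we create below:

kubectl patch pv pv01 -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'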

Life cycle:

The life cycle of a PV may be in four different stages:

  • Available: the PV is available and has not been bound by any PVC
  • Bound: the PV has been bound by a PVC
  • Released: the PVC has been deleted, but the resource has not yet been reclaimed by the cluster
  • Failed: automatic reclamation of the PV failed
Hands-on:

We already have an NFS storage server, so we will keep using NFS as the underlying storage. First we need to create a PV, corresponding to a path that NFS exposes.

# Create a directory
mkdir -pv /data/pv1

# Expose the path over NFS
vim /etc/exports
# Add the following line:
# /data/pv1 192.168.108.0/24(rw,no_root_squash)
# Re-export the shares so the change takes effect
exportfs -r

After completing the above steps we need to create a PV:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv01        # note that PVs are cluster-scoped, so no namespace is set
  labels:
    app: pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    path: /data/pv1
    server: 192.168.108.100

After creating it, we can view the PV:
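
For example, assuming the manifest above was saved as pv.yaml (the exact output columns vary with your kubectl version):

kubectl create -f pv.yaml
kubectl get pv pv01 -o wide   # STATUS reads Available until a PVC binds it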

2) PVC

A PVC is a request for storage resources, used to declare the required storage space, access modes, and storage class. The manifest template looks like this:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc
  namespace: cbuc-test
  labels:
    app: pvc
spec:
  accessModes:          # access modes
  selector:             # select a PV by label
  storageClassName:     # storage class
  resources:            # requested space
    requests:
      storage: 1Gi

Many of these properties we already know about in PV, so we’ll go through them briefly here

  • accessModes

This parameter describes the permission of an application to access storage resources

  • Selector

Existing PVs in the system are filtered by setting a label selector.

  • Storage class (storageClassName)

A PVC can specify the class of back-end storage it requires; only PVs with that class can be selected by the system.

  • Resource Requests

Describes requests for storage resources

Hands-on:

Prepare a PVC resource manifest:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc01
  namespace: cbuc-test
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi

After creation, we first check whether the PVC was created successfully.

Then we check whether the PV has been bound by the PVC; a sketch follows.
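
A minimal check, assuming the PVC manifest above was saved as pvc.yaml (output abridged):

kubectl create -f pvc.yaml
kubectl get pvc pvc01 -n cbuc-test   # STATUS should read Bound, with VOLUME pv01
kubectl get pv pv01                  # STATUS flips from Available to Bound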

3) Actual use

We have successfully created the PV and PVC, but we haven't yet shown how to use them, so let's prepare a Pod manifest:

apiVersion: v1
kind: Pod
metadata:
  name: pod-pvc
  namespace: cbuc-test
spec:
  containers:
  - name: nginx01
    image: nginx:1.14-alpine
    volumeMounts:
    - name: test-pv
      mountPath: /var/log/nginx
  volumes:
  - name: test-pv
    persistentVolumeClaim:
      claimName: pvc01
      readOnly: false   # nginx needs to write its logs, so the mount must be read-write
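
A sketch of the end-to-end check, assuming the manifest above was saved as pod-pvc.yaml:

kubectl create -f pod-pvc.yaml
# The Pod now writes through pvc01 into pv01; on the NFS server (the master):
ls /data/pv1        # access.log and error.log written by nginx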

4) Life cycle

Everything has a life cycle, and PV and PVC are no exception. Theirs looks like this:

  • Resource supply

The administrator manually creates the underlying storage and PV

  • Resource binding

The user creates a PVC, and K8s finds a PV that satisfies the claim and binds the two:

  1. If found, the binding is successful and the user’s application can use the PVC
  2. If not, the PVC will remain in a Pending state indefinitely until the system administrator creates a PV that meets its requirements

Once a PV is bound to a PVC, it is owned exclusively by that PVC and can no longer be bound to other PVCs.

  • Resource use

Users can use PVC in Pod just like volume

  • Release resources

Once the storage is no longer needed, the user can delete the PVC to release the PV. A PV bound to a deleted PVC is marked as released, but it cannot be bound to another PVC right away: data from the previous claim may still remain on the device, and the PV becomes usable again only after it has been cleaned up. See the sketch below.

  • Resource recovery

K8s reclaims the resource according to the reclaim policy set on the PV.

The above is the life cycle of PV and PVC; it is less a life cycle than the workflow for using them!
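
To see the release phase concretely, a minimal sketch continuing the example above (pv01 uses the Retain policy, so its data waits for manual cleanup):

kubectl delete pvc pvc01 -n cbuc-test
kubectl get pv pv01   # STATUS now reads Released rather than Available
# Under Retain, an admin must clear /data/pv1 and recreate the PV before it can be reused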

3. Configuration storage

Configuration stores, as the name implies, are used to store configurations. There are two types of configuration stores: ConfigMap and Secret

1) ConfigMap

ConfigMap is a special storage volume used to store configuration information. The manifest template looks like this:

apiVersion: v1
kind: ConfigMap
metadata:
  name: cmp
  namespace: cbuc-test
data:
  info: |
    username:cbuc
    sex:male

It is easy to use: there is no spec section, just data. Under the key info we store whatever configuration we need as key: value lines (the | turns info's value into a multi-line string).

You can create the ConfigMap by running kubectl create -f configmap.yaml.

To use it, we create a Pod:

apiVersion: v1
kind: Pod
metadata:
  name: test-pod
  namespace: cbuc-test
spec:
  containers:
  - name: nginx
    image: nginx:1.14-alpine
    volumeMounts:       # mount the ConfigMap into this directory
    - name: config
      mountPath: /var/configMap/config
  volumes:
  - name: config
    configMap:
      name: cmp         # the name of the ConfigMap we created above

Create the test Pod with kubectl create -f pod-cmp.yaml, then check the configuration file inside the Pod:
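
Each key in a mounted ConfigMap becomes a file in the mount directory, so the check might look like this (assuming the Pod is running):

kubectl exec -it test-pod -n cbuc-test -- cat /var/configMap/config/info
# username:cbuc
# sex:male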

2) Secret

K8s has an object very similar to ConfigMap, called Secret. It is mainly used to store sensitive information such as passwords, keys, and certificates.

First we base64-encode the data we want to store (note that base64 is encoding, not encryption):

# Encode the username
[root@master test]# echo -n 'cbuc' | base64
Y2J1Yw==
# Encode the password
[root@master test]# echo -n '123456' | base64
MTIzNDU2

Then prepare the Secret resource manifest:

apiVersion: v1
kind: Secret
metadata:
  name: secret
  namespace: cbuc-test
type: Opaque    # indicates base64 encoded Secret
data:
  username: Y2J1Yw==
  password: MTIzNDU2

Create the Secret with kubectl create -f secret.yaml, then prepare a Pod manifest:

apiVersion: v1
kind: Pod
metadata:
  name: pod-secret
  namespace: cbuc-test
spec:
  containers:
  - name: nginx
    image: nginx:1.14-alpine
    volumeMounts:
    - name: config
      mountPath: /var/secret/config
  volumes:
  - name: config
    secret:
      secretName: secret

After creation, exec into the Pod to view the files, and you will find the information has already been decoded.
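
A sketch of that check (Secret volumes decode the base64 values before writing the files):

kubectl exec -it pod-secret -n cbuc-test -- ls /var/secret/config
# password  username
kubectl exec -it pod-secret -n cbuc-test -- cat /var/secret/config/username
# cbuc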

END

This article has been a short introduction to data storage in K8s. See you next time (Service and Ingress)! The road is long, and Xiao Cai will walk it with you~

Work a little harder today, and tomorrow you will have one less favor to beg!

I am Xiao Cai, someone who studies alongside you. 💋

My WeChat official account, Xiao Cai Liang, is now live. If you haven't followed it yet, remember to do so!