Kubernetes storage provides external storage for Pods; data flows from the Pod to a PVC, from the PVC to a PV, and from the PV to the physical storage.

Storage

ConfigMap

The application reads configuration information from configuration files, command-line arguments, or environment variables. ConfigMap provides a mechanism for injecting configuration information into containers.

A ConfigMap can hold individual key-value attributes or be created from an entire file.

[node1@localhost ~]$ cat application.yaml
server:
    port: 8080
    undertow:
        accesslog:
          dir: 
          enabled: false  
          pattern: common   
          prefix: access_log  
          rotate: true 
          suffix: log   

[node1@localhost ~]$ kubectl create configmap app-config --from-file=application.yaml

The --from-file flag can be used multiple times to load several different files.

[node1@localhost ~]$ kubectl create configmap app-config --from-literal=application.server=aaa --from-literal=application.port=80
[node1@localhost ~]$ kubectl get configmaps app-config -o yaml

--from-literal takes a key=value pair and can also be used multiple times.
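Assuming the ConfigMap was created with the two --from-literal flags above, the output of kubectl get configmaps app-config -o yaml would look roughly like this (server-generated metadata such as uid and creationTimestamp omitted):

apiVersion: v1
data:
  application.port: "80"
  application.server: aaa
kind: ConfigMap
metadata:
  name: app-config
  namespace: default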

Using a ConfigMap in a Pod

  1. Use a ConfigMap to populate environment variables, either by mapping individual keys one-to-one with the env field or by referencing the whole ConfigMap through envFrom
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
  namespace: default
data:
  application.server: aaa
  application.port: "80"
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: env-config
  namespace: default
data:
  log_level: INFO
---
apiVersion: v1
kind: Pod
metadata:
  name: app1
spec:
  containers:
    - name: test-container
      image: ice.com/app1:v1
      command: ["bin/sh","-c","env"]
      env:
        - name: APPLICATION_SERVER
          valueFrom:
            configMapKeyRef:
              name: app-config
              key: application.server
        - name: APPLICATION_PORT
          valueFrom:
            configMapKeyRef:
              name: app-config
              key: application.port
      envFrom:
        - configMapRef:
            name: env-config
  restartPolicy: Never

You can run the following command to view the values:

[node1@localhost ~]$ kubectl describe cm app-config
  2. A ConfigMap value can also be used as a command-line argument
Reference the corresponding environment variable with the $(VAR_NAME) syntax, for example $(APPLICATION_SERVER).
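A minimal sketch of what that looks like in a Pod spec, reusing the app-config ConfigMap above (the Pod name app1-args and the echo command are just for illustration):

apiVersion: v1
kind: Pod
metadata:
  name: app1-args
spec:
  containers:
    - name: test-container
      image: ice.com/app1:v1
      # Kubernetes substitutes $(APPLICATION_SERVER) before the command is executed
      command: ["/bin/sh", "-c", "echo $(APPLICATION_SERVER)"]
      env:
        - name: APPLICATION_SERVER
          valueFrom:
            configMapKeyRef:
              name: app-config
              key: application.server
  restartPolicy: Never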
  3. Mount the ConfigMap as a volume through the data volume plugin
apiVersion: v1
kind: Pod
metadata:
  name: app1
spec:
  containers:
    - name: test-container
      image: ice.com/app1:v1
      command: ["bin/sh","-c","cat /etc/config/application.server"]
      volumeMounts:
        - name: config-volume
          mountPath: /etc/config
  volumes:
    - name: config-volume
      configMap:
        name: app-config
  restartPolicy: Never

ConfigMap hot update

Suppose you have a Deployment that references a ConfigMap. Use the command

[node1@localhost ~]$ kubectl edit configmap app-config

to modify the ConfigMap. Note that editing the ConfigMap alone does not update the Pods in the Deployment; the new configuration has to be reloaded by triggering a rolling update:

[node1@localhost ~]$ kubectl patch deployment app1 --patch '{"spec":{"template":{"metadata":{"annotations":{"version/config":"20191101"}}}}}'

This works by adding a version/config annotation under spec.template.metadata.annotations; changing the value of version/config triggers a rolling update of the Deployment each time.

Note that data in a volume mounted from a ConfigMap takes roughly 10 seconds to be updated after the ConfigMap changes.
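To verify that the new value has reached a Pod that mounts the ConfigMap as a volume, you can read the mounted file directly (the Pod name is a placeholder; the path matches the volume example above):

[node1@localhost ~]$ kubectl exec <pod-name> -- cat /etc/config/application.server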

Secret

Similar to ConfigMap, Secret is used to hold sensitive data such as passwords, tokens, and keys.

Secret comes in three types:

  • Service Account: created automatically by Kubernetes and mounted into Pods at /run/secrets/kubernetes.io/serviceaccount
  • Opaque: a Secret in base64-encoded form, used to store passwords, keys, and so on. Values must be base64 encoded beforehand, for example: echo -n "icepear" | base64
  • kubernetes.io/dockerconfigjson: stores authentication information for a private Docker registry and can be created with the following command
kubectl create secret docker-registry ice-harbor --docker-server=reg.icepear.cn --docker-username=admin --docker-password=Harbor123456 --docker-email=[email protected]

The Pod's YAML then authenticates to the registry by referencing the secret under imagePullSecrets.
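A minimal sketch of that reference in a Pod spec (the image path is an assumption; only the imagePullSecrets part matters here):

apiVersion: v1
kind: Pod
metadata:
  name: app1
spec:
  containers:
    - name: app1
      # image pulled from the private registry configured in the secret
      image: reg.icepear.cn/library/app1:v1
  imagePullSecrets:
    - name: ice-harbor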

Volume

The lifecycle of a Volume is the same as that of the Pod it belongs to.

emptyDir

  • Temporary scratch space; the directory is empty when it is created
  • The data is not deleted when a container crashes (it is removed only when the Pod is removed)

Example

apiVersion: v1
kind: Pod
metadata:
  name: app1
spec:
  containers:
    - name: test-container
      image: ice.com/app1:v1
      volumeMounts:
        - name: cache-volume
          mountPath: /etc/config
  volumes:
    - name: cache-volume
      emptyDir: {}

hostPath

hostPath mounts a file or directory from the host node's filesystem into the Pod, much like a Docker volume mount.

  • Use a hostPath of /var/lib/docker to access Docker internals from a container
  • Use a hostPath of /dev/cgroups to run cAdvisor in a container

You can specify a type field:

| Value | Behavior |
| --- | --- |
| (empty string) | No check is performed |
| DirectoryOrCreate | If nothing exists at the given path, an empty directory is created there with 0755 permissions |
| Directory | A directory must exist at the given path |
| FileOrCreate | If nothing exists at the given path, an empty file is created there with 0644 permissions |
| File | A file must exist at the given path |
| Socket | A UNIX socket must exist at the given path |
| CharDevice | A character device must exist at the given path |
| BlockDevice | A block device must exist at the given path |

  • Because the files on each node differ, a Pod with the same configuration may behave differently on different nodes
  • Files or directories created on the underlying host are writable only by root; you either need to run the process as root in a privileged container, or change the permissions of the host file or directory so that it can be written from the Pod

Example

apiVersion: v1
kind: Pod
metadata:
  name: app1
spec:
  containers:
    - name: test-container
      image: ice.com/app1:v1
      volumeMounts:
        - name: path-volume
          mountPath: /etc/config
  volumes:
    - name: path-volume
      hostPath:
        path: /config
        type: Directory

PV-PVC

PVs are also implemented as plugins. Plugins from third-party cloud providers are supported, as well as local storage plugins such as NFS and hostPath.

A PVC is a request for storage; once it is defined, a suitable PV is found automatically to satisfy the request. In other words, the relationship is:

graph TD
Pod-->PVC
PVC-->PV

PV access modes

  • ReadWriteOnce (RWO): mounted read-write by a single node
  • ReadOnlyMany (ROX): mounted read-only by many nodes
  • ReadWriteMany (RWX): mounted read-write by many nodes

Reclaim policies

  • Retain: manual reclamation
  • Recycle: a basic scrub that erases the volume's contents
  • Delete: the associated storage asset is deleted
  • Only NFS and hostPath support the Recycle policy; AWS EBS, GCE PD, and Azure Disk support the Delete policy
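If needed, the reclaim policy of an existing PV can be changed with kubectl patch (the PV name is a placeholder):

[node1@localhost ~]$ kubectl patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'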

Status

  • Available: the resource is not yet bound to any claim
  • Bound: the volume is bound to a claim
  • Released: the claim has been deleted, but the resource has not yet been reclaimed by the cluster
  • Failed: automatic reclamation failed
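The current phase of each PV is shown in the STATUS column of:

[node1@localhost ~]$ kubectl get pv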

Creating an NFS Server

  1. Install NFS on the server and start the service
# yum install -y nfs-common nfs-utils rpcbind
# systemctl start rpcbind
# systemctl start nfs
# systemctl enable rpcbind
# systemctl enable nfs
  2. Configure the server
chmod 755 /data
chown -R nfsnobody:nfsnobody /data
# Add the following line to /etc/exports
/data *(rw,sync,all_squash,no_root_squash)
# Reload the export table
exportfs -r

Format of the NFS exports file (/etc/exports):

<shared directory> <client address 1>(option 1, option 2, ...) <client address 2>(option 1, option 2, ...)

<shared directory> <client address>(option 1, option 2, ...)
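For example, a hypothetical exports file that shares /data read-write with one subnet and read-only with another could look like this:

/data 192.168.110.0/24(rw,sync) 192.168.111.0/24(ro,sync)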

  3. Install and start the client
# yum install -y nfs-utils rpcbind
  4. Basic client usage
# View the exports of an NFS server: showmount -e <NFS server IP address>
showmount -e 192.168.110.11
# Mount to a local directory: mount -t nfs <NFS server IP address>:/<directory> <local directory>
mount -t nfs 192.168.110.11:/data /test

PV and PVC examples

apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-pv
spec:
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: ""   # empty string means the PV has no StorageClass
  mountOptions:
    - hard
    - nfsvers=4.1
  nfs:
    path: /temp
    server: 192.168.110.15
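A PVC that could bind to this PV might look like the following sketch (the name redis-pvc is an assumption; the empty storageClassName matches the PV above):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: redis-pvc
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  storageClassName: ""
  resources:
    requests:
      storage: 10Gi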

PV capacity

Since Kubernetes 1.11, a PVC can be expanded after it has been created.
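Expansion is done by raising the requested size on the PVC itself, for example with a patch like the following (redis-pvc and the new size are just examples; the PVC's StorageClass must allow expansion, as described next):

[node1@localhost ~]$ kubectl patch pvc redis-pvc -p '{"spec":{"resources":{"requests":{"storage":"20Gi"}}}}'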

A StorageClass can provision PVs automatically based on PVCs; to allow expansion, the corresponding parameter (allowVolumeExpansion) needs to be set when creating the StorageClass.

For example,

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterfs-sc
parameters:
  clusterid: 075e35a0ce70274b3ba7f158e77edb2c
  resturl: http://172.16.9.201:8087
  volumetype: "replicate:3"
provisioner: kubernetes.io/glusterfs
reclaimPolicy: Delete
volumeBindingMode: Immediate
allowVolumeExpansion: true
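A PVC that requests storage from this StorageClass, and can therefore be provisioned dynamically and expanded later, might look like this sketch (the name and size are assumptions):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gluster-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: glusterfs-sc
  resources:
    requests:
      storage: 5Gi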