The content is from the official Longhorn 1.1.2 English technical manual.

This article is part of a series:

  • What is Longhorn?
  • Longhorn Enterprise Cloud Native Container Distributed Storage – Design, Architecture, and Concepts
  • Longhorn Enterprise Cloud Native Container Distributed Storage – Deployment
  • Longhorn Enterprise Cloud Native Container Distributed Storage – Volume and Node
  • Longhorn Enterprise Cloud Native Container Distributed Storage – K8S Resource Configuration Example
  • Longhorn Enterprise Cloud Native Container Distributed Storage – Monitoring (Prometheus + AlertManager + Grafana)

Table of Contents

  • Creating a Snapshot
  • Periodic (Recurring) Snapshots and Backups
    • Set up periodic snapshots using the Longhorn UI
    • Set Recurring Jobs using StorageClass
    • Allowing Recurring Jobs while a volume is detached
  • Disaster Recovery Volumes
    • Create a DR volume
  • Setting a Backup Target
    • Setting up AWS S3 backup storage
    • Setting up local test backup storage
    • S3 communication using a self-signed SSL certificate
    • Enable virtual-hosted-style access for S3-compatible backup storage
    • NFS backup storage
  • Create backup
  • Recovering from backup
  • Restore volumes for Kubernetes StatefulSets
  • Enable CSI snapshot support on the cluster
    • Add a default VolumeSnapshotClass
    • If you are updating from a previous Longhorn release in an Air Gap environment
    • If your Kubernetes distribution does not bundle the Snapshot Controller
  • Create backups through CSI
  • Create a backup through the CSI mechanism
    • How the CSI mechanism works
    • Check the backup
    • VolumeSnapshot example
  • Restore backups through CSI
  • Restore a backup through a VolumeSnapshot object
    • Restore a backup that is not associated with a VolumeSnapshot

Creating a Snapshot

A snapshot is the state of a Longhorn volume at a given point in time.

To create a snapshot of an existing volume:

  1. In the top navigation bar of the Longhorn UI, click Volume.
  2. Click the name of the volume for which you want to create a snapshot. This opens the volume details page.
  3. Click the Take Snapshot button.

After a snapshot is created, you will see it in the snapshot list of the Volume before the Volume Head.

Periodic (Recurring) Snapshots and Backups

From Longhorn UI, periodic snapshots and backups can be scheduled.

To set the schedule, go to the volume details view in Longhorn and set:

  • The schedule type, either Backup or Snapshot
  • A cron expression specifying when the backup or snapshot will be created
  • The number of backups or snapshots to retain
  • Any labels that should be applied to the backup or snapshot

Longhorn then automatically creates snapshots or backups on that schedule, as long as the volume is attached to a node.

Periodic snapshots can be configured using Longhorn UI or Kubernetes StorageClass.

Note: To avoid the problem of a recurring job overwriting the old backup/snapshot with an identical backup or an empty snapshot when the volume has had no new data for a long time, Longhorn does the following:

  1. A recurring backup job takes a new backup only if the volume has new data since the last backup.
  2. A recurring snapshot job takes a new snapshot only if the volume head (live data) contains new data.

Set up periodic snapshots using the Longhorn UI

You can configure periodic snapshots and backups from the volume details page. To navigate to this page, click Volume, and then click the Volume name.

Set Recurring Jobs using StorageClass

Planned backups and snapshots can be configured in the StorageClass recurringJobs parameter.

Any future volumes created using this StorageClass will automatically set these Recurring Jobs.

The recurringJobs field should follow this JSON format:

    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: longhorn
    provisioner: driver.longhorn.io
    parameters:
      numberOfReplicas: "3"
      staleReplicaTimeout: "30"
      fromBackup: ""
      recurringJobs: '[ { "name":"snap", "task":"snapshot", "cron":"*/1 * * * *", "retain":1 }, { "name":"backup", "task":"backup", "cron":"*/2 * * * *", "retain":1 } ]'

The following parameters should be specified for each recurring job:

  1. Name: the name of the job. Do not use duplicate names in a single recurringJobs list. The name cannot be longer than 8 characters.

  2. Task: the type of the job. Only snapshot (periodically create snapshots) and backup (periodically create snapshots, then back them up) are supported.

  3. Cron: a cron expression that sets the job's schedule.

  4. Retain: the number of snapshots/backups Longhorn retains for the job. It must be at least 1.

Allowing Recurring Jobs while a volume is detached

Longhorn provides the allow-recurring-job-while-volume-detached setting, so recurring backups can run even when a volume is detached. You can find this setting in the Longhorn UI.

When this setting is enabled, Longhorn will automatically attach the volume and take a snapshot/backup when a recurring snapshot/backup is required.

Note that while a volume is being automatically attached, it is not ready to handle workloads. Workloads must wait until the recurring job completes.
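
The steps above use the UI; if you manage Longhorn declaratively, the same setting can also be changed through Longhorn's Setting custom resource. A minimal sketch, assuming the longhorn.io/v1beta1 Setting CRD that ships with Longhorn 1.1.x:

```yaml
# Sketch: enable recurring jobs for detached volumes via the Longhorn
# Setting custom resource; apply with `kubectl apply -f`.
apiVersion: longhorn.io/v1beta1
kind: Setting
metadata:
  name: allow-recurring-job-while-volume-detached
  namespace: longhorn-system
value: "true"
```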

Disaster Recovery Volumes

A disaster recovery (DR) volume is a special volume that stores data in a backup cluster in case the entire primary cluster fails. Disaster recovery volumes improve the resiliency of Longhorn volumes.

For a disaster recovery volume, Last Backup represents the latest Backup of its original Backup volume.

If the icon representing the disaster volume is gray, it indicates that the volume is recovering Last Backup and cannot be activated. If the icon is blue, Last Backup has been restored.

Create a DR volume

Prerequisites: Set up two Kubernetes clusters. They will be called clusters A and B. Install Longhorn on both clusters and set the same backup target on both clusters.

  1. In cluster A, make sure volume X has a backup, or recurring backups are scheduled.
  2. In cluster B, on the Backup page, select backup volume X and create the DR volume Y. It is strongly recommended to use the backup volume name as the DR volume name.
  3. Longhorn automatically attaches DR volume Y to a random node. Longhorn then begins polling backup volume X and incrementally restoring it to volume Y.

Setting a Backup Target

The backup target is the endpoint used to access BackupStore in Longhorn. Backupstore is an NFS or S3-compatible server used to store backups of Longhorn volumes. The backup target can be set in Settings/General/BackupTarget.

If you don’t have access to AWS S3 or want to try backup storage first, we also provide a way to set up local S3 test backup storage using MinIO.

Longhorn also supports setting a recurring snapshot/ Backup job for the volume through the Longhorn UI or Kubernetes Storage Class.

Setting up AWS S3 backup storage

  1. Create a new bucket in AWS S3.

  2. Set permissions for Longhorn. There are two options for providing credentials. First, you can create a Kubernetes secret with the credentials of an AWS IAM user. Second, you can use a third-party application to manage temporary AWS IAM credentials for Pods via annotations, rather than distributing AWS credentials.

    • Option 1: Create Kubernetes Secret using IAM user credentials

      1. Follow the instructions to create a new AWS IAM user and set the following permissions. Edit the Resource section to use your S3 bucket name:

        {
          "Version": "2012-10-17",
          "Statement": [
            {
              "Sid": "GrantLonghornBackupstoreAccess0",
              "Effect": "Allow",
              "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:ListBucket",
                "s3:DeleteObject"
              ],
              "Resource": [
                "arn:aws:s3:::<your-bucket-name>",
                "arn:aws:s3:::<your-bucket-name>/*"
              ]
            }
          ]
        }
      2. Create a Kubernetes secret named aws-secret in the namespace where Longhorn is installed (longhorn-system by default). The secret must be created in the longhorn-system namespace in order for Longhorn to access it:

        kubectl create secret generic <aws-secret> \
            --from-literal=AWS_ACCESS_KEY_ID=<your-aws-access-key-id> \
            --from-literal=AWS_SECRET_ACCESS_KEY=<your-aws-secret-access-key> \
            -n longhorn-system
    • Option 2: Set permissions using IAM temporary credentials via AWS STS AssumeRole (kube2iam or kiam)

      Kube2iam and kiam are Kubernetes applications that manage AWS IAM permissions for Pods using annotations rather than distributing AWS credentials. Follow the instructions in the kube2iam or kiam GitHub repository to install one into your Kubernetes cluster.

      1. Create a new AWS IAM role for the AWS S3 service and set the following permissions:

        {
          "Version": "2012-10-17",
          "Statement": [
            {
              "Sid": "GrantLonghornBackupstoreAccess0",
              "Effect": "Allow",
              "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:ListBucket",
                "s3:DeleteObject"
              ],
              "Resource": [
                "arn:aws:s3:::<your-bucket-name>",
                "arn:aws:s3:::<your-bucket-name>/*"
              ]
            }
          ]
        }
      2. Edit the AWS IAM role with the following trust relationship:

        {
          "Version": "2012-10-17",
          "Statement": [
            {
              "Effect": "Allow",
              "Principal": {
                "Service": "ec2.amazonaws.com"
              },
              "Action": "sts:AssumeRole"
            },
            {
              "Effect": "Allow",
              "Principal": {
                "AWS": "arn:aws:iam::<AWS_ACCOUNT_ID>:role/<AWS_EC2_NODE_INSTANCE_ROLE>"
              },
              "Action": "sts:AssumeRole"
            }
          ]
        }
      3. Create a Kubernetes secret named aws-secret in Longhorn's namespace (longhorn-system by default). The secret must be created in the longhorn-system namespace in order for Longhorn to access it:

        kubectl create secret generic <aws-secret> \
            --from-literal=AWS_IAM_ROLE_ARN=<your-aws-iam-role-arn> \
            -n longhorn-system
  3. Go to Longhorn UI. In the top navigation bar, click Settings. In the Backup section, set the Backup Target to:

    s3://<your-bucket-name>@<your-aws-region>/

    Make sure there is a slash at the end, otherwise an error will be reported. Subdirectories (prefixes) can be used:

    s3://<your-bucket-name>@<your-aws-region>/mypath/

    Also make sure you set <your-aws-region> in the URL.

    For AWS, region codes can be found at: docs.aws.amazon.com/AmazonRDS/l…

    For Google Cloud Storage, region codes can be found at: cloud.google.com/storage/doc…

  4. Set the Backup Target Credential Secret to:

    aws-secret

    This is the name of the secret containing the AWS credentials or the AWS IAM role.

Result: Longhorn can store backups in S3. To create a backup, see this section.

Note: If you operate Longhorn behind a proxy and want to use AWS S3 as backup storage, you must provide Longhorn with information about your proxy in aws-secret, as shown below:

kubectl create secret generic <aws-secret> \
    --from-literal=AWS_ACCESS_KEY_ID=<your-aws-access-key-id> \
    --from-literal=AWS_SECRET_ACCESS_KEY=<your-aws-secret-access-key> \
    --from-literal=HTTP_PROXY=<your-proxy-ip-and-port> \
    --from-literal=HTTPS_PROXY=<your-proxy-ip-and-port> \
    --from-literal=NO_PROXY=<excluded-ip-list> \
    -n longhorn-system

Ensure that NO_PROXY contains the network addresses, network address ranges, and domains that should bypass the proxy. The minimum values Longhorn requires in NO_PROXY are:

  • localhost
  • 127.0.0.1
  • 0.0.0.0
  • 10.0.0.0/8 (K8s components’ IPs)
  • 192.168.0.0/16 (internal IPs in the cluster)

Setting up local test backup storage

Two backupstores for testing purposes, based on an NFS server and a MinIO S3 server, are provided in ./deploy/backupstores.

  1. After longhorn-system is created, use the following command to set up a MinIO S3 server for backup storage.

    kubectl create -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/minio-backupstore.yaml
  2. Go to Longhorn UI. In the top navigation bar, click Settings. In the Backup section, set the Backup Target to

    s3://backupbucket@us-east-1/

    And set Backup Target Credential Secret to:

    minio-secret

    The minio-secret YAML is as follows:

    apiVersion: v1
    kind: Secret
    metadata:
      name: minio-secret
      namespace: longhorn-system
    type: Opaque
    data:
      AWS_ACCESS_KEY_ID: bG9uZ2hvcm4tdGVzdC1hY2Nlc3Mta2V5 # longhorn-test-access-key
      AWS_SECRET_ACCESS_KEY: bG9uZ2hvcm4tdGVzdC1zZWNyZXQta2V5 # longhorn-test-secret-key
      AWS_ENDPOINTS: aHR0cHM6Ly9taW5pby1zZXJ2aWNlLmRlZmF1bHQ6OTAwMA== # https://minio-service.default:9000
      AWS_CERT: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURMRENDQWhTZ0F3SUJBZ0lSQU1kbzQycGhUZXlrMTcvYkxyWjVZRHN3RFFZSktvWklodmNOQVFFTEJRQXcKR2pFWU1CWUdBMVVFQ2hNUFRHOXVaMmh2Y200Z0xTQlVaWE4wTUNBWERUSXdNRFF5TnpJek1EQXhNVm9ZRHpJeApNakF3TkRBek1qTXdNREV4V2pBYU1SZ3dGZ1lEVlFRS0V3OU1iMjVuYUc5eWJpQXRJRlJsYzNRd2dnRWlNQTBHCkNTcUdTSWIzRFFFQkFRVUFBNElCRHdBd2dnRUtBb0lCQVFEWHpVdXJnUFpEZ3pUM0RZdWFlYmdld3Fvd2RlQUQKODRWWWF6ZlN1USs3K21Oa2lpUVBvelVVMmZvUWFGL1BxekJiUW1lZ29hT3l5NVhqM1VFeG1GcmV0eDBaRjVOVgpKTi85ZWFJNWRXRk9teHhpMElPUGI2T0RpbE1qcXVEbUVPSXljdjRTaCsvSWo5Zk1nS0tXUDdJZGxDNUJPeThkCncwOVdkckxxaE9WY3BKamNxYjN6K3hISHd5Q05YeGhoRm9tb2xQVnpJbnlUUEJTZkRuSDBuS0lHUXl2bGhCMGsKVHBHSzYxc2prZnFTK3hpNTlJeHVrbHZIRXNQcjFXblRzYU9oaVh6N3lQSlorcTNBMWZoVzBVa1JaRFlnWnNFbQovZ05KM3JwOFhZdURna2kzZ0UrOElXQWRBWHExeWhqRDdSSkI4VFNJYTV0SGpKUUtqZ0NlSG5HekFnTUJBQUdqCmF6QnBNQTRHQTFVZER3RUIvd1FFQXdJQ3BEQVRCZ05WSFNVRUREQUtCZ2dyQmdFRkJRY0RBVEFQQmdOVkhSTUIKQWY4RUJUQURBUUgvTURFR0ExVWRFUVFxTUNpQ0NXeHZZMkZzYUc5emRJSVZiV2x1YVc4dGMyVnlkbWxqWlM1awpaV1poZFd4MGh3Ui9BQUFCTUEwR0NTcUdTSWIzRFFFQkN3VUFBNElCQVFDbUZMMzlNSHVZMzFhMTFEajRwMjVjCnFQRUM0RHZJUWozTk9kU0dWMmQrZjZzZ3pGejFXTDhWcnF2QjFCMVM2cjRKYjJQRXVJQkQ4NFlwVXJIT1JNU2MKd3ViTEppSEtEa0Jmb2U5QWI1cC9VakpyS0tuajM0RGx2c1cvR3AwWTZYc1BWaVdpVWorb1JLbUdWSTI0Q0JIdgpnK0JtVzNDeU5RR1RLajk0eE02czNBV2xHRW95YXFXUGU1eHllVWUzZjFBWkY5N3RDaklKUmVWbENtaENGK0JtCmFUY1RSUWN3cVdvQ3AwYmJZcHlERFlwUmxxOEdQbElFOW8yWjZBc05mTHJVcGFtZ3FYMmtYa2gxa3lzSlEralAKelFadHJSMG1tdHVyM0RuRW0yYmk0TktIQVFIcFc5TXUxNkdRakUxTmJYcVF0VEI4OGpLNzZjdEg5MzRDYWw2VgotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==

    For more information on creating Secret, see the Kubernetes documentation. Secret must be created in the Longhorn-system namespace for Longhorn to access it.

    Note: Make sure to use echo -n when generating base64 encoding, otherwise new lines will be added at the end of the string and S3 access will fail.
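
    The base64 values above can be generated as follows (a quick sketch; `echo -n` matches the note above about avoiding trailing newlines):

    ```shell
    # Encode the MinIO credentials for the secret; -n prevents a trailing
    # newline from being encoded, which would break S3 access.
    echo -n "longhorn-test-access-key" | base64   # → bG9uZ2hvcm4tdGVzdC1hY2Nlc3Mta2V5
    echo -n "longhorn-test-secret-key" | base64   # → bG9uZ2hvcm4tdGVzdC1zZWNyZXQta2V5
    ```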

  3. Click the Backup tab in the UI. It should show an empty list without any errors.

Result: Longhorn can store backups in S3.

S3 communication using a self-signed SSL certificate

If you want to use a self-signed SSL certificate, you can specify AWS_CERT in the Kubernetes secret supplied to Longhorn. See the example in setting up local test backup storage. Note that the certificate needs to be in PEM format, and it must either be its own CA or you must include a certificate chain that contains the CA certificate. To include multiple certificates, simply concatenate the PEM files.
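
A sketch of that concatenation step, assuming GNU coreutils; server.crt and ca.crt are placeholder file names, not from the manual:

```shell
# Placeholder certificate files; replace with your real PEM files.
printf -- '-----BEGIN CERTIFICATE-----\nAAAA\n-----END CERTIFICATE-----\n' > server.crt
printf -- '-----BEGIN CERTIFICATE-----\nBBBB\n-----END CERTIFICATE-----\n' > ca.crt

# Concatenate the server certificate and its CA into a single PEM bundle,
# then base64-encode the bundle for the AWS_CERT field of the secret.
cat server.crt ca.crt > bundle.pem
base64 bundle.pem | tr -d '\n' > aws_cert.b64
```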

Enable virtual-hosted-style access for S3-compatible backup storage

You may need to enable this addressing method for S3-compatible backup storage in the following situations:

  1. You want to switch to the new access style right away so you do not have to worry about the Amazon S3 path-style deprecation plan;
  2. The backupstore you use only supports virtual-hosted-style access, for example Alibaba Cloud (Aliyun) OSS;
  3. You have configured the MINIO_DOMAIN environment variable to enable virtual-hosted-style requests for the MinIO server;
  4. The error AWS Error: SecondLevelDomainForbidden Please use virtual hosted style to access. ..... is triggered.

Enable the virtual-hosted-style access method

  1. Add a new field VIRTUAL_HOSTED_STYLE with the value true to your backup target secret. For example:

    apiVersion: v1
    kind: Secret
    metadata:
      name: s3-compatible-backup-target-secret
      namespace: longhorn-system
    type: Opaque
    data:
      AWS_ACCESS_KEY_ID: bG9uZ2hvcm4tdGVzdC1hY2Nlc3Mta2V5
      AWS_SECRET_ACCESS_KEY: bG9uZ2hvcm4tdGVzdC1zZWNyZXQta2V5
      AWS_ENDPOINTS: aHR0cHM6Ly9taW5pby1zZXJ2aWNlLmRlZmF1bHQ6OTAwMA==
      VIRTUAL_HOSTED_STYLE: dHJ1ZQ== # true
  2. Deploy/update the secret and set it in Settings/General/BackupTargetSecret.

NFS Backup storage

To use an NFS server as backup storage, the NFS server must support NFSv4.

The destination URL should look like this:

nfs://longhorn-test-nfs-svc.default:/opt/backupstore

Result: Longhorn can store backups in NFS.

Create backup

Backups in Longhorn are objects in backup storage outside the cluster. The backup of the snapshot is copied to the backup storage, and the endpoint accessing the backup storage is the backup target.

Prerequisite: You must set a backup target. For more information, see Setting a Backup Target. If BackupTarget is not set, an error occurs.

To create a backup,

  1. Navigate to the Volume menu.
  2. Select the volume to back up.
  3. Click Create Backup.
  4. Add the appropriate labels and click OK.

Result: Backup is created. To view it, click Backup in the top navigation bar.

Recovering from backup

Longhorn can easily restore a backup to a volume.

When restoring a backup, a volume with the same name is created by default. If a volume with the same name as the backup already exists, the backup will not be restored.

To restore the backup,

  1. Navigate to the Backup menu
  2. Select the Backup you want to Restore, and then click Restore Latest Backup
  3. In the Name field, select the volume you want to restore
  4. Click OK

Result: The restored Volume is available on the Volume page.

Restore volumes for Kubernetes StatefulSets

Longhorn supports restoring backups. One use case for this feature is to restore data used by a Kubernetes StatefulSet, which requires restoring one volume for each replica that was backed up.

To restore, follow the instructions below. The example below uses a StatefulSet with one volume attached to each Pod and two replicas.

  1. Connect to the Longhorn UI page in your web browser. Under the Backup tab, select the name of the StatefulSet volume. Click the dropdown menu of the volume entry and restore it. Give the volume a name that you can easily reference later when creating the Persistent Volumes.

    • Repeat this step for each volume that you want to restore.
    • For example, if you restore a StatefulSet with two replicas whose volumes were backed up as pvc-01a and pvc-02b, the restore might look like this:
    Backup Name Restored Volume
    pvc-01a statefulset-vol-0
    pvc-02b statefulset-vol-1
  2. In Kubernetes, create a PersistentVolume for each restored Longhorn volume. Give each a name that you can easily refer to later when creating the Persistent Volume Claims. The storage capacity, numberOfReplicas, storageClassName, and volumeHandle values below must be replaced. In this example, we refer to statefulset-vol-0 and statefulset-vol-1 in Longhorn and use longhorn as our storageClassName.

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: statefulset-vol-0
    spec:
      capacity:
        storage: <size> # must match size of Longhorn volume
      volumeMode: Filesystem
      accessModes:
        - ReadWriteOnce
      persistentVolumeReclaimPolicy: Delete
      csi:
        driver: driver.longhorn.io # driver must match this
        fsType: ext4
        volumeAttributes:
          numberOfReplicas: <replicas> # must match Longhorn volume value
          staleReplicaTimeout: '30' # in minutes
        volumeHandle: statefulset-vol-0 # must match volume name from Longhorn
      storageClassName: longhorn # must be same name that we will use later
    ---
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: statefulset-vol-1
    spec:
      capacity:
        storage: <size>  # must match size of Longhorn volume
      volumeMode: Filesystem
      accessModes:
        - ReadWriteOnce
      persistentVolumeReclaimPolicy: Delete
      csi:
        driver: driver.longhorn.io # driver must match this
        fsType: ext4
        volumeAttributes:
          numberOfReplicas: <replicas> # must match Longhorn volume value
          staleReplicaTimeout: '30'
        volumeHandle: statefulset-vol-1 # must match volume name from Longhorn
      storageClassName: longhorn # must be same name that we will use later
    Copy the code
  3. In the namespace where the StatefulSet will be deployed, create PersistentVolumeClaims for each PersistentVolume. The names of the PersistentVolumeClaims must follow this naming scheme:

    <name of Volume Claim Template>-<name of StatefulSet>-<index>

    StatefulSet Pods are zero-indexed. In this example, the name of the Volume Claim Template is data, the name of the StatefulSet is webapp, and there are two replicas with indexes 0 and 1.
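
    The naming scheme above can be sketched in shell (data, webapp, and 2 replicas are the values from this example):

    ```shell
    # Compute the PVC names a StatefulSet will claim, following
    # <volume claim template>-<statefulset name>-<ordinal>, zero-indexed.
    TEMPLATE=data
    STS=webapp
    REPLICAS=2
    for i in $(seq 0 $((REPLICAS - 1))); do
      echo "${TEMPLATE}-${STS}-${i}"
    done
    # → data-webapp-0
    # → data-webapp-1
    ```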

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: data-webapp-0
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 2Gi # must match size from earlier
      storageClassName: longhorn # must match name from earlier
      volumeName: statefulset-vol-0 # must reference Persistent Volume
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: data-webapp-1
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 2Gi # must match size from earlier
      storageClassName: longhorn # must match name from earlier
      volumeName: statefulset-vol-1 # must reference Persistent Volume
  4. Create StatefulSet:

    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: webapp # match this with the PersistentVolumeClaim naming scheme
    spec:
      selector:
        matchLabels:
          app: nginx # has to match .spec.template.metadata.labels
      serviceName: "nginx"
      replicas: 2 # by default is 1
      template:
        metadata:
          labels:
            app: nginx # has to match .spec.selector.matchLabels
        spec:
          terminationGracePeriodSeconds: 10
          containers:
          - name: nginx
            image: k8s.gcr.io/nginx-slim:0.8
            ports:
            - containerPort: 80
              name: web
            volumeMounts:
            - name: data
              mountPath: /usr/share/nginx/html
      volumeClaimTemplates:
      - metadata:
          name: data # match this with the PersistentVolumeClaim naming scheme
        spec:
          accessModes: [ "ReadWriteOnce" ]
          storageClassName: longhorn # must match name from earlier
          resources:
            requests:
              storage: 2Gi # must match size from earlier

Result: The recovered data should now be accessible from within StatefulSet Pods.

Enable CSI snapshot support on the cluster

Prerequisites

CSI snapshot support is available for Kubernetes version >= 1.17.

The Kubernetes distribution is responsible for deploying the Snapshot Controller and associated custom resource definitions.

For more information, see CSI Volume Snapshots.

Add a default VolumeSnapshotClass

Ensure the Snapshot Beta CRDs are available. Then create a default VolumeSnapshotClass:

kind: VolumeSnapshotClass
apiVersion: snapshot.storage.k8s.io/v1beta1
metadata:
  name: longhorn
driver: driver.longhorn.io
deletionPolicy: Delete
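
The manifest above creates the class but does not mark it as the cluster default. To have VolumeSnapshot objects that omit volumeSnapshotClassName use it, the standard snapshot.storage.kubernetes.io/is-default-class annotation can be added; a sketch:

```yaml
kind: VolumeSnapshotClass
apiVersion: snapshot.storage.k8s.io/v1beta1
metadata:
  name: longhorn
  annotations:
    # Marks this class as the default for the cluster.
    snapshot.storage.kubernetes.io/is-default-class: "true"
driver: driver.longhorn.io
deletionPolicy: Delete
```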

If you are updating from a previous Longhorn release in an Air Gap environment

  1. Update the csi-provisioner image to longhornio/csi-provisioner:v1.6.0
  2. Update the csi-snapshotter image to longhornio/csi-snapshotter:v2.1.1

If your Kubernetes distribution does not bundle the Snapshot Controller

You can manually install these components by performing the following steps.

Note that the Snapshot Controller YAML file mentioned below is deployed into the default namespace.

Prerequisites

For general use, update the Snapshot Controller YAML with an appropriate namespace prior to installation.

For example, on the Vanilla Kubernetes cluster, the namespace is updated from default to kube-system before issuing the kubectl create command.

Install Snapshot Beta CRDs:

  1. Download the files from github.com/kubernetes-…
  2. Run kubectl create -f client/config/crd.
  3. Do this once per cluster.

Install Common Snapshot Controller:

  1. Download the files from github.com/kubernetes-…
  2. Update the namespace to a value appropriate for your environment (for example, kube-system)
  3. Run kubectl create -f deploy/kubernetes/snapshot-controller
  4. Do this once per cluster.

For additional information, see the Usage section in the Kubernetes external-snapshotter Git repo.

Create backups through CSI

Backups in Longhorn are objects in the out-of-cluster backupstore, and the endpoints accessing the backupstore are backup targets.

To create backups programmatically, you can use the generic Kubernetes CSI snapshot mechanism.

Prerequisites: CSI snapshot support needs to be enabled on your cluster. If your Kubernetes distribution does not provide the Kubernetes Snapshot Controller and the snapshot-related custom resource definitions, you need to deploy them manually; see Enable CSI Snapshot Support for more information.

Create a backup through the CSI mechanism

To create a backup using the CSI mechanism, create a Kubernetes VolumeSnapshot object through Kubectl.

Result: A backup is created. The creation of the VolumeSnapshot object also leads to the creation of a VolumeSnapshotContent Kubernetes object.

The VolumeSnapshotContent object refers to the Longhorn backup in its VolumeSnapshotContent.snapshotHandle field, which has the form bs://backup-volume/backup-name.

How the CSI mechanism works

When a VolumeSnapshot object is created with kubectl, the VolumeSnapshot UUID is used to identify the Longhorn snapshot and the associated VolumeSnapshotContent object.

This creates a new Longhorn snapshot named snapshot-<UUID>.

A backup of that snapshot is then initiated, and the CSI request returns.

Afterwards, a VolumeSnapshotContent object named snapcontent-<UUID> is created.

The CSI snapshotter sidecar periodically queries the Longhorn CSI plugin to evaluate the backup status.

Once the backup completes, the VolumeSnapshotContent.readyToUse flag is set to true.

Check the backup

To see the backup, click Backup in the top navigation bar and navigate to the backup volume (backup-volume) that the VolumeSnapshotContent.snapshotHandle refers to.

VolumeSnapshot example

Here is an example VolumeSnapshot object. The source needs to point to the Longhorn Volume PVC for which a backup should be created.

The volumeSnapshotClassName field points to a VolumeSnapshotClass.

We created a default class named Longhorn that uses Delete as its deletionPolicy.

apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshot
metadata:
  name: test-snapshot-pvc
spec:
  volumeSnapshotClassName: longhorn
  source:
    persistentVolumeClaimName: test-vol

If you want to preserve the volume's associated backup when the VolumeSnapshot is deleted, create a new VolumeSnapshotClass with deletionPolicy set to Retain.
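
A sketch of such a class; the name longhorn-retain is illustrative:

```yaml
kind: VolumeSnapshotClass
apiVersion: snapshot.storage.k8s.io/v1beta1
metadata:
  name: longhorn-retain
driver: driver.longhorn.io
# Retain keeps the Longhorn backup in the backupstore when the
# VolumeSnapshot object is deleted.
deletionPolicy: Retain
```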

For more information about snapshot classes, see the Kubernetes documentation for VolumeSnapshotClasses.

Restore backups through CSI

Longhorn can easily restore a backup to a volume.

To restore backups programmatically, you can use the generic Kubernetes CSI snapshot mechanism.

Prerequisites

CSI snapshot support needs to be enabled on your cluster.

If your Kubernetes distribution does not provide the Kubernetes Snapshot Controller and custom resource definitions associated with snapshots, you will need to deploy them manually.

Restore a backup through a VolumeSnapshot object

Create a PersistentVolumeClaim object with the dataSource field pointing to the existing VolumeSnapshot object.

The csi-provisioner picks this up and instructs the Longhorn CSI driver to provision the new volume with the data from the associated backup.

You can use the same mechanism to restore Longhorn backups that have not yet been created through the CSI mechanism.

Here is a PersistentVolumeClaim example. The dataSource field needs to point to the existing VolumeSnapshot object.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-restore-snapshot-pvc
spec:
  storageClassName: longhorn
  dataSource:
    name: test-snapshot-pvc
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi

Restore a backup that is not associated with a VolumeSnapshot

To restore a Longhorn backup that was not created through the CSI mechanism, you must first manually create VolumeSnapshot and VolumeSnapshotContent objects for the backup.

Create a VolumeSnapshotContent object and set the snapshotHandle field to bs://backup-volume/backup-name.

The backup-volume and backup-name values can be retrieved from the Backup page of the Longhorn UI.

apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshotContent
metadata:
  name: test-existing-backup
spec:
  volumeSnapshotClassName: longhorn
  driver: driver.longhorn.io
  deletionPolicy: Delete
  source:
    # NOTE: change this to point to an existing backup on the backupstore
    snapshotHandle: bs://test-vol/backup-625159fb469e492e
  volumeSnapshotRef:
    name: test-snapshot-existing-backup
    namespace: default

Create the associated VolumeSnapshot object, setting the name field to test-snapshot-existing-backup and the source field to reference the VolumeSnapshotContent object via the volumeSnapshotContentName field.

This differs from creating a backup: in that case, the source field references a PersistentVolumeClaim via the persistentVolumeClaimName field.

Only one type of reference can be set on a VolumeSnapshot object.

apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshot
metadata:
  name: test-snapshot-existing-backup
spec:
  volumeSnapshotClassName: longhorn
  source:
    volumeSnapshotContentName: test-existing-backup

Now you can create a PersistentVolumeClaim object that references the newly created VolumeSnapshot object.
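
Mirroring the earlier restore example, such a claim might look like the following; the claim name and size are illustrative:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-restore-existing-backup-pvc
spec:
  storageClassName: longhorn
  dataSource:
    # References the VolumeSnapshot created above for the existing backup.
    name: test-snapshot-existing-backup
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
```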