The content is from the official Longhorn 1.1.2 English technical manual.

This article is part of a series:

  • What’s a Longhorn?
  • Longhorn Cloud Native distributed block storage solution design architecture and concepts
  • Longhorn Enterprise Cloud Native Container Storage solution – Deployment chapter

Create a Longhorn volume

In this tutorial, you will learn how to create Kubernetes persistent storage resources, persistent volumes (PVs) and persistent volume claims (PVCs), that correspond to Longhorn volumes. You will use kubectl to dynamically provision storage for workloads that use the Longhorn StorageClass.

This section assumes that you understand how Kubernetes Persistent Storage works. See the Kubernetes documentation for more information.

Create a Longhorn volume using kubectl

First, you will create a Longhorn StorageClass. The Longhorn StorageClass contains parameters for configuring persistent volumes.

Next, create a PersistentVolumeClaim that references the StorageClass. Finally, the PersistentVolumeClaim is mounted as a volume in a Pod.

When the Pod is deployed, the Kubernetes master checks the PersistentVolumeClaim to make sure the resource request can be fulfilled. If storage is available, the Kubernetes master creates the Longhorn volume and binds it to the Pod.

  1. Use the following command to create a StorageClass named longhorn:

    kubectl create -f https://raw.githubusercontent.com/longhorn/longhorn/v1.1.2/examples/storageclass.yaml

    The following example StorageClass is created:

    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: longhorn
    provisioner: driver.longhorn.io
    allowVolumeExpansion: true
    parameters:
      numberOfReplicas: "3"
      staleReplicaTimeout: "2880" # 48 hours in minutes
      fromBackup: ""
    #  diskSelector: "ssd,fast"
    #  nodeSelector: "storage,fast"
    #  recurringJobs: '[{"name":"snap", "task":"snapshot", "cron":"*/1 * * * *", "retain":1},
    #                   {"name":"backup", "task":"backup", "cron":"*/2 * * * *", "retain":1,
    #                    "labels": {"interval":"2m"}}]'
  2. Create a Pod using the Longhorn volume by running the following command:

    kubectl create -f https://raw.githubusercontent.com/longhorn/longhorn/v1.1.2/examples/pod_with_pvc.yaml

    This starts a Pod named volume-test and creates a PersistentVolumeClaim named longhorn-volv-pvc. The PersistentVolumeClaim references the longhorn StorageClass:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: longhorn-volv-pvc
    spec:
      accessModes:
        - ReadWriteOnce
      storageClassName: longhorn
      resources:
        requests:
          storage: 2Gi

    The PersistentVolumeClaim is mounted in the Pod as a volume:

    apiVersion: v1
    kind: Pod
    metadata:
      name: volume-test
      namespace: default
    spec:
      containers:
      - name: volume-test
        image: nginx:stable-alpine
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - name: volv
          mountPath: /data
        ports:
        - containerPort: 80
      volumes:
      - name: volv
        persistentVolumeClaim:
          claimName: longhorn-volv-pvc
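    To verify that dynamic provisioning worked, you can check the PVC and Pod created by the example manifests above with standard kubectl commands:

    kubectl get pvc longhorn-volv-pvc
    kubectl get pod volume-test

    The PVC should show a STATUS of Bound, and the PV it is bound to is the dynamically provisioned Longhorn volume.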

Bind a workload to a PV without a Kubernetes StorageClass

A Longhorn StorageClass name can be used to bind a workload to a PV without creating the StorageClass object in Kubernetes.

Since the StorageClass is also a field used to match a PVC to a PV, it does not have to be created by a provisioner. You can manually create a PV with a custom StorageClass name, and then create a PVC that requests the same StorageClass name.

When a PVC requests a StorageClass that does not exist as a Kubernetes resource, Kubernetes will try to bind your PVC to a PV with the same StorageClass name. The StorageClass name is used as a label to find matching PVs, and only existing PVs labeled with that StorageClass name will be used.

If the PVC names a StorageClass, Kubernetes will:

  1. Look for an existing PV whose label matches the StorageClass name.
  2. Look for an existing StorageClass Kubernetes resource. If the StorageClass exists, it will be used to create the PV.
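A minimal sketch of this manual binding, assuming an existing Longhorn volume named existing-vol and an arbitrary StorageClass name my-manual-class (the names, capacity, and csi fields here are illustrative and should be adjusted to your volume):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: existing-vol-pv
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: my-manual-class   # custom name; no StorageClass object is required
  csi:
    driver: driver.longhorn.io        # the Longhorn CSI driver
    fsType: ext4
    volumeHandle: existing-vol        # the name of the existing Longhorn volume
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: existing-vol-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: my-manual-class   # matches the PV's storageClassName
  resources:
    requests:
      storage: 2Gi

Kubernetes binds existing-vol-pvc to existing-vol-pv because the storageClassName values match, even though no my-manual-class StorageClass object exists.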

Create Longhorn volumes using the Longhorn UI

Since the Longhorn volume already exists when the PV/PVC is created, a StorageClass is not needed to dynamically provision the Longhorn volume. However, the field storageClassName should be set in the PVC/PV for PVC binding purposes, and you do not need to create the StorageClass object.

By default, the StorageClass of the PV/PVC created by Longhorn is longhorn-static. You can change this value under Setting > General > Default Longhorn Static StorageClass Name as needed.

Users need to delete the PVCs and PVs created by Longhorn manually.

Create a PV/PVC for an existing Longhorn volume

Users can create a PV/PVC for an existing Longhorn volume through the Longhorn UI. Only a detached volume can be used by a newly created Pod.

Delete the Longhorn volume

Once you are finished with the Longhorn volume for storage, there are several ways to remove the volume, depending on how you are using the volume.

Delete a volume using Kubernetes

Note: This method applies only when the volume was provisioned by a StorageClass and the PersistentVolume for the Longhorn volume has its Reclaim Policy set to Delete.

You can delete a volume through Kubernetes by deleting the PersistentVolumeClaim that uses the provisioned Longhorn volume. This causes Kubernetes to clean up the PersistentVolume and then delete the volume in Longhorn.
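For example, deleting the PVC from the earlier example removes the PV and the underlying Longhorn volume, assuming the reclaim policy is Delete:

kubectl delete pvc longhorn-volv-pvc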

Delete a volume using Longhorn

All Longhorn volumes, regardless of how they were created, can be deleted through the Longhorn UI.

To delete a single volume, go to the Volume page in the UI. In the Operation drop-down menu, select Delete. You will be prompted to confirm before the volume is deleted.

To delete multiple volumes at the same time, select multiple volumes on the Volume page and select Delete at the top.

Note: If Longhorn detects that a volume is bound to a PersistentVolume or PersistentVolumeClaim, those resources will also be deleted when you delete the volume. You will receive a warning in the UI before continuing with the deletion. Longhorn also warns when deleting an attached volume, because it may be in use.

Node space usage

In this section, you’ll get a better understanding of the space usage information presented by the Longhorn UI.

Total cluster space usage

On the Dashboard page, Longhorn will display cluster space usage information:

Schedulable: The actual space available for Longhorn volume scheduling.

Reserved: The space reserved for other applications and the system.

Used: The actual space used by Longhorn, the system, and other applications.

Disabled: The total space of disks and nodes on which Longhorn volume scheduling is disabled.

Space usage per node

On the Node page, Longhorn displays the space allocation, scheduling, and usage information for each node:

Size column: The maximum actual space available to Longhorn volumes. It equals the total disk space of the node minus the reserved space.

Allocated column: The number on the left is the size already used for volume scheduling; it does not mean that space has been used for Longhorn volume data storage. The number on the right is the maximum size for volume provisioning, which is the result of Size multiplied by Storage Over Provisioning Percentage (for example, with the default Storage Over Provisioning Percentage of 500). The difference between these two numbers (the allocable space) determines whether a volume replica can be scheduled to this node.
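For example, if Size is 100 GiB and Storage Over Provisioning Percentage is 500, the maximum size for volume provisioning is 100 GiB × 500% = 500 GiB; if 350 GiB has already been allocated for scheduling, 150 GiB of allocable space remains for new replicas.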

Used column: The left part of the bar represents the space currently used by the node. The whole bar represents the total space of the node.

Note that when Storage Over Provisioning Percentage is set to a value greater than 100, the allocable space can be larger than the space actually available on the node, and if volume usage is high, a large amount of historical data may be stored in volume snapshots. Be careful when using a large value for this setting.

The volume size

In this section, you will better understand the concepts related to volume size.

Volume size

  • This is the size you specify when you create a volume. To avoid ambiguity, we will refer to it as the nominal size throughout this document.
  • Because the volume itself is just a Kubernetes CRD object and the data is stored in each replica, this is actually the nominal size of each replica.
  • We call this field the nominal size because Longhorn replicas use sparse files to store data, and this value is the apparent size of the sparse files (the maximum size they can expand to). The actual space used by each replica is not equal to this nominal size.
  • Based on the nominal size, replicas are scheduled during volume creation to nodes that have enough allocatable space.
  • The nominal size determines the maximum usable space of the volume while it is in use. In other words, the volume cannot hold more active data than its nominal size.

Volume actual size

  • The actual size is the actual space used by each replica on its corresponding node.
  • Since both active data and the historical data stored in snapshots count toward the actual size, the final value can be greater than the nominal size.
  • The actual size is shown only when the volume is running.

An example to help understand volume size and volume actual size

Here is an example that explains how the volume size and actual size change after a series of I/O and snapshot-related operations.

The illustration shows the file organization of one replica: snapshots and the volume head are the sparse files mentioned above.

  1. Create a 5Gi volume and attach it to a node, as shown in Figure 1.
    • For the empty volume, the nominal size is 5Gi and the actual size is almost 0.
    • There is some metadata in the volume, so the actual size is not exactly 0.
  2. Write 2Gi of data (data#1) to the volume mount point and create a snapshot (snapshot#1). See Figure 2.
    • Now data#1 is stored in snapshot#1, and the actual size is 2Gi.

  3. Delete data#1 from the mount point.
    • The deletion of data#1 actually means that data#1 is marked as deleted at the filesystem level (for example, the inode removal in ext4). Since Longhorn operates at the block level and is not aware of the filesystem, the disk blocks/space holding data#1 will not be released after the deletion.
    • The filesystem-level deletion information is stored in the current volume head file. For snapshot#1, data#1 is still retained as historical data.
    • The actual size is still 2Gi.
  4. Write 4Gi of data (data#2) to the volume mount point, and then take a snapshot (snapshot#2). See Figure 3.
    • Now the actual size is 6Gi, which is greater than the nominal size.
    • There is overlap between the 2 snapshots at the block level (see the 2 snapshots in Figure 3), because data#1 is marked as deleted in snapshot#2 and the filesystem reuses that space.
  5. Remove snapshot#1 and wait until the snapshot purge completes. See Figure 4.
    • Longhorn actually merges snapshot#1 into snapshot#2.
    • For the overlapping parts during the merge, the newer data (data#2) is kept in the blocks. Some historical data is then removed and the volume shrinks (from 6.1Gi to 4.65Gi in this example).

View workloads that use volumes

Users can identify the current workload or workload history for existing Longhorn persistent volumes (PVs), as well as the history of their binding to persistent volume claims (PVCs).

From the Longhorn UI, go to the Volume tab. Each Longhorn volume is listed on the page. The Attached To column shows the name of the workload using the volume. If you click the workload name, you can see more details, including the workload type, Pod name, and Pod status.

Workload information is also available on the Longhorn volume detail page. To see the details, click the volume name:

State: attached
...
Namespace:default
PVC Name:longhorn-volv-pvc
PV Name:pvc-0edf00f3-1d67-4783-bbce-27d4458f6db7
PV Status:Bound
Pod Name:teststatefulset-0
Pod Status:Running
Workload Name:teststatefulset
Workload Type:StatefulSet

History

After the workload is no longer using the Longhorn volume, the volume detail page displays the historical status of the workload that most recently used the volume:

Last time used by Pod: a few seconds ago
...
Last Pod Name: teststatefulset-0
Last Workload Name: teststatefulset
Last Workload Type: StatefulSet

If these fields are set, they indicate that no workload is currently using the volume.

When the PVC is no longer bound to a volume, the following state is displayed:

Last time bound with PVC:a few seconds ago
Last time used by Pod:32 minutes ago
Last Namespace:default
Last Bounded PVC Name:longhorn-volv-pvc

If the Last time bound with PVC field is set, it indicates that no PVC is currently bound to the volume. The related fields show the most recent PVC that was bound to this volume.

Storage tags

Overview

The storage tag feature allows only certain nodes or disks to be used to store Longhorn volume data. For example, performance-sensitive data can be restricted to high-performance disks tagged as fast, ssd, or nvme, or to high-performance nodes tagged as baremetal.

This function supports both disks and nodes.

Setup

You can set tags using the Longhorn UI:

  1. Node -> Select one node -> Edit Node and Disks
  2. Click +New Node Tag or +New Disk Tag to add new tags.

Existing scheduled replicas on the nodes or disks will not be affected by the new tags.

Usage

When multiple tags are specified for a volume, a disk and the node it belongs to must have all of the specified tags to be usable.

UI

When creating a volume, specify the disk tag and node tag on the UI.

Kubernetes

Use the Kubernetes StorageClass parameters to specify tags.

For example:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: longhorn-fast
provisioner: driver.longhorn.io
parameters:
  numberOfReplicas: "3"
  staleReplicaTimeout: "480" # 8 hours in minutes
  diskSelector: "ssd"
  nodeSelector: "storage,fast"
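A PVC can then request this tagged storage simply by referencing the StorageClass name. The PVC below is only an illustration (the name fast-volv-pvc and the 2Gi size are arbitrary):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fast-volv-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn-fast   # replicas will only be scheduled to disks/nodes carrying the matching tags
  resources:
    requests:
      storage: 2Gi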

History

  • Original feature request
  • Available since v0.6.0

Volume expansion

Volumes are expanded in two stages. First, Longhorn expands the frontend (the block device), and then the filesystem.

To prevent the frontend expansion from being interfered with by unexpected data reads and writes (R/W), Longhorn supports offline expansion only. Detached volumes will be automatically attached to a random node in maintenance mode.

Rebuilding and adding replicas are not allowed during expansion, and expansion is not allowed while replicas are being rebuilt or added.

If the volume was not expanded through the CSI interface (for example, on Kubernetes versions before v1.16), the capacity of the corresponding PVC and PV will not change.

Prerequisites

  • The Longhorn version must be v0.8.0 or later.
  • The volume to be expanded must be in the Detached state.

Expand a Longhorn volume

There are two ways to expand a Longhorn volume: with a PersistentVolumeClaim (PVC), or with the Longhorn UI.

If you are using Kubernetes v1.14 or v1.15, you can only expand the volume through the Longhorn UI.

Via PVC

This method only applies to:

  • The Kubernetes version is v1.16 or later.
  • The PVC is dynamically provisioned by Kubernetes using the Longhorn StorageClass.
  • The field allowVolumeExpansion in the related StorageClass is true.

This method is recommended if it is applicable, because the PVC and PV are updated automatically and everything remains consistent after the expansion.

Usage: find the PVC corresponding to the Longhorn volume, then modify spec.resources.requests.storage of the PVC:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"longhorn-simple-pvc","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"1Gi"}},"storageClassName":"longhorn"}}
    pv.kubernetes.io/bind-completed: "yes"
    pv.kubernetes.io/bound-by-controller: "yes"
    volume.beta.kubernetes.io/storage-provisioner: driver.longhorn.io
  creationTimestamp: "2019-12-21T01:36:16Z"
  finalizers:
  - kubernetes.io/pvc-protection
  name: longhorn-simple-pvc
  namespace: default
  resourceVersion: "162431"
  selfLink: /api/v1/namespaces/default/persistentvolumeclaims/longhorn-simple-pvc
  uid: 0467ae73-22a5-4eba-803e-464cc0b9d975
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: longhorn
  volumeMode: Filesystem
  volumeName: pvc-0467ae73-22a5-4eba-803e-464cc0b9d975
status:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 1Gi
  phase: Bound
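Instead of editing the PVC interactively, the same change can be applied with a single patch. This is just a sketch using the example PVC above, expanding it from 1Gi to 2Gi:

kubectl patch pvc longhorn-simple-pvc -p '{"spec":{"resources":{"requests":{"storage":"2Gi"}}}}'

Remember that the volume must be in the Detached state before the expansion is processed (see the prerequisites above).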

Via the Longhorn UI

If you are running Kubernetes v1.14 or v1.15, this is the only option for expanding a Longhorn volume.

Usage: On the Volume page of Longhorn UI, click Expand for the volume.

Filesystem expansion

Longhorn only attempts to expand the filesystem when:

  • The expanded size is greater than the current size.
  • There is a Linux filesystem in the Longhorn volume.
  • The filesystem used in the Longhorn volume is one of the following:
    • ext4
    • XFS
  • The Longhorn volume is using the block device frontend.

Handling volume Recovery

If a volume is reverted to a snapshot with a smaller size, the frontend of the volume still reports the expanded size, but the filesystem size will be the same as that of the reverted snapshot. In this case, you need to handle the filesystem manually:

  1. Attach a volume to a random node.

  2. Log in to the node to expand the file system.

    If the filesystem is ext4, you may need to mount and unmount the volume once before manually resizing the filesystem. Otherwise, running resize2fs may result in an error:

    resize2fs: Superblock checksum does not match superblock while trying to open ......
    Couldn't find valid filesystem superblock.

    Follow these steps to resize the file system:

    mount /dev/longhorn/<volume name> <arbitrary mount directory>
    umount /dev/longhorn/<volume name>
    mount /dev/longhorn/<volume name> <arbitrary mount directory>
    resize2fs /dev/longhorn/<volume name>
    umount /dev/longhorn/<volume name>
  3. If the file system is XFS, you can mount it directly and then extend the file system.

    mount /dev/longhorn/<volume name> <arbitrary mount directory>
    xfs_growfs <the mount directory>
    umount /dev/longhorn/<volume name>

Evict replicas on disabled disks or nodes

Longhorn supports auto eviction, which is designed to evict replicas on selected disabled disks or nodes to other suitable disks and nodes, while maintaining the same level of high availability during the eviction.

Note: The eviction feature can only be enabled when scheduling is disabled for the selected disks or nodes, and during the eviction the selected disks or nodes cannot be re-enabled for scheduling.

Note: The eviction feature works for both attached and detached volumes. If a volume is detached, Longhorn automatically attaches it before the eviction and automatically detaches it once the eviction is complete.

By default, Eviction Requested for disks or nodes is false. To maintain the same level of high availability during the eviction, Longhorn only evicts one replica of each volume at a time, after the replacement replica has been successfully rebuilt.

Select the disks or nodes to evict

To evict replicas from a disk on a node:

  1. Go to the Node tab, select one of the nodes, and then select Edit Node and Disks from the drop-down menu.
  2. Make sure scheduling is disabled for the disk by setting Scheduling to Disable.
  3. Set Eviction Requested to true and save.

To evict replicas from a node:

  1. Go to the Node tab, select one of the nodes, and then select Edit Node from the drop-down menu.
  2. Make sure scheduling is disabled for the node by setting Scheduling to Disable.
  3. Set Eviction Requested to true and save.

Cancel disk or node eviction

To cancel the eviction for disks or nodes, set the corresponding Eviction Requested to false.

Check the eviction status

Once the eviction succeeds, the number of replicas on the selected disk or node should be reduced to 0.

If you click the replica number, the replica names on this disk are displayed. When you click a replica name, the Longhorn UI redirects to the corresponding volume page and shows the volume status. If there is an error, such as running out of space or failing to find another schedulable disk (a scheduling failure), the error is displayed. All errors are logged in the event log.

If any error occurs during the eviction, the eviction is suspended until new space is freed up or the eviction is cancelled. If the eviction is cancelled, the remaining replicas on the selected disk or node stay on that disk or node.

Multi-disk support

Longhorn supports the use of multiple disks on nodes to store volume data.

By default, /var/lib/longhorn on the host is used to store volume data. You can avoid using the default directory by adding new disks and then disabling scheduling for /var/lib/longhorn.

To add a disk

To add a new disk to a node, go to the Node tab, select one of the nodes, and then select Edit Disks from the drop-down menu.

To add any additional disks, you need:

  1. Mount the disk on the host to a directory.
  2. Add the path of the mounted disk to the disk list of the node.

Longhorn automatically detects the storage information of the disk (for example, maximum space and available space) and starts scheduling volumes to it if it can accommodate them. A path that is already mounted by an existing disk is not allowed.

A certain amount of disk space can be reserved to prevent Longhorn from using it. This can be set in the Space Reserved field of the disk, which is useful for disks on a node that are not dedicated to storage.

Kubelet needs to preserve node stability when available compute resources are low. This is especially important when dealing with incompressible compute resources such as memory or disk space; if these resources are exhausted, the node becomes unstable. To avoid disk-pressure issues when Kubelet schedules multiple volumes, Longhorn reserves 30% of the root disk space (/var/lib/longhorn) by default to ensure node stability.

Note: Because Longhorn uses filesystem IDs to detect duplicate mounts of the same filesystem, you cannot add a disk on the same node that has the same filesystem ID as an existing disk. Details can be found at github.com/longhorn/lo…

Use alternate paths for disks on the node

If you do not want to use the original mount path of a disk on the node, you can use mount --bind to create an alternative/alias path for the disk and then use it with Longhorn. Note that a symbolic link created with ln -s will not work, because it will not be resolved correctly inside the pod.

Longhorn uses the path to identify the disk, so users need to make sure the alternative path is mounted correctly when the node reboots, for example by adding it to /etc/fstab.
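A minimal sketch of setting up such an alias path, assuming the disk is already mounted at /mnt/disk1 and /mnt/longhorn-disk1 is the alias you want to register in Longhorn (both paths are placeholders):

mkdir -p /mnt/longhorn-disk1
mount --bind /mnt/disk1 /mnt/longhorn-disk1

# make the bind mount persistent across reboots via /etc/fstab, for example:
# /mnt/disk1  /mnt/longhorn-disk1  none  bind  0  0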

Remove the disk

Nodes and disks can be excluded from future scheduling. Note that any storage space already scheduled on a node is not automatically released when scheduling is disabled for that node.

To delete a disk, two conditions must be met:

  • Disk scheduling must be disabled
  • There are no existing replicas on the disk, including replicas in an error state.

Once these two conditions are met, you should be allowed to remove the disk.

Configuration

There are two global settings that affect volume scheduling.

  • StorageOverProvisioningPercentage defines the upper bound of ScheduledStorage / (MaximumStorage - ReservedStorage). The default value is 500 (%). This means that on a 200 GiB disk with 50 GiB reserved for the root filesystem, we can schedule a total of 750 GiB of Longhorn volumes. This works because people typically do not store that much data in a volume, and volumes are stored as sparse files.
  • StorageMinimalAvailablePercentage defines when a disk can no longer be scheduled with more volumes. The default value is 25 (%). The larger of MaximumStorage * StorageMinimalAvailablePercentage / 100 and MaximumStorage - ReservedStorage will be used to determine whether a disk is running low on space and cannot accommodate more volumes.

Note that currently there is no guarantee that the space used by volumes will stay within the limit implied by StorageMinimalAvailablePercentage, because:

  1. Longhorn volumes can be larger than the specified size, because snapshots contain the old state of the volume.
  2. Longhorn over-provisions by default.

Node Maintenance Guide

This section describes how to perform planned maintenance for a node.

  • Update the Node OS or Container Runtime
  • Update Kubernetes
  • Remove the disk
  • Remove node

Update the Node OS or Container Runtime

  1. Cordon the node. Longhorn automatically disables node scheduling when a Kubernetes node is cordoned.

  2. Drain the node to move the workloads somewhere else (see the example commands after this list).

    You will need to use the --ignore-daemonsets option to drain the node, because Longhorn deploys DaemonSets such as Longhorn Manager, the Longhorn CSI plugin, and the engine image.

    The replica processes on the node will be stopped at this stage. The replicas on the node will be shown as Failed.

    Note: By default, if the node holds the last healthy replica of a volume, Longhorn will prevent the node from completing the drain operation, to protect the last replica and prevent workload interruption. You can override this behavior in the settings, or evict the replicas to other nodes before draining.

    The engine processes on the node will be migrated to other nodes along with their Pods.

    Note: If there are volumes on the node that were not created by Kubernetes, Longhorn prevents the node from completing the drain operation, to prevent potential workload interruption.

    After the drain is complete, there should be no engine or replica processes running on the node. The two instance managers will still be running on the node, but they are stateless and will not cause interruption to existing workloads.

    Note: Usually you do not need to evict the replicas before the drain operation, as long as there are healthy replicas on other nodes. The replicas can be reused later, once the node is back online and uncordoned.
  3. Perform necessary maintenance, including shutting down or restarting the node.

  4. Uncordon the node. Longhorn will automatically re-enable node scheduling.

    If there are existing replicas on the node, Longhorn may use those replicas to speed up the rebuilding process. You can set the Replica Replenishment Wait Interval setting to customize how long Longhorn should wait for a potentially reusable replica to become available.
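For reference, the cordon, drain, and uncordon steps above correspond to the following kubectl commands, where <node-name> is a placeholder (depending on your workloads, drain may need additional flags such as --delete-emptydir-data):

kubectl cordon <node-name>
kubectl drain <node-name> --ignore-daemonsets
kubectl uncordon <node-name>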

Update Kubernetes

Follow the official Kubernetes upgrade documentation.

  • If Longhorn was installed as a Rancher catalog app, follow Rancher's Kubernetes upgrade guide to upgrade Kubernetes.

Remove the disk

To remove a disk:

  1. Disable scheduling for the disk.
  2. Evict all the replicas on the disk.
  3. Delete the disk.

Reuse node name

These steps also apply if you have replaced a node using the same node name. Longhorn will recognize that the disks are different once the new node is up. If the new node uses the same name as the previous node, you need to remove the original disks first and then add them back for the new node.

Remove nodes

To delete a node:

  1. Disable scheduling for the node.

  2. Evict all the replicas on the node.

  3. Detach all volumes on the node.

    If the node has been drained, all workloads should already have been migrated to another node.

    If any other volumes are still attached, detach them before continuing.

  4. Use Delete on the Node tab to remove the node from Longhorn.

    Alternatively, use the following command to remove the node from Kubernetes:

     kubectl delete node <node-name>
  5. Longhorn automatically removes the node from the cluster.

Detach volumes

Shut down all Kubernetes Pods that use Longhorn volumes in order to detach the volumes. The easiest way to achieve this is to delete all the workloads and recreate them later after the upgrade. If this is not desirable, some workloads can be suspended instead.

In this section, you will learn how each type of workload can be modified to shut down its Pods.

Deployment

Edit the deployment with kubectl edit deploy/<name>.

Set .spec.replicas to 0.
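As an alternative to editing the manifest, the replica count can also be set with kubectl scale; this is an equivalent shortcut rather than part of the original guide, and it also works for StatefulSets, ReplicaSets, and ReplicationControllers:

kubectl scale deploy/<name> --replicas=0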

StatefulSet

Edit the statefulset with kubectl edit statefulset/<name>.

Set .spec.replicas to 0.

DaemonSet

This workload cannot be paused.

Delete the daemonset with kubectl delete ds/<name>.

Pod

Delete the pod with kubectl delete pod/<name>.

A pod that is not managed by a workload controller cannot be suspended.

CronJob

Edit the cronjob with kubectl edit cronjob/<name>.

Set .spec.suspend to true.

Wait for any currently executing jobs to complete, or terminate them by removing the associated pods.
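Equivalently, the suspend flag can be flipped with a patch instead of an interactive edit (an alternative shortcut, not part of the original guide):

kubectl patch cronjob/<name> -p '{"spec":{"suspend":true}}'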

Job

Consider allowing single-run jobs to complete.

Otherwise, delete the job with kubectl delete job/<name>.

ReplicaSet

Edit the replicaset with kubectl edit replicaset/<name>.

Set .spec.replicas to 0.

ReplicationController

Edit the replicationcontroller with kubectl edit rc/<name>.

Set .spec.replicas to 0.

Wait for the volumes used by Kubernetes to finish detaching.

Then detach all remaining volumes from the Longhorn UI. These volumes were most likely created and attached outside of Kubernetes, via the Longhorn UI or the REST API.

Scheduling

In this section, you will learn how Longhorn schedules replicas based on a variety of factors.

Scheduling policy

Longhorn's scheduling policy has two stages. The scheduler only moves on to the next stage if the previous stage is satisfied; otherwise, scheduling fails.

If any tags have been set for scheduling, the node tags and disk tags must match when selecting nodes or disks.

The first stage is the node and zone selection stage. Longhorn filters nodes and zones according to the Replica Node Level Soft Anti-Affinity and Replica Zone Level Soft Anti-Affinity settings.

The second stage is the disk selection stage. Longhorn filters the disks that satisfy the first stage based on Storage Minimal Available Percentage, Storage Over Provisioning Percentage, and other disk-related factors such as the requested disk space.

Node and zone selection stage

First, whenever possible, Longhorn will always try to schedule a new replica on a new node in a new zone. In this context, "new" means that no replica of the volume has been scheduled to that zone or node yet, and "existing" means that a replica has already been scheduled to that node or zone.

If the Replica Node Level Soft Anti-Affinity and Replica Zone Level Soft Anti-Affinity settings are not enabled, and there is no new node in a new zone, Longhorn will not schedule the replica.

Longhorn then looks for a new node in an existing zone. If possible, it schedules the new replica on a new node in an existing zone.

At this point, if Replica Node Level Soft Anti-Affinity is not enabled and Replica Zone Level Soft Anti-Affinity is enabled, and there is no new node in an existing zone, Longhorn will not schedule the replica.

Finally, Longhorn will look for an existing node in an existing zone to schedule the new replica. In this case, both Replica Node Level Soft Anti-Affinity and Replica Zone Level Soft Anti-Affinity must be enabled.

Disk selection stage

Once the node and zone stage is satisfied, Longhorn decides whether the replica can be scheduled on a disk of the node. Longhorn checks the available disks on the selected node for matching tags, total disk space, and available disk space.

For example, after the node and zone stage, Longhorn finds that Node A satisfies the requirements for scheduling a replica. Longhorn will then check all available disks on this node.

Assume the node has two disks: Disk X with 1 GiB of available space and Disk Y with 2 GiB of available space, and the replica Longhorn wants to schedule needs 1 GiB. With the default Storage Minimal Available Percentage of 25, Longhorn can only schedule the replica on Disk Y, provided Disk Y matches the disk tags; otherwise the disk selection for this replica fails. However, if Storage Minimal Available Percentage is set to 0 and Disk X matches the disk tags, Longhorn can schedule the replica on Disk X.