
Chapter 5. Persistent storage using local storage


5.1. Local storage overview

You can use any of the following solutions to provision local storage:

  • HostPath Provisioner (HPP)
  • Local Storage Operator (LSO)
  • Logical Volume Manager (LVM) Storage
Warning

These solutions support provisioning only node-local storage. The workloads are bound to the nodes that provide the storage. If the node becomes unavailable, the workload also becomes unavailable. To maintain workload availability despite node failures, you must ensure storage data replication through active or passive replication mechanisms.

5.1.1. Overview of HostPath Provisioner functionality

You can perform the following actions using HostPath Provisioner (HPP):

  • Map the host filesystem paths to storage classes for provisioning local storage.
  • Statically create storage classes to configure filesystem paths on a node for storage consumption.
  • Statically provision Persistent Volumes (PVs) based on the storage class.
  • Create workloads and PersistentVolumeClaims (PVCs) while being aware of the underlying storage topology.
Note

HPP is available in upstream Kubernetes. However, it is not recommended to use HPP from upstream Kubernetes.

5.1.2. Overview of Local Storage Operator functionality

You can perform the following actions using Local Storage Operator (LSO):

  • Assign the storage devices (disks or partitions) to the storage classes without modifying the device configuration.
  • Statically provision PVs and storage classes by configuring the LocalVolume custom resource (CR).
  • Create workloads and PVCs while being aware of the underlying storage topology.
Note

LSO is developed and delivered by Red Hat.

5.1.3. Overview of LVM Storage functionality

You can perform the following actions using Logical Volume Manager (LVM) Storage:

  • Configure storage devices (disks or partitions) as lvm2 volume groups and expose the volume groups as storage classes.
  • Create workloads and request storage by using PVCs without considering the node topology.

LVM Storage uses the TopoLVM CSI driver to dynamically allocate storage space to the nodes in the topology and provision PVs.

Note

LVM Storage is developed and maintained by Red Hat. The CSI driver provided with LVM Storage is the upstream project "topolvm".

5.2. Persistent storage using local volumes

OpenShift Container Platform can be provisioned with persistent storage by using local volumes. Local persistent volumes allow you to access local storage devices, such as a disk or partition, by using the standard persistent volume claim interface.

Local volumes can be used without manually scheduling pods to nodes because the system is aware of the volume node constraints. However, local volumes are still subject to the availability of the underlying node and are not suitable for all applications.

Note

Local volumes can be used only as statically created persistent volumes.

5.2.1. Installing the Local Storage Operator

The Local Storage Operator is not installed in OpenShift Container Platform by default. Use the following procedure to install and configure this Operator to enable local volumes in your cluster.

Prerequisites

  • Access to the OpenShift Container Platform web console or command-line interface (CLI).

Procedure

  1. Create the openshift-local-storage project:

    $ oc adm new-project openshift-local-storage
  2. Optional: Allow local storage creation on infrastructure nodes.

    You might want to use the Local Storage Operator to create volumes on infrastructure nodes in support of components such as logging and monitoring.

    You must adjust the default node selector so that the Local Storage Operator includes the infrastructure nodes, and not just worker nodes.

    To block the Local Storage Operator from inheriting the cluster-wide default selector, enter the following command:

    $ oc annotate namespace openshift-local-storage openshift.io/node-selector=''
  3. Optional: Allow local storage to run on the management pool of CPUs in single-node deployment.

    Use the Local Storage Operator in single-node deployments and allow the use of CPUs that belong to the management pool. Perform this step on single-node installations that use management workload partitioning.

    To allow the Local Storage Operator to run on the management CPU pool, run the following command:

    $ oc annotate namespace openshift-local-storage workload.openshift.io/allowed='management'

From the UI

To install the Local Storage Operator from the web console, follow these steps:

  1. Log in to the OpenShift Container Platform web console.
  2. Navigate to Operators → OperatorHub.
  3. Type Local Storage into the filter box to locate the Local Storage Operator.
  4. Click Install.
  5. On the Install Operator page, select A specific namespace on the cluster. Select openshift-local-storage from the drop-down menu.
  6. Adjust the values for Update Channel and Approval Strategy to the values that you want.
  7. Click Install.

Once finished, the Local Storage Operator will be listed in the Installed Operators section of the web console.

From the CLI

  1. Install the Local Storage Operator from the CLI.

    1. Create an object YAML file to define an Operator group and subscription for the Local Storage Operator, such as openshift-local-storage.yaml:

      Example openshift-local-storage.yaml

      apiVersion: operators.coreos.com/v1
      kind: OperatorGroup
      metadata:
        name: local-operator-group
        namespace: openshift-local-storage
      spec:
        targetNamespaces:
          - openshift-local-storage
      ---
      apiVersion: operators.coreos.com/v1alpha1
      kind: Subscription
      metadata:
        name: local-storage-operator
        namespace: openshift-local-storage
      spec:
        channel: stable
        installPlanApproval: Automatic 1
        name: local-storage-operator
        source: redhat-operators
        sourceNamespace: openshift-marketplace

      1
      The user approval policy for an install plan.
  2. Create the Local Storage Operator object by entering the following command:

    $ oc apply -f openshift-local-storage.yaml

    At this point, the Operator Lifecycle Manager (OLM) is now aware of the Local Storage Operator. A ClusterServiceVersion (CSV) for the Operator should appear in the target namespace, and APIs provided by the Operator should be available for creation.

  3. Verify local storage installation by checking that all pods and the Local Storage Operator have been created:

    1. Check that all the required pods have been created:

      $ oc -n openshift-local-storage get pods

      Example output

      NAME                                      READY   STATUS    RESTARTS   AGE
      local-storage-operator-746bf599c9-vlt5t   1/1     Running   0          19m

    2. Check the ClusterServiceVersion (CSV) YAML manifest to see that the Local Storage Operator is available in the openshift-local-storage project:

      $ oc get csvs -n openshift-local-storage

      Example output

      NAME                                         DISPLAY         VERSION               REPLACES   PHASE
      local-storage-operator.4.2.26-202003230335   Local Storage   4.2.26-202003230335              Succeeded

After all checks have passed, the Local Storage Operator is installed successfully.

5.2.2. Provisioning local volumes by using the Local Storage Operator

Local volumes cannot be created by dynamic provisioning. Instead, persistent volumes can be created by the Local Storage Operator. The local volume provisioner looks for any file system or block volume devices at the paths specified in the defined resource.

Prerequisites

  • The Local Storage Operator is installed.
  • You have a local disk that meets the following conditions:

    • It is attached to a node.
    • It is not mounted.
    • It does not contain partitions.

Procedure

  1. Create the local volume resource. This resource must define the nodes and paths to the local volumes.

    Note

    Do not use different storage class names for the same device. Doing so will create multiple persistent volumes (PVs).

    Example: Filesystem

    apiVersion: "local.storage.openshift.io/v1"
    kind: "LocalVolume"
    metadata:
      name: "local-disks"
      namespace: "openshift-local-storage" 
    1
    
    spec:
      nodeSelector: 
    2
    
        nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
              - ip-10-0-140-183
              - ip-10-0-158-139
              - ip-10-0-164-33
      storageClassDevices:
        - storageClassName: "local-sc" 
    3
    
          volumeMode: Filesystem 
    4
    
          fsType: xfs 
    5
    
          devicePaths: 
    6
    
            - /path/to/device 
    7
    Copy to Clipboard Toggle word wrap

    1
    The namespace where the Local Storage Operator is installed.
    2
    Optional: A node selector containing a list of nodes where the local storage volumes are attached. This example uses the node hostnames, obtained from oc get node. If a value is not defined, then the Local Storage Operator will attempt to find matching disks on all available nodes.
    3
    The name of the storage class to use when creating persistent volume objects. The Local Storage Operator automatically creates the storage class if it does not exist. Be sure to use a storage class that uniquely identifies this set of local volumes.
    4
    The volume mode, either Filesystem or Block, that defines the type of local volumes.
    Note

    A raw block volume (volumeMode: Block) is not formatted with a file system. Use this mode only if any application running on the pod can use raw block devices.

    5
    The file system that is created when the local volume is mounted for the first time.
    6
    The path containing a list of local storage devices to choose from.
    7
    Replace this value with the filepath to your actual local disks, preferably the persistent by-id path, such as /dev/disk/by-id/wwn. PVs are created for these local disks when the provisioner is deployed successfully.
    Note

    If you are running OpenShift Container Platform with RHEL KVM, you must assign a serial number to your VM disk. Otherwise, the VM disk cannot be identified after a reboot. You can use the virsh edit <VM> command to add the <serial>mydisk</serial> definition.

    Example: Block

    apiVersion: "local.storage.openshift.io/v1"
    kind: "LocalVolume"
    metadata:
      name: "local-disks"
      namespace: "openshift-local-storage" 
    1
    
    spec:
      nodeSelector: 
    2
    
        nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
              - ip-10-0-136-143
              - ip-10-0-140-255
              - ip-10-0-144-180
      storageClassDevices:
        - storageClassName: "local-sc" 
    3
    
          volumeMode: Block 
    4
    
          devicePaths: 
    5
    
            - /path/to/device 
    6
    Copy to Clipboard Toggle word wrap

    1
    The namespace where the Local Storage Operator is installed.
    2
    Optional: A node selector containing a list of nodes where the local storage volumes are attached. This example uses the node hostnames, obtained from oc get node. If a value is not defined, then the Local Storage Operator will attempt to find matching disks on all available nodes.
    3
    The name of the storage class to use when creating persistent volume objects.
    4
    The volume mode, either Filesystem or Block, that defines the type of local volumes.
    5
    The path containing a list of local storage devices to choose from.
    6
    Replace this value with the filepath to your actual local disks, preferably the persistent by-id path, such as /dev/disk/by-id/wwn. PVs are created for these local disks when the provisioner is deployed successfully.
    Note

    If you are running OpenShift Container Platform with RHEL KVM, you must assign a serial number to your VM disk. Otherwise, the VM disk cannot be identified after a reboot. You can use the virsh edit <VM> command to add the <serial>mydisk</serial> definition.

  2. Create the local volume resource in your OpenShift Container Platform cluster. Specify the file you just created:

    $ oc create -f <local-volume>.yaml
  3. Verify that the provisioner was created and that the corresponding daemon sets were created:

    $ oc get all -n openshift-local-storage

    Example output

    NAME                                          READY   STATUS    RESTARTS   AGE
    pod/diskmaker-manager-9wzms                   1/1     Running   0          5m43s
    pod/diskmaker-manager-jgvjp                   1/1     Running   0          5m43s
    pod/diskmaker-manager-tbdsj                   1/1     Running   0          5m43s
    pod/local-storage-operator-7db4bd9f79-t6k87   1/1     Running   0          14m
    
    NAME                                     TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)             AGE
    service/local-storage-operator-metrics   ClusterIP   172.30.135.36   <none>        8383/TCP,8686/TCP   14m
    
    NAME                               DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
    daemonset.apps/diskmaker-manager   3         3         3       3            3           <none>          5m43s
    
    NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE
    deployment.apps/local-storage-operator   1/1     1            1           14m
    
    NAME                                                DESIRED   CURRENT   READY   AGE
    replicaset.apps/local-storage-operator-7db4bd9f79   1         1         1       14m

    Note the desired and current number of daemon set processes. A desired count of 0 indicates that the label selectors were invalid.

  4. Verify that the persistent volumes were created:

    $ oc get pv

    Example output

    NAME                CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
    local-pv-1cec77cf   100Gi      RWO            Delete           Available           local-sc                88m
    local-pv-2ef7cd2a   100Gi      RWO            Delete           Available           local-sc                82m
    local-pv-3fa1c73    100Gi      RWO            Delete           Available           local-sc                48m

Important

Editing the LocalVolume object does not change the fsType or volumeMode of existing persistent volumes because doing so might result in a destructive operation.

5.2.3. Provisioning local volumes without the Local Storage Operator

Local volumes cannot be created by dynamic provisioning. Instead, persistent volumes can be created by defining the persistent volume (PV) in an object definition. The local volume provisioner looks for any file system or block volume devices at the paths specified in the defined resource.

Important

Manual provisioning of PVs includes the risk of potential data leaks across PV reuse when PVCs are deleted. The Local Storage Operator is recommended for automating the life cycle of devices when provisioning local PVs.

Prerequisites

  • Local disks are attached to the OpenShift Container Platform nodes.

Procedure

  1. Define the PV. Create a file, such as example-pv-filesystem.yaml or example-pv-block.yaml, with the PersistentVolume object definition. This resource must define the nodes and paths to the local volumes.

    Note

    Do not use different storage class names for the same device. Doing so will create multiple PVs.

    example-pv-filesystem.yaml

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: example-pv-filesystem
    spec:
      capacity:
        storage: 100Gi
      volumeMode: Filesystem 1
      accessModes:
      - ReadWriteOnce
      persistentVolumeReclaimPolicy: Delete
      storageClassName: local-sc 2
      local:
        path: /dev/xvdf 3
      nodeAffinity:
        required:
          nodeSelectorTerms:
          - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
              - example-node

    1
    The volume mode, either Filesystem or Block, that defines the type of PVs.
    2
    The name of the storage class to use when creating PV resources. Use a storage class that uniquely identifies this set of PVs.
    3
    The path containing a list of local storage devices to choose from, or a directory. You can only specify a directory with Filesystem volumeMode.
    Note

    A raw block volume (volumeMode: block) is not formatted with a file system. Use this mode only if any application running on the pod can use raw block devices.

    example-pv-block.yaml

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: example-pv-block
    spec:
      capacity:
        storage: 100Gi
      volumeMode: Block 1
      accessModes:
      - ReadWriteOnce
      persistentVolumeReclaimPolicy: Delete
      storageClassName: local-sc 2
      local:
        path: /dev/xvdf 3
      nodeAffinity:
        required:
          nodeSelectorTerms:
          - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
              - example-node

    1
    The volume mode, either Filesystem or Block, that defines the type of PVs.
    2
    The name of the storage class to use when creating PV resources. Be sure to use a storage class that uniquely identifies this set of PVs.
    3
    The path containing a list of local storage devices to choose from.
  2. Create the PV resource in your OpenShift Container Platform cluster. Specify the file you just created:

    $ oc create -f <example-pv>.yaml
  3. Verify that the local PV was created:

    $ oc get pv

    Example output

    NAME                    CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                STORAGECLASS    REASON   AGE
    example-pv-filesystem   100Gi      RWO            Delete           Available                        local-sc            3m47s
    example-pv1             1Gi        RWO            Delete           Bound       local-storage/pvc1   local-sc            12h
    example-pv2             1Gi        RWO            Delete           Bound       local-storage/pvc2   local-sc            12h
    example-pv3             1Gi        RWO            Delete           Bound       local-storage/pvc3   local-sc            12h

5.2.4. Creating the local volume persistent volume claim

Local volumes must be statically created as a persistent volume claim (PVC) to be accessed by the pod.

Prerequisites

  • Persistent volumes have been created using the local volume provisioner.

Procedure

  1. Create the PVC using the corresponding storage class:

    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: local-pvc-name 1
    spec:
      accessModes:
      - ReadWriteOnce
      volumeMode: Filesystem 2
      resources:
        requests:
          storage: 100Gi 3
      storageClassName: local-sc 4
    1
    Name of the PVC.
    2
    The type of the PVC. Defaults to Filesystem.
    3
    The amount of storage available to the PVC.
    4
    Name of the storage class required by the claim.
  2. Create the PVC in the OpenShift Container Platform cluster, specifying the file you just created:

    $ oc create -f <local-pvc>.yaml

5.2.5. Attach the local claim

After a local volume has been mapped to a persistent volume claim it can be specified inside of a resource.

Prerequisites

  • A persistent volume claim exists in the same namespace.

Procedure

  1. Include the defined claim in the resource spec. The following example declares the persistent volume claim inside a pod:

    apiVersion: v1
    kind: Pod
    spec:
    # ...
      containers:
        volumeMounts:
        - name: local-disks 1
          mountPath: /data 2
      volumes:
      - name: local-disks
        persistentVolumeClaim:
          claimName: local-pvc-name 3
    # ...
    1
    The name of the volume to mount.
    2
    The path inside the pod where the volume is mounted. Do not mount to the container root, /, or any path that is the same in the host and the container. This can corrupt your host system if the container is sufficiently privileged, such as the host /dev/pts files. It is safe to mount the host by using /host.
    3
    The name of the existing persistent volume claim to use.
  2. Create the resource in the OpenShift Container Platform cluster, specifying the file you just created:

    $ oc create -f <local-pod>.yaml

5.2.6. Automating discovery and provisioning for local storage devices

The Local Storage Operator automates local storage discovery and provisioning. With this feature, you can simplify installation when dynamic provisioning is not available during deployment, such as with bare metal, VMware, or AWS store instances with attached devices.

Important

Automatic discovery and provisioning is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

Important

Automatic discovery and provisioning is fully supported when used to deploy Red Hat OpenShift Data Foundation on-premise or with platform-agnostic deployment.

Use the following procedure to automatically discover local devices, and to automatically provision local volumes for selected devices.

Warning

Use the LocalVolumeSet object with caution. When you automatically provision persistent volumes (PVs) from local disks, the local PVs might claim all devices that match. If you are using a LocalVolumeSet object, make sure the Local Storage Operator is the only entity managing local devices on the node. Creating multiple instances of a LocalVolumeSet that target a node more than once is not supported.

Prerequisites

  • You have cluster administrator permissions.
  • You have installed the Local Storage Operator.
  • You have attached local disks to OpenShift Container Platform nodes.
  • You have access to the OpenShift Container Platform web console and the oc command-line interface (CLI).

Procedure

  1. To enable automatic discovery of local devices from the web console:

    1. Click Operators → Installed Operators.
    2. In the openshift-local-storage namespace, click Local Storage.
    3. Click the Local Volume Discovery tab.
    4. Click Create Local Volume Discovery and then select either Form view or YAML view.
    5. Configure the LocalVolumeDiscovery object parameters.
    6. Click Create.

      The Local Storage Operator creates a local volume discovery instance named auto-discover-devices.
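
      If you prefer the YAML view, the following is a minimal sketch of a LocalVolumeDiscovery object; the node hostnames are placeholders, and the structure is an assumption modeled on the LocalVolumeSet example later in this procedure:

      apiVersion: local.storage.openshift.io/v1alpha1
      kind: LocalVolumeDiscovery
      metadata:
        name: auto-discover-devices
        namespace: openshift-local-storage
      spec:
        nodeSelector:
          nodeSelectorTerms:
            - matchExpressions:
                - key: kubernetes.io/hostname
                  operator: In
                  values:
                    - worker-0
                    - worker-1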

  2. To display a continuous list of available devices on a node:

    1. Log in to the OpenShift Container Platform web console.
    2. Navigate to Compute → Nodes.
    3. Click the node name that you want to open. The "Node Details" page is displayed.
    4. Select the Disks tab to display the list of the selected devices.

      The device list updates continuously as local disks are added or removed. You can filter the devices by name, status, type, model, capacity, and mode.

  3. To automatically provision local volumes for the discovered devices from the web console:

    1. Navigate to Operators → Installed Operators and select Local Storage from the list of Operators.
    2. Select Local Volume Set → Create Local Volume Set.
    3. Enter a volume set name and a storage class name.
    4. Choose All nodes or Select nodes to apply filters accordingly.

      Note

      Only worker nodes are available, regardless of whether you filter using All nodes or Select nodes.

    5. Select the disk type, mode, size, and limit you want to apply to the local volume set, and click Create.

      A message displays after several minutes, indicating that the "Operator reconciled successfully."

  4. Alternatively, to provision local volumes for the discovered devices from the CLI:

    1. Create an object YAML file to define the local volume set, such as local-volume-set.yaml, as shown in the following example:

      apiVersion: local.storage.openshift.io/v1alpha1
      kind: LocalVolumeSet
      metadata:
        name: example-autodetect
      spec:
        nodeSelector:
          nodeSelectorTerms:
            - matchExpressions:
                - key: kubernetes.io/hostname
                  operator: In
                  values:
                    - worker-0
                    - worker-1
        storageClassName: local-sc 1
        volumeMode: Filesystem
        fsType: ext4
        maxDeviceCount: 10
        deviceInclusionSpec:
          deviceTypes: 2
            - disk
            - part
          deviceMechanicalProperties:
            - NonRotational
          minSize: 10G
          maxSize: 100G
          models:
            - SAMSUNG
            - Crucial_CT525MX3
          vendors:
            - ATA
            - ST2000LM
      1
      Determines the storage class that is created for persistent volumes that are provisioned from discovered devices. The Local Storage Operator automatically creates the storage class if it does not exist. Be sure to use a storage class that uniquely identifies this set of local volumes.
      2
      When using the local volume set feature, the Local Storage Operator does not support the use of logical volume management (LVM) devices.
    2. Create the local volume set object:

      $ oc apply -f local-volume-set.yaml
    3. Verify that the local persistent volumes were dynamically provisioned based on the storage class:

      $ oc get pv

      Example output

      NAME                CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
      local-pv-1cec77cf   100Gi      RWO            Delete           Available           local-sc                88m
      local-pv-2ef7cd2a   100Gi      RWO            Delete           Available           local-sc                82m
      local-pv-3fa1c73    100Gi      RWO            Delete           Available           local-sc                48m

Note

Results are deleted after they are removed from the node. Symlinks must be manually removed.

5.2.7. Using tolerations with Local Storage Operator pods

Taints can be applied to nodes to prevent them from running general workloads. To allow the Local Storage Operator to use tainted nodes, you must add tolerations to the Pod or DaemonSet definition. This allows the created resources to run on these tainted nodes.

You apply tolerations to the Local Storage Operator pod through the LocalVolume resource and apply taints to a node through the node specification. A taint on a node instructs the node to repel all pods that do not tolerate the taint. Using a specific taint that is not on other pods ensures that the Local Storage Operator pod can also run on that node.
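
For example, assuming the taint key and value used in the LocalVolume example later in this section (localstorage=localstorage), you could taint a node with a command such as:

$ oc adm taint nodes <node_name> localstorage=localstorage:NoSchedule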

Important

Taints and tolerations consist of a key, value, and effect. As an argument, it is expressed as key=value:effect. The toleration operator allows you to leave one of these parameters empty; for example, the Exists operator matches a key regardless of its value.

Prerequisites

  • The Local Storage Operator is installed.
  • Local disks are attached to OpenShift Container Platform nodes with a taint.
  • Tainted nodes are expected to provision local storage.

Procedure

To configure local volumes for scheduling on tainted nodes:

  1. Modify the YAML file that defines the LocalVolume resource and add the toleration to its spec, as shown in the following example:

      apiVersion: "local.storage.openshift.io/v1"
      kind: "LocalVolume"
      metadata:
        name: "local-disks"
        namespace: "openshift-local-storage"
      spec:
        tolerations:
          - key: localstorage 
    1
    
            operator: Equal 
    2
    
            value: "localstorage" 
    3
    
        storageClassDevices:
            - storageClassName: "local-sc"
              volumeMode: Block 
    4
    
              devicePaths: 
    5
    
                - /dev/xvdg
    Copy to Clipboard Toggle word wrap
    1
    Specify the key that you added to the node.
    2
    Specify the Equal operator to require the key/value parameters to match. If operator is Exists, the system checks that the key exists and ignores the value. If operator is Equal, then the key and value must match.
    3
    Specify the value of the taint that you applied to the node.
    4
    The volume mode, either Filesystem or Block, defining the type of the local volumes.
    5
    The path containing a list of local storage devices to choose from.
  2. Optional: To create local persistent volumes only on tainted nodes, modify the YAML file to add the toleration to the LocalVolume spec, as shown in the following example:

    spec:
      tolerations:
        - key: node-role.kubernetes.io/master
          operator: Exists

The defined tolerations will be passed to the resulting daemon sets, allowing the diskmaker and provisioner pods to be created for nodes that contain the specified taints.

5.2.8. Local Storage Operator Metrics

OpenShift Container Platform provides the following metrics for the Local Storage Operator:

  • lso_discovery_disk_count: total number of discovered devices on each node
  • lso_lvset_provisioned_PV_count: total number of PVs created by LocalVolumeSet objects
  • lso_lvset_unmatched_disk_count: total number of disks that Local Storage Operator did not select for provisioning because of mismatching criteria
  • lso_lvset_orphaned_symlink_count: number of devices with PVs that no longer match LocalVolumeSet object criteria
  • lso_lv_orphaned_symlink_count: number of devices with PVs that no longer match LocalVolume object criteria
  • lso_lv_provisioned_PV_count: total number of provisioned PVs for LocalVolume

To use these metrics, be sure to:

  • Enable support for monitoring when installing the Local Storage Operator.
  • When upgrading to OpenShift Container Platform 4.9 or later, enable metric support manually by adding the operator-metering=true label to the namespace, as shown in the example that follows.
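
For example, a command similar to the following adds the label from the CLI (assuming the default openshift-local-storage namespace):

$ oc label namespace openshift-local-storage operator-metering=true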

5.2.9. Deleting the Local Storage Operator resources

5.2.9.1. Removing a local volume or local volume set

Occasionally, local volumes and local volume sets must be deleted. While removing the entry in the resource and deleting the persistent volume is typically enough, if you want to reuse the same device path or have it managed by a different storage class, then additional steps are needed.

Note

The following procedure outlines an example for removing a local volume. The same procedure can also be used to remove symlinks for a local volume set custom resource.

Prerequisites

  • The persistent volume must be in a Released or Available state.

    Warning

    Deleting a persistent volume that is still in use can result in data loss or corruption.

Procedure

  1. Edit the previously created local volume to remove any unwanted disks.

    1. Edit the cluster resource:

      $ oc edit localvolume <local_volume_name> -n openshift-local-storage
    2. Navigate to the lines under devicePaths, and delete any representing unwanted disks.
  2. Delete any persistent volumes created.

    $ oc delete pv <pv_name>
  3. Delete directory and included symlinks on the node.

    Warning

    The following step involves accessing a node as the root user. Modifying the state of the node beyond the steps in this procedure could result in cluster instability.

    $ oc debug node/<node_name> -- chroot /host rm -rf /mnt/local-storage/<sc_name> 1
    1
    The name of the storage class used to create the local volumes.

5.2.9.2. Uninstalling the Local Storage Operator

To uninstall the Local Storage Operator, you must remove the Operator and all created resources in the openshift-local-storage project.

Warning

Uninstalling the Local Storage Operator while local storage PVs are still in use is not recommended. While the PVs will remain after the Operator’s removal, there might be indeterminate behavior if the Operator is uninstalled and reinstalled without removing the PVs and local storage resources.

Prerequisites

  • Access to the OpenShift Container Platform web console.

Procedure

  1. Delete any local volume resources installed in the project, such as localvolume, localvolumeset, and localvolumediscovery by running the following commands:

    $ oc delete localvolume --all --all-namespaces
    $ oc delete localvolumeset --all --all-namespaces
    $ oc delete localvolumediscovery --all --all-namespaces
  2. Uninstall the Local Storage Operator from the web console.

    1. Log in to the OpenShift Container Platform web console.
    2. Navigate to Operators → Installed Operators.
    3. Type Local Storage into the filter box to locate the Local Storage Operator.
    4. Click the Options menu at the end of the Local Storage Operator entry.
    5. Click Uninstall Operator.
    6. Click Remove in the window that appears.
  3. The PVs created by the Local Storage Operator will remain in the cluster until deleted. After these volumes are no longer in use, delete them by running the following command:

    $ oc delete pv <pv-name>
  4. Delete the openshift-local-storage project by running the following command:

    $ oc delete project openshift-local-storage

5.3. Persistent storage using hostPath

A hostPath volume in an OpenShift Container Platform cluster mounts a file or directory from the host node’s filesystem into your pod. Most pods will not need a hostPath volume, but it does offer a quick option for testing should an application require it.

Important

The cluster administrator must configure pods to run as privileged. This grants the pods access to files on the same node.

5.3.1. Overview

OpenShift Container Platform supports hostPath mounting for development and testing on a single-node cluster.

In a production cluster, you would not use hostPath. Instead, a cluster administrator would provision a network resource, such as a GCE Persistent Disk volume, an NFS share, or an Amazon EBS volume. Network resources support the use of storage classes to set up dynamic provisioning.

A hostPath volume must be provisioned statically.

Important

Do not mount to the container root, /, or any path that is the same in the host and the container. This can corrupt your host system if the container is sufficiently privileged. It is safe to mount the host by using /host. The following example shows the / directory from the host being mounted into the container at /host.

apiVersion: v1
kind: Pod
metadata:
  name: test-host-mount
spec:
  containers:
  - image: registry.access.redhat.com/ubi8/ubi
    name: test-container
    command: ['sh', '-c', 'sleep 3600']
    volumeMounts:
    - mountPath: /host
      name: host-slash
  volumes:
   - name: host-slash
     hostPath:
       path: /
       type: ''

5.3.2. Statically provisioning hostPath volumes

A pod that uses a hostPath volume must reference a persistent volume that is provisioned manually (statically).

Procedure

  1. Define the persistent volume (PV) by creating a pv.yaml file with the PersistentVolume object definition:

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: task-pv-volume 1
      labels:
        type: local
    spec:
      storageClassName: manual 2
      capacity:
        storage: 5Gi
      accessModes:
        - ReadWriteOnce 3
      persistentVolumeReclaimPolicy: Retain
      hostPath:
        path: "/mnt/data" 4
    1
    The name of the volume. This name is how the volume is identified by persistent volume (PV) claims or pods.
    2
    Used to bind persistent volume claim (PVC) requests to the PV.
    3
    The volume can be mounted as read-write by a single node.
    4
    The configuration file specifies that the volume is at /mnt/data on the cluster’s node. To avoid corrupting your host system, do not mount to the container root, /, or any path that is the same in the host and the container. You can safely mount the host by using /host.
  2. Create the PV from the file:

    $ oc create -f pv.yaml
  3. Define the PVC by creating a pvc.yaml file with the PersistentVolumeClaim object definition:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: task-pvc-volume
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
      storageClassName: manual
  4. Create the PVC from the file:

    $ oc create -f pvc.yaml

5.3.3. Mounting the hostPath share in a privileged pod

After the persistent volume claim has been created, it can be used by an application. The following example demonstrates mounting this share inside of a pod.

Prerequisites

  • A persistent volume claim exists that is mapped to the underlying hostPath share.

Procedure

  • Create a privileged pod that mounts the existing persistent volume claim:

    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-name 1
    spec:
      containers:
        ...
        securityContext:
          privileged: true 2
        volumeMounts:
        - mountPath: /data 3
          name: hostpath-privileged
      ...
      securityContext: {}
      volumes:
        - name: hostpath-privileged
          persistentVolumeClaim:
            claimName: task-pvc-volume 4
    1
    The name of the pod.
    2
    The pod must run as privileged to access the node’s storage.
    3
    The path to mount the host path share inside the privileged pod. Do not mount to the container root, /, or any path that is the same in the host and the container. This can corrupt your host system if the container is sufficiently privileged, such as the host /dev/pts files. It is safe to mount the host by using /host.
    4
    The name of the PersistentVolumeClaim object that has been previously created.

5.4. Persistent storage using Logical Volume Manager Storage

Logical Volume Manager (LVM) Storage uses the TopoLVM CSI driver to dynamically provision local storage on single-node OpenShift clusters.

LVM Storage creates thin-provisioned volumes using Logical Volume Manager and provides dynamic provisioning of block storage on a resource-limited single-node OpenShift cluster.

You can create volume groups, persistent volume claims (PVCs), volume snapshots, and volume clones by using LVM Storage.

5.4.1. Logical Volume Manager Storage installation

You can install Logical Volume Manager (LVM) Storage on a single-node OpenShift cluster and configure it to dynamically provision storage for your workloads.

You can deploy LVM Storage on single-node OpenShift clusters by using the OpenShift Container Platform CLI (oc), OpenShift Container Platform web console, or Red Hat Advanced Cluster Management (RHACM).

5.4.1.1. Prerequisites to install LVM Storage

The prerequisites to install LVM Storage are as follows:

  • Ensure that you have a minimum of 10 milliCPU and 100 MiB of RAM.
  • Ensure that every managed cluster has dedicated disks that are used to provision storage. LVM Storage uses only those disks that are empty and do not contain file system signatures. To ensure that the disks are empty and do not contain file system signatures, wipe the disks before using them. An example wipe command follows this list.
  • Before installing LVM Storage in a private CI environment where you can reuse the storage devices that you configured in the previous LVM Storage installation, ensure that you have wiped the disks that are not in use. If you do not wipe the disks before installing LVM Storage, you cannot reuse the disks without manual intervention.

    Note

    You cannot wipe the disks that are in use.

  • If you want to install LVM Storage by using Red Hat Advanced Cluster Management (RHACM), ensure that you have installed RHACM on an OpenShift Container Platform cluster. See the Installing LVM Storage using RHACM section.
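
For example, one common way to remove existing file system signatures from an unused disk is the wipefs utility; the device path is a placeholder that you must replace:

# wipefs --all /dev/<device_name>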

5.4.1.2. Installing LVM Storage using OpenShift Container Platform Web Console

You can install LVM Storage using the Red Hat OpenShift Container Platform OperatorHub.

Prerequisites

  • You have access to the single-node OpenShift cluster.
  • You are using an account with the cluster-admin and Operator installation permissions.

Procedure

  1. Log in to the OpenShift Container Platform Web Console.
  2. Click Operators → OperatorHub.
  3. Scroll or type LVM Storage into the Filter by keyword box to find LVM Storage.
  4. Click Install.
  5. Set the following options on the Install Operator page:

    1. Update Channel as stable-4.12.
    2. Installation Mode as A specific namespace on the cluster.
    3. Installed Namespace as Operator recommended namespace openshift-storage. If the openshift-storage namespace does not exist, it is created during the operator installation.
    4. Approval Strategy as Automatic or Manual.

      If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention.

      If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version.

  6. Click Install.

Verification steps

  • Verify that LVM Storage shows a green tick, indicating successful installation.

5.4.1.3. Installing LVM Storage using RHACM

LVM Storage is deployed on single-node OpenShift clusters using Red Hat Advanced Cluster Management (RHACM). You create a Policy object on RHACM that deploys and configures the Operator when it is applied to managed clusters which match the selector specified in the PlacementRule resource. The policy is also applied to clusters that are imported later and satisfy the placement rule.

Prerequisites

  • Access to the RHACM cluster using an account with cluster-admin and Operator installation permissions.
  • Dedicated disks on each single-node OpenShift cluster to be used by LVM Storage.
  • The single-node OpenShift cluster needs to be managed by RHACM, either imported or created.

Procedure

  1. Log in to the RHACM CLI using your OpenShift Container Platform credentials.
  2. Create a namespace in which you will create policies.

    # oc create ns lvms-policy-ns
  3. To create a policy, save the following YAML to a file with a name such as policy-lvms-operator.yaml:

    apiVersion: apps.open-cluster-management.io/v1
    kind: PlacementRule
    metadata:
      name: placement-install-lvms
    spec:
      clusterConditions:
      - status: "True"
        type: ManagedClusterConditionAvailable
      clusterSelector: 1
        matchExpressions:
        - key: mykey
          operator: In
          values:
          - myvalue
    ---
    apiVersion: policy.open-cluster-management.io/v1
    kind: PlacementBinding
    metadata:
      name: binding-install-lvms
    placementRef:
      apiGroup: apps.open-cluster-management.io
      kind: PlacementRule
      name: placement-install-lvms
    subjects:
    - apiGroup: policy.open-cluster-management.io
      kind: Policy
      name: install-lvms
    ---
    apiVersion: policy.open-cluster-management.io/v1
    kind: Policy
    metadata:
      annotations:
        policy.open-cluster-management.io/categories: CM Configuration Management
        policy.open-cluster-management.io/controls: CM-2 Baseline Configuration
        policy.open-cluster-management.io/standards: NIST SP 800-53
      name: install-lvms
    spec:
      disabled: false
      remediationAction: enforce
      policy-templates:
      - objectDefinition:
          apiVersion: policy.open-cluster-management.io/v1
          kind: ConfigurationPolicy
          metadata:
            name: install-lvms
          spec:
            object-templates:
            - complianceType: musthave
              objectDefinition:
                apiVersion: v1
                kind: Namespace
                metadata:
                  labels:
                    openshift.io/cluster-monitoring: "true"
                    pod-security.kubernetes.io/enforce: privileged
                    pod-security.kubernetes.io/audit: privileged
                    pod-security.kubernetes.io/warn: privileged
                  name: openshift-storage
            - complianceType: musthave
              objectDefinition:
                apiVersion: operators.coreos.com/v1
                kind: OperatorGroup
                metadata:
                  name: openshift-storage-operatorgroup
                  namespace: openshift-storage
                spec:
                  targetNamespaces:
                  - openshift-storage
            - complianceType: musthave
              objectDefinition:
                apiVersion: operators.coreos.com/v1alpha1
                kind: Subscription
                metadata:
                  name: lvms
                  namespace: openshift-storage
                spec:
                  installPlanApproval: Automatic
                  name: lvms-operator
                  source: redhat-operators
                  sourceNamespace: openshift-marketplace
            remediationAction: enforce
            severity: low
      - objectDefinition:
          apiVersion: policy.open-cluster-management.io/v1
          kind: ConfigurationPolicy
          metadata:
            name: lvms
          spec:
            object-templates:
            - complianceType: musthave
              objectDefinition:
                apiVersion: lvm.topolvm.io/v1alpha1
                kind: LVMCluster
                metadata:
                  name: my-lvmcluster
                  namespace: openshift-storage
                spec:
                  storage:
                    deviceClasses:
                    - name: vg1
                      deviceSelector: 2
                        paths:
                        - /dev/disk/by-path/pci-0000:87:00.0-nvme-1
                        - /dev/disk/by-path/pci-0000:88:00.0-nvme-1
                      thinPoolConfig:
                        name: thin-pool-1
                        sizePercent: 90
                        overprovisionRatio: 10
                      nodeSelector: 3
                        nodeSelectorTerms:
                        - matchExpressions:
                            - key: app
                              operator: In
                              values:
                              - test1
            remediationAction: enforce
            severity: low
    1
    Replace the key and value in PlacementRule.spec.clusterSelector to match the labels set on the single-node OpenShift clusters on which you want to install LVM Storage.
    2
    To control or restrict the volume group to your preferred disks, you can manually specify the local paths of the disks in the deviceSelector section of the LVMCluster YAML.
    3
    To add a node filter, which is a subset of the additional worker nodes, specify the required filter in the nodeSelector section. LVM Storage detects and uses the additional worker nodes when the new nodes show up.
    Important

    This nodeSelector node filter matching is not the same as the pod label matching.

  4. Create the policy in the namespace by running the following command:

    # oc create -f policy-lvms-operator.yaml -n lvms-policy-ns 1
    1
    The policy-lvms-operator.yaml is the name of the file to which the policy is saved.

    This creates a Policy, a PlacementRule, and a PlacementBinding object in the lvms-policy-ns namespace. The policy creates a Namespace, OperatorGroup, Subscription, and LVMCluster resource on the clusters that match the placement rule. This deploys the Operator on the single-node OpenShift clusters which match the selection criteria and configures it to set up the required resources to provision storage. The Operator uses all the disks specified in the LVMCluster CR. If no disks are specified, the Operator uses all the unused disks on the single-node OpenShift node.
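
    As a quick check on a managed cluster that matched the placement rule, you can list the resources that the policy creates, for example with the following commands; the resource names follow the policy example above:

    # oc get subscription lvms -n openshift-storage
    # oc get lvmcluster my-lvmcluster -n openshift-storage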

    Important

    After a device is added to the LVMCluster, it cannot be removed.

5.4.1.4. Limitations to configure the size of the devices used in LVM Storage

The limitations to configure the size of the devices that you can use to provision storage using LVM Storage are as follows:

  • The total storage size that you can provision is limited by the size of the underlying Logical Volume Manager (LVM) thin pool and the over-provisioning factor.
  • The size of the logical volume depends on the size of the Physical Extent (PE) and the Logical Extent (LE).

    • You can define the size of PE and LE during the physical and logical device creation.
    • The default PE and LE size is 4 MB.
    • If the size of the PE is increased, the maximum size of the LVM is determined by the kernel limits and your disk space.

Table 5.1. Size limits for different architectures using the default PE and LE size

  Architecture   RHEL 6                 RHEL 7                 RHEL 8   RHEL 9
  32-bit         16 TB                  -                      -        -
  64-bit         8 EB [1], 100 TB [2]   8 EB [1], 500 TB [2]   8 EB     8 EB

  1. Theoretical size.
  2. Tested size.
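
For example, with the thinPoolConfig values from the LVMCluster example earlier in this chapter (sizePercent: 90 and overprovisionRatio: 10), a 100 GiB volume group yields a thin pool of roughly 90 GiB, and the total storage that can be provisioned against that thin pool is roughly 90 GiB × 10 = 900 GiB.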

5.4.2. Provisioning storage using LVM Storage

You can provision persistent volume claims (PVCs) using the storage class that is created during the Operator installation. You can provision block and file PVCs, however, the storage is allocated only when a pod that uses the PVC is created.

Note

LVM Storage provisions PVCs in units of 1 GiB. The requested storage is rounded up to the nearest GiB.

Procedure

  1. Identify the StorageClass that is created when LVM Storage is deployed.

    The StorageClass name is in the format, lvms-<device-class-name>. The device-class-name is the name of the device class that you provided in the LVMCluster of the Policy YAML. For example, if the deviceClass is called vg1, then the storageClass name is lvms-vg1.

    The volumeBindingMode of the storage class is set to WaitForFirstConsumer.
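
     For example, you can confirm the storage class name and binding mode by running a command similar to the following:

     # oc get storageclass lvms-vg1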

  2. To create a PVC where the application requires storage, save the following YAML to a file with a name such as pvc.yaml.

    Example YAML to create a PVC

    # block pvc
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: lvm-block-1
      namespace: default
    spec:
      accessModes:
        - ReadWriteOnce
      volumeMode: Block
      resources:
        requests:
          storage: 10Gi
      storageClassName: lvms-vg1
    ---
    # file pvc
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: lvm-file-1
      namespace: default
    spec:
      accessModes:
        - ReadWriteOnce
      volumeMode: Filesystem
      resources:
        requests:
          storage: 10Gi
      storageClassName: lvms-vg1

  3. Create the PVC by running the following command:

     # oc create -f pvc.yaml -n <application_namespace>

     The created PVCs remain in the Pending state until you deploy the pods that use them.

5.4.3. Expanding PVCs

To leverage the new storage after adding additional capacity, you can expand existing persistent volume claims (PVCs) with LVM Storage.

Prerequisites

  • Dynamic provisioning is used.
  • The controlling StorageClass object has allowVolumeExpansion set to true.

Procedure

  1. Modify the .spec.resources.requests.storage field in the desired PVC resource to the new size by running the following command:

     oc patch pvc <pvc_name> -n <application_namespace> -p '{ "spec": { "resources": { "requests": { "storage": "<desired_size>" }}}}'
  2. Watch the status.conditions field of the PVC to see if the resize has completed. OpenShift Container Platform adds the Resizing condition to the PVC during expansion, which is removed after the expansion completes.
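
     For example, one way to inspect the conditions is a jsonpath query similar to the following; the PVC name and namespace are placeholders:

     oc get pvc <pvc_name> -n <application_namespace> -o jsonpath='{.status.conditions}'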

5.4.4. Upgrading LVM Storage on single-node OpenShift clusters

Currently, it is not possible to upgrade from OpenShift Data Foundation Logical Volume Manager Operator 4.11 to LVM Storage 4.12 on single-node OpenShift clusters.

Important

The data will not be preserved during this process.

Procedure

  1. Back up any data that you want to preserve on the persistent volume claims (PVCs).
  2. Delete all PVCs provisioned by the OpenShift Data Foundation Logical Volume Manager Operator and their pods.
  3. Reinstall LVM Storage on OpenShift Container Platform 4.12.
  4. Recreate the workloads.
  5. Copy the backup data to the PVCs created after upgrading to 4.12.

5.4.5. Volume snapshots for single-node OpenShift

You can take volume snapshots of persistent volumes (PVs) that are provisioned by LVM Storage. You can also create volume snapshots of the cloned volumes. Volume snapshots help you to do the following:

  • Back up your application data.

    Important

    Volume snapshots are located on the same devices as the original data. To use the volume snapshots as backups, you need to move the snapshots to a secure location. You can use OpenShift API for Data Protection backup and restore solutions.

  • Revert to a state at which the volume snapshot was taken.

5.4.5.1. Creating volume snapshots in single-node OpenShift

You can create volume snapshots based on the available capacity of the thin pool and the overprovisioning limits. LVM Storage creates a VolumeSnapshotClass with the lvms-<deviceclass-name> name.

Prerequisites

  • You ensured that the persistent volume claim (PVC) is in Bound state. This is required for a consistent snapshot.
  • You stopped all the I/O to the PVC before taking the snapshot.

Procedure

  1. Log in to the single-node OpenShift cluster on which you need to run the oc command.
  2. Save the following YAML to a file with a name such as lvms-vol-snapshot.yaml.

    Example YAML to create a volume snapshot

    apiVersion: snapshot.storage.k8s.io/v1
    kind: VolumeSnapshot
    metadata:
        name: lvm-block-1-snap
    spec:
        volumeSnapshotClassName: lvms-vg1
        source:
            persistentVolumeClaimName: lvm-block-1

  3. Create the snapshot by running the following command in the same namespace as the PVC:

    # oc create -f lvms-vol-snapshot.yaml

A read-only copy of the PVC is created as a volume snapshot.
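
For example, you can confirm that the snapshot exists by listing VolumeSnapshot resources in the same namespace as the PVC; the snapshot name matches the example above and the namespace is a placeholder:

# oc get volumesnapshot lvm-block-1-snap -n <namespace>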

5.4.5.2. Restoring volume snapshots in single-node OpenShift

When you restore a volume snapshot, a new persistent volume claim (PVC) is created. The restored PVC is independent of the volume snapshot and the source PVC.

Prerequisites

  • The storage class must be the same as that of the source PVC.
  • The size of the requested PVC must be the same as that of the source volume of the snapshot.

    Important

    A snapshot must be restored to a PVC of the same size as the source volume of the snapshot. If a larger PVC is required, you can resize the PVC after the snapshot is restored successfully.

Procedure

  1. Identify the storage class name of the source PVC and volume snapshot name.
  2. Save the following YAML to a file with a name such as lvms-vol-restore.yaml to restore the snapshot.

    Example YAML to restore a PVC

    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: lvm-block-1-restore
    spec:
      accessModes:
      - ReadWriteOnce
      volumeMode: Block
      resources:
        requests:
          storage: 2Gi
      storageClassName: lvms-vg1
      dataSource:
        name: lvm-block-1-snap
        kind: VolumeSnapshot
        apiGroup: snapshot.storage.k8s.io

  3. Create the PVC by running the following command in the same namespace as the snapshot:

    # oc create -f lvms-vol-restore.yaml
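Optionally, you can check the status of the restored PVC. The following command assumes the lvm-block-1-restore name from the previous example; replace <namespace> with the namespace in which you created the PVC:

    $ oc get pvc lvm-block-1-restore -n <namespace>

As with other PVCs provisioned by LVM Storage, the restored PVC remains in the Pending state until a pod that uses it is deployed.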

5.4.5.3. Deleting volume snapshots in single-node OpenShift

You can delete volume snapshot resources and persistent volume claims (PVCs).

Procedure

  1. Delete the volume snapshot resource by running the following command:

    # oc delete volumesnapshot <volume_snapshot_name> -n <namespace>
    Note

    When you delete a persistent volume claim (PVC), the snapshots of the PVC are not deleted.

  2. To delete the restored volume snapshot, delete the PVC that was created to restore the volume snapshot by running the following command:

    # oc delete pvc <pvc_name> -n <namespace>

5.4.6. Volume cloning for single-node OpenShift

A clone is a duplicate of an existing storage volume that can be used like any standard volume.

5.4.6.1. Creating volume clones in single-node OpenShift

You create a clone of a volume to make a point-in-time copy of the data. A persistent volume claim (PVC) cannot be cloned with a different size.

Important

The cloned PVC has write access.

Prerequisites

  • You ensured that the PVC is in Bound state. This is required for a consistent clone.
  • You ensured that the StorageClass is the same as that of the source PVC.

Procedure

  1. Identify the storage class of the source PVC.
  2. To create a volume clone, save the following YAML to a file with a name such as lvms-vol-clone.yaml:

    Example YAML to clone a volume

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: lvm-block-1-clone
    spec:
      storageClassName: lvms-vg1
      dataSource:
        name: lvm-block-1
        kind: PersistentVolumeClaim
      accessModes:
      - ReadWriteOnce
      volumeMode: Block
      resources:
        requests:
          storage: 2Gi

  3. Create the PVC in the same namespace as the source PVC by running the following command:

    # oc create -f lvms-vol-clone.yaml
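Optionally, you can check the status of the cloned PVC. The following command assumes the lvm-block-1-clone name from the previous example; replace <namespace> with the namespace of the source PVC:

    $ oc get pvc lvm-block-1-clone -n <namespace>

The cloned PVC remains in the Pending state until a pod that uses it is deployed.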

5.4.6.2. Deleting cloned volumes in single-node OpenShift

You can delete cloned volumes.

Procedure

  • To delete the cloned volume, delete the cloned PVC by running the following command:

    # oc delete pvc <clone_pvc_name> -n <namespace>

5.4.7. Monitoring LVM Storage

To enable cluster monitoring, you must add the following label to the namespace where you have installed LVM Storage:

openshift.io/cluster-monitoring=true
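For example, assuming that LVM Storage is installed in the openshift-storage namespace, you can apply the label with a command such as the following:

    $ oc label namespace openshift-storage openshift.io/cluster-monitoring=true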
Important

For information about enabling cluster monitoring in RHACM, see Observability and Adding custom metrics.

5.4.7.1. Metrics

You can monitor LVM Storage by viewing the metrics.

The following table describes the topolvm metrics:

Table 5.2. topolvm metrics

  • topolvm_thinpool_data_percent: Indicates the percentage of data space used in the LVM thin pool.
  • topolvm_thinpool_metadata_percent: Indicates the percentage of metadata space used in the LVM thin pool.
  • topolvm_thinpool_size_bytes: Indicates the size of the LVM thin pool in bytes.
  • topolvm_volumegroup_available_bytes: Indicates the available space in the LVM volume group in bytes.
  • topolvm_volumegroup_size_bytes: Indicates the size of the LVM volume group in bytes.
  • topolvm_thinpool_overprovisioned_available: Indicates the available over-provisioned size of the LVM thin pool in bytes.

Note

Metrics are updated every 10 minutes or when there is a change, such as a new logical volume creation, in the thin pool.

5.4.7.2. Alerts

When the thin pool and volume group reach maximum storage capacity, further operations fail. This can lead to data loss.

LVM Storage sends the following alerts when the usage of the thin pool and volume group exceeds a certain value:

Table 5.3. LVM Storage alerts

  • VolumeGroupUsageAtThresholdNearFull: This alert is triggered when both the volume group and thin pool usage exceed 75% on nodes. Data deletion or volume group expansion is required.
  • VolumeGroupUsageAtThresholdCritical: This alert is triggered when both the volume group and thin pool usage exceed 85% on nodes. In this case, the volume group is critically full. Data deletion or volume group expansion is required.
  • ThinPoolDataUsageAtThresholdNearFull: This alert is triggered when the thin pool data usage in the volume group exceeds 75% on nodes. Data deletion or thin pool expansion is required.
  • ThinPoolDataUsageAtThresholdCritical: This alert is triggered when the thin pool data usage in the volume group exceeds 85% on nodes. Data deletion or thin pool expansion is required.
  • ThinPoolMetaDataUsageAtThresholdNearFull: This alert is triggered when the thin pool metadata usage in the volume group exceeds 75% on nodes. Data deletion or thin pool expansion is required.
  • ThinPoolMetaDataUsageAtThresholdCritical: This alert is triggered when the thin pool metadata usage in the volume group exceeds 85% on nodes. Data deletion or thin pool expansion is required.

5.4.8. Downloading log files and diagnostic information using must-gather

When LVM Storage is unable to automatically resolve a problem, use the must-gather tool to collect the log files and diagnostic information so that you or Red Hat Support can review the problem and determine a solution.

Procedure

  • Run the must-gather command from the client connected to the LVM Storage cluster:

    $ oc adm must-gather --image=registry.redhat.io/lvms4/lvms-must-gather-rhel9:v4.12 --dest-dir=<directory_name>

5.4.8.1. Investigating a PVC stuck in the Pending state

A persistent volume claim (PVC) can get stuck in a Pending state for a number of reasons. For example:

  • Insufficient computing resources
  • Network problems
  • Mismatched storage class or node selector
  • No available volumes
  • The node with the persistent volume (PV) is in a Not Ready state

Identify the cause by using the oc describe command to review details about the stuck PVC.

Procedure

  1. Retrieve the list of PVCs by running the following command:

    $ oc get pvc

    Example output

    NAME        STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    lvms-test   Pending                                      lvms-vg1       11s

  2. Inspect the events associated with a PVC stuck in the Pending state by running the following command:

    $ oc describe pvc <pvc_name> 1

    1 Replace <pvc_name> with the name of the PVC. For example, lvms-test.

    Example output

    Type     Reason              Age               From                         Message
    ----     ------              ----              ----                         -------
    Warning  ProvisioningFailed  4s (x2 over 17s)  persistentvolume-controller  storageclass.storage.k8s.io "lvms-vg1" not found

5.4.8.2. Recovering from missing LVMS or Operator components

If you encounter a storage class "not found" error, check the LVMCluster resource and ensure that all the logical volume manager storage (LVMS) pods are running. You can create an LVMCluster resource if it does not exist.

Procedure

  1. Verify the presence of the LVMCluster resource by running the following command:

    $ oc get lvmcluster -n openshift-storage

    Example output

    NAME            AGE
    my-lvmcluster   65m

  2. If the cluster does not have an LVMCluster resource, create one by running the following command:

    $ oc create -n openshift-storage -f <custom_resource> 1

    1 Replace <custom_resource> with a custom resource URL or file tailored to your requirements.

    Example custom resource

    apiVersion: lvm.topolvm.io/v1alpha1
    kind: LVMCluster
    metadata:
      name: my-lvmcluster
    spec:
      storage:
        deviceClasses:
        - name: vg1
          default: true
          thinPoolConfig:
            name: thin-pool-1
            sizePercent: 90
            overprovisionRatio: 10

  3. Check that all the pods from LVMS are in the Running state in the openshift-storage namespace by running the following command:

    $ oc get pods -n openshift-storage

    Example output

    NAME                                  READY   STATUS    RESTARTS      AGE
    lvms-operator-7b9fb858cb-6nsml        3/3     Running   0             70m
    topolvm-controller-5dd9cf78b5-7wwr2   5/5     Running   0             66m
    topolvm-node-dr26h                    4/4     Running   0             66m
    vg-manager-r6zdv                      1/1     Running   0             66m

    The expected output is one running instance of lvms-operator and vg-manager. One instance of topolvm-controller and topolvm-node is expected for each node.

    If topolvm-node is stuck in the Init state, there is a failure to locate an available disk for LVMS to use. To retrieve the information necessary to troubleshoot, review the logs of the vg-manager pod by running the following command:

    $ oc logs -l app.kubernetes.io/component=vg-manager -n openshift-storage

5.4.8.4. Recovering from disk failure

If you see a failure message while inspecting the events associated with the persistent volume claim (PVC), there might be a problem with the underlying volume or disk. Disk and volume provisioning issues often result in a generic error first, such as Failed to provision volume with StorageClass <storage_class_name>. A second, more specific error message usually follows.

Procedure

  1. Inspect the events associated with a PVC by running the following command:

    $ oc describe pvc <pvc_name> 1

    1 Replace <pvc_name> with the name of the PVC.

    Here are some examples of disk or volume failure error messages and their causes:
    • Failed to check volume existence: Indicates a problem in verifying whether the volume already exists. Volume verification failure can be caused by network connectivity problems or other failures.
    • Failed to bind volume: Failure to bind a volume can happen if the persistent volume (PV) that is available does not match the requirements of the PVC.
    • FailedMount or FailedUnMount: This error indicates problems when trying to mount the volume to a node or unmount a volume from a node. If the disk has failed, this error might appear when a pod tries to use the PVC.
    • Volume is already exclusively attached to one node and can’t be attached to another: This error can appear with storage solutions that do not support ReadWriteMany access modes.
  2. Establish a direct connection to the host where the problem is occurring, for example by using a debug pod, as shown in the sketch after this procedure.
  3. Resolve the disk issue.

After you have resolved the issue with the disk, you might need to perform the forced cleanup procedure if failure messages persist or reoccur.
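One common way to establish the direct connection in step 2 is a debug pod on the affected node. The following is a minimal sketch that assumes <node_name> is the node that hosts the failed disk:

    $ oc debug node/<node_name>
    sh-4.4# chroot /host

From the host shell, you can inspect the disks, for example with lsblk, before resolving the disk issue.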

5.4.8.5. Performing a forced cleanup

If disk- or node-related problems persist after you complete the troubleshooting procedures, it might be necessary to perform a forced cleanup procedure. A forced cleanup is used to comprehensively address persistent issues and ensure the proper functioning of LVMS.

Prerequisites

  • All of the persistent volume claims (PVCs) created using the logical volume manager storage (LVMS) driver have been removed.
  • The pods using those PVCs have been stopped.

Procedure

  1. Switch to the openshift-storage namespace by running the following command:

    $ oc project openshift-storage
  2. Ensure that there are no LogicalVolume custom resources (CRs) remaining by running the following command:

    $ oc get logicalvolume

    Example output

    No resources found

    1. If there are any LogicalVolume CRs remaining, remove their finalizers by running the following command:

      $ oc patch logicalvolume <name> -p '{"metadata":{"finalizers":[]}}' --type=merge 1

      1 Replace <name> with the name of the CR.
    2. After removing their finalizers, delete the CRs by running the following command:

      $ oc delete logicalvolume <name> 1

      1 Replace <name> with the name of the CR.
  3. Make sure there are no LVMVolumeGroup CRs left by running the following command:

    $ oc get lvmvolumegroup

    Example output

    No resources found

    1. If there are any LVMVolumeGroup CRs left, remove their finalizers by running the following command:

      $ oc patch lvmvolumegroup <name> -p '{"metadata":{"finalizers":[]}}' --type=merge 1

      1 Replace <name> with the name of the CR.
    2. After removing their finalizers, delete the CRs by running the following command:

      $ oc delete lvmvolumegroup <name> 1

      1 Replace <name> with the name of the CR.
  4. Remove any LVMVolumeGroupNodeStatus CRs by running the following command:

    $ oc delete lvmvolumegroupnodestatus --all
  5. Remove the LVMCluster CR by running the following command:

    $ oc delete lvmcluster --all
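To confirm that the cleanup is complete, you can verify that no LVMCluster resources remain. For example:

    $ oc get lvmcluster -n openshift-storage

The command is expected to report that no resources are found.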