
Chapter 5. Persistent storage using local storage


5.1. Local storage overview

You can use any of the following solutions to provision local storage:

  • HostPath Provisioner (HPP)
  • Local Storage Operator (LSO)
  • Logical Volume Manager (LVM) Storage
Warning

These solutions support provisioning only node-local storage. The workloads are bound to the nodes that provide the storage. If the node becomes unavailable, the workload also becomes unavailable. To maintain workload availability despite node failures, you must ensure storage data replication through active or passive replication mechanisms.

5.1.1. Overview of HostPath Provisioner functionality

You can perform the following actions using HostPath Provisioner (HPP):

  • Map the host filesystem paths to storage classes for provisioning local storage.
  • Statically create storage classes to configure filesystem paths on a node for storage consumption.
  • Statically provision Persistent Volumes (PVs) based on the storage class.
  • Create workloads and PersistentVolumeClaims (PVCs) while being aware of the underlying storage topology.
Note

HPP is available in upstream Kubernetes. However, it is not recommended to use HPP from upstream Kubernetes.
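
As an illustration of mapping a host filesystem path to a storage class, the following is a minimal sketch of a StorageClass that refers to the HPP CSI provisioner. The storage class name and the storagePool parameter value are assumptions for illustration, and the provisioner string and parameters can differ between HPP versions, so verify them against your HPP deployment:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hostpath-csi                            # assumed storage class name
provisioner: kubevirt.io.hostpath-provisioner   # HPP CSI provisioner (verify for your deployment)
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer         # delays binding until a pod is scheduled, so the PV lands on that node
parameters:
  storagePool: local                            # assumed storage pool configured for HPP on the node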

5.1.2. Overview of Local Storage Operator functionality

You can perform the following actions using Local Storage Operator (LSO):

  • Assign the storage devices (disks or partitions) to the storage classes without modifying the device configuration.
  • Statically provision PVs and storage classes by configuring the LocalVolume custom resource (CR).
  • Create workloads and PVCs while being aware of the underlying storage topology.
Note

LSO is developed and delivered by Red Hat.

5.1.3. Overview of LVM Storage functionality

You can perform the following actions using Logical Volume Manager (LVM) Storage:

  • Configure storage devices (disks or partitions) as lvm2 volume groups and expose the volume groups as storage classes.
  • Create workloads and request storage by using PVCs without considering the node topology.

LVM Storage uses the TopoLVM CSI driver to dynamically allocate storage space to the nodes in the topology and provision PVs.

Note

LVM Storage is developed and maintained by Red Hat. The CSI driver provided with LVM Storage is the upstream project "topolvm".
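
For example, a workload can request storage with an ordinary PVC that references the storage class created by LVM Storage. The following minimal sketch assumes a device class named vg1, for which LVM Storage typically creates a storage class named lvms-vg1; the PVC name and size are placeholders:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: lvm-file-1              # placeholder PVC name
spec:
  accessModes:
  - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 10Gi             # placeholder size
  storageClassName: lvms-vg1    # storage class assumed to be created by LVM Storage for the vg1 device class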

5.1.4. Comparison of LVM Storage, LSO, and HPP

The following sections compare the functionalities provided by LVM Storage, Local Storage Operator (LSO), and HostPath Provisioner (HPP) to provision local storage.

5.1.4.1. Comparison of the support for storage types and filesystems

The following table compares the support for storage types and filesystems provided by LVM Storage, Local Storage Operator (LSO), and HostPath Provisioner (HPP) to provision local storage:

Table 5.1. Comparison of the support for storage types and filesystems

Functionality                  | LVM Storage | LSO       | HPP
Support for block storage      | Yes         | Yes       | No
Support for file storage       | Yes         | Yes       | Yes
Support for object storage [1] | No          | No        | No
Available filesystems          | ext4, xfs   | ext4, xfs | Any mounted filesystem available on the node is supported.

  1. None of the solutions (LVM Storage, LSO, and HPP) support object storage. Therefore, if you want to use object storage, you need an S3 object storage solution, such as the Multicloud Object Gateway from Red Hat OpenShift Data Foundation. All of the solutions can serve as underlying storage providers for S3 object storage solutions.

5.1.4.2. Comparison of the support for core functionalities

The following table compares how LVM Storage, Local Storage Operator (LSO), and HostPath Provisioner (HPP) support core functionalities for provisioning local storage:

Table 5.2. Comparison of the support for core functionalities

Functionality | LVM Storage | LSO | HPP
Support for automatic file system formatting | Yes | Yes | N/A
Support for dynamic provisioning | Yes | No | No
Support for using software Redundant Array of Independent Disks (RAID) arrays | Yes. Supported on 4.15 and later. | Yes | Yes
Support for transparent disk encryption | Yes. Supported on 4.16 and later. | Yes | Yes
Support for volume-based disk encryption | No | No | No
Support for disconnected installation | Yes | Yes | Yes
Support for PVC expansion | Yes | No | No
Support for volume snapshots and volume clones | Yes | No | No
Support for thin provisioning | Yes. Devices are thin-provisioned by default. | Yes. You can configure the devices to point to the thin-provisioned volumes. | Yes. You can configure a path to point to the thin-provisioned volumes.
Support for automatic disk discovery and setup | Yes. Automatic disk discovery is available during installation and runtime. You can also dynamically add the disks to the LVMCluster custom resource (CR) to increase the storage capacity of the existing storage classes. | Technology Preview. Automatic disk discovery is available during installation. | No

5.1.4.3. Comparison of performance and isolation capabilities

The following table compares the performance and isolation capabilities of LVM Storage, Local Storage Operator (LSO), and HostPath Provisioner (HPP) in provisioning local storage.

Table 5.3. Comparison of performance and isolation capabilities

Performance

  • LVM Storage: I/O speed is shared for all workloads that use the same storage class. Block storage allows direct I/O operations. Thin provisioning can affect the performance.
  • LSO: I/O depends on the LSO configuration. Block storage allows direct I/O operations.
  • HPP: I/O speed is shared for all workloads that use the same storage class. The restrictions imposed by the underlying filesystem can affect the I/O speed.

Isolation boundary [1]

  • LVM Storage: LVM Logical Volume (LV). It provides a higher level of isolation compared to HPP.
  • LSO: LVM Logical Volume (LV). It provides a higher level of isolation compared to HPP.
  • HPP: Filesystem path. It provides a lower level of isolation compared to LSO and LVM Storage.

  1. Isolation boundary refers to the level of separation between different workloads or applications that use local storage resources.

5.1.4.4. Comparison of the support for additional functionalities

The following table compares the additional features provided by LVM Storage, Local Storage Operator (LSO), and HostPath Provisioner (HPP) to provision local storage:

Table 5.4. Comparison of the support for additional functionalities

Functionality | LVM Storage | LSO | HPP
Support for generic ephemeral volumes | Yes | No | No
Support for CSI inline ephemeral volumes | No | No | No
Support for storage topology | Yes. Supports CSI node topology. | Yes. LSO provides partial support for storage topology through node tolerations. | No
Support for ReadWriteMany (RWX) access mode [1] | No | No | No

  1. All of the solutions (LVM Storage, LSO, and HPP) support the ReadWriteOnce (RWO) access mode. RWO access mode allows access from multiple pods on the same node.

5.2. Persistent storage using local volumes

OpenShift Container Platform can be provisioned with persistent storage by using local volumes. Local persistent volumes allow you to access local storage devices, such as a disk or partition, by using the standard persistent volume claim interface.

Local volumes can be used without manually scheduling pods to nodes because the system is aware of the volume node constraints. However, local volumes are still subject to the availability of the underlying node and are not suitable for all applications.

Note

Local volumes can only be used as a statically created persistent volume.

5.2.1. Installing the Local Storage Operator

The Local Storage Operator is not installed in OpenShift Container Platform by default. Use the following procedure to install and configure this Operator to enable local volumes in your cluster.

Prerequisites

  • Access to the OpenShift Container Platform web console or command-line interface (CLI).

Procedure

  1. Create the openshift-local-storage project:

    $ oc adm new-project openshift-local-storage
  2. Optional: Allow local storage creation on infrastructure nodes.

    You might want to use the Local Storage Operator to create volumes on infrastructure nodes in support of components such as logging and monitoring.

    You must adjust the default node selector so that the Local Storage Operator includes the infrastructure nodes, and not just worker nodes.

    To block the Local Storage Operator from inheriting the cluster-wide default selector, enter the following command:

    $ oc annotate namespace openshift-local-storage openshift.io/node-selector=''
  3. Optional: Allow local storage to run on the management pool of CPUs in single-node deployment.

    Use the Local Storage Operator in single-node deployments and allow the use of CPUs that belong to the management pool. Perform this step on single-node installations that use management workload partitioning.

    To allow the Local Storage Operator to run on the management CPU pool, run the following command:

    $ oc annotate namespace openshift-local-storage workload.openshift.io/allowed='management'

From the UI

To install the Local Storage Operator from the web console, follow these steps:

  1. Log in to the OpenShift Container Platform web console.
  2. Navigate to Operators → OperatorHub.
  3. Type Local Storage into the filter box to locate the Local Storage Operator.
  4. Click Install.
  5. On the Install Operator page, select A specific namespace on the cluster. Select openshift-local-storage from the drop-down menu.
  6. Adjust the values for Update Channel and Approval Strategy to the values that you want.
  7. Click Install.

Once finished, the Local Storage Operator will be listed in the Installed Operators section of the web console.

From the CLI

  1. Install the Local Storage Operator from the CLI.

    1. Create an object YAML file to define an Operator group and subscription for the Local Storage Operator, such as openshift-local-storage.yaml:

      Example openshift-local-storage.yaml

      apiVersion: operators.coreos.com/v1
      kind: OperatorGroup
      metadata:
        name: local-operator-group
        namespace: openshift-local-storage
      spec:
        targetNamespaces:
          - openshift-local-storage
      ---
      apiVersion: operators.coreos.com/v1alpha1
      kind: Subscription
      metadata:
        name: local-storage-operator
        namespace: openshift-local-storage
      spec:
        channel: stable
        installPlanApproval: Automatic # 1
        name: local-storage-operator
        source: redhat-operators
        sourceNamespace: openshift-marketplace

      1: The user approval policy for an install plan.
  2. Create the Local Storage Operator object by entering the following command:

    $ oc apply -f openshift-local-storage.yaml

    At this point, the Operator Lifecycle Manager (OLM) is aware of the Local Storage Operator. A ClusterServiceVersion (CSV) for the Operator should appear in the target namespace, and APIs provided by the Operator should be available for creation.

  3. Verify local storage installation by checking that all pods and the Local Storage Operator have been created:

    1. Check that all the required pods have been created:

      $ oc -n openshift-local-storage get pods

      Example output

      NAME                                      READY   STATUS    RESTARTS   AGE
      local-storage-operator-746bf599c9-vlt5t   1/1     Running   0          19m

    2. Check the ClusterServiceVersion (CSV) YAML manifest to see that the Local Storage Operator is available in the openshift-local-storage project:

      $ oc get csvs -n openshift-local-storage

      Example output

      NAME                                         DISPLAY         VERSION               REPLACES   PHASE
      local-storage-operator.4.2.26-202003230335   Local Storage   4.2.26-202003230335              Succeeded

After all checks have passed, the Local Storage Operator is installed successfully.

5.2.2. Provisioning local volumes by using the Local Storage Operator

Local volumes cannot be created by dynamic provisioning. Instead, persistent volumes can be created by the Local Storage Operator. The local volume provisioner looks for any file system or block volume devices at the paths specified in the defined resource.

Prerequisites

  • The Local Storage Operator is installed.
  • You have a local disk that meets the following conditions:

    • It is attached to a node.
    • It is not mounted.
    • It does not contain partitions.

Procedure

  1. Create the local volume resource. This resource must define the nodes and paths to the local volumes.

    Note

    Do not use different storage class names for the same device. Doing so will create multiple persistent volumes (PVs).

    Example: Filesystem

    apiVersion: "local.storage.openshift.io/v1"
    kind: "LocalVolume"
    metadata:
      name: "local-disks"
      namespace: "openshift-local-storage" 
    1
    
    spec:
      nodeSelector: 
    2
    
        nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
              - ip-10-0-140-183
              - ip-10-0-158-139
              - ip-10-0-164-33
      storageClassDevices:
        - storageClassName: "local-sc" 
    3
    
          volumeMode: Filesystem 
    4
    
          fsType: xfs 
    5
    
          devicePaths: 
    6
    
            - /path/to/device 
    7
    Copy to Clipboard Toggle word wrap

    1
    The namespace where the Local Storage Operator is installed.
    2
    Optional: A node selector containing a list of nodes where the local storage volumes are attached. This example uses the node hostnames, obtained from oc get node. If a value is not defined, then the Local Storage Operator will attempt to find matching disks on all available nodes.
    3
    The name of the storage class to use when creating persistent volume objects. The Local Storage Operator automatically creates the storage class if it does not exist. Be sure to use a storage class that uniquely identifies this set of local volumes.
    4
    The volume mode, either Filesystem or Block, that defines the type of local volumes.
    Note

    A raw block volume (volumeMode: Block) is not formatted with a file system. Use this mode only if any application running on the pod can use raw block devices.

    5
    The file system that is created when the local volume is mounted for the first time.
    6
    The path containing a list of local storage devices to choose from.
    7
    Replace this value with your actual local disks filepath to the LocalVolume resource by-id, such as /dev/disk/by-id/wwn. PVs are created for these local disks when the provisioner is deployed successfully.
    Note

    If you are running OpenShift Container Platform with RHEL KVM, you must assign a serial number to your VM disk. Otherwise, the VM disk can not be identified after reboot. You can use the virsh edit <VM> command to add the <serial>mydisk</serial> definition.

    Example: Block

    apiVersion: "local.storage.openshift.io/v1"
    kind: "LocalVolume"
    metadata:
      name: "local-disks"
      namespace: "openshift-local-storage" 
    1
    
    spec:
      nodeSelector: 
    2
    
        nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
              - ip-10-0-136-143
              - ip-10-0-140-255
              - ip-10-0-144-180
      storageClassDevices:
        - storageClassName: "local-sc" 
    3
    
          volumeMode: Block 
    4
    
          devicePaths: 
    5
    
            - /path/to/device 
    6
    Copy to Clipboard Toggle word wrap

    1
    The namespace where the Local Storage Operator is installed.
    2
    Optional: A node selector containing a list of nodes where the local storage volumes are attached. This example uses the node hostnames, obtained from oc get node. If a value is not defined, then the Local Storage Operator will attempt to find matching disks on all available nodes.
    3
    The name of the storage class to use when creating persistent volume objects.
    4
    The volume mode, either Filesystem or Block, that defines the type of local volumes.
    5
    The path containing a list of local storage devices to choose from.
    6
    Replace this value with your actual local disks filepath to the LocalVolume resource by-id, such as dev/disk/by-id/wwn. PVs are created for these local disks when the provisioner is deployed successfully.
    Note

    If you are running OpenShift Container Platform with RHEL KVM, you must assign a serial number to your VM disk. Otherwise, the VM disk can not be identified after reboot. You can use the virsh edit <VM> command to add the <serial>mydisk</serial> definition.

  2. Create the local volume resource in your OpenShift Container Platform cluster. Specify the file you just created:

    $ oc create -f <local-volume>.yaml
  3. Verify that the provisioner was created and that the corresponding daemon sets were created:

    $ oc get all -n openshift-local-storage

    Example output

    NAME                                          READY   STATUS    RESTARTS   AGE
    pod/diskmaker-manager-9wzms                   1/1     Running   0          5m43s
    pod/diskmaker-manager-jgvjp                   1/1     Running   0          5m43s
    pod/diskmaker-manager-tbdsj                   1/1     Running   0          5m43s
    pod/local-storage-operator-7db4bd9f79-t6k87   1/1     Running   0          14m
    
    NAME                                     TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)             AGE
    service/local-storage-operator-metrics   ClusterIP   172.30.135.36   <none>        8383/TCP,8686/TCP   14m
    
    NAME                               DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
    daemonset.apps/diskmaker-manager   3         3         3       3            3           <none>          5m43s
    
    NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE
    deployment.apps/local-storage-operator   1/1     1            1           14m
    
    NAME                                                DESIRED   CURRENT   READY   AGE
    replicaset.apps/local-storage-operator-7db4bd9f79   1         1         1       14m

    Note the desired and current number of daemon set processes. A desired count of 0 indicates that the label selectors were invalid.
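
    If the desired count is 0 or pods are missing, one way to inspect the daemon set and its node selector is with the following command; the daemon set name diskmaker-manager matches the example output above:

    $ oc describe daemonset/diskmaker-manager -n openshift-local-storage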

  4. Verify that the persistent volumes were created:

    $ oc get pv

    Example output

    NAME                CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
    local-pv-1cec77cf   100Gi      RWO            Delete           Available           local-sc                88m
    local-pv-2ef7cd2a   100Gi      RWO            Delete           Available           local-sc                82m
    local-pv-3fa1c73    100Gi      RWO            Delete           Available           local-sc                48m

Important

Editing the LocalVolume object does not change the fsType or volumeMode of existing persistent volumes because doing so might result in a destructive operation.

5.2.3. Provisioning local volumes without the Local Storage Operator

Local volumes cannot be created by dynamic provisioning. Instead, persistent volumes can be created by defining the persistent volume (PV) in an object definition. The local volume provisioner looks for any file system or block volume devices at the paths specified in the defined resource.

Important

Manual provisioning of PVs includes the risk of potential data leaks across PV reuse when PVCs are deleted. The Local Storage Operator is recommended for automating the life cycle of devices when provisioning local PVs.

Prerequisites

  • Local disks are attached to the OpenShift Container Platform nodes.

Procedure

  1. Define the PV. Create a file, such as example-pv-filesystem.yaml or example-pv-block.yaml, with the PersistentVolume object definition. This resource must define the nodes and paths to the local volumes.

    Note

    Do not use different storage class names for the same device. Doing so will create multiple PVs.

    example-pv-filesystem.yaml

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: example-pv-filesystem
    spec:
      capacity:
        storage: 100Gi
      volumeMode: Filesystem # 1
      accessModes:
      - ReadWriteOnce
      persistentVolumeReclaimPolicy: Delete
      storageClassName: local-sc # 2
      local:
        path: /dev/xvdf # 3
      nodeAffinity:
        required:
          nodeSelectorTerms:
          - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
              - example-node

    1: The volume mode, either Filesystem or Block, that defines the type of PVs.
    2: The name of the storage class to use when creating PV resources. Use a storage class that uniquely identifies this set of PVs.
    3: The path containing a list of local storage devices to choose from, or a directory. You can only specify a directory with Filesystem volumeMode.
    Note

    A raw block volume (volumeMode: Block) is not formatted with a file system. Use this mode only if any application running on the pod can use raw block devices.

    example-pv-block.yaml

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: example-pv-block
    spec:
      capacity:
        storage: 100Gi
      volumeMode: Block # 1
      accessModes:
      - ReadWriteOnce
      persistentVolumeReclaimPolicy: Delete
      storageClassName: local-sc # 2
      local:
        path: /dev/xvdf # 3
      nodeAffinity:
        required:
          nodeSelectorTerms:
          - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
              - example-node

    1: The volume mode, either Filesystem or Block, that defines the type of PVs.
    2: The name of the storage class to use when creating PV resources. Be sure to use a storage class that uniquely identifies this set of PVs.
    3: The path containing a list of local storage devices to choose from.
  2. Create the PV resource in your OpenShift Container Platform cluster. Specify the file you just created:

    $ oc create -f <example-pv>.yaml
  3. Verify that the local PV was created:

    $ oc get pv

    Example output

    NAME                    CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                STORAGECLASS    REASON   AGE
    example-pv-filesystem   100Gi      RWO            Delete           Available                        local-sc            3m47s
    example-pv1             1Gi        RWO            Delete           Bound       local-storage/pvc1   local-sc            12h
    example-pv2             1Gi        RWO            Delete           Bound       local-storage/pvc2   local-sc            12h
    example-pv3             1Gi        RWO            Delete           Bound       local-storage/pvc3   local-sc            12h

5.2.4. Creating the local volume persistent volume claim

Local volumes must be statically created as a persistent volume claim (PVC) to be accessed by the pod.

Prerequisites

  • Persistent volumes have been created using the local volume provisioner.

Procedure

  1. Create the PVC using the corresponding storage class:

    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: local-pvc-name # 1
    spec:
      accessModes:
      - ReadWriteOnce
      volumeMode: Filesystem # 2
      resources:
        requests:
          storage: 100Gi # 3
      storageClassName: local-sc # 4

    1: The name of the PVC.
    2: The volume mode of the PVC. The default value is Filesystem.
    3: The amount of storage available to the PVC.
    4: The name of the storage class required by the claim.
  2. Create the PVC in the OpenShift Container Platform cluster, specifying the file you just created:

    $ oc create -f <local-pvc>.yaml

5.2.5. Attach the local claim

After a local volume has been mapped to a persistent volume claim, it can be specified inside of a resource.

Prerequisites

  • A persistent volume claim exists in the same namespace.

Procedure

  1. Include the defined claim in the resource spec. The following example declares the persistent volume claim inside a pod:

    apiVersion: v1
    kind: Pod
    spec:
    # ...
      containers:
      - name: <container_name>
        # ...
        volumeMounts:
        - name: local-disks # 1
          mountPath: /data # 2
      volumes:
      - name: local-disks
        persistentVolumeClaim:
          claimName: local-pvc-name # 3
    # ...

    1: The name of the volume to mount.
    2: The path inside the pod where the volume is mounted. Do not mount to the container root, /, or any path that is the same in the host and the container. This can corrupt your host system if the container is sufficiently privileged, such as the host /dev/pts files. It is safe to mount the host by using /host.
    3: The name of the existing persistent volume claim to use.
  2. Create the resource in the OpenShift Container Platform cluster, specifying the file you just created:

    $ oc create -f <local-pod>.yaml

5.2.6. Automating discovery and provisioning for local storage devices

The Local Storage Operator automates local storage discovery and provisioning. With this feature, you can simplify installation when dynamic provisioning is not available during deployment, such as with bare metal, VMware, or AWS store instances with attached devices.

Important

Automatic discovery and provisioning is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

Important

Automatic discovery and provisioning is fully supported when used to deploy Red Hat OpenShift Data Foundation on-premise or with platform-agnostic deployment.

Use the following procedure to automatically discover local devices, and to automatically provision local volumes for selected devices.

Warning

Use the LocalVolumeSet object with caution. When you automatically provision persistent volumes (PVs) from local disks, the local PVs might claim all devices that match. If you are using a LocalVolumeSet object, make sure the Local Storage Operator is the only entity managing local devices on the node. Creating multiple instances of a LocalVolumeSet that target a node more than once is not supported.

Prerequisites

  • You have cluster administrator permissions.
  • You have installed the Local Storage Operator.
  • You have attached local disks to OpenShift Container Platform nodes.
  • You have access to the OpenShift Container Platform web console and the oc command-line interface (CLI).

Procedure

  1. To enable automatic discovery of local devices from the web console:

    1. Click Operators → Installed Operators.
    2. In the openshift-local-storage namespace, click Local Storage.
    3. Click the Local Volume Discovery tab.
    4. Click Create Local Volume Discovery and then select either Form view or YAML view.
    5. Configure the LocalVolumeDiscovery object parameters.
    6. Click Create.

      The Local Storage Operator creates a local volume discovery instance named auto-discover-devices.

  2. To display a continuous list of available devices on a node:

    1. Log in to the OpenShift Container Platform web console.
    2. Navigate to Compute → Nodes.
    3. Click the node name that you want to open. The "Node Details" page is displayed.
    4. Select the Disks tab to display the list of the selected devices.

      The device list updates continuously as local disks are added or removed. You can filter the devices by name, status, type, model, capacity, and mode.

  3. To automatically provision local volumes for the discovered devices from the web console:

    1. Navigate to Operators → Installed Operators and select Local Storage from the list of Operators.
    2. Select Local Volume Set → Create Local Volume Set.
    3. Enter a volume set name and a storage class name.
    4. Choose All nodes or Select nodes to apply filters accordingly.

      Note

      Only worker nodes are available, regardless of whether you filter using All nodes or Select nodes.

    5. Select the disk type, mode, size, and limit you want to apply to the local volume set, and click Create.

      A message displays after several minutes, indicating that the "Operator reconciled successfully."

  4. Alternatively, to provision local volumes for the discovered devices from the CLI:

    1. Create an object YAML file to define the local volume set, such as local-volume-set.yaml, as shown in the following example:

      apiVersion: local.storage.openshift.io/v1alpha1
      kind: LocalVolumeSet
      metadata:
        name: example-autodetect
      spec:
        nodeSelector:
          nodeSelectorTerms:
            - matchExpressions:
                - key: kubernetes.io/hostname
                  operator: In
                  values:
                    - worker-0
                    - worker-1
        storageClassName: local-sc # 1
        volumeMode: Filesystem
        fsType: ext4
        maxDeviceCount: 10
        deviceInclusionSpec:
          deviceTypes: # 2
            - disk
            - part
          deviceMechanicalProperties:
            - NonRotational
          minSize: 10G
          maxSize: 100G
          models:
            - SAMSUNG
            - Crucial_CT525MX3
          vendors:
            - ATA
            - ST2000LM

      1: Determines the storage class that is created for persistent volumes that are provisioned from discovered devices. The Local Storage Operator automatically creates the storage class if it does not exist. Be sure to use a storage class that uniquely identifies this set of local volumes.
      2: When using the local volume set feature, the Local Storage Operator does not support the use of logical volume management (LVM) devices.
    2. Create the local volume set object:

      $ oc apply -f local-volume-set.yaml
    3. Verify that the local persistent volumes were dynamically provisioned based on the storage class:

      $ oc get pv

      Example output

      NAME                CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
      local-pv-1cec77cf   100Gi      RWO            Delete           Available           local-sc                88m
      local-pv-2ef7cd2a   100Gi      RWO            Delete           Available           local-sc                82m
      local-pv-3fa1c73    100Gi      RWO            Delete           Available           local-sc                48m

Note

Results are deleted after they are removed from the node. Symlinks must be manually removed.

5.2.7. Using tolerations with Local Storage Operator pods

Taints can be applied to nodes to prevent them from running general workloads. To allow the Local Storage Operator to use tainted nodes, you must add tolerations to the Pod or DaemonSet definition. This allows the created resources to run on these tainted nodes.

You apply tolerations to the Local Storage Operator pod through the LocalVolume resource and apply taints to a node through the node specification. A taint on a node instructs the node to repel all pods that do not tolerate the taint. Using a specific taint that is not on other pods ensures that the Local Storage Operator pod can also run on that node.

Important

Taints and tolerations consist of a key, value, and effect. As an argument, it is expressed as key=value:effect. An operator allows you to leave one of these parameters empty.
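
For example, assuming the key/value pair used later in this procedure (localstorage=localstorage), you could apply a matching taint to a node with the following command; the node name is a placeholder:

$ oc adm taint nodes <node_name> localstorage=localstorage:NoSchedule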

Prerequisites

  • The Local Storage Operator is installed.
  • Local disks are attached to OpenShift Container Platform nodes with a taint.
  • Tainted nodes are expected to provision local storage.

Procedure

To configure local volumes for scheduling on tainted nodes:

  1. Modify the YAML file that defines the LocalVolume resource and add the tolerations to the spec, as shown in the following example:

      apiVersion: "local.storage.openshift.io/v1"
      kind: "LocalVolume"
      metadata:
        name: "local-disks"
        namespace: "openshift-local-storage"
      spec:
        tolerations:
          - key: localstorage 
    1
    
            operator: Equal 
    2
    
            value: "localstorage" 
    3
    
        storageClassDevices:
            - storageClassName: "local-sc"
              volumeMode: Block 
    4
    
              devicePaths: 
    5
    
                - /dev/xvdg
    Copy to Clipboard Toggle word wrap
    1
    Specify the key that you added to the node.
    2
    Specify the Equal operator to require the key/value parameters to match. If operator is Exists, the system checks that the key exists and ignores the value. If operator is Equal, then the key and value must match.
    3
    Specify the value local of the tainted node.
    4
    The volume mode, either Filesystem or Block, defining the type of the local volumes.
    5
    The path containing a list of local storage devices to choose from.
  2. Optional: To create local persistent volumes on only tainted nodes, modify the YAML file and add the LocalVolume spec, as shown in the following example:

    spec:
      tolerations:
        - key: node-role.kubernetes.io/master
          operator: Exists

The defined tolerations will be passed to the resulting daemon sets, allowing the diskmaker and provisioner pods to be created for nodes that contain the specified taints.

5.2.8. Local Storage Operator Metrics

OpenShift Container Platform provides the following metrics for the Local Storage Operator:

  • lso_discovery_disk_count: total number of discovered devices on each node
  • lso_lvset_provisioned_PV_count: total number of PVs created by LocalVolumeSet objects
  • lso_lvset_unmatched_disk_count: total number of disks that Local Storage Operator did not select for provisioning because of mismatching criteria
  • lso_lvset_orphaned_symlink_count: number of devices with PVs that no longer match LocalVolumeSet object criteria
  • lso_lv_orphaned_symlink_count: number of devices with PVs that no longer match LocalVolume object criteria
  • lso_lv_provisioned_PV_count: total number of provisioned PVs for LocalVolume

To use these metrics, enable them by doing one of the following:

  • When installing the Local Storage Operator from OperatorHub in the web console, select the Enable Operator recommended cluster monitoring on this Namespace checkbox.
  • Manually add the openshift.io/cluster-monitoring=true label to the Operator namespace by running the following command:

    $ oc label ns/openshift-local-storage openshift.io/cluster-monitoring=true

For more information about metrics, see Accessing metrics as an administrator.
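
After monitoring is enabled, you can query these metrics in the web console under Observe → Metrics. The following is a minimal example query that sums one of the metrics listed above across all reporting endpoints; no particular label set is assumed:

sum(lso_lvset_provisioned_PV_count)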

5.2.9. Deleting the Local Storage Operator resources

5.2.9.1. Removing a local volume or local volume set

Occasionally, local volumes and local volume sets must be deleted. While removing the entry in the resource and deleting the persistent volume is typically enough, if you want to reuse the same device path or have it managed by a different storage class, then additional steps are needed.

Note

The following procedure outlines an example for removing a local volume. The same procedure can also be used to remove symlinks for a local volume set custom resource.

Prerequisites

  • The persistent volume must be in a Released or Available state.

    Warning

    Deleting a persistent volume that is still in use can result in data loss or corruption.

Procedure

  1. Edit the previously created local volume to remove any unwanted disks.

    1. Edit the cluster resource:

      $ oc edit localvolume <local_volume_name> -n openshift-local-storage
    2. Navigate to the lines under devicePaths, and delete any that represent unwanted disks.
  2. Delete any persistent volumes created.

    $ oc delete pv <pv_name>
  3. Delete directory and included symlinks on the node.

    Warning

    The following step involves accessing a node as the root user. Modifying the state of the node beyond the steps in this procedure could result in cluster instability.

    $ oc debug node/<node_name> -- chroot /host rm -rf /mnt/local-storage/<sc_name> # 1

    1: The name of the storage class used to create the local volumes.

5.2.9.2. Uninstalling the Local Storage Operator

To uninstall the Local Storage Operator, you must remove the Operator and all created resources in the openshift-local-storage project.

Warning

Uninstalling the Local Storage Operator while local storage PVs are still in use is not recommended. While the PVs will remain after the Operator’s removal, there might be indeterminate behavior if the Operator is uninstalled and reinstalled without removing the PVs and local storage resources.

Prerequisites

  • Access to the OpenShift Container Platform web console.

Procedure

  1. Delete any local volume resources installed in the project, such as localvolume, localvolumeset, and localvolumediscovery by running the following commands:

    $ oc delete localvolume --all --all-namespaces
    $ oc delete localvolumeset --all --all-namespaces
    $ oc delete localvolumediscovery --all --all-namespaces
  2. Uninstall the Local Storage Operator from the web console.

    1. Log in to the OpenShift Container Platform web console.
    2. Navigate to Operators → Installed Operators.
    3. Type Local Storage into the filter box to locate the Local Storage Operator.
    4. Click the Options menu at the end of the Local Storage Operator row.
    5. Click Uninstall Operator.
    6. Click Remove in the window that appears.
  3. The PVs created by the Local Storage Operator will remain in the cluster until deleted. After these volumes are no longer in use, delete them by running the following command:

    $ oc delete pv <pv-name>
  4. Delete the openshift-local-storage project by running the following command:

    $ oc delete project openshift-local-storage

5.3. Persistent storage using hostPath

A hostPath volume in an OpenShift Container Platform cluster mounts a file or directory from the host node’s filesystem into your pod. Most pods will not need a hostPath volume, but it does offer a quick option for testing should an application require it.

Important

The cluster administrator must configure pods to run as privileged. This grants access to pods in the same node.

5.3.1. Overview

OpenShift Container Platform supports hostPath mounting for development and testing on a single-node cluster.

In a production cluster, you would not use hostPath. Instead, a cluster administrator would provision a network resource, such as a GCE Persistent Disk volume, an NFS share, or an Amazon EBS volume. Network resources support the use of storage classes to set up dynamic provisioning.

A hostPath volume must be provisioned statically.

Important

Do not mount to the container root, /, or any path that is the same in the host and the container. This can corrupt your host system if the container is sufficiently privileged. It is safe to mount the host by using /host. The following example shows the / directory from the host being mounted into the container at /host.

apiVersion: v1
kind: Pod
metadata:
  name: test-host-mount
spec:
  containers:
  - image: registry.access.redhat.com/ubi9/ubi
    name: test-container
    command: ['sh', '-c', 'sleep 3600']
    volumeMounts:
    - mountPath: /host
      name: host-slash
  volumes:
  - name: host-slash
    hostPath:
      path: /
      type: ''

5.3.2. Statically provisioning hostPath volumes

A pod that uses a hostPath volume must be referenced by manual (static) provisioning.

Procedure

  1. Define the persistent volume (PV) by creating a pv.yaml file with the PersistentVolume object definition:

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: task-pv-volume # 1
      labels:
        type: local
    spec:
      storageClassName: manual # 2
      capacity:
        storage: 5Gi
      accessModes:
        - ReadWriteOnce # 3
      persistentVolumeReclaimPolicy: Retain
      hostPath:
        path: "/mnt/data" # 4

    1: The name of the volume. This name is how the volume is identified by persistent volume (PV) claims or pods.
    2: Used to bind persistent volume claim (PVC) requests to the PV.
    3: The volume can be mounted as read-write by a single node.
    4: The configuration file specifies that the volume is at /mnt/data on the cluster’s node. To avoid corrupting your host system, do not mount to the container root, /, or any path that is the same in the host and the container. You can safely mount the host by using /host.
  2. Create the PV from the file:

    $ oc create -f pv.yaml
  3. Define the PVC by creating a pvc.yaml file with the PersistentVolumeClaim object definition:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: task-pvc-volume
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
      storageClassName: manual
  4. Create the PVC from the file:

    $ oc create -f pvc.yaml

5.3.3. Mounting the hostPath share in a privileged pod

After the persistent volume claim has been created, it can be used by an application inside a pod. The following example demonstrates mounting this share inside of a pod.

Prerequisites

  • A persistent volume claim exists that is mapped to the underlying hostPath share.

Procedure

  • Create a privileged pod that mounts the existing persistent volume claim:

    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-name # 1
    spec:
      containers:
      - name: <container_name>
        # ...
        securityContext:
          privileged: true # 2
        volumeMounts:
        - mountPath: /data # 3
          name: hostpath-privileged
      # ...
      securityContext: {}
      volumes:
        - name: hostpath-privileged
          persistentVolumeClaim:
            claimName: task-pvc-volume # 4

    1: The name of the pod.
    2: The pod must run as privileged to access the node’s storage.
    3: The path to mount the host path share inside the privileged pod. Do not mount to the container root, /, or any path that is the same in the host and the container. This can corrupt your host system if the container is sufficiently privileged, such as the host /dev/pts files. It is safe to mount the host by using /host.
    4: The name of the PersistentVolumeClaim object that has been previously created.

5.4. Persistent storage using Logical Volume Manager Storage

Logical Volume Manager (LVM) Storage uses Logical Volume Manager (LVM2) through the TopoLVM Container Storage Interface (CSI) driver to dynamically provision local storage on a cluster with limited resources.

You can create volume groups, persistent volume claims (PVCs), volume snapshots, and volume clones by using LVM Storage.
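
For example, you can request a volume snapshot of a PVC provisioned by LVM Storage with a standard VolumeSnapshot object. The following sketch assumes a device class named vg1, for which LVM Storage typically creates a volume snapshot class named lvms-vg1; the snapshot and PVC names are placeholders:

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: lvm-block-1-snap                        # placeholder snapshot name
spec:
  volumeSnapshotClassName: lvms-vg1             # snapshot class assumed to be created by LVM Storage for the vg1 device class
  source:
    persistentVolumeClaimName: lvm-block-1      # placeholder name of an existing PVC provisioned by LVM Storage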

5.4.1. Logical Volume Manager Storage installation

You can install Logical Volume Manager (LVM) Storage on a single-node OpenShift cluster and configure it to dynamically provision storage for your workloads.

You can deploy LVM Storage on single-node OpenShift clusters by using the OpenShift Container Platform CLI (oc), OpenShift Container Platform web console, or Red Hat Advanced Cluster Management (RHACM).

5.4.1.1. Prerequisites to install LVM Storage

The prerequisites to install LVM Storage are as follows:

  • Ensure that you have a minimum of 10 milliCPU and 100 MiB of RAM.
  • Ensure that every managed cluster has dedicated disks that are used to provision storage. LVM Storage uses only those disks that are empty and do not contain file system signatures. To ensure that the disks are empty and do not contain file system signatures, wipe the disks before using them, for example as shown after this list.
  • Before installing LVM Storage in a private CI environment where you can reuse the storage devices that you configured in the previous LVM Storage installation, ensure that you have wiped the disks that are not in use. If you do not wipe the disks before installing LVM Storage, you cannot reuse the disks without manual intervention.

    Note

    You cannot wipe the disks that are in use.

  • If you want to install LVM Storage by using Red Hat Advanced Cluster Management (RHACM), ensure that you have installed RHACM on an OpenShift Container Platform cluster. For more information, see "Installing LVM Storage by using RHACM".
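
For example, one way to wipe a disk, as referenced in the prerequisites above, is to remove its filesystem signatures from a debug shell on the node. The node and device names are placeholders, and the command destroys any data on the device:

$ oc debug node/<node_name> -- chroot /host wipefs --all --force /dev/<device_name>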

5.4.1.2. Installing LVM Storage by using the CLI

As a cluster administrator, you can install Logical Volume Manager (LVM) Storage by using the OpenShift CLI (oc).

Prerequisites

  • You have installed the OpenShift CLI (oc).
  • You have logged in to OpenShift Container Platform as a user with cluster-admin and Operator installation permissions.

Procedure

  1. Create a YAML file and add the configuration for creating a namespace.

    Example YAML configuration for creating a namespace

    apiVersion: v1
    kind: Namespace
    metadata:
      labels:
        openshift.io/cluster-monitoring: "true"
        pod-security.kubernetes.io/enforce: privileged
        pod-security.kubernetes.io/audit: privileged
        pod-security.kubernetes.io/warn: privileged
      name: openshift-storage

  2. Create the namespace by running the following command:

    $ oc create -f <file_name>
  3. Create an OperatorGroup custom resource (CR) YAML file.

    Example OperatorGroup CR

    apiVersion: operators.coreos.com/v1
    kind: OperatorGroup
    metadata:
      name: openshift-storage-operatorgroup
      namespace: openshift-storage
    spec:
      targetNamespaces:
      - openshift-storage

  4. Create the OperatorGroup CR by running the following command:

    $ oc create -f <file_name>
  5. Create a Subscription CR YAML file.

    Example Subscription CR

    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: lvms
      namespace: openshift-storage
    spec:
      installPlanApproval: Automatic
      name: lvms-operator
      source: redhat-operators
      sourceNamespace: openshift-marketplace

  6. Create the Subscription CR by running the following command:

    $ oc create -f <file_name>

Verification

  1. To verify that LVM Storage is installed, run the following command:

    $ oc get csv -n openshift-storage -o custom-columns=Name:.metadata.name,Phase:.status.phase

    Example output

    Name                         Phase
    4.13.0-202301261535          Succeeded

5.4.1.3. Installing LVM Storage by using the web console

You can install Logical Volume Manager (LVM) Storage by using the OpenShift Container Platform web console.

Prerequisites

  • You have access to the single-node OpenShift cluster.
  • You have access to OpenShift Container Platform with cluster-admin and Operator installation permissions.

Procedure

  1. Log in to the OpenShift Container Platform web console.
  2. Click Operators → OperatorHub.
  3. Click LVM Storage on the OperatorHub page.
  4. Set the following options on the Operator Installation page:

    1. Update Channel as stable-4.14.
    2. Installation Mode as A specific namespace on the cluster.
    3. Installed Namespace as Operator recommended namespace openshift-storage. If the openshift-storage namespace does not exist, it is created during the operator installation.
    4. Update approval as Automatic or Manual.

      Note

      If you select Automatic updates, the Operator Lifecycle Manager (OLM) automatically updates the running instance of LVM Storage without any intervention.

      If you select Manual updates, the OLM creates an update request. As a cluster administrator, you must manually approve the update request to update LVM Storage to a newer version.

  5. Optional: Select the Enable Operator recommended cluster monitoring on this Namespace checkbox.
  6. Click Install.

Verification steps

  • Verify that LVM Storage shows a green tick, indicating successful installation.

5.4.1.4. Installing LVM Storage in a disconnected environment

You can install Logical Volume Manager (LVM) Storage on OpenShift Container Platform 4.14 in a disconnected environment. All sections referenced in this procedure are linked in the "Additional resources" section.

Prerequisites

  • You read the "About disconnected installation mirroring" section.
  • You have access to the OpenShift Container Platform image repository.
  • You created a mirror registry.

Procedure

  1. Follow the steps in the "Creating the image set configuration" procedure. To create an image set configuration for LVM Storage, you can use the following example ImageSetConfiguration object configuration:

    Example ImageSetConfiguration file for LVM Storage

    kind: ImageSetConfiguration
    apiVersion: mirror.openshift.io/v1alpha2
    archiveSize: 4 # 1
    storageConfig: # 2
      registry:
        imageURL: example.com/mirror/oc-mirror-metadata # 3
        skipTLS: false
    mirror:
      platform:
        channels:
        - name: stable-4.14 # 4
          type: ocp
        graph: true # 5
      operators:
      - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.14 # 6
        packages:
        - name: lvms-operator # 7
          channels:
          - name: stable # 8
      additionalImages:
      - name: registry.redhat.io/ubi9/ubi:latest # 9
      helm: {}

    1: Set the maximum size (in gibibytes) of each file within the image set.
    2: Specify the location in which you want to save the image set. This location can be a registry or a local directory.
    3: Specify the storage URL for the image stream when using a registry. For more information, see "Why use imagestreams".
    4: Specify the channel from which you want to retrieve the OpenShift Container Platform images.
    5: Set this field to true to generate the OpenShift Update Service (OSUS) graph image. For more information, see "About the OpenShift Update Service".
    6: Specify the Operator catalog from which you want to retrieve the OpenShift Container Platform images.
    7: Specify the Operator packages to include in the image set. If this field is empty, all packages in the catalog are retrieved.
    8: Specify the channels of the Operator packages to include in the image set. You must include the default channel for the Operator package even if you do not use the bundles in that channel. You can find the default channel by running the following command: $ oc mirror list operators --catalog=<catalog_name> --package=<package_name>.
    9: Specify any additional images to include in the image set.
  2. Follow the procedure in the "Mirroring an image set to a mirror registry" section.
  3. Follow the procedure in the "Configuring image registry repository mirroring" section.

5.4.1.5. Installing LVM Storage by using RHACM

To install Logical Volume Manager (LVM) Storage on the clusters by using Red Hat Advanced Cluster Management (RHACM), you must create a Policy custom resource (CR). You can also configure the criteria to select the clusters on which you want to install LVM Storage.

Note

The Policy CR that is created to install LVM Storage is also applied to the clusters that are imported or created after creating the Policy CR.

Prerequisites

  • You have access to the RHACM cluster using an account with cluster-admin and Operator installation permissions.
  • You have dedicated disks that LVM Storage can use on each cluster.
  • The cluster must be managed by RHACM.

Procedure

  1. Log in to the RHACM CLI using your OpenShift Container Platform credentials.
  2. Create a namespace by running the following command:

    $ oc create ns <namespace>
  3. Create a Policy CR YAML file.

    Example Policy CR to install and configure LVM Storage

    apiVersion: apps.open-cluster-management.io/v1
    kind: PlacementRule
    metadata:
      name: placement-install-lvms
    spec:
      clusterConditions:
      - status: "True"
        type: ManagedClusterConditionAvailable
      clusterSelector: # 1
        matchExpressions:
        - key: mykey
          operator: In
          values:
          - myvalue
    ---
    apiVersion: policy.open-cluster-management.io/v1
    kind: PlacementBinding
    metadata:
      name: binding-install-lvms
    placementRef:
      apiGroup: apps.open-cluster-management.io
      kind: PlacementRule
      name: placement-install-lvms
    subjects:
    - apiGroup: policy.open-cluster-management.io
      kind: Policy
      name: install-lvms
    ---
    apiVersion: policy.open-cluster-management.io/v1
    kind: Policy
    metadata:
      annotations:
        policy.open-cluster-management.io/categories: CM Configuration Management
        policy.open-cluster-management.io/controls: CM-2 Baseline Configuration
        policy.open-cluster-management.io/standards: NIST SP 800-53
      name: install-lvms
    spec:
      disabled: false
      remediationAction: enforce
      policy-templates:
      - objectDefinition:
          apiVersion: policy.open-cluster-management.io/v1
          kind: ConfigurationPolicy
          metadata:
            name: install-lvms
          spec:
            object-templates:
            - complianceType: musthave
              objectDefinition: # 2
                apiVersion: v1
                kind: Namespace
                metadata:
                  labels:
                    openshift.io/cluster-monitoring: "true"
                    pod-security.kubernetes.io/enforce: privileged
                    pod-security.kubernetes.io/audit: privileged
                    pod-security.kubernetes.io/warn: privileged
                  name: openshift-storage
            - complianceType: musthave
              objectDefinition: # 3
                apiVersion: operators.coreos.com/v1
                kind: OperatorGroup
                metadata:
                  name: openshift-storage-operatorgroup
                  namespace: openshift-storage
                spec:
                  targetNamespaces:
                  - openshift-storage
            - complianceType: musthave
              objectDefinition: # 4
                apiVersion: operators.coreos.com/v1alpha1
                kind: Subscription
                metadata:
                  name: lvms
                  namespace: openshift-storage
                spec:
                  installPlanApproval: Automatic
                  name: lvms-operator
                  source: redhat-operators
                  sourceNamespace: openshift-marketplace
            remediationAction: enforce
            severity: low

    1: Set the key field and values field in PlacementRule.spec.clusterSelector to match the labels that are configured in the clusters on which you want to install LVM Storage.
    2: The namespace configuration.
    3: The OperatorGroup CR configuration.
    4: The Subscription CR configuration.
  4. Create the Policy CR by running the following command:

    $ oc create -f <file_name> -n <namespace>

    Upon creating the Policy CR, the following custom resources are created on the clusters that match the selection criteria configured in the PlacementRule CR:

    • Namespace
    • OperatorGroup
    • Subscription
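    Optionally, you can check that the policy has been propagated and reports compliance by running the following command. This is a quick check; the exact output depends on your RHACM environment:

    $ oc get policy -n <namespace>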

5.4.2. About the LVMCluster custom resource

You can configure the LVMCluster custom resource (CR) to perform the following actions:

  • Create LVM volume groups that you can use to provision persistent volume claims (PVCs).
  • Configure a list of devices that you want to add to the LVM volume groups.
  • Configure the requirements to select the nodes on which you want to create an LVM volume group, and the thin pool configuration for the volume group.

After you have installed LVM Storage, you must create an LVMCluster custom resource (CR).

Example LVMCluster CR YAML file

apiVersion: lvm.topolvm.io/v1alpha1
kind: LVMCluster
metadata:
  name: my-lvmcluster
spec:
  tolerations:
  - effect: NoSchedule
    key: xyz
    operator: Equal
    value: "true"
  storage:
    deviceClasses:
    - name: vg1
      fstype: ext4 1
      default: true
      nodeSelector: 2
        nodeSelectorTerms:
        - matchExpressions:
          - key: mykey
            operator: In
            values:
            - ssd
      deviceSelector: 3
        paths:
        - /dev/disk/by-path/pci-0000:87:00.0-nvme-1
        - /dev/disk/by-path/pci-0000:88:00.0-nvme-1
        optionalPaths:
        - /dev/disk/by-path/pci-0000:89:00.0-nvme-1
        - /dev/disk/by-path/pci-0000:90:00.0-nvme-1
      thinPoolConfig:
        name: thin-pool-1
        sizePercent: 90 4
        overprovisionRatio: 10

1 2 3 4
Optional field

5.4.2.1. Explanation of fields in the LVMCluster CR

The LVMCluster CR fields are described in the following table:

Table 5.5. LVMCluster CR fields
Field | Type | Description

spec.storage.deviceClasses

array

Contains the configuration to assign the local storage devices to the LVM volume groups.

LVM Storage creates a storage class and volume snapshot class for each device class that you create.

If you add or remove a device class, the change takes effect in the cluster only after the topolvm-node pod is deleted and re-created.

deviceClasses.name

string

Specify a name for the LVM volume group (VG).

deviceClasses.fstype

string

Set this field to ext4 or xfs. By default, this field is set to xfs.

deviceClasses.default

boolean

Set this field to true to indicate that a device class is the default. Otherwise, you can set it to false. You can only configure a single default device class.

deviceClasses.nodeSelector

object

Contains the configuration to choose the nodes on which you want to create the LVM volume group. If this field is empty, all nodes without no-schedule taints are considered.

On the control-plane node, LVM Storage detects and uses the additional worker nodes when the new nodes become active in the cluster.

nodeSelector.nodeSelectorTerms

array

Configure the requirements that are used to select the node.

deviceClasses.deviceSelector

object

Contains the configuration to specify the paths to the devices that you want to add to the LVM volume group.

For more information, see "About adding devices to a volume group".

deviceSelector.paths

array

Specify the device paths.

If the device path specified in this field does not exist, the LVMCluster CR moves to the Failed state.

deviceSelector.optionalPaths

array

Specify the optional device paths.

If the device path specified in this field does not exist, LVM Storage ignores the device without causing an error.

deviceClasses.thinPoolConfig

object

Contains the configuration to create a thin pool in the LVM volume group.

thinPoolConfig.name

string

Specify a name for the thin pool.

thinPoolConfig.sizePercent

integer

Specify the percentage of space in the LVM volume group for creating the thin pool.

By default, this field is set to 90. The minimum value that you can set is 10, and the maximum value is 90.

thinPoolConfig.overprovisionRatio

integer

Specify a factor by which you can provision additional storage based on the available storage in the thin pool.

For example, if this field is set to 10, you can provision up to 10 times the amount of available storage in the thin pool.

To disable over-provisioning, set this field to 1.
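For example, with hypothetical numbers: if the LVM volume group provides 100 GiB, setting sizePercent to 90 creates a 90 GiB thin pool, and setting overprovisionRatio to 10 allows you to provision up to 90 GiB x 10 = 900 GiB of thin-provisioned volumes from that pool.

thinPoolConfig:
  name: thin-pool-1
  sizePercent: 90        # thin pool = 90% of a 100 GiB volume group = 90 GiB
  overprovisionRatio: 10 # provisionable capacity = 90 GiB x 10 = 900 GiB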

5.4.2.2. About adding devices to a volume group

The deviceSelector field in the LVMCluster custom resource (CR) contains the configuration to specify the paths to the devices that you want to add to the LVM volume group.

You can specify the device paths in the deviceSelector.paths field, the deviceSelector.optionalPaths field, or both. If you do not specify device paths in either the deviceSelector.paths field or the deviceSelector.optionalPaths field, LVM Storage adds the unused devices to the LVM volume group.

Warning

It is recommended to avoid referencing disks using symbolic naming, such as /dev/sdX, as these names may change across reboots within RHCOS. Instead, you must use stable naming schemes, such as /dev/disk/by-path/ or /dev/disk/by-id/, to ensure consistent disk identification.

If you switch to stable naming schemes, you might need to adjust existing automation workflows in cases where monitoring collects information about the install device for each node.

For more information, see the RHEL documentation.

If you do not add the deviceSelector field in the LVMCluster CR, LVM Storage automatically adds the new devices when the devices are available.

LVM Storage adds the devices to the LVM volume group only if the device path exists.

Important

After a device is added to the LVM volume group, it cannot be removed.
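For example, a deviceSelector that follows the stable naming recommendation might look like the following sketch. The /dev/disk/by-id/ paths are placeholders; use the identifiers that exist on your nodes:

deviceSelector:
  paths:
  - /dev/disk/by-id/nvme-SAMPLE_SERIAL_1   # placeholder; the CR moves to the Failed state if this path does not exist
  optionalPaths:
  - /dev/disk/by-id/nvme-SAMPLE_SERIAL_2   # placeholder; ignored if this path does not exist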

5.4.3. Ways to create an LVMCluster custom resource

You can create an LVMCluster custom resource (CR) by using the OpenShift CLI (oc) or the OpenShift Container Platform web console. If you have installed LVM Storage by using Red Hat Advanced Cluster Management (RHACM), you can also create an LVMCluster CR by using RHACM.

Upon creating the LVMCluster CR, LVM Storage creates the following system-managed CRs:

  • A storageClass and volumeSnapshotClass for each device class.

    Note

    LVM Storage configures the name of the storage class and volume snapshot class in the format lvms-<device_class_name>, where <device_class_name> is the value of the deviceClasses.name field in the LVMCluster CR. For example, if the deviceClasses.name field is set to vg1, the name of the storage class and volume snapshot class is lvms-vg1.

  • LVMVolumeGroup: This CR is a specific type of persistent volume (PV) that is backed by an LVM volume group. It tracks the individual volume groups across multiple nodes.
  • LVMVolumeGroupNodeStatus: This CR tracks the status of the volume groups on a node.
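After the LVMCluster CR is created, you can list these system-managed resources. For example, assuming LVM Storage is installed in the openshift-storage namespace:

$ oc get lvmvolumegroups.lvm.topolvm.io,lvmvolumegroupnodestatuses.lvm.topolvm.io -n openshift-storage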

5.4.3.1. Creating an LVMCluster CR by using the CLI

You can create an LVMCluster custom resource (CR) on a worker node using the OpenShift CLI (oc).

Important

You can only create a single instance of the LVMCluster custom resource (CR) on an OpenShift Container Platform cluster.

Prerequisites

  • You have installed the OpenShift CLI (oc).
  • You have logged in to OpenShift Container Platform as a user with cluster-admin privileges.
  • You have installed LVM Storage.
  • You have installed a worker node in the cluster.
  • You read the "About the LVMCluster custom resource" section.

Procedure

  1. Create an LVMCluster custom resource (CR) YAML file:

    Example LVMCluster CR YAML file

    apiVersion: lvm.topolvm.io/v1alpha1
    kind: LVMCluster
    metadata:
      name: my-lvmcluster
      namespace: openshift-storage
    spec:
    # ...
      storage:
        deviceClasses: 1
    # ...
          nodeSelector: 2
    # ...
          deviceSelector: 3
    # ...
          thinPoolConfig: 4
    # ...

    1
    Contains the configuration to assign the local storage devices to the LVM volume groups.
    2
    Contains the configuration to choose the nodes on which you want to create the LVM volume group. If this field is empty, all nodes without no-schedule taints are considered.
    3
    Contains the configuration to specify the paths to the devices that you want to add to the LVM volume group.
    4
    Contains the configuration to create a thin pool in the LVM volume group.
  2. Create the LVMCluster CR by running the following command:

    $ oc create -f <file_name>

    Example output

    lvmcluster/lvmcluster created

Verification

  1. Check that the LVMCluster CR is in the Ready state:

    $ oc get lvmclusters.lvm.topolvm.io -o jsonpath='{.items[*].status.state}' -n <namespace>

    Example output

    {"deviceClassStatuses": 1
    [
      {
        "name": "vg1",
        "nodeStatus": [ 2
            {
                "devices": [ 3
                    "/dev/nvme0n1",
                    "/dev/nvme1n1",
                    "/dev/nvme2n1"
                ],
                "node": "kube-node", 4
                "status": "Ready" 5
            }
        ]
      }
    ]
    "state":"Ready"} 6

    1
    The status of the device class.
    2
    The status of the LVM volume group on each node.
    3
    The list of devices used to create the LVM volume group.
    4
    The node on which the device class is created.
    5
    The status of the LVM volume group on the node.
    6
    The status of the LVMCluster CR.
    Note

    If the LVMCluster CR is in the Failed state, you can view the reason for failure in the status field.

    Example status field with the reason for failure:

    status:
      deviceClassStatuses:
        - name: vg1
          nodeStatus:
            - node: my-node-1.example.com
              reason: no available devices found for volume group
              status: Failed
      state: Failed
  2. Optional: To view the storage classes created by LVM Storage for each device class, run the following command:

    $ oc get storageclass

    Example output

    NAME          PROVISIONER          RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
    lvms-vg1      topolvm.io           Delete          WaitForFirstConsumer   true                   31m

  3. Optional: To view the volume snapshot classes created by LVM Storage for each device class, run the following command:

    $ oc get volumesnapshotclass

    Example output

    NAME          DRIVER               DELETIONPOLICY   AGE
    lvms-vg1      topolvm.io           Delete           24h

5.4.3.2. Creating an LVMCluster CR by using the web console

You can create an LVMCluster CR on a worker node using the OpenShift Container Platform web console.

Important

You can only create a single instance of the LVMCluster custom resource (CR) on an OpenShift Container Platform cluster.

Prerequisites

  • You have access to the OpenShift Container Platform cluster with cluster-admin privileges.
  • You have installed LVM Storage.
  • You have installed a worker node in the cluster.
  • You read the "About the LVMCluster custom resource" section.

Procedure

  1. Log in to the OpenShift Container Platform web console.
  2. Click Operators → Installed Operators.
  3. In the openshift-storage namespace, click LVM Storage.
  4. Click Create LVMCluster and select either Form view or YAML view.
  5. Configure the required LVMCluster CR parameters.
  6. Click Create.
  7. Optional: If you want to edit the LVMCluster CR, perform the following actions:

    1. Click the LVMCluster tab.
    2. From the Actions menu, select Edit LVMCluster.
    3. Click YAML and edit the required LVMCluster CR parameters.
    4. Click Save.

Verification

  1. On the LVMCluster page, check that the LVMCluster CR is in the Ready state.
  2. Optional: To view the available storage classes created by LVM Storage for each device class, click Storage → StorageClasses.
  3. Optional: To view the available volume snapshot classes created by LVM Storage for each device class, click Storage → VolumeSnapshotClasses.

5.4.3.3. Creating an LVMCluster CR by using RHACM

After you have installed Logical Volume Manager (LVM) Storage by using RHACM, you must create an LVMCluster custom resource (CR).

Prerequisites

  • You have installed LVM Storage by using RHACM.
  • You have access to the RHACM cluster using an account with cluster-admin permissions.

Procedure

  1. Log in to the RHACM CLI using your OpenShift Container Platform credentials.
  2. Create a ConfigurationPolicy CR YAML file with the configuration to create an LVMCluster CR.

    Example ConfigurationPolicy CR YAML file to create an LVMCluster CR

    apiVersion: policy.open-cluster-management.io/v1
    kind: ConfigurationPolicy
    metadata:
      name: lvms
      namespace: openshift-storage
    spec:
      object-templates:
      - complianceType: musthave
        objectDefinition:
          apiVersion: lvm.topolvm.io/v1alpha1
          kind: LVMCluster
          metadata:
            name: my-lvmcluster
            namespace: openshift-storage
          spec:
            storage:
              deviceClasses: 1
    # ...
                deviceSelector: 2
    # ...
                thinPoolConfig: 3
    # ...
                nodeSelector: 4
    # ...
      remediationAction: enforce
      severity: low

    1
    Contains the configuration to assign the local storage devices to the LVM volume groups.
    2
    Contains the configuration to specify the paths to the devices that you want to add to the LVM volume group.
    3
    Contains the configuration to create a thin pool in the LVM volume group.
    4
    Contains the configuration to choose the nodes on which you want to create the LVM volume groups. If this field is empty, then all nodes without no-schedule taints are considered.
  3. Create the ConfigurationPolicy CR by running the following command:

    $ oc create -f <file_name> -n <cluster_namespace> 1

    1
    Namespace of the OpenShift Container Platform cluster on which LVM Storage is installed.

5.4.4. Ways to delete an LVMCluster custom resource

You can delete an LVMCluster custom resource (CR) by using the OpenShift CLI (oc) or the OpenShift Container Platform web console. If you have installed LVM Storage by using Red Hat Advanced Cluster Management (RHACM), you can also delete an LVMCluster CR by using RHACM.

Upon deleting the LVMCluster CR, LVM Storage deletes the following CRs:

  • storageClass
  • volumeSnapshotClass
  • LVMVolumeGroup
  • LVMVolumeGroupNodeStatus

5.4.4.1. Deleting an LVMCluster CR by using the CLI

You can delete the LVMCluster custom resource (CR) using the OpenShift CLI (oc).

Prerequisites

  • You have access to OpenShift Container Platform as a user with cluster-admin permissions.
  • You have deleted the persistent volume claims (PVCs), volume snapshots, and volume clones provisioned by LVM Storage. You have also deleted the applications that are using these resources.

Procedure

  1. Log in to the OpenShift CLI (oc).
  2. Delete the LVMCluster CR by running the following command:

    $ oc delete lvmcluster <lvm_cluster_name> -n openshift-storage

Verification

  • To verify that the LVMCluster CR has been deleted, run the following command:

    $ oc get lvmcluster -n <namespace>

    Example output

    No resources found in openshift-storage namespace.

5.4.4.2. Deleting an LVMCluster CR by using the web console

You can delete the LVMCluster custom resource (CR) using the OpenShift Container Platform web console.

Prerequisites

  • You have access to OpenShift Container Platform as a user with cluster-admin permissions.
  • You have deleted the persistent volume claims (PVCs), volume snapshots, and volume clones provisioned by LVM Storage. You have also deleted the applications that are using these resources.

Procedure

  1. Log in to the OpenShift Container Platform web console.
  2. Click Operators → Installed Operators to view all the installed Operators.
  3. Click LVM Storage in the openshift-storage namespace.
  4. Click the LVMCluster tab.
  5. From the Actions menu, select Delete LVMCluster.
  6. Click Delete.

Verification

  • On the LVMCluster page, check that the LVMCluster CR has been deleted.

5.4.4.3. Deleting an LVMCluster CR by using RHACM

If you have installed Logical Volume Manager (LVM) Storage by using Red Hat Advanced Cluster Management (RHACM), you can delete an LVMCluster custom resource (CR) by using RHACM.

Prerequisites

  • You have access to the RHACM cluster as a user with cluster-admin permissions.
  • You have deleted the following resources provisioned by LVM Storage:

    • Persistent volume claims (PVCs)
    • Volume snapshots
    • Volume clones

      You have also deleted any applications that are using these resources.

Procedure

  1. Log in to the RHACM CLI using your OpenShift Container Platform credentials.
  2. Delete the ConfigurationPolicy CR for the LVMCluster CR by running the following command:

    $ oc delete -f <file_name> -n <cluster_namespace> 1

    1
    Namespace of the OpenShift Container Platform cluster on which LVM Storage is installed.
  3. Create a Policy CR YAML file to delete the LVMCluster CR.

    Example Policy CR to delete the LVMCluster CR

    apiVersion: policy.open-cluster-management.io/v1
    kind: Policy
    metadata:
      name: policy-lvmcluster-delete
      annotations:
        policy.open-cluster-management.io/standards: NIST SP 800-53
        policy.open-cluster-management.io/categories: CM Configuration Management
        policy.open-cluster-management.io/controls: CM-2 Baseline Configuration
    spec:
      remediationAction: enforce
      disabled: false
      policy-templates:
        - objectDefinition:
            apiVersion: policy.open-cluster-management.io/v1
            kind: ConfigurationPolicy
            metadata:
              name: policy-lvmcluster-removal
            spec:
              remediationAction: enforce 1
              severity: low
              object-templates:
                - complianceType: mustnothave
                  objectDefinition:
                    kind: LVMCluster
                    apiVersion: lvm.topolvm.io/v1alpha1
                    metadata:
                      name: my-lvmcluster
                      namespace: openshift-storage 2
    ---
    apiVersion: policy.open-cluster-management.io/v1
    kind: PlacementBinding
    metadata:
      name: binding-policy-lvmcluster-delete
    placementRef:
      apiGroup: apps.open-cluster-management.io
      kind: PlacementRule
      name: placement-policy-lvmcluster-delete
    subjects:
      - apiGroup: policy.open-cluster-management.io
        kind: Policy
        name: policy-lvmcluster-delete
    ---
    apiVersion: apps.open-cluster-management.io/v1
    kind: PlacementRule
    metadata:
      name: placement-policy-lvmcluster-delete
    spec:
      clusterConditions:
        - status: "True"
          type: ManagedClusterConditionAvailable
      clusterSelector: 3
        matchExpressions:
          - key: mykey
            operator: In
            values:
              - myvalue

    1
    The spec.remediationAction value that is set in the policy template is overridden by the spec.remediationAction value that is set at the Policy level.
    2
    This namespace field must have the openshift-storage value.
    3
    Configure the requirements to select the clusters. LVM Storage is uninstalled on the clusters that match the selection criteria.
  4. Create the Policy CR by running the following command:

    $ oc create -f <file_name> -n <namespace>
  5. Create a Policy CR YAML file to check if the LVMCluster CR has been deleted.

    Example Policy CR to check if the LVMCluster CR has been deleted

    apiVersion: policy.open-cluster-management.io/v1
    kind: Policy
    metadata:
      name: policy-lvmcluster-inform
      annotations:
        policy.open-cluster-management.io/standards: NIST SP 800-53
        policy.open-cluster-management.io/categories: CM Configuration Management
        policy.open-cluster-management.io/controls: CM-2 Baseline Configuration
    spec:
      remediationAction: inform
      disabled: false
      policy-templates:
        - objectDefinition:
            apiVersion: policy.open-cluster-management.io/v1
            kind: ConfigurationPolicy
            metadata:
              name: policy-lvmcluster-removal-inform
            spec:
              remediationAction: inform 1
              severity: low
              object-templates:
                - complianceType: mustnothave
                  objectDefinition:
                    kind: LVMCluster
                    apiVersion: lvm.topolvm.io/v1alpha1
                    metadata:
                      name: my-lvmcluster
                      namespace: openshift-storage 2
    ---
    apiVersion: policy.open-cluster-management.io/v1
    kind: PlacementBinding
    metadata:
      name: binding-policy-lvmcluster-check
    placementRef:
      apiGroup: apps.open-cluster-management.io
      kind: PlacementRule
      name: placement-policy-lvmcluster-check
    subjects:
      - apiGroup: policy.open-cluster-management.io
        kind: Policy
        name: policy-lvmcluster-inform
    ---
    apiVersion: apps.open-cluster-management.io/v1
    kind: PlacementRule
    metadata:
      name: placement-policy-lvmcluster-check
    spec:
      clusterConditions:
        - status: "True"
          type: ManagedClusterConditionAvailable
      clusterSelector:
        matchExpressions:
          - key: mykey
            operator: In
            values:
              - myvalue

    1
    The spec.remediationAction value that is set in the policy template is overridden by the spec.remediationAction value that is set at the Policy level.
    2
    The namespace field must have the openshift-storage value.
  6. Create the Policy CR by running the following command:

    $ oc create -f <file_name> -n <namespace>

Verification

  • Check the status of the Policy CRs by running the following command:

    $ oc get policy -n <namespace>

    Example output

    NAME                       REMEDIATION ACTION   COMPLIANCE STATE   AGE
    policy-lvmcluster-delete   enforce              Compliant          15m
    policy-lvmcluster-inform   inform               Compliant          15m

    Important

    The Policy CRs must be in Compliant state.

5.4.5. Provisioning storage

After you have created the LVM volume groups using the LVMCluster custom resource (CR), you can provision the storage by creating persistent volume claims (PVCs).

To create a PVC, you must create a PersistentVolumeClaim object.

Prerequisites

  • You have created an LVMCluster CR.

Procedure

  1. Log in to the OpenShift CLI (oc).
  2. Create a PersistentVolumeClaim object similar to the following:

    Example PersistentVolumeClaim object

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: lvm-block-1 1
      namespace: default
    spec:
      accessModes:
        - ReadWriteOnce
      volumeMode: Block 2
      resources:
        requests:
          storage: 10Gi 3
      storageClassName: lvms-vg1 4

    1
    Specify a name for the PVC.
    2
    To create a block PVC, set this field to Block. To create a file PVC, set this field to Filesystem.
    3
    Specify the storage size. Logical Volume Manager (LVM) Storage provisions PVCs in units of 1 GiB (gibibytes). The requested storage is rounded up to the nearest GiB. The total storage size you can provision is limited by the size of the LVM thin pool and the overprovisioning factor.
    4
    The value of the storageClassName field must be in the format lvms-<device_class_name> where <device_class_name> is the value of the deviceClasses.name field in the LVMCluster CR. For example, if the deviceClasses.name field is set to vg1, you must set the storageClassName field to lvms-vg1.
    Note

    The volumeBindingMode field of the storage class is set to WaitForFirstConsumer.

  3. Create the PVC by running the following command:

    $ oc create -f <file_name> -n <application_namespace>
    Note

    The created PVCs remain in Pending state until you deploy the workloads that use them.
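    For example, the following is a minimal sketch of a workload that consumes the example block PVC so that it can bind. The pod name and container image are placeholders:

    apiVersion: v1
    kind: Pod
    metadata:
      name: lvm-block-1-consumer   # placeholder name
      namespace: default
    spec:
      containers:
      - name: app
        image: registry.access.redhat.com/ubi9/ubi-minimal   # placeholder image
        command: ["sleep", "infinity"]
        volumeDevices:             # volumeDevices is used because the PVC sets volumeMode: Block
        - name: data
          devicePath: /dev/xvda
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: lvm-block-1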

Verification

  • To verify that the PVC is created, run the following command:

    $ oc get pvc -n <namespace>

    Example output

    NAME          STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    lvm-block-1   Bound    pvc-e90169a8-fd71-4eea-93b8-817155f60e47   1Gi        RWO            lvms-vg1       5s

5.4.6. Ways to scale up the storage of a single-node OpenShift cluster

You can scale up the storage of a single-node OpenShift cluster by adding new devices to the existing node.

To add a new device to the existing node on a single-node OpenShift cluster, you must add the path to the new device in the deviceSelector field of the LVMCluster custom resource (CR).

Important

You can add the deviceSelector field in the LVMCluster CR only while creating the LVMCluster CR. If you have not added the deviceSelector field while creating the LVMCluster CR, you must delete the LVMCluster CR and create a new LVMCluster CR containing the deviceSelector field.

If you do not add the deviceSelector field in the LVMCluster CR, LVM Storage automatically adds the new devices when the devices are available.

You can scale up the storage capacity of the existing node on a single-node OpenShift cluster by using the OpenShift CLI (oc).

Prerequisites

  • You have additional unused devices on the single-node OpenShift cluster to be used by Logical Volume Manager (LVM) Storage.
  • You have installed the OpenShift CLI (oc).
  • You have created an LVMCluster custom resource (CR).

Procedure

  1. Edit the LVMCluster CR by running the following command:

    $ oc edit <lvmcluster_file_name> -n <namespace>
  2. Add the path to the new device in the deviceSelector field:

    Example LVMCluster CR

    apiVersion: lvm.topolvm.io/v1alpha1
    kind: LVMCluster
    metadata:
      name: my-lvmcluster
    spec:
      storage:
        deviceClasses:
    # ...
          deviceSelector: 1
            paths: 2
            - /dev/disk/by-path/pci-0000:87:00.0-nvme-1
            - /dev/disk/by-path/pci-0000:88:00.0-nvme-1
            optionalPaths: 3
            - /dev/disk/by-path/pci-0000:89:00.0-nvme-1
            - /dev/disk/by-path/pci-0000:90:00.0-nvme-1
    # ...

    1
    Contains the configuration to specify the paths to the devices that you want to add to the Logical Volume Manager (LVM) volume group. You can specify the device paths in the paths field, the optionalPaths field, or both. If you do not specify the device paths in both paths and optionalPaths, LVM Storage adds the supported unused devices to the LVM volume group. LVM Storage adds the devices to the LVM volume group only if the device path exists.
    2
    Specify the device paths. If the device path specified in this field does not exist, the LVMCluster CR moves to the Failed state.
    3
    Specify the optional device paths. If the device path specified in this field does not exist, LVM Storage ignores the device without causing an error.
    Important

    After a device is added to the LVM volume group, it cannot be removed.

  3. Save the LVMCluster CR.

You can scale up the storage capacity of the existing node on a single-node OpenShift cluster by using the OpenShift Container Platform web console.

Prerequisites

  • You have additional unused devices on the single-node OpenShift cluster to be used by Logical Volume Manager (LVM) Storage.
  • You have created an LVMCluster custom resource (CR).

Procedure

  1. Log in to the OpenShift Container Platform web console.
  2. Click Operators → Installed Operators.
  3. Click LVM Storage in the openshift-storage namespace.
  4. Click the LVMCluster tab to view the LVMCluster CR created on the cluster.
  5. From the Actions menu, select Edit LVMCluster.
  6. Click the YAML tab.
  7. Edit the LVMCluster CR to add the new device path in the deviceSelector field:

    Example LVMCluster CR

    apiVersion: lvm.topolvm.io/v1alpha1
    kind: LVMCluster
    metadata:
      name: my-lvmcluster
    spec:
      storage:
        deviceClasses:
    # ...
          deviceSelector: 1
            paths: 2
            - /dev/disk/by-path/pci-0000:87:00.0-nvme-1
            - /dev/disk/by-path/pci-0000:88:00.0-nvme-1
            optionalPaths: 3
            - /dev/disk/by-path/pci-0000:89:00.0-nvme-1
            - /dev/disk/by-path/pci-0000:90:00.0-nvme-1
    # ...

    1
    Contains the configuration to specify the paths to the devices that you want to add to the Logical Volume Manager (LVM) volume group. You can specify the device paths in the paths field, the optionalPaths field, or both. If you do not specify the device paths in both paths and optionalPaths, LVM Storage adds the supported unused devices to the LVM volume group. LVM Storage adds the devices to the LVM volume group only if the device path exists.
    2
    Specify the device paths. If the device path specified in this field does not exist, the LVMCluster CR moves to the Failed state.
    3
    Specify the optional device paths. If the device path specified in this field does not exist, LVM Storage ignores the device without causing an error.
    Important

    After a device is added to the LVM volume group, it cannot be removed.

  8. Click Save.

You can scale up the storage capacity of the existing node on single-node OpenShift clusters by using RHACM.

Prerequisites

  • You have access to the RHACM cluster using an account with cluster-admin privileges.
  • You have created an LVMCluster custom resource (CR) by using RHACM.
  • You have additional unused devices on each single-node OpenShift cluster to be used by Logical Volume Manager (LVM) Storage.

Procedure

  1. Log in to the RHACM CLI using your OpenShift Container Platform credentials.
  2. Edit the LVMCluster CR that you created using RHACM by running the following command:

    $ oc edit -f <file_name> -n <namespace> 1

    1
    Replace <file_name> with the name of the LVMCluster CR.
  3. In the LVMCluster CR, add the path to the new device in the deviceSelector field.

    Example LVMCluster CR:

    apiVersion: policy.open-cluster-management.io/v1
    kind: ConfigurationPolicy
    metadata:
      name: lvms
    spec:
      object-templates:
         - complianceType: musthave
           objectDefinition:
             apiVersion: lvm.topolvm.io/v1alpha1
             kind: LVMCluster
             metadata:
               name: my-lvmcluster
               namespace: openshift-storage
             spec:
               storage:
                 deviceClasses:
    # ...
                   deviceSelector: 1
                     paths: 2
                     - /dev/disk/by-path/pci-0000:87:00.0-nvme-1
                     optionalPaths: 3
                     - /dev/disk/by-path/pci-0000:89:00.0-nvme-1
    # ...

    1
    Contains the configuration to specify the paths to the devices that you want to add to the Logical Volume Manager (LVM) volume group. You can specify the device paths in the paths field, the optionalPaths field, or both. If you do not specify the device paths in both paths and optionalPaths, LVM Storage adds the unused devices to the LVM volume group. LVM Storage adds the devices to the LVM volume group only if the device path exists.
    2
    Specify the device paths. If the device path specified in this field does not exist, the LVMCluster CR moves to the Failed state.
    3
    Specify the optional device paths. If the device path specified in this field does not exist, LVM Storage ignores the device without causing an error.
    Important

    After a device is added to the LVM volume group, it cannot be removed.

  4. Save the LVMCluster CR.

5.4.7. Expanding a persistent volume claim

After scaling up the storage of a cluster, you can expand the existing persistent volume claims (PVCs).

To expand a PVC, you must update the requests.storage field in the PVC.

Prerequisites

  • Dynamic provisioning is used.
  • The StorageClass object associated with the PVC has the allowVolumeExpansion field set to true.

Procedure

  1. Log in to the OpenShift CLI (oc).
  2. Update the value of the spec.resources.requests.storage field to a value that is greater than the current value by running the following command:

    $ oc patch pvc <pvc_name> -n <application_namespace> -p \ 1
        '{ "spec": { "resources": { "requests": { "storage": "<desired_size>" }}}}' --type=merge 2
    1
    Replace <pvc_name> with the name of the PVC that you want to expand.
    2
    Replace <desired_size> with the new size to expand the PVC.
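    For example, to expand the example PVC lvm-block-1 in the default namespace to a hypothetical size of 15Gi:

    $ oc patch pvc lvm-block-1 -n default -p '{ "spec": { "resources": { "requests": { "storage": "15Gi" }}}}' --type=merge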

Verification

  • To verify that resizing is completed, run the following command:

    $ oc get pvc <pvc_name> -n <application_namespace> -o=jsonpath={.status.capacity.storage}

    Logical Volume Manager (LVM) Storage adds the Resizing condition to the PVC during expansion. It deletes the Resizing condition after the PVC expansion.

5.4.8. Deleting a persistent volume claim

You can delete a persistent volume claim (PVC) by using the OpenShift CLI (oc).

Prerequisites

  • You have access to OpenShift Container Platform as a user with cluster-admin permissions.

Procedure

  1. Log in to the OpenShift CLI (oc).
  2. Delete the PVC by running the following command:

    $ oc delete pvc <pvc_name> -n <namespace>

Verification

  • To verify that the PVC is deleted, run the following command:

    $ oc get pvc -n <namespace>

    The deleted PVC must not be present in the output of this command.

5.4.9. About volume snapshots

You can create snapshots of persistent volume claims (PVCs) that are provisioned by LVM Storage.

You can perform the following actions using the volume snapshots:

  • Back up your application data.

    Important

    Volume snapshots are located on the same devices as the original data. To use the volume snapshots as backups, you must move the snapshots to a secure location. You can use OpenShift API for Data Protection (OADP) backup and restore solutions. For information on OADP, see "OADP features".

  • Revert to a state at which the volume snapshot was taken.
Note

You can also create volume snapshots of volume clones.

5.4.9.1. Creating volume snapshots

You can create volume snapshots based on the available capacity of the thin pool and the over-provisioning limits. To create a volume snapshot, you must create a VolumeSnapshot object.

Prerequisites

  • You have access to OpenShift Container Platform as a user with cluster-admin permissions.
  • You ensured that the persistent volume claim (PVC) is in Bound state. This is required for a consistent snapshot.
  • You stopped all the I/O to the PVC.

Procedure

  1. Log in to the OpenShift CLI (oc).
  2. Create a VolumeSnapshot object:

    Example VolumeSnapshot object

    apiVersion: snapshot.storage.k8s.io/v1
    kind: VolumeSnapshot
    metadata:
      name: lvm-block-1-snap 1
    spec:
      source:
        persistentVolumeClaimName: lvm-block-1 2
      volumeSnapshotClassName: lvms-vg1 3

    1
    Specify a name for the volume snapshot.
    2
    Specify the name of the source PVC. LVM Storage creates a snapshot of this PVC.
    3
    Set this field to the name of a volume snapshot class.
    Note

    To get the list of available volume snapshot classes, run the following command:

    $ oc get volumesnapshotclass
  3. Create the volume snapshot in the namespace where you created the source PVC by running the following command:

    $ oc create -f <file_name> -n <namespace>

    LVM Storage creates a read-only copy of the PVC as a volume snapshot.

Verification

  • To verify that the volume snapshot is created, run the following command:

    $ oc get volumesnapshot -n <namespace>

    Example output

    NAME               READYTOUSE   SOURCEPVC     SOURCESNAPSHOTCONTENT   RESTORESIZE   SNAPSHOTCLASS   SNAPSHOTCONTENT                                    CREATIONTIME   AGE
    lvm-block-1-snap   true         lvms-test-1                           1Gi           lvms-vg1        snapcontent-af409f97-55fc-40cf-975f-71e44fa2ca91   19s            19s

    The value of the READYTOUSE field for the volume snapshot that you created must be true.

5.4.9.2. Restoring volume snapshots

To restore a volume snapshot, you must create a persistent volume claim (PVC) with the dataSource.name field set to the name of the volume snapshot.

The restored PVC is independent of the volume snapshot and the source PVC.

Prerequisites

  • You have access to OpenShift Container Platform as a user with cluster-admin permissions.
  • You have created a volume snapshot.

Procedure

  1. Log in to the OpenShift CLI (oc).
  2. Create a PersistentVolumeClaim object with the configuration to restore the volume snapshot:

    Example PersistentVolumeClaim object to restore a volume snapshot

    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: lvm-block-1-restore
    spec:
      accessModes:
      - ReadWriteOnce
      volumeMode: Block
      resources:
        requests:
          storage: 2Gi 1
      storageClassName: lvms-vg1 2
      dataSource:
        name: lvm-block-1-snap 3
        kind: VolumeSnapshot
        apiGroup: snapshot.storage.k8s.io

    1
    Specify the storage size of the PVC. The storage size of the requested PVC must be greater than or equal to the storage size of the volume snapshot that you want to restore. If a larger PVC is required, you can also resize the PVC after restoring the volume snapshot.
    2
    Set this field to the value of the storageClassName field in the source PVC of the volume snapshot that you want to restore.
    3
    Set this field to the name of the volume snapshot that you want to restore.
  3. Create the PVC in the namespace where you created the volume snapshot by running the following command:

    $ oc create -f <file_name> -n <namespace>

Verification

  • To verify that the volume snapshot is restored, create a workload using the restored PVC and then run the following command:

    $ oc get pvc -n <namespace>

    Example output

    NAME                  STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    lvm-block-1-restore   Bound    pvc-e90169a8-fd71-4eea-93b8-817155f60e47   1Gi        RWO            lvms-vg1       5s

5.4.9.3. Deleting volume snapshots

You can delete the volume snapshots of the persistent volume claims (PVCs).

Important

When you delete a persistent volume claim (PVC), LVM Storage deletes only the PVC, but not the snapshots of the PVC.

Prerequisites

  • You have access to OpenShift Container Platform as a user with cluster-admin permissions.
  • You have ensured that the volume snapshot that you want to delete is not in use.

Procedure

  1. Log in to the OpenShift CLI (oc).
  2. Delete the volume snapshot by running the following command:

    $ oc delete volumesnapshot <volume_snapshot_name> -n <namespace>

Verification

  • To verify that the volume snapshot is deleted, run the following command:

    $ oc get volumesnapshot -n <namespace>

    The deleted volume snapshot must not be present in the output of this command.

5.4.10. About volume clones

A volume clone is a duplicate of an existing persistent volume claim (PVC). You can create a volume clone to make a point-in-time copy of the data.

5.4.10.1. Creating volume clones

To create a clone of a persistent volume claim (PVC), you must create a PersistentVolumeClaim object in the namespace where you created the source PVC.

Important

The cloned PVC has write access.

Prerequisites

  • You ensured that the source PVC is in Bound state. This is required for a consistent clone.

Procedure

  1. Log in to the OpenShift CLI (oc).
  2. Create a PersistentVolumeClaim object:

    Example PersistentVolumeClaim object to create a volume clone

    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: lvm-pvc-clone
    spec:
      accessModes:
      - ReadWriteOnce
      storageClassName: lvms-vg1 1
      volumeMode: Filesystem 2
      dataSource:
        kind: PersistentVolumeClaim
        name: lvm-pvc 3
      resources:
        requests:
          storage: 1Gi 4

    1
    Set this field to the value of the storageClassName field in the source PVC.
    2
    Set this field to the value of the volumeMode field in the source PVC.
    3
    Specify the name of the source PVC.
    4
    Specify the storage size for the cloned PVC. The storage size of the cloned PVC must be greater than or equal to the storage size of the source PVC.
  3. Create the PVC in the namespace where you created the source PVC by running the following command:

    $ oc create -f <file_name> -n <namespace>

Verification

  • To verify that the volume clone is created, create a workload using the cloned PVC and then run the following command:

    $ oc get pvc -n <namespace>

    Example output

    NAME                STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    lvm-block-1-clone   Bound    pvc-e90169a8-fd71-4eea-93b8-817155f60e47   1Gi        RWO            lvms-vg1       5s

5.4.10.2. Deleting volume clones

You can delete volume clones.

Important

When you delete a persistent volume claim (PVC), LVM Storage deletes only the source persistent volume claim (PVC) but not the clones of the PVC.

Prerequisites

  • You have access to OpenShift Container Platform as a user with cluster-admin permissions.

Procedure

  1. Log in to the OpenShift CLI (oc).
  2. Delete the cloned PVC by running the following command:

    $ oc delete pvc <clone_pvc_name> -n <namespace>

Verification

  • To verify that the volume clone is deleted, run the following command:

    $ oc get pvc -n <namespace>

    The deleted volume clone must not be present in the output of this command.

5.4.11. Updating LVM Storage on a single-node OpenShift cluster

You can update LVM Storage to ensure compatibility with the single-node OpenShift version.

Prerequisites

  • You have updated your single-node OpenShift cluster.
  • You have installed a previous version of LVM Storage.
  • You have installed the OpenShift CLI (oc).
  • You have access to the cluster using an account with cluster-admin permissions.

Procedure

  1. Log in to the OpenShift CLI (oc).
  2. Update the Subscription custom resource (CR) that you created while installing LVM Storage by running the following command:

    $ oc patch subscription lvms-operator -n openshift-storage --type merge --patch '{"spec":{"channel":"<update_channel>"}}' 1

    1
    Replace <update_channel> with the version of LVM Storage that you want to install. For example, stable-4.14.
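    For example, to switch to the stable-4.14 channel mentioned above:

    $ oc patch subscription lvms-operator -n openshift-storage --type merge --patch '{"spec":{"channel":"stable-4.14"}}'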
  3. View the update events to check that the installation is complete by running the following command:

    $ oc get events -n openshift-storage

    Example output

    ...
    8m13s       Normal    RequirementsUnknown   clusterserviceversion/lvms-operator.v4.14   requirements not yet checked
    8m11s       Normal    RequirementsNotMet    clusterserviceversion/lvms-operator.v4.14   one or more requirements couldn't be found
    7m50s       Normal    AllRequirementsMet    clusterserviceversion/lvms-operator.v4.14   all requirements found, attempting install
    7m50s       Normal    InstallSucceeded      clusterserviceversion/lvms-operator.v4.14   waiting for install components to report healthy
    7m49s       Normal    InstallWaiting        clusterserviceversion/lvms-operator.v4.14   installing: waiting for deployment lvms-operator to become ready: deployment "lvms-operator" waiting for 1 outdated replica(s) to be terminated
    7m39s       Normal    InstallSucceeded      clusterserviceversion/lvms-operator.v4.14   install strategy completed with no errors
    ...

Verification

  • Verify the LVM Storage version by running the following command:

    $ oc get subscription lvms-operator -n openshift-storage -o jsonpath='{.status.installedCSV}'

    Example output

    lvms-operator.v4.14

5.4.12. Monitoring LVM Storage

To enable cluster monitoring, you must add the following label in the namespace where you have installed LVM Storage:

openshift.io/cluster-monitoring=true
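For example, you can apply the label by using the OpenShift CLI. The following command assumes that LVM Storage is installed in the openshift-storage namespace:

$ oc label namespace openshift-storage openshift.io/cluster-monitoring=true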
Important

For information about enabling cluster monitoring in RHACM, see Observability and Adding custom metrics.

5.4.12.1. Metrics

You can monitor LVM Storage by viewing the metrics.

The following table describes the topolvm metrics:

Table 5.6. topolvm metrics
Metric | Description

topolvm_thinpool_data_percent

Indicates the percentage of data space used in the LVM thinpool.

topolvm_thinpool_metadata_percent

Indicates the percentage of metadata space used in the LVM thinpool.

topolvm_thinpool_size_bytes

Indicates the size of the LVM thin pool in bytes.

topolvm_volumegroup_available_bytes

Indicates the available space in the LVM volume group in bytes.

topolvm_volumegroup_size_bytes

Indicates the size of the LVM volume group in bytes.

topolvm_thinpool_overprovisioned_available

Indicates the available over-provisioned size of the LVM thin pool in bytes.

Note

Metrics are updated every 10 minutes or when there is a change, such as a new logical volume creation, in the thin pool.
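You can use these metrics in Prometheus queries or in custom alerting rules. For example, the following query expression is a minimal sketch that returns thin pools whose data usage exceeds 75%, which is similar to the thresholds used by the built-in alerts described in the next section:

topolvm_thinpool_data_percent > 75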

5.4.12.2. Alerts

When the thin pool and volume group reach maximum storage capacity, further operations fail. This can lead to data loss.

LVM Storage sends the following alerts when the usage of the thin pool and volume group exceeds a certain value:

Table 5.7. LVM Storage alerts
Alert | Description

VolumeGroupUsageAtThresholdNearFull

This alert is triggered when both the volume group and thin pool usage exceeds 75% on nodes. Data deletion or volume group expansion is required.

VolumeGroupUsageAtThresholdCritical

This alert is triggered when both the volume group and thin pool usage exceeds 85% on nodes. In this case, the volume group is critically full. Data deletion or volume group expansion is required.

ThinPoolDataUsageAtThresholdNearFull

This alert is triggered when the thin pool data usage in the volume group exceeds 75% on nodes. Data deletion or thin pool expansion is required.

ThinPoolDataUsageAtThresholdCritical

This alert is triggered when the thin pool data usage in the volume group exceeds 85% on nodes. Data deletion or thin pool expansion is required.

ThinPoolMetaDataUsageAtThresholdNearFull

This alert is triggered when the thin pool metadata usage in the volume group exceeds 75% on nodes. Data deletion or thin pool expansion is required.

ThinPoolMetaDataUsageAtThresholdCritical

This alert is triggered when the thin pool metadata usage in the volume group exceeds 85% on nodes. Data deletion or thin pool expansion is required.

5.4.13. Uninstalling LVM Storage by using the CLI

You can uninstall LVM Storage by using the OpenShift CLI (oc).

Prerequisites

  • You have logged in to oc as a user with cluster-admin permissions.
  • You deleted the persistent volume claims (PVCs), volume snapshots, and volume clones provisioned by LVM Storage. You have also deleted the applications that are using these resources.
  • You deleted the LVMCluster custom resource (CR).

Procedure

  1. Get the currentCSV value for the LVM Storage Operator by running the following command:

    $ oc get subscription.operators.coreos.com lvms-operator -n <namespace> -o yaml | grep currentCSV

    Example output

    currentCSV: lvms-operator.v4.15.3

  2. Delete the subscription by running the following command:

    $ oc delete subscription.operators.coreos.com lvms-operator -n <namespace>

    Example output

    subscription.operators.coreos.com "lvms-operator" deleted

  3. Delete the CSV for the LVM Storage Operator in the target namespace by running the following command:

    $ oc delete clusterserviceversion <currentCSV> -n <namespace> 1

    1
    Replace <currentCSV> with the currentCSV value for the LVM Storage Operator.

    Example output

    clusterserviceversion.operators.coreos.com "lvms-operator.v4.15.3" deleted

Verification

  • To verify that the LVM Storage Operator is uninstalled, run the following command:

    $ oc get csv -n <namespace>

    If the LVM Storage Operator was successfully uninstalled, it does not appear in the output of this command.

5.4.14. Uninstalling LVM Storage by using the web console

You can uninstall Logical Volume Manager (LVM) Storage using the OpenShift Container Platform web console.

Prerequisites

  • You have access to the single-node OpenShift cluster as a user with cluster-admin permissions.
  • You have deleted the persistent volume claims (PVCs), volume snapshots, and volume clones provisioned by LVM Storage. You have also deleted the applications that are using these resources.
  • You have deleted the LVMCluster custom resource (CR).

Procedure

  1. Log in to the OpenShift Container Platform web console.
  2. Click Operators → Installed Operators.
  3. Click LVM Storage in the openshift-storage namespace.
  4. Click the Details tab.
  5. From the Actions menu, click Uninstall Operator.
  6. Optional: When prompted, select the Delete all operand instances for this operator checkbox to delete the operand instances for LVM Storage.
  7. Click Uninstall.

5.4.15. Uninstalling LVM Storage installed using RHACM

To uninstall Logical Volume Manager (LVM) Storage that you installed using RHACM, you must delete the RHACM Policy custom resource (CR) that you created for installing and configuring LVM Storage.

Prerequisites

  • You have access to the RHACM cluster as a user with cluster-admin permissions.
  • You have deleted the following resources provisioned by LVM Storage:

    • Persistent volume claims (PVCs)
    • Volume snapshots
    • Volume clones

      You have also deleted any applications that are using these resources.

  • You have deleted the LVMCluster CR that you created using RHACM.

Procedure

  1. Log in to the OpenShift CLI (oc).
  2. Delete the RHACM Policy CR that you created for installing and configuring LVM Storage by running the following command:

    $ oc delete -f <policy> -n <namespace> 1

    1
    Replace <policy> with the name of the Policy CR YAML file.
  3. Create a Policy CR YAML file with the configuration to uninstall LVM Storage.

    Example Policy CR to uninstall LVM Storage

    apiVersion: apps.open-cluster-management.io/v1
    kind: PlacementRule
    metadata:
      name: placement-uninstall-lvms
    spec:
      clusterConditions:
      - status: "True"
        type: ManagedClusterConditionAvailable
      clusterSelector:
        matchExpressions:
        - key: mykey
          operator: In
          values:
          - myvalue
    ---
    apiVersion: policy.open-cluster-management.io/v1
    kind: PlacementBinding
    metadata:
      name: binding-uninstall-lvms
    placementRef:
      apiGroup: apps.open-cluster-management.io
      kind: PlacementRule
      name: placement-uninstall-lvms
    subjects:
    - apiGroup: policy.open-cluster-management.io
      kind: Policy
      name: uninstall-lvms
    ---
    apiVersion: policy.open-cluster-management.io/v1
    kind: Policy
    metadata:
      annotations:
        policy.open-cluster-management.io/categories: CM Configuration Management
        policy.open-cluster-management.io/controls: CM-2 Baseline Configuration
        policy.open-cluster-management.io/standards: NIST SP 800-53
      name: uninstall-lvms
    spec:
      disabled: false
      policy-templates:
      - objectDefinition:
          apiVersion: policy.open-cluster-management.io/v1
          kind: ConfigurationPolicy
          metadata:
            name: uninstall-lvms
          spec:
            object-templates:
            - complianceType: mustnothave
              objectDefinition:
                apiVersion: v1
                kind: Namespace
                metadata:
                  name: openshift-storage
            - complianceType: mustnothave
              objectDefinition:
                apiVersion: operators.coreos.com/v1
                kind: OperatorGroup
                metadata:
                  name: openshift-storage-operatorgroup
                  namespace: openshift-storage
                spec:
                  targetNamespaces:
                  - openshift-storage
            - complianceType: mustnothave
              objectDefinition:
                apiVersion: operators.coreos.com/v1alpha1
                kind: Subscription
                metadata:
                  name: lvms-operator
                  namespace: openshift-storage
            remediationAction: enforce
            severity: low
      - objectDefinition:
          apiVersion: policy.open-cluster-management.io/v1
          kind: ConfigurationPolicy
          metadata:
            name: policy-remove-lvms-crds
          spec:
            object-templates:
            - complianceType: mustnothave
              objectDefinition:
                apiVersion: apiextensions.k8s.io/v1
                kind: CustomResourceDefinition
                metadata:
                  name: logicalvolumes.topolvm.io
            - complianceType: mustnothave
              objectDefinition:
                apiVersion: apiextensions.k8s.io/v1
                kind: CustomResourceDefinition
                metadata:
                  name: lvmclusters.lvm.topolvm.io
            - complianceType: mustnothave
              objectDefinition:
                apiVersion: apiextensions.k8s.io/v1
                kind: CustomResourceDefinition
                metadata:
                  name: lvmvolumegroupnodestatuses.lvm.topolvm.io
            - complianceType: mustnothave
              objectDefinition:
                apiVersion: apiextensions.k8s.io/v1
                kind: CustomResourceDefinition
                metadata:
                  name: lvmvolumegroups.lvm.topolvm.io
            remediationAction: enforce
            severity: high

  4. Create the Policy CR by running the following command:

    $ oc create -f <policy> -n <namespace>
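
Optionally, you can confirm on a managed cluster that the resources targeted by the uninstall policy have been removed. The following checks are illustrative and are not part of the documented procedure; the namespace and CRD names correspond to the example Policy CR above:

    $ oc get namespace openshift-storage
    $ oc get crd | grep topolvm.io

Both commands should report that no matching resources are found after the policy has been enforced.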

5.4.16. Downloading log files and diagnostic information using must-gather

When LVM Storage is unable to automatically resolve a problem, use the must-gather tool to collect the log files and diagnostic information so that you or Red Hat Support can review the problem and determine a solution.

Procedure

  • Run the must-gather command from the client connected to the LVM Storage cluster:

    $ oc adm must-gather --image=registry.redhat.io/lvms4/lvms-must-gather-rhel9:v4.14 --dest-dir=<directory_name>
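
    For example, assuming an arbitrary local directory name of ./lvms-must-gather (a hypothetical value), you can collect the data and then package it for review or for a support case:

    $ oc adm must-gather --image=registry.redhat.io/lvms4/lvms-must-gather-rhel9:v4.14 --dest-dir=./lvms-must-gather
    $ tar -czf lvms-must-gather.tar.gz ./lvms-must-gather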

5.4.17. Troubleshooting persistent storage

While configuring persistent storage using Logical Volume Manager (LVM) Storage, you can encounter several issues that require troubleshooting.

5.4.17.1. Investigating a PVC stuck in the Pending state

A persistent volume claim (PVC) can get stuck in the Pending state for the following reasons:

  • Insufficient computing resources.
  • Network problems.
  • Mismatched storage class or node selector.
  • No available persistent volumes (PVs).
  • The node with the PV is in the Not Ready state.

Prerequisites

  • You have installed the OpenShift CLI (oc).
  • You have logged in to the OpenShift CLI (oc) as a user with cluster-admin permissions.

Procedure

  1. Retrieve the list of PVCs by running the following command:

    $ oc get pvc

    Example output

    NAME        STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    lvms-test   Pending                                      lvms-vg1       11s

  2. Inspect the events associated with a PVC stuck in the Pending state by running the following command:

    $ oc describe pvc <pvc_name>

    Replace <pvc_name> with the name of the PVC. For example, lvms-test.

    Example output

    Type     Reason              Age               From                         Message
    ----     ------              ----              ----                         -------
    Warning  ProvisioningFailed  4s (x2 over 17s)  persistentvolume-controller  storageclass.storage.k8s.io "lvms-vg1" not found
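
    Because the example event points to a missing storage class, a quick follow-up check, shown here as an illustration rather than as part of the documented procedure, is to compare the storage class requested by the PVC with the storage classes that exist on the cluster:

    $ oc get pvc <pvc_name> -o jsonpath='{.spec.storageClassName}{"\n"}'
    $ oc get storageclass

    If the requested storage class is not listed, continue with "Recovering from a missing storage class".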

5.4.17.2. Recovering from a missing storage class

If you encounter the storage class not found error, check the LVMCluster custom resource (CR) and ensure that all the Logical Volume Manager (LVM) Storage pods are in the Running state.

Prerequisites

  • You have installed the OpenShift CLI (oc).
  • You have logged in to the OpenShift CLI (oc) as a user with cluster-admin permissions.

Procedure

  1. Verify that the LVMCluster CR is present by running the following command:

    $ oc get lvmcluster -n openshift-storage

    Example output

    NAME            AGE
    my-lvmcluster   65m

  2. If the LVMCluster CR is not present, create an LVMCluster CR. For more information, see "Ways to create an LVMCluster custom resource". A minimal, illustrative example is sketched after this procedure.
  3. In the openshift-storage namespace, check that all the LVM Storage pods are in the Running state by running the following command:

    $ oc get pods -n openshift-storage

    Example output

    NAME                                  READY   STATUS    RESTARTS      AGE
    lvms-operator-7b9fb858cb-6nsml        3/3     Running   0             70m
    topolvm-controller-5dd9cf78b5-7wwr2   5/5     Running   0             66m
    topolvm-node-dr26h                    4/4     Running   0             66m
    vg-manager-r6zdv                      1/1     Running   0             66m

    The output of this command must contain a running instance of the following pods:

    • lvms-operator
    • vg-manager
    • topolvm-controller
    • topolvm-node

      If the topolvm-node pod is stuck in the Init state, it is due to a failure to locate an available disk for LVM Storage to use. To retrieve the necessary information to troubleshoot this issue, review the logs of the vg-manager pod by running the following command:

      $ oc logs -l app.kubernetes.io/component=vg-manager -n openshift-storage
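
The following is a minimal LVMCluster CR sketch for step 2 of this procedure. The device class name vg1, the thin pool settings, and the CR name are illustrative assumptions; adjust them to your environment and see "Ways to create an LVMCluster custom resource" for the full set of configuration options.

    Example minimal LVMCluster CR (illustrative)

    apiVersion: lvm.topolvm.io/v1alpha1
    kind: LVMCluster
    metadata:
      name: my-lvmcluster
      namespace: openshift-storage
    spec:
      storage:
        deviceClasses:
        - name: vg1                 # illustrative device class name
          default: true             # illustrative: marks this device class as the default
          thinPoolConfig:
            name: thin-pool-1       # illustrative thin pool name
            sizePercent: 90         # illustrative: percentage of the volume group used for the thin pool
            overprovisionRatio: 10  # illustrative overprovisioning factor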

5.4.17.3. Recovering from node failure

A persistent volume claim (PVC) can be stuck in the Pending state due to a node failure in the cluster.

To identify the failed node, you can examine the restart count of the topolvm-node pod. An increased restart count indicates potential problems with the underlying node, which might require further investigation and troubleshooting.

Prerequisites

  • You have installed the OpenShift CLI (oc).
  • You have logged in to the OpenShift CLI (oc) as a user with cluster-admin permissions.

Procedure

  • Examine the restart count of the topolvm-node pod instances by running the following command:

    $ oc get pods -n openshift-storage

    Example output

    NAME                                  READY   STATUS    RESTARTS      AGE
    lvms-operator-7b9fb858cb-6nsml        3/3     Running   0             70m
    topolvm-controller-5dd9cf78b5-7wwr2   5/5     Running   0             66m
    topolvm-node-dr26h                    4/4     Running   0             66m
    topolvm-node-54as8                    4/4     Running   0             66m
    topolvm-node-78fft                    4/4     Running   17 (8s ago)   66m
    vg-manager-r6zdv                      1/1     Running   0             66m
    vg-manager-990ut                      1/1     Running   0             66m
    vg-manager-an118                      1/1     Running   0             66m
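
    To map a restarting topolvm-node pod to the node it runs on and to check that node's status, you can use the following commands. This is an illustrative follow-up and not part of the documented procedure; <node_name> is a placeholder:

    $ oc get pods -n openshift-storage -o wide
    $ oc get nodes
    $ oc describe node <node_name>

    The -o wide output includes the node that each pod is scheduled on, and oc describe node reports the node conditions, such as Ready or NotReady.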

Next steps

  • If the PVC is stuck in the Pending state even after you have resolved any issues with the node, you must perform a forced clean-up. For more information, see "Performing a forced clean-up".

5.4.17.4. Recovering from disk failure

If you see a failure message while inspecting the events associated with the persistent volume claim (PVC), there can be a problem with the underlying volume or disk.

Disk and volume provisioning issues result in a generic error message, such as Failed to provision volume with storage class <storage_class_name>, followed by a specific volume failure error message.

The following table describes the volume failure error messages:

Table 5.8. Volume failure error messages

Error message: Failed to check volume existence
Description: Indicates a problem in verifying whether the volume already exists. Volume verification failure can be caused by network connectivity problems or other failures.

Error message: Failed to bind volume
Description: Failure to bind a volume can happen if the persistent volume (PV) that is available does not match the requirements of the PVC.

Error message: FailedMount or FailedAttachVolume
Description: This error indicates problems when trying to mount the volume to a node. If the disk has failed, this error can appear when a pod tries to use the PVC.

Error message: FailedUnMount
Description: This error indicates problems when trying to unmount a volume from a node. If the disk has failed, this error can appear when a pod tries to use the PVC.

Error message: Volume is already exclusively attached to one node and cannot be attached to another
Description: This error can appear with storage solutions that do not support ReadWriteMany access modes.
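
In addition to running oc describe pvc, you can filter the event stream for a specific PVC to see these messages in context. The following command is an illustrative alternative and not part of the documented procedure:

    $ oc get events -n <namespace> --field-selector involvedObject.name=<pvc_name>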

Prerequisites

  • You have installed the OpenShift CLI (oc).
  • You have logged in to the OpenShift CLI (oc) as a user with cluster-admin permissions.

Procedure

  1. Inspect the events associated with a PVC by running the following command:

    $ oc describe pvc <pvc_name>

    Replace <pvc_name> with the name of the PVC.
  2. Establish a direct connection to the host where the problem is occurring. One way to do this is sketched after this procedure.
  3. Resolve the disk issue.
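
One way to establish the direct connection described in step 2 and inspect the disk is to start a debug pod on the affected node. This is an illustrative sketch; <node_name> is a placeholder, and the exact inspection commands depend on your environment:

    $ oc debug node/<node_name>
    # chroot /host
    # lsblk
    # dmesg | grep -i error

lsblk lists the block devices and their mount points, and the dmesg output can reveal kernel-level disk errors.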

Next steps

  • If the volume failure messages persist or recur even after you have resolved the issue with the disk, you must perform a forced clean-up. For more information, see "Performing a forced clean-up".

5.4.17.5. Performing a forced clean-up

If the disk or node-related problems persist even after you have completed the troubleshooting procedures, you must perform a forced clean-up. A forced clean-up is used to address persistent issues and ensure the proper functioning of Logical Volume Manager (LVM) Storage.

Prerequisites

  • You have installed the OpenShift CLI (oc).
  • You have logged in to the OpenShift CLI (oc) as a user with cluster-admin permissions.
  • You have deleted all the persistent volume claims (PVCs) that were created by using LVM Storage.
  • You have stopped the pods that are using the PVCs that were created by using LVM Storage.

Procedure

  1. Switch to the openshift-storage namespace by running the following command:

    $ oc project openshift-storage
  2. Check if the LogicalVolume custom resources (CRs) are present by running the following command:

    $ oc get logicalvolume
    1. If the LogicalVolume CRs are present, delete them by running the following command:

      $ oc delete logicalvolume <name>

      Replace <name> with the name of the LogicalVolume CR.
    2. After deleting the LogicalVolume CRs, remove their finalizers by running the following command:

      $ oc patch logicalvolume <name> -p '{"metadata":{"finalizers":[]}}' --type=merge

      Replace <name> with the name of the LogicalVolume CR.
  3. Check if the LVMVolumeGroup CRs are present by running the following command:

    $ oc get lvmvolumegroup
    1. If the LVMVolumeGroup CRs are present, delete them by running the following command:

      $ oc delete lvmvolumegroup <name>

      Replace <name> with the name of the LVMVolumeGroup CR.
    2. After deleting the LVMVolumeGroup CRs, remove their finalizers by running the following command:

      $ oc patch lvmvolumegroup <name> -p '{"metadata":{"finalizers":[]}}' --type=merge

      Replace <name> with the name of the LVMVolumeGroup CR.
  4. Delete any LVMVolumeGroupNodeStatus CRs by running the following command:

    $ oc delete lvmvolumegroupnodestatus --all
  5. Delete the LVMCluster CR by running the following command:

    $ oc delete lvmcluster --all
    1. After deleting the LVMCluster CR, remove its finalizer by running the following command:

      $ oc patch lvmcluster <name> -p '{"metadata":{"finalizers":[]}}' --type=merge

      Replace <name> with the name of the LVMCluster CR.
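
If many leftover CRs are present, the per-resource steps above can be scripted. The following loop is an illustrative sketch of the same pattern, assuming you have already switched to the openshift-storage project; removing finalizers bypasses normal cleanup logic, so use it only as part of this forced clean-up:

    $ for kind in logicalvolume lvmvolumegroup; do
        for name in $(oc get "$kind" -o name); do
          # request deletion without waiting, then clear the finalizers so the deletion can complete
          oc delete "$name" --wait=false
          oc patch "$name" -p '{"metadata":{"finalizers":[]}}' --type=merge
        done
      done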