
Chapter 4. Configuring persistent storage


4.1. Persistent storage using AWS Elastic Block Store

OpenShift Container Platform supports Amazon Elastic Block Store (EBS) volumes. You can provision your OpenShift Container Platform cluster with persistent storage by using Amazon EC2.

The Kubernetes persistent volume framework allows administrators to provision a cluster with persistent storage and gives users a way to request those resources without having any knowledge of the underlying infrastructure. You can dynamically provision Amazon EBS volumes. Persistent volumes are not bound to a single project or namespace; they can be shared across the OpenShift Container Platform cluster. Persistent volume claims are specific to a project or namespace and can be requested by users. You can define a KMS key to encrypt container-persistent volumes on AWS. By default, newly created clusters using OpenShift Container Platform version 4.10 and later use gp3 storage and the AWS EBS CSI driver.
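To confirm which storage classes, default class, and provisioners your cluster currently offers, you can list them from the CLI. This is only a quick check; the names in the output vary by cluster and version, and the csidriver listing assumes the cluster exposes CSIDriver objects, which is the case for recent versions:

    $ oc get storageclass
    $ oc get csidriver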

Important

High-availability of storage in the infrastructure is left to the underlying storage provider.

Important

OpenShift Container Platform 4.12 and later provides automatic migration for the AWS Elastic Block Store (EBS) in-tree volume plugin to its equivalent CSI driver.

CSI automatic migration should be seamless. Migration does not change how you use all existing API objects, such as persistent volumes, persistent volume claims, and storage classes. For more information about migration, see CSI automatic migration.

4.1.1. Creating the EBS storage class

Storage classes are used to differentiate and delineate storage levels and usages. By defining a storage class, users can obtain dynamically provisioned persistent volumes.
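As a sketch only, a storage class for the AWS EBS CSI driver could look similar to the following; the name and the gp3 type parameter are illustrative assumptions, not values taken from this document:

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: gp3-csi-example        # hypothetical name
    provisioner: ebs.csi.aws.com
    parameters:
      type: gp3                    # EBS volume type; assumed here, verify for your workload
    reclaimPolicy: Delete
    volumeBindingMode: WaitForFirstConsumer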

4.1.2. Creating the persistent volume claim

Prerequisites

Storage must exist in the underlying infrastructure before it can be mounted as a volume in OpenShift Container Platform.

Procedure

  1. In the OpenShift Container Platform console, click Storage → Persistent Volume Claims.
  2. In the persistent volume claims overview, click Create Persistent Volume Claim.
  3. Define the desired options on the page that appears.

    1. Select the previously-created storage class from the drop-down menu.
    2. Enter a unique name for the storage claim.
    3. Select the access mode. This selection determines the read and write access for the storage claim.
    4. Define the size of the storage claim.
  4. Click Create to create the persistent volume claim and generate a persistent volume.
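If you prefer the CLI over the console, an equivalent claim can be created from a YAML definition similar to the following sketch; the claim name, size, and storage class name are placeholders:

    $ cat << EOF | oc create -f -
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: <claim_name>              # unique name for the storage claim
    spec:
      accessModes:
        - ReadWriteOnce               # access mode selected for the claim
      storageClassName: <storage_class_name>
      resources:
        requests:
          storage: 10Gi               # requested size of the claim
    EOF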

4.1.3. Volume format

Before OpenShift Container Platform mounts the volume and passes it to a container, it checks that the volume contains a file system as specified by the fsType parameter in the persistent volume definition. If the device is not formatted with the file system, all data from the device is erased and the device is automatically formatted with the given file system.

This verification enables you to use unformatted AWS volumes as persistent volumes, because OpenShift Container Platform formats them before the first use.

4.1.4. Maximum number of EBS volumes on a node

By default, OpenShift Container Platform supports a maximum of 39 EBS volumes attached to one node. This limit is consistent with the AWS volume limits. The volume limit depends on the instance type.

Important

As a cluster administrator, you must use either in-tree or Container Storage Interface (CSI) volumes and their respective storage classes, but never both volume types at the same time. The maximum attached EBS volume number is counted separately for in-tree and CSI volumes, which means you could have up to 39 EBS volumes of each type.

For information about accessing additional storage options, such as volume snapshots, that are not possible with in-tree volume plug-ins, see AWS Elastic Block Store CSI Driver Operator.

4.1.5. Encrypting container persistent volumes on AWS with a KMS key

Defining a KMS key to encrypt container-persistent volumes on AWS is useful when you have explicit compliance and security guidelines when deploying to AWS.

Prerequisites

  • Underlying infrastructure must contain storage.
  • You must create a customer KMS key on AWS.

Procedure

  1. Create a storage class:

    $ cat << EOF | oc create -f -
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: <storage-class-name> 1
    parameters:
      fsType: ext4 2
      encrypted: "true"
      kmsKeyId: keyvalue 3
    provisioner: ebs.csi.aws.com
    reclaimPolicy: Delete
    volumeBindingMode: WaitForFirstConsumer
    EOF
    1
    Specifies the name of the storage class.
    2
    File system that is created on provisioned volumes.
    3
    Specifies the full Amazon Resource Name (ARN) of the key to use when encrypting the container-persistent volume. If you do not provide any key, but the encrypted field is set to true, then the default KMS key is used. See Finding the key ID and key ARN on AWS in the AWS documentation.
  2. Create a persistent volume claim (PVC) with the storage class specifying the KMS key:

    $ cat << EOF | oc create -f -
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: mypvc
    spec:
      accessModes:
        - ReadWriteOnce
      volumeMode: Filesystem
      storageClassName: <storage-class-name>
      resources:
        requests:
          storage: 1Gi
    EOF
  3. Create workload containers to consume the PVC:

    $ cat << EOF | oc create -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: mypod
    spec:
      containers:
        - name: httpd
          image: quay.io/centos7/httpd-24-centos7
          ports:
            - containerPort: 80
          volumeMounts:
            - mountPath: /mnt/storage
              name: data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: mypvc
    EOF
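To confirm that the claim binds and an encrypted volume is provisioned after the pod is scheduled, checks similar to the following can be used; the exact output depends on your cluster:

    $ oc get pvc mypvc
    $ oc get pv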

4.1.6. Additional resources

4.2. Persistent storage using Azure

OpenShift Container Platform supports Microsoft Azure Disk volumes. You can provision your OpenShift Container Platform cluster with persistent storage using Azure. Some familiarity with Kubernetes and Azure is assumed. The Kubernetes persistent volume framework allows administrators to provision a cluster with persistent storage and gives users a way to request those resources without having any knowledge of the underlying infrastructure. Azure Disk volumes can be provisioned dynamically. Persistent volumes are not bound to a single project or namespace; they can be shared across the OpenShift Container Platform cluster. Persistent volume claims are specific to a project or namespace and can be requested by users.

Important

OpenShift Container Platform 4.11 and later provides automatic migration for the Azure Disk in-tree volume plugin to its equivalent CSI driver.

CSI automatic migration should be seamless. Migration does not change how you use all existing API objects, such as persistent volumes, persistent volume claims, and storage classes. For more information about migration, see CSI automatic migration.

Important

High availability of storage in the infrastructure is left to the underlying storage provider.

Additional resources

4.2.1. Creating the Azure storage class

Storage classes are used to differentiate and delineate storage levels and usages. By defining a storage class, users can obtain dynamically provisioned persistent volumes.

Procedure

  1. In the OpenShift Container Platform console, click Storage → Storage Classes.
  2. In the storage class overview, click Create Storage Class.
  3. Define the desired options on the page that appears.

    1. Enter a name to reference the storage class.
    2. Enter an optional description.
    3. Select the reclaim policy.
    4. Select kubernetes.io/azure-disk from the drop-down list.

      1. Enter the storage account type. This corresponds to your Azure storage account SKU tier. Valid options are Premium_LRS, Standard_LRS, StandardSSD_LRS, and UltraSSD_LRS.
      2. Enter the kind of account. Valid options are shared, dedicated, and managed.

        Important

        Red Hat only supports the use of kind: Managed in the storage class.

        With Shared and Dedicated, Azure creates unmanaged disks, while OpenShift Container Platform creates a managed disk for machine OS (root) disks. But because Azure Disk does not allow the use of both managed and unmanaged disks on a node, unmanaged disks created with Shared or Dedicated cannot be attached to OpenShift Container Platform nodes.

    5. Enter additional parameters for the storage class as desired.
  4. Click Create to create the storage class.
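As an alternative to the console, a comparable storage class can be defined in YAML. The following sketch uses the Azure Disk CSI provisioner rather than the in-tree plugin selected in the console, and is modeled on the ultra disk storage class shown later in this section with a Premium_LRS SKU substituted; the name is a placeholder:

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: managed-premium-example   # placeholder name
    parameters:
      skuname: Premium_LRS            # Azure storage account SKU tier
      kind: managed                   # only managed disks are supported
    provisioner: disk.csi.azure.com
    reclaimPolicy: Delete
    volumeBindingMode: WaitForFirstConsumer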

Additional resources

4.2.2. Creating the persistent volume claim

Prerequisites

Storage must exist in the underlying infrastructure before it can be mounted as a volume in OpenShift Container Platform.

Procedure

  1. In the OpenShift Container Platform console, click Storage → Persistent Volume Claims.
  2. In the persistent volume claims overview, click Create Persistent Volume Claim.
  3. Define the desired options on the page that appears.

    1. Select the previously-created storage class from the drop-down menu.
    2. Enter a unique name for the storage claim.
    3. Select the access mode. This selection determines the read and write access for the storage claim.
    4. Define the size of the storage claim.
  4. Click Create to create the persistent volume claim and generate a persistent volume.

4.2.3. Volume format

Before OpenShift Container Platform mounts the volume and passes it to a container, it checks that it contains a file system as specified by the fsType parameter in the persistent volume definition. If the device is not formatted with the file system, all data from the device is erased and the device is automatically formatted with the given file system.

This allows using unformatted Azure volumes as persistent volumes, because OpenShift Container Platform formats them before the first use.

4.2.4. Machine sets that deploy machines with ultra disks using PVCs

You can create a machine set running on Azure that deploys machines with ultra disks. Ultra disks are high-performance storage that are intended for use with the most demanding data workloads.

Both the in-tree plugin and CSI driver support using PVCs to enable ultra disks. You can also deploy machines with ultra disks as data disks without creating a PVC.

4.2.4.1. Creating machines with ultra disks by using machine sets

You can deploy machines with ultra disks on Azure by editing your machine set YAML file.

Prerequisites

  • Have an existing Microsoft Azure cluster.

Procedure

  1. Copy an existing Azure MachineSet custom resource (CR) and edit it by running the following command:

    $ oc edit machineset <machine-set-name>

    where <machine-set-name> is the machine set that you want to use to provision machines with ultra disks.

  2. Add the following lines in the positions indicated:

    apiVersion: machine.openshift.io/v1beta1
    kind: MachineSet
    spec:
      template:
        spec:
          metadata:
            labels:
              disk: ultrassd 1
          providerSpec:
            value:
              ultraSSDCapability: Enabled 2
    1
    Specify a label to use to select a node that is created by this machine set. This procedure uses disk.ultrassd for this value.
    2
    These lines enable the use of ultra disks.
  3. Create a machine set using the updated configuration by running the following command:

    $ oc create -f <machine-set-name>.yaml
  4. Create a storage class that contains the following YAML definition:

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: ultra-disk-sc 1
    parameters:
      cachingMode: None
      diskIopsReadWrite: "2000" 2
      diskMbpsReadWrite: "320" 3
      kind: managed
      skuname: UltraSSD_LRS
    provisioner: disk.csi.azure.com 4
    reclaimPolicy: Delete
    volumeBindingMode: WaitForFirstConsumer 5
    1
    Specify the name of the storage class. This procedure uses ultra-disk-sc for this value.
    2
    Specify the number of IOPS for the storage class.
    3
    Specify the throughput in MBps for the storage class.
    4
    For Azure Kubernetes Service (AKS) version 1.21 or later, use disk.csi.azure.com. For earlier versions of AKS, use kubernetes.io/azure-disk.
    5
    Optional: Specify this parameter to wait for the creation of the pod that will use the disk.
  5. Create a persistent volume claim (PVC) to reference the ultra-disk-sc storage class that contains the following YAML definition:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: ultra-disk 1
    spec:
      accessModes:
      - ReadWriteOnce
      storageClassName: ultra-disk-sc 2
      resources:
        requests:
          storage: 4Gi 3
    1
    Specify the name of the PVC. This procedure uses ultra-disk for this value.
    2
    This PVC references the ultra-disk-sc storage class.
    3
    Specify the requested storage size for the claim. The minimum value is 4Gi.
  6. Create a pod that contains the following YAML definition:

    apiVersion: v1
    kind: Pod
    metadata:
      name: nginx-ultra
    spec:
      nodeSelector:
        disk: ultrassd 1
      containers:
      - name: nginx-ultra
        image: alpine:latest
        command:
          - "sleep"
          - "infinity"
        volumeMounts:
        - mountPath: "/mnt/azure"
          name: volume
      volumes:
        - name: volume
          persistentVolumeClaim:
            claimName: ultra-disk 2
    1
    Specify the label of the machine set that enables the use of ultra disks. This procedure uses disk.ultrassd for this value.
    2
    This pod references the ultra-disk PVC.

Verification

  1. Validate that the machines are created by running the following command:

    $ oc get machines

    The machines should be in the Running state.

  2. For a machine that is running and has a node attached, validate the partition by running the following command:

    $ oc debug node/<node-name> -- chroot /host lsblk

    In this command, oc debug node/<node-name> starts a debugging shell on the node <node-name> and passes a command with --. The passed command chroot /host provides access to the underlying host OS binaries, and lsblk shows the block devices that are attached to the host OS machine.

Next steps

  • To use an ultra disk from within a pod, create a workload that uses the mount point. Create a YAML file similar to the following example:

    apiVersion: v1
    kind: Pod
    metadata:
      name: ssd-benchmark1
    spec:
      containers:
      - name: ssd-benchmark1
        image: nginx
        ports:
          - containerPort: 80
            name: "http-server"
        volumeMounts:
        - name: lun0p1
          mountPath: "/tmp"
      volumes:
        - name: lun0p1
          hostPath:
            path: /var/lib/lun0p1
            type: DirectoryOrCreate
      nodeSelector:
        disktype: ultrassd

4.2.4.2. Troubleshooting resources for machine sets that enable ultra disks

Use the information in this section to understand and recover from issues you might encounter.

4.2.4.2.1. Unable to mount a persistent volume claim backed by an ultra disk

If there is an issue mounting a persistent volume claim backed by an ultra disk, the pod becomes stuck in the ContainerCreating state and an alert is triggered.

For example, if the additionalCapabilities.ultraSSDEnabled parameter is not set on the machine that backs the node that hosts the pod, the following error message appears:

StorageAccountType UltraSSD_LRS can be used only when additionalCapabilities.ultraSSDEnabled is set.

  • To resolve this issue, describe the pod by running the following command:

    $ oc -n <stuck_pod_namespace> describe pod <stuck_pod_name>
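In addition to describing the pod, it can help to confirm that the machine backing the node was created with ultra disk support. A check similar to the following sketch reads the ultraSSDCapability field from the machine's provider spec; the machine name is a placeholder:

    $ oc -n openshift-machine-api get machine <machine_name> \
      -o jsonpath='{.spec.providerSpec.value.ultraSSDCapability}'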

4.3. Persistent storage using Azure File

OpenShift Container Platform supports Microsoft Azure File volumes. You can provision your OpenShift Container Platform cluster with persistent storage using Azure. Some familiarity with Kubernetes and Azure is assumed.

The Kubernetes persistent volume framework allows administrators to provision a cluster with persistent storage and gives users a way to request those resources without having any knowledge of the underlying infrastructure. You can provision Azure File volumes dynamically.

Persistent volumes are not bound to a single project or namespace, and you can share them across the OpenShift Container Platform cluster. Persistent volume claims are specific to a project or namespace, and can be requested by users for use in applications.

Important

High availability of storage in the infrastructure is left to the underlying storage provider.

Important

Azure File volumes use Server Message Block.

Important

OpenShift Container Platform 4.13 and later provides automatic migration for the Azure File in-tree volume plugin to its equivalent CSI driver.

CSI automatic migration should be seamless. Migration does not change how you use all existing API objects, such as persistent volumes, persistent volume claims, and storage classes. For more information about migration, see CSI automatic migration.

Additional resources

4.3.1. Create the Azure File share persistent volume claim

To create the persistent volume claim, you must first define a Secret object that contains the Azure account and key. This secret is used in the PersistentVolume definition, and will be referenced by the persistent volume claim for use in applications.

Prerequisites

  • An Azure File share exists.
  • The credentials to access this share, specifically the storage account and key, are available.

Procedure

  1. Create a Secret object that contains the Azure File credentials:

    $ oc create secret generic <secret-name> --from-literal=azurestorageaccountname=<storage-account> \ 1
      --from-literal=azurestorageaccountkey=<storage-account-key> 2
    1
    The Azure File storage account name.
    2
    The Azure File storage account key.
  2. Create a PersistentVolume object that references the Secret object you created:

    apiVersion: "v1"
    kind: "PersistentVolume"
    metadata:
      name: "pv0001" 1
    spec:
      capacity:
        storage: "5Gi" 2
      accessModes:
        - "ReadWriteOnce"
      storageClassName: azure-file-sc
      azureFile:
        secretName: <secret-name> 3
        shareName: share-1 4
        readOnly: false
    1
    The name of the persistent volume.
    2
    The size of this persistent volume.
    3
    The name of the secret that contains the Azure File share credentials.
    4
    The name of the Azure File share.
  3. Create a PersistentVolumeClaim object that maps to the persistent volume you created:

    apiVersion: "v1"
    kind: "PersistentVolumeClaim"
    metadata:
      name: "claim1" 1
    spec:
      accessModes:
        - "ReadWriteOnce"
      resources:
        requests:
          storage: "5Gi" 2
      storageClassName: azure-file-sc 3
      volumeName: "pv0001" 4
    1
    The name of the persistent volume claim.
    2
    The size of this persistent volume claim.
    3
    The name of the storage class that is used to provision the persistent volume. Specify the storage class used in the PersistentVolume definition.
    4
    The name of the existing PersistentVolume object that references the Azure File share.

4.3.2. Mount the Azure File share in a pod

After the persistent volume claim has been created, it can be used by an application. The following example demonstrates mounting this share inside a pod.

Prerequisites

  • A persistent volume claim exists that is mapped to the underlying Azure File share.

Procedure

  • Create a pod that mounts the existing persistent volume claim:

    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-name 1
    spec:
      containers:
        ...
        volumeMounts:
        - mountPath: "/data" 2
          name: azure-file-share
      volumes:
        - name: azure-file-share
          persistentVolumeClaim:
            claimName: claim1 3
    1
    The name of the pod.
    2
    The path to mount the Azure File share inside the pod. Do not mount to the container root, /, or any path that is the same in the host and the container. This can corrupt your host system if the container is sufficiently privileged, such as the host /dev/pts files. It is safe to mount the host by using /host.
    3
    The name of the PersistentVolumeClaim object that has been previously created.

4.4. Persistent storage using Cinder

OpenShift Container Platform supports OpenStack Cinder. Some familiarity with Kubernetes and OpenStack is assumed.

Cinder volumes can be provisioned dynamically. Persistent volumes are not bound to a single project or namespace; they can be shared across the OpenShift Container Platform cluster. Persistent volume claims are specific to a project or namespace and can be requested by users.

Important

OpenShift Container Platform 4.11 and later provides automatic migration for the Cinder in-tree volume plugin to its equivalent CSI driver.

CSI automatic migration should be seamless. Migration does not change how you use all existing API objects, such as persistent volumes, persistent volume claims, and storage classes. For more information about migration, see CSI automatic migration.

Additional resources

  • For more information about how OpenStack Block Storage provides persistent block storage management for virtual hard drives, see OpenStack Cinder.

4.4.1. Manual provisioning with Cinder

Storage must exist in the underlying infrastructure before it can be mounted as a volume in OpenShift Container Platform.

Prerequisites

  • OpenShift Container Platform configured for Red Hat OpenStack Platform (RHOSP)
  • Cinder volume ID

4.4.1.1. Creating the persistent volume

You must define your persistent volume (PV) in an object definition before creating it in OpenShift Container Platform:

Procedure

  1. Save your object definition to a file.

    cinder-persistentvolume.yaml

    apiVersion: "v1"
    kind: "PersistentVolume"
    metadata:
      name: "pv0001" 1
    spec:
      capacity:
        storage: "5Gi" 2
      accessModes:
        - "ReadWriteOnce"
      cinder: 3
        fsType: "ext3" 4
        volumeID: "f37a03aa-6212-4c62-a805-9ce139fab180" 5

    1
    The name of the volume that is used by persistent volume claims or pods.
    2
    The amount of storage allocated to this volume.
    3
    Indicates cinder for Red Hat OpenStack Platform (RHOSP) Cinder volumes.
    4
    The file system that is created when the volume is mounted for the first time.
    5
    The Cinder volume to use.
    Important

    Do not change the fsType parameter value after the volume is formatted and provisioned. Changing this value can result in data loss and pod failure.

  2. Create the object definition file you saved in the previous step.

    $ oc create -f cinder-persistentvolume.yaml

4.4.1.2. Persistent volume formatting

You can use unformatted Cinder volumes as PVs because OpenShift Container Platform formats them before the first use.

Before OpenShift Container Platform mounts the volume and passes it to a container, the system checks that it contains a file system as specified by the fsType parameter in the PV definition. If the device is not formatted with the file system, all data from the device is erased and the device is automatically formatted with the given file system.

4.4.1.3. Cinder volume security

If you use Cinder PVs in your application, configure security for their deployment configurations.

Prerequisites

  • An SCC must be created that uses the appropriate fsGroup strategy.

Procedure

  1. Create a service account and add it to the SCC:

    $ oc create serviceaccount <service_account>
    $ oc adm policy add-scc-to-user <new_scc> -z <service_account> -n <project>
  2. In your application’s deployment configuration, provide the service account name and securityContext:

    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: frontend-1
    spec:
      replicas: 1  1
      selector:    2
        name: frontend
      template:    3
        metadata:
          labels:  4
            name: frontend 5
        spec:
          containers:
          - image: openshift/hello-openshift
            name: helloworld
            ports:
            - containerPort: 8080
              protocol: TCP
          restartPolicy: Always
          serviceAccountName: <service_account> 6
          securityContext:
            fsGroup: 7777 7
    1
    The number of copies of the pod to run.
    2
    The label selector of the pod to run.
    3
    A template for the pod that the controller creates.
    4
    The labels on the pod. They must include labels from the label selector.
    5
    The maximum name length after expanding any parameters is 63 characters.
    6
    Specifies the service account you created.
    7
    Specifies an fsGroup for the pods.

4.5. Persistent storage using Fibre Channel

OpenShift Container Platform supports Fibre Channel, allowing you to provision your OpenShift Container Platform cluster with persistent storage using Fibre channel volumes. Some familiarity with Kubernetes and Fibre Channel is assumed.

Important

Persistent storage using Fibre Channel is not supported on ARM architecture based infrastructures.

The Kubernetes persistent volume framework allows administrators to provision a cluster with persistent storage and gives users a way to request those resources without having any knowledge of the underlying infrastructure. Persistent volumes are not bound to a single project or namespace; they can be shared across the OpenShift Container Platform cluster. Persistent volume claims are specific to a project or namespace and can be requested by users.

Important

High availability of storage in the infrastructure is left to the underlying storage provider.

Additional resources

4.5.1. Provisioning

To provision Fibre Channel volumes using the PersistentVolume API the following must be available:

  • The targetWWNs (array of Fibre Channel target’s World Wide Names).
  • A valid LUN number.
  • The filesystem type.

A persistent volume and a LUN have a one-to-one mapping between them.

Prerequisites

  • Fibre Channel LUNs must exist in the underlying infrastructure.

PersistentVolume object definition

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0001
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  fc:
    wwids: [scsi-3600508b400105e210000900000490000] 1
    targetWWNs: ['500a0981891b8dc5', '500a0981991b8dc5'] 2
    lun: 2 3
    fsType: ext4

1
World wide identifiers (WWIDs). Either FC wwids or a combination of FC targetWWNs and lun must be set, but not both simultaneously. The FC WWID identifier is recommended over the target WWNs because it is guaranteed to be unique for every storage device and independent of the path that is used to access the device. The WWID identifier can be obtained by issuing a SCSI Inquiry to retrieve the Device Identification Vital Product Data (page 0x83) or Unit Serial Number (page 0x80). FC WWIDs are identified as /dev/disk/by-id/ to reference the data on the disk, even if the path to the device changes and even when accessing the device from different systems.
2 3
Fibre Channel WWNs are identified as /dev/disk/by-path/pci-<IDENTIFIER>-fc-0x<WWN>-lun-<LUN#>, but you do not need to provide any part of the path leading up to the WWN, including the 0x, and anything after, including the - (hyphen).
Important

Changing the value of the fsType parameter after the volume has been formatted and provisioned can result in data loss and pod failure.

4.5.1.1. Enforcing disk quotas

Use LUN partitions to enforce disk quotas and size constraints. Each LUN is mapped to a single persistent volume, and unique names must be used for persistent volumes.

Enforcing quotas in this way allows the end user to request persistent storage by a specific amount, such as 10Gi, and be matched with a corresponding volume of equal or greater capacity.

4.5.1.2. Fibre Channel volume security

Users request storage with a persistent volume claim. This claim only lives in the user’s namespace, and can only be referenced by a pod within that same namespace. Any attempt to access a persistent volume across a namespace causes the pod to fail.

Each Fibre Channel LUN must be accessible by all nodes in the cluster.

4.6. Persistent storage using FlexVolume

Important

FlexVolume is a deprecated feature. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments.

An out-of-tree Container Storage Interface (CSI) driver is the recommended way to write volume drivers in OpenShift Container Platform. Maintainers of FlexVolume drivers should implement a CSI driver and move users of FlexVolume to CSI. Users of FlexVolume should move their workloads to a CSI driver.

For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes.

OpenShift Container Platform supports FlexVolume, an out-of-tree plugin that uses an executable model to interface with drivers.

To use storage from a back-end that does not have a built-in plugin, you can extend OpenShift Container Platform through FlexVolume drivers and provide persistent storage to applications.

Pods interact with FlexVolume drivers through the flexvolume in-tree plugin.

Additional resources

4.6.1. About FlexVolume drivers

A FlexVolume driver is an executable file that resides in a well-defined directory on all nodes in the cluster. OpenShift Container Platform calls the FlexVolume driver whenever it needs to mount or unmount a volume represented by a PersistentVolume object with flexVolume as the source.

Important

Attach and detach operations are not supported in OpenShift Container Platform for FlexVolume.

4.6.2. FlexVolume driver example

The first command-line argument of the FlexVolume driver is always an operation name. Other parameters are specific to each operation. Most of the operations take a JavaScript Object Notation (JSON) string as a parameter. This parameter is a complete JSON string, and not the name of a file with the JSON data.

The FlexVolume driver contains:

  • All flexVolume.options.
  • Some options from flexVolume prefixed by kubernetes.io/, such as fsType and readwrite.
  • The content of the referenced secret, if specified, prefixed by kubernetes.io/secret/.

FlexVolume driver JSON input example

{
	"fooServer": "192.168.0.1:1234", 1
	"fooVolumeName": "bar",
	"kubernetes.io/fsType": "ext4", 2
	"kubernetes.io/readwrite": "ro", 3
	"kubernetes.io/secret/<key name>": "<key value>", 4
	"kubernetes.io/secret/<another key name>": "<another key value>"
}

1
All options from flexVolume.options.
2
The value of flexVolume.fsType.
3
ro/rw based on flexVolume.readOnly.
4
All keys and their values from the secret referenced by flexVolume.secretRef.

OpenShift Container Platform expects JSON data on standard output of the driver. When not specified, the output describes the result of the operation.

FlexVolume driver default output example

{
	"status": "<Success/Failure/Not supported>",
	"message": "<Reason for success/failure>"
}

The exit code of the driver should be 0 for success and 1 for error.

Operations should be idempotent, which means that the mounting of an already mounted volume should result in a successful operation.
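To make the contract concrete, the following is a minimal driver skeleton in shell that handles only the operations described in this chapter. It is a sketch under the stated assumptions, not a supported driver; a real driver must implement the full set of operations with proper mounting logic and error handling:

    #!/bin/bash
    # Minimal FlexVolume driver sketch: the first argument is always the operation name.
    op=$1

    case "$op" in
        init)
            # Called during initialization of all nodes; attach/detach is not supported.
            echo '{"status": "Success", "capabilities": {"attach": false}}'
            exit 0
            ;;
        mount)
            mount_dir=$2
            json_options=$3   # a complete JSON string, not the name of a file
            # ...mount the backing storage at "$mount_dir" using "$json_options"...
            echo '{"status": "Success"}'
            exit 0
            ;;
        unmount)
            mount_dir=$2
            # ...clean up and unmount "$mount_dir"...
            echo '{"status": "Success"}'
            exit 0
            ;;
        *)
            echo '{"status": "Not supported"}'
            exit 1
            ;;
    esac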

4.6.3. Installing FlexVolume drivers

FlexVolume drivers that are used to extend OpenShift Container Platform are executed only on the node. To implement FlexVolumes, a list of operations to call and the installation path are all that is required.

Prerequisites

  • FlexVolume drivers must implement these operations:

    init

    Initializes the driver. It is called during initialization of all nodes.

    • Arguments: none
    • Executed on: node
    • Expected output: default JSON

    mount

    Mounts a volume to a directory. This can include anything that is necessary to mount the volume, including finding the device and then mounting the device.

    • Arguments: <mount-dir> <json>
    • Executed on: node
    • Expected output: default JSON

    unmount

    Unmounts a volume from a directory. This can include anything that is necessary to clean up the volume after unmounting.

    • Arguments: <mount-dir>
    • Executed on: node
    • Expected output: default JSON

    mountdevice

    Mounts a volume’s device to a directory where individual pods can then bind mount. This call-out does not pass "secrets" specified in the FlexVolume spec. If your driver requires secrets, do not implement this call-out.

    • Arguments: <mount-dir> <json>
    • Executed on: node
    • Expected output: default JSON

    unmountdevice

    Unmounts a volume’s device from a directory.

    • Arguments: <mount-dir>
    • Executed on: node
    • Expected output: default JSON

  • All other operations should return JSON with {"status": "Not supported"} and exit code 1.

Procedure

To install the FlexVolume driver:

  1. Ensure that the executable file exists on all nodes in the cluster.
  2. Place the executable file at the volume plugin path: /etc/kubernetes/kubelet-plugins/volume/exec/<vendor>~<driver>/<driver>.

For example, to install the FlexVolume driver for the storage foo, place the executable file at: /etc/kubernetes/kubelet-plugins/volume/exec/openshift.com~foo/foo.

4.6.4. Consuming storage using FlexVolume drivers

Each PersistentVolume object in OpenShift Container Platform represents one storage asset in the storage back-end, such as a volume.

Procedure

  • Use the PersistentVolume object to reference the installed storage.

Persistent volume object definition using FlexVolume drivers example

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0001 1
spec:
  capacity:
    storage: 1Gi 2
  accessModes:
    - ReadWriteOnce
  flexVolume:
    driver: openshift.com/foo 3
    fsType: "ext4" 4
    secretRef: foo-secret 5
    readOnly: true 6
    options: 7
      fooServer: 192.168.0.1:1234
      fooVolumeName: bar

1
The name of the volume. This is how it is identified through persistent volume claims or from pods. This name can be different from the name of the volume on back-end storage.
2
The amount of storage allocated to this volume.
3
The name of the driver. This field is mandatory.
4
The file system that is present on the volume. This field is optional.
5
The reference to a secret. Keys and values from this secret are provided to the FlexVolume driver on invocation. This field is optional.
6
The read-only flag. This field is optional.
7
The additional options for the FlexVolume driver. In addition to the flags specified by the user in the options field, the following flags are also passed to the executable:
"fsType":"<FS type>",
"readwrite":"<rw>",
"secret/key1":"<secret1>"
...
"secret/keyN":"<secretN>"
Note

Secrets are passed only to mount or unmount call-outs.

4.7. Persistent storage using GCE Persistent Disk

OpenShift Container Platform supports GCE Persistent Disk volumes (gcePD). You can provision your OpenShift Container Platform cluster with persistent storage using GCE. Some familiarity with Kubernetes and GCE is assumed.

The Kubernetes persistent volume framework allows administrators to provision a cluster with persistent storage and gives users a way to request those resources without having any knowledge of the underlying infrastructure.

GCE Persistent Disk volumes can be provisioned dynamically.

Persistent volumes are not bound to a single project or namespace; they can be shared across the OpenShift Container Platform cluster. Persistent volume claims are specific to a project or namespace and can be requested by users.

Important

OpenShift Container Platform 4.12 and later provides automatic migration for the GCE Persistent Disk in-tree volume plugin to its equivalent CSI driver.

CSI automatic migration should be seamless. Migration does not change how you use all existing API objects, such as persistent volumes, persistent volume claims, and storage classes.

For more information about migration, see CSI automatic migration.

Important

High availability of storage in the infrastructure is left to the underlying storage provider.

Additional resources

4.7.1. Creating the GCE storage class

Storage classes are used to differentiate and delineate storage levels and usages. By defining a storage class, users can obtain dynamically provisioned persistent volumes.
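As a sketch only, a storage class for the GCE Persistent Disk CSI driver might look like the following; the name and the pd-ssd disk type are illustrative assumptions:

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: gce-pd-example         # hypothetical name
    provisioner: pd.csi.storage.gke.io
    parameters:
      type: pd-ssd                 # GCE persistent disk type; assumed for this example
    reclaimPolicy: Delete
    volumeBindingMode: WaitForFirstConsumer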

4.7.2. Creating the persistent volume claim

Prerequisites

Storage must exist in the underlying infrastructure before it can be mounted as a volume in OpenShift Container Platform.

Procedure

  1. In the OpenShift Container Platform console, click Storage → Persistent Volume Claims.
  2. In the persistent volume claims overview, click Create Persistent Volume Claim.
  3. Define the desired options on the page that appears.

    1. Select the previously-created storage class from the drop-down menu.
    2. Enter a unique name for the storage claim.
    3. Select the access mode. This selection determines the read and write access for the storage claim.
    4. Define the size of the storage claim.
  4. Click Create to create the persistent volume claim and generate a persistent volume.

4.7.3. Volume format

Before OpenShift Container Platform mounts the volume and passes it to a container, it checks that the volume contains a file system as specified by the fsType parameter in the persistent volume definition. If the device is not formatted with the file system, all data from the device is erased and the device is automatically formatted with the given file system.

This verification enables you to use unformatted GCE volumes as persistent volumes, because OpenShift Container Platform formats them before the first use.

4.8. Persistent storage using iSCSI

You can provision your OpenShift Container Platform cluster with persistent storage using iSCSI. Some familiarity with Kubernetes and iSCSI is assumed.

The Kubernetes persistent volume framework allows administrators to provision a cluster with persistent storage and gives users a way to request those resources without having any knowledge of the underlying infrastructure.

Important

High-availability of storage in the infrastructure is left to the underlying storage provider.

Important

When you use iSCSI on Amazon Web Services, you must update the default security policy to include TCP traffic between nodes on the iSCSI ports. By default, they are ports 860 and 3260.

Important

Users must ensure that the iSCSI initiator is already configured on all OpenShift Container Platform nodes by installing the iscsi-initiator-utils package and configuring their initiator name in /etc/iscsi/initiatorname.iscsi. The iscsi-initiator-utils package is already installed on deployments that use Red Hat Enterprise Linux CoreOS (RHCOS).

For more information, see Managing Storage Devices.

4.8.1. Provisioning

Verify that the storage exists in the underlying infrastructure before mounting it as a volume in OpenShift Container Platform. All that is required for iSCSI is the iSCSI target portal, a valid iSCSI Qualified Name (IQN), a valid LUN number, the file system type, and the PersistentVolume API.

PersistentVolume object definition

apiVersion: v1
kind: PersistentVolume
metadata:
  name: iscsi-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  iscsi:
    targetPortal: 10.16.154.81:3260
    iqn: iqn.2014-12.example.server:storage.target00
    lun: 0
    fsType: 'ext4'

4.8.2. Enforcing disk quotas

Use LUN partitions to enforce disk quotas and size constraints. Each LUN is one persistent volume. Kubernetes enforces unique names for persistent volumes.

Enforcing quotas in this way allows the end user to request persistent storage by a specific amount (for example, 10Gi) and be matched with a corresponding volume of equal or greater capacity.

4.8.3. iSCSI volume security

Users request storage with a PersistentVolumeClaim object. This claim only lives in the user’s namespace and can only be referenced by a pod within that same namespace. Any attempt to access a persistent volume claim across a namespace causes the pod to fail.

Each iSCSI LUN must be accessible by all nodes in the cluster.

4.8.3.1. Challenge Handshake Authentication Protocol (CHAP) configuration

Optionally, OpenShift Container Platform can use CHAP to authenticate itself to iSCSI targets:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: iscsi-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  iscsi:
    targetPortal: 10.0.0.1:3260
    iqn: iqn.2016-04.test.com:storage.target00
    lun: 0
    fsType: ext4
    chapAuthDiscovery: true 1
    chapAuthSession: true 2
    secretRef:
      name: chap-secret 3
1
Enable CHAP authentication of iSCSI discovery.
2
Enable CHAP authentication of iSCSI session.
3
Specify the name of the Secret object that contains the user name and password. This Secret object must be available in all namespaces that can use the referenced volume.
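The referenced Secret object must exist before the persistent volume is used. As a sketch, it can be created with a command similar to the following; the key names follow the iSCSI CHAP convention used by the in-tree plugin, and the user names and passwords are placeholders:

    $ oc create secret generic chap-secret \
      --type=kubernetes.io/iscsi-chap \
      --from-literal=node.session.auth.username=<session_user> \
      --from-literal=node.session.auth.password=<session_password> \
      --from-literal=node.sendtargets.auth.username=<discovery_user> \
      --from-literal=node.sendtargets.auth.password=<discovery_password>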

4.8.4. iSCSI multipathing

For iSCSI-based storage, you can configure multiple paths by using the same IQN for more than one target portal IP address. Multipathing ensures access to the persistent volume when one or more of the components in a path fail.

To specify multi-paths in the pod specification, use the portals field. For example:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: iscsi-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  iscsi:
    targetPortal: 10.0.0.1:3260
    portals: ['10.0.2.16:3260', '10.0.2.17:3260', '10.0.2.18:3260'] 1
    iqn: iqn.2016-04.test.com:storage.target00
    lun: 0
    fsType: ext4
    readOnly: false
1
Add additional target portals using the portals field.

4.8.5. iSCSI custom initiator IQN

Configure the custom initiator iSCSI Qualified Name (IQN) if the iSCSI targets are restricted to certain IQNs, but the nodes that the iSCSI PVs are attached to are not guaranteed to have these IQNs.

To specify a custom initiator IQN, use the initiatorName field.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: iscsi-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  iscsi:
    targetPortal: 10.0.0.1:3260
    portals: ['10.0.2.16:3260', '10.0.2.17:3260', '10.0.2.18:3260']
    iqn: iqn.2016-04.test.com:storage.target00
    lun: 0
    initiatorName: iqn.2016-04.test.com:custom.iqn 1
    fsType: ext4
    readOnly: false
1
Specify the name of the initiator.

4.9. Persistent storage using NFS

OpenShift Container Platform clusters can be provisioned with persistent storage using NFS. Persistent volumes (PVs) and persistent volume claims (PVCs) provide a convenient method for sharing a volume across a project. While the NFS-specific information contained in a PV definition could also be defined directly in a Pod definition, doing so does not create the volume as a distinct cluster resource, making the volume more susceptible to conflicts.

Additional resources

4.9.1. Provisioning

Storage must exist in the underlying infrastructure before it can be mounted as a volume in OpenShift Container Platform. To provision NFS volumes, all that is required is a list of NFS servers and export paths.

Procedure

  1. Create an object definition for the PV:

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: pv0001 1
    spec:
      capacity:
        storage: 5Gi 2
      accessModes:
      - ReadWriteOnce 3
      nfs: 4
        path: /tmp 5
        server: 172.17.0.2 6
      persistentVolumeReclaimPolicy: Retain 7
    1
    The name of the volume. This is the PV identity in various oc <command> pod commands.
    2
    The amount of storage allocated to this volume.
    3
    Though this appears to be related to controlling access to the volume, it is actually used similarly to labels and used to match a PVC to a PV. Currently, no access rules are enforced based on the accessModes.
    4
    The volume type being used, in this case the nfs plugin.
    5
    The path that is exported by the NFS server.
    6
    The hostname or IP address of the NFS server.
    7
    The reclaim policy for the PV. This defines what happens to a volume when released.
    Note

    Each NFS volume must be mountable by all schedulable nodes in the cluster.

  2. Verify that the PV was created:

    $ oc get pv

    Example output

    NAME     LABELS    CAPACITY     ACCESSMODES   STATUS      CLAIM  REASON    AGE
    pv0001   <none>    5Gi          RWO           Available                    31s

  3. Create a persistent volume claim that binds to the new PV:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: nfs-claim1
    spec:
      accessModes:
        - ReadWriteOnce 1
      resources:
        requests:
          storage: 5Gi 2
      volumeName: pv0001
      storageClassName: ""
    1
    The access modes do not enforce security, but rather act as labels to match a PV to a PVC.
    2
    This claim looks for PVs offering 5Gi or greater capacity.
  4. Verify that the persistent volume claim was created:

    $ oc get pvc

    Example output

    NAME         STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    nfs-claim1   Bound    pv0001   5Gi        RWO                           2m

4.9.2. Enforcing disk quotas

You can use disk partitions to enforce disk quotas and size constraints. Each partition can be its own export. Each export is one PV. OpenShift Container Platform enforces unique names for PVs, but the uniqueness of the NFS volume’s server and path is up to the administrator.

Enforcing quotas in this way allows the developer to request persistent storage by a specific amount, such as 10Gi, and be matched with a corresponding volume of equal or greater capacity.

4.9.3. NFS volume security

This section covers NFS volume security, including matching permissions and SELinux considerations. The user is expected to understand the basics of POSIX permissions, process UIDs, supplemental groups, and SELinux.

Developers request NFS storage by referencing either a PVC by name or the NFS volume plugin directly in the volumes section of their Pod definition.

The /etc/exports file on the NFS server contains the accessible NFS directories. The target NFS directory has POSIX owner and group IDs. The OpenShift Container Platform NFS plugin mounts the container’s NFS directory with the same POSIX ownership and permissions found on the exported NFS directory. However, the container is not run with its effective UID equal to the owner of the NFS mount, which is the desired behavior.

As an example, if the target NFS directory appears on the NFS server as:

$ ls -lZ /opt/nfs -d

Example output

drwxrws---. nfsnobody 5555 unconfined_u:object_r:usr_t:s0   /opt/nfs

$ id nfsnobody

Example output

uid=65534(nfsnobody) gid=65534(nfsnobody) groups=65534(nfsnobody)

Then the container must match SELinux labels, and either run with a UID of 65534, the nfsnobody owner, or with 5555 in its supplemental groups to access the directory.

Note

The owner ID of 65534 is used as an example. Even though NFS’s root_squash maps root, uid 0, to nfsnobody, uid 65534, NFS exports can have arbitrary owner IDs. Owner 65534 is not required for NFS exports.

4.9.3.1. Group IDs

The recommended way to handle NFS access, assuming it is not an option to change permissions on the NFS export, is to use supplemental groups. Supplemental groups in OpenShift Container Platform are used for shared storage, of which NFS is an example. In contrast, block storage such as iSCSI uses the fsGroup SCC strategy and the fsGroup value in the securityContext of the pod.

Note

To gain access to persistent storage, it is generally preferable to use supplemental group IDs versus user IDs.

Because the group ID on the example target NFS directory is 5555, the pod can define that group ID using supplementalGroups under the securityContext definition of the pod. For example:

spec:
  containers:
    - name:
    ...
  securityContext: 1
    supplementalGroups: [5555] 2
1
securityContext must be defined at the pod level, not under a specific container.
2
An array of GIDs defined for the pod. In this case, there is one element in the array. Additional GIDs would be comma-separated.

Assuming there are no custom SCCs that might satisfy the pod requirements, the pod likely matches the restricted SCC. This SCC has the supplementalGroups strategy set to RunAsAny, meaning that any supplied group ID is accepted without range checking.

As a result, the above pod passes admissions and is launched. However, if group ID range checking is desired, a custom SCC is the preferred solution. A custom SCC can be created such that minimum and maximum group IDs are defined, group ID range checking is enforced, and a group ID of 5555 is allowed.

Note

To use a custom SCC, you must first add it to the appropriate service account. For example, use the default service account in the given project unless another has been specified on the Pod specification.
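As a sketch under the assumptions of this example (an export group ID of 5555), such a custom SCC might set a supplemental group range similar to the following. The name, the range, and the volume list are illustrative and should be adapted from a copy of the restricted SCC for your cluster version:

    apiVersion: security.openshift.io/v1
    kind: SecurityContextConstraints
    metadata:
      name: nfs-supplemental-groups   # hypothetical SCC name
    allowPrivilegedContainer: false
    runAsUser:
      type: MustRunAsRange
    seLinuxContext:
      type: MustRunAs
    fsGroup:
      type: MustRunAs
    supplementalGroups:
      type: MustRunAs
      ranges:
      - min: 5000
        max: 6000                     # range chosen to include the example GID 5555
    volumes:
    - configMap
    - downwardAPI
    - emptyDir
    - persistentVolumeClaim
    - projected
    - secret

The SCC is then granted to the service account that runs the pod, for example:

    $ oc adm policy add-scc-to-user nfs-supplemental-groups -z default -n <project>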

4.9.3.2. User IDs

User IDs can be defined in the container image or in the Pod definition.

Note

It is generally preferable to use supplemental group IDs to gain access to persistent storage versus using user IDs.

In the example target NFS directory shown above, the container needs its UID set to 65534, ignoring group IDs for the moment, so the following can be added to the Pod definition:

spec:
  containers: 1
  - name:
  ...
    securityContext:
      runAsUser: 65534 2
1
Pods contain a securityContext definition specific to each container and a pod’s securityContext which applies to all containers defined in the pod.
2
65534 is the nfsnobody user.

Assuming that the project is default and the SCC is restricted, the user ID of 65534 as requested by the pod is not allowed. Therefore, the pod fails for the following reasons:

  • It requests 65534 as its user ID.
  • All SCCs available to the pod are examined to see which SCC allows a user ID of 65534. While all policies of the SCCs are checked, the focus here is on user ID.
  • Because all available SCCs use MustRunAsRange for their runAsUser strategy, UID range checking is required.
  • 65534 is not included in the SCC or project’s user ID range.

It is generally considered a good practice not to modify the predefined SCCs. The preferred way to fix this situation is to create a custom SCC. A custom SCC can be created such that minimum and maximum user IDs are defined, UID range checking is still enforced, and the UID of 65534 is allowed.

Note

To use a custom SCC, you must first add it to the appropriate service account. For example, use the default service account in the given project unless another has been specified on the Pod specification.

4.9.3.3. SELinux

Red Hat Enterprise Linux (RHEL) and Red Hat Enterprise Linux CoreOS (RHCOS) systems are configured to use SELinux on remote NFS servers by default.

For non-RHEL and non-RHCOS systems, SELinux does not allow writing from a pod to a remote NFS server. The NFS volume mounts correctly but it is read-only. You will need to enable the correct SELinux permissions by using the following procedure.

Prerequisites

  • The container-selinux package must be installed. This package provides the virt_use_nfs SELinux boolean.

Procedure

  • Enable the virt_use_nfs boolean using the following command. The -P option makes this boolean persistent across reboots.

    # setsebool -P virt_use_nfs 1

4.9.3.4. Export settings

To enable arbitrary container users to read and write the volume, each exported volume on the NFS server should conform to the following conditions:

  • Every export must be exported using the following format:

    /<example_fs> *(rw,root_squash)
  • The firewall must be configured to allow traffic to the mount point.

    • For NFSv4, configure the default port 2049 (nfs).

      NFSv4

      # iptables -I INPUT 1 -p tcp --dport 2049 -j ACCEPT

    • For NFSv3, there are three ports to configure: 2049 (nfs), 20048 (mountd), and 111 (portmapper).

      NFSv3

      # iptables -I INPUT 1 -p tcp --dport 2049 -j ACCEPT

      # iptables -I INPUT 1 -p tcp --dport 20048 -j ACCEPT
      # iptables -I INPUT 1 -p tcp --dport 111 -j ACCEPT
  • The NFS export and directory must be set up so that they are accessible by the target pods. Either set the export to be owned by the container’s primary UID, or supply the pod group access using supplementalGroups, as shown in the group IDs above.

4.9.4. Reclaiming resources

NFS implements the OpenShift Container Platform Recyclable plugin interface. Automatic processes handle reclamation tasks based on policies set on each persistent volume.

By default, PVs are set to Retain.

After the claim bound to a PV is deleted and the PV is released, the PV object should not be reused. Instead, a new PV should be created with the same basic volume details as the original.

For example, the administrator creates a PV named nfs1:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs1
spec:
  capacity:
    storage: 1Mi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 192.168.1.1
    path: "/"

The user creates PVC1, which binds to nfs1. The user then deletes PVC1, releasing the claim on nfs1. This results in nfs1 being Released. If the administrator wants to make the same NFS share available, they should create a new PV with the same NFS server details, but a different PV name:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs2
spec:
  capacity:
    storage: 1Mi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 192.168.1.1
    path: "/"

Deleting the original PV and re-creating it with the same name is discouraged. Attempting to manually change the status of a PV from Released to Available causes errors and potential data loss.

4.9.5. Additional configuration and troubleshooting

Depending on what version of NFS is being used and how it is configured, there may be additional configuration steps needed for proper export and security mapping. The following are some that may apply:

NFSv4 mount incorrectly shows all files with ownership of nobody:nobody

  • Could be attributed to the ID mapping settings, found in /etc/idmapd.conf on your NFS server.
  • See this Red Hat Solution.

Disabling ID mapping on NFSv4

  • On both the NFS client and server, run:

    # echo 'Y' > /sys/module/nfsd/parameters/nfs4_disable_idmapping

4.10. Red Hat OpenShift Data Foundation

Red Hat OpenShift Data Foundation is a provider of platform-agnostic persistent storage for OpenShift Container Platform, supporting file, block, and object storage, either in-house or in hybrid clouds. As a Red Hat storage solution, Red Hat OpenShift Data Foundation is completely integrated with OpenShift Container Platform for deployment, management, and monitoring. For more information, see the Red Hat OpenShift Data Foundation documentation.

Important

OpenShift Data Foundation on top of Red Hat Hyperconverged Infrastructure (RHHI) for Virtualization, which uses hyperconverged nodes that host virtual machines installed with OpenShift Container Platform, is not a supported configuration. For more information about supported platforms, see the Red Hat OpenShift Data Foundation Supportability and Interoperability Guide.

4.11. Persistent storage using VMware vSphere volumes

OpenShift Container Platform allows use of VMware vSphere’s Virtual Machine Disk (VMDK) volumes. You can provision your OpenShift Container Platform cluster with persistent storage using VMware vSphere. Some familiarity with Kubernetes and VMware vSphere is assumed.

VMware vSphere volumes can be provisioned dynamically. OpenShift Container Platform creates the disk in vSphere and attaches this disk to the correct image.

Note

OpenShift Container Platform provisions new volumes as independent persistent disks that can freely attach and detach the volume on any node in the cluster. Consequently, you cannot back up volumes that use snapshots, or restore volumes from snapshots. See Snapshot Limitations for more information.

The Kubernetes persistent volume framework allows administrators to provision a cluster with persistent storage and gives users a way to request those resources without having any knowledge of the underlying infrastructure.

Persistent volumes are not bound to a single project or namespace; they can be shared across the OpenShift Container Platform cluster. Persistent volume claims are specific to a project or namespace and can be requested by users.

Important

For vSphere:

  • For new installations of OpenShift Container Platform 4.13, or later, automatic migration is enabled by default. Updating to OpenShift Container Platform 4.14 and later also provides automatic migration.

    CSI automatic migration should be seamless. Migration does not change how you use all existing API objects, such as persistent volumes, persistent volume claims, and storage classes. For more information about migration, see CSI automatic migration.

  • When updating from OpenShift Container Platform 4.12, or earlier, to 4.13, automatic CSI migration for vSphere only occurs if you opt in. If you do not opt in, OpenShift Container Platform defaults to using the in-tree (non-CSI) plugin to provision vSphere storage. Carefully review the indicated consequences before opting in to migration.

4.11.1. Dynamically provisioning VMware vSphere volumes

Dynamically provisioning VMware vSphere volumes is the recommended method.

4.11.2. Prerequisites

  • An OpenShift Container Platform cluster installed on a VMware vSphere version that meets the requirements for the components that you use. See Installing a cluster on vSphere for information about vSphere version support.

You can use either of the following procedures to dynamically provision these volumes using the default storage class.

4.11.2.1. Dynamically provisioning VMware vSphere volumes using the UI

OpenShift Container Platform installs a default storage class, named thin, that uses the thin disk format for provisioning volumes.

Prerequisites

  • Storage must exist in the underlying infrastructure before it can be mounted as a volume in OpenShift Container Platform.

Procedure

  1. In the OpenShift Container Platform console, click Storage Persistent Volume Claims.
  2. In the persistent volume claims overview, click Create Persistent Volume Claim.
  3. Define the required options on the resulting page.

    1. Select the thin storage class.
    2. Enter a unique name for the storage claim.
    3. Select the access mode to determine the read and write access for the created storage claim.
    4. Define the size of the storage claim.
  4. Click Create to create the persistent volume claim and generate a persistent volume.

4.11.2.2. Dynamically provisioning VMware vSphere volumes using the CLI

OpenShift Container Platform installs a default StorageClass, named thin, that uses the thin disk format for provisioning volumes.
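
Before creating claims, you can optionally confirm that the default storage class exists; a minimal check, assuming the cluster uses the default vSphere storage configuration:

    $ oc get storageclass thin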

Prerequisites

  • Storage must exist in the underlying infrastructure before it can be mounted as a volume in OpenShift Container Platform.

Procedure (CLI)

  1. You can define a VMware vSphere PersistentVolumeClaim by creating a file, pvc.yaml, with the following contents:

    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: pvc 1
    spec:
      accessModes:
      - ReadWriteOnce 2
      resources:
        requests:
          storage: 1Gi 3
    1
    A unique name that represents the persistent volume claim.
    2
    The access mode of the persistent volume claim. With ReadWriteOnce, the volume can be mounted with read and write permissions by a single node.
    3
    The size of the persistent volume claim.
  2. Enter the following command to create the PersistentVolumeClaim object from the file:

    $ oc create -f pvc.yaml
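
Optionally, verify that the claim binds to a dynamically provisioned volume; a minimal check, assuming the claim was created in your current project:

    $ oc get pvc pvc

The STATUS column should eventually report Bound.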

4.11.3. Statically provisioning VMware vSphere volumes

To statically provision VMware vSphere volumes you must create the virtual machine disks for reference by the persistent volume framework.

Prerequisites

  • Storage must exist in the underlying infrastructure before it can be mounted as a volume in OpenShift Container Platform.

Procedure

  1. Create the virtual machine disks. Virtual machine disks (VMDKs) must be created manually before statically provisioning VMware vSphere volumes. Use either of the following methods:

    • Create using vmkfstools. Access ESX through Secure Shell (SSH) and then use the following command to create a VMDK volume:

      $ vmkfstools -c <size> /vmfs/volumes/<datastore-name>/volumes/<disk-name>.vmdk
    • Create using vmware-vdiskmanager:

      $ vmware-vdiskmanager -c -t 0 -s <size> -a lsilogic <disk-name>.vmdk
  2. Create a persistent volume that references the VMDKs. Create a file, pv1.yaml, with the PersistentVolume object definition:

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: pv1 1
    spec:
      capacity:
        storage: 1Gi 2
      accessModes:
        - ReadWriteOnce
      persistentVolumeReclaimPolicy: Retain
      vsphereVolume: 3
        volumePath: "[datastore1] volumes/myDisk"  4
        fsType: ext4  5
    1
    The name of the volume. This name is how it is identified by persistent volume claims or pods.
    2
    The amount of storage allocated to this volume.
    3
    The volume type used, with vsphereVolume for vSphere volumes. The label is used to mount a vSphere VMDK volume into pods. The contents of a volume are preserved when it is unmounted. The volume type supports VMFS and VSAN datastores.
    4
    The existing VMDK volume to use. If you used vmkfstools, you must enclose the datastore name in square brackets, [], in the volume definition, as shown previously.
    5
    The file system type to mount. For example, ext4, xfs, or other file systems.
    Important

    Changing the value of the fsType parameter after the volume is formatted and provisioned can result in data loss and pod failure.

  3. Create the PersistentVolume object from the file:

    $ oc create -f pv1.yaml
  4. Create a persistent volume claim that maps to the persistent volume you created in the previous step. Create a file, pvc1.yaml, with the PersistentVolumeClaim object definition:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: pvc1 1
    spec:
      accessModes:
        - ReadWriteOnce 2
      resources:
        requests:
          storage: "1Gi" 3
      volumeName: pv1 4
    1
    A unique name that represents the persistent volume claim.
    2
    The access mode of the persistent volume claim. With ReadWriteOnce, the volume can be mounted with read and write permissions by a single node.
    3
    The size of the persistent volume claim.
    4
    The name of the existing persistent volume.
  5. Create the PersistentVolumeClaim object from the file:

    $ oc create -f pvc1.yaml
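
Optionally, verify that the claim binds to the statically created volume; a minimal check, assuming both objects were created as shown above:

    $ oc get pv pv1
    $ oc get pvc pvc1

The volume pv1 should report a STATUS of Bound, with pvc1 shown in its CLAIM column.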

4.11.3.1. Formatting VMware vSphere volumes

Before OpenShift Container Platform mounts the volume and passes it to a container, it checks that the volume contains a file system that is specified by the fsType parameter value in the PersistentVolume (PV) definition. If the device is not formatted with the file system, all data from the device is erased, and the device is automatically formatted with the specified file system.

Because OpenShift Container Platform formats them before the first use, you can use unformatted vSphere volumes as PVs.

4.12. Persistent storage using local storage

4.12.1. Local storage overview

You can use any of the following solutions to provision local storage:

  • HostPath Provisioner (HPP)
  • Local Storage Operator (LSO)
  • Logical Volume Manager (LVM) Storage
Warning

These solutions support provisioning only node-local storage. The workloads are bound to the nodes that provide the storage. If the node becomes unavailable, the workload also becomes unavailable. To maintain workload availability despite node failures, you must ensure storage data replication through active or passive replication mechanisms.

4.12.1.1. Overview of HostPath Provisioner functionality

You can perform the following actions using HostPath Provisioner (HPP):

  • Map the host filesystem paths to storage classes for provisioning local storage.
  • Statically create storage classes to configure filesystem paths on a node for storage consumption.
  • Statically provision Persistent Volumes (PVs) based on the storage class.
  • Create workloads and PersistentVolumeClaims (PVCs) while being aware of the underlying storage topology.
Note

HPP is available in upstream Kubernetes. However, it is not recommended to use HPP from upstream Kubernetes.

4.12.1.2. Overview of Local Storage Operator functionality

You can perform the following actions using Local Storage Operator (LSO):

  • Assign the storage devices (disks or partitions) to the storage classes without modifying the device configuration.
  • Statically provision PVs and storage classes by configuring the LocalVolume custom resource (CR).
  • Create workloads and PVCs while being aware of the underlying storage topology.
Note

LSO is developed and delivered by Red Hat.

4.12.1.3. Overview of LVM Storage functionality

You can perform the following actions using Logical Volume Manager (LVM) Storage:

  • Configure storage devices (disks or partitions) as lvm2 volume groups and expose the volume groups as storage classes.
  • Create workloads and request storage by using PVCs without considering the node topology.

LVM Storage uses the TopoLVM CSI driver to dynamically allocate storage space to the nodes in the topology and provision PVs.
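
As an illustration of requesting storage without considering the node topology, the following is a minimal example of a PVC that targets an LVM Storage storage class. The names are illustrative: lvms-vg1 assumes a device class named vg1, and the actual storage class name depends on your LVMCluster configuration.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: lvm-pvc-example
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: lvms-vg1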

Note

LVM Storage is developed and maintained by Red Hat. The CSI driver provided with LVM Storage is the upstream project "topolvm".

4.12.1.4. Comparison of LVM Storage, LSO, and HPP

The following sections compare the functionalities provided by LVM Storage, Local Storage Operator (LSO), and HostPath Provisioner (HPP) to provision local storage.

4.12.1.4.1. Comparison of the support for storage types and filesystems

The following table compares the support for storage types and filesystems provided by LVM Storage, Local Storage Operator (LSO), and HostPath Provisioner (HPP) to provision local storage:

Table 4.1. Comparison of the support for storage types and filesystems
  • Support for block storage: LVM Storage: Yes; LSO: Yes; HPP: No
  • Support for file storage: LVM Storage: Yes; LSO: Yes; HPP: Yes
  • Support for object storage [1]: LVM Storage: No; LSO: No; HPP: No
  • Available filesystems: LVM Storage: ext4, xfs; LSO: ext4, xfs; HPP: Any mounted file system available on the node is supported.

  1. None of the solutions (LVM Storage, LSO, and HPP) provide support for object storage. Therefore, if you want to use object storage, you need an S3 object storage solution, such as MultiClusterGateway from the Red Hat OpenShift Data Foundation. All of the solutions can serve as underlying storage providers for the S3 object storage solutions.
4.12.1.4.2. Comparison of the support for core functionalities

The following table compares how LVM Storage, Local Storage Operator (LSO), and HostPath Provisioner (HPP) support core functionalities for provisioning local storage:

Table 4.2. Comparison of the support for core functionalities
  • Support for automatic file system formatting: LVM Storage: Yes; LSO: Yes; HPP: N/A
  • Support for dynamic provisioning: LVM Storage: Yes; LSO: No; HPP: No
  • Support for using software Redundant Array of Independent Disks (RAID) arrays: LVM Storage: Yes (supported on 4.15 and later); LSO: Yes; HPP: Yes
  • Support for transparent disk encryption: LVM Storage: Yes (supported on 4.16 and later); LSO: Yes; HPP: Yes
  • Support for volume based disk encryption: LVM Storage: No; LSO: No; HPP: No
  • Support for disconnected installation: LVM Storage: Yes; LSO: Yes; HPP: Yes
  • Support for PVC expansion: LVM Storage: Yes; LSO: No; HPP: No
  • Support for volume snapshots and volume clones: LVM Storage: Yes; LSO: No; HPP: No
  • Support for thin provisioning: LVM Storage: Yes (devices are thin-provisioned by default); LSO: Yes (you can configure the devices to point to the thin-provisioned volumes); HPP: Yes (you can configure a path to point to the thin-provisioned volumes)
  • Support for automatic disk discovery and setup: LVM Storage: Yes (automatic disk discovery is available during installation and runtime; you can also dynamically add disks to the LVMCluster custom resource (CR) to increase the storage capacity of the existing storage classes); LSO: Technology Preview (automatic disk discovery is available during installation); HPP: No

4.12.1.4.3. Comparison of performance and isolation capabilities

The following table compares the performance and isolation capabilities of LVM Storage, Local Storage Operator (LSO), and HostPath Provisioner (HPP) in provisioning local storage.

Table 4.3. Comparison of performance and isolation capabilities
  • Performance: LVM Storage: I/O speed is shared by all workloads that use the same storage class; block storage allows direct I/O operations; thin provisioning can affect the performance. LSO: I/O depends on the LSO configuration; block storage allows direct I/O operations. HPP: I/O speed is shared by all workloads that use the same storage class; the restrictions imposed by the underlying filesystem can affect the I/O speed.
  • Isolation boundary [1]: LVM Storage: LVM Logical Volume (LV); provides a higher level of isolation than HPP. LSO: LVM Logical Volume (LV); provides a higher level of isolation than HPP. HPP: Filesystem path; provides a lower level of isolation than LSO and LVM Storage.

  1. Isolation boundary refers to the level of separation between different workloads or applications that use local storage resources.
4.12.1.4.4. Comparison of the support for additional functionalities

The following table compares the additional features provided by LVM Storage, Local Storage Operator (LSO), and HostPath Provisioner (HPP) to provision local storage:

Table 4.4. Comparison of the support for additional functionalities
  • Support for generic ephemeral volumes: LVM Storage: Yes; LSO: No; HPP: No
  • Support for CSI inline ephemeral volumes: LVM Storage: No; LSO: No; HPP: No
  • Support for storage topology: LVM Storage: Yes (supports CSI node topology); LSO: Yes (provides partial support for storage topology through node tolerations); HPP: No
  • Support for ReadWriteMany (RWX) access mode [1]: LVM Storage: No; LSO: No; HPP: No

  1. All of the solutions (LVM Storage, LSO, and HPP) have the ReadWriteOnce (RWO) access mode. RWO access mode allows access from multiple pods on the same node.

4.12.2. Persistent storage using local volumes

OpenShift Container Platform can be provisioned with persistent storage by using local volumes. Local persistent volumes allow you to access local storage devices, such as a disk or partition, by using the standard persistent volume claim interface.

Local volumes can be used without manually scheduling pods to nodes because the system is aware of the volume node constraints. However, local volumes are still subject to the availability of the underlying node and are not suitable for all applications.

Note

Local volumes can be used only as statically created persistent volumes.

4.12.2.1. Installing the Local Storage Operator

The Local Storage Operator is not installed in OpenShift Container Platform by default. Use the following procedure to install and configure this Operator to enable local volumes in your cluster.

Prerequisites

  • Access to the OpenShift Container Platform web console or command-line interface (CLI).

Procedure

  1. Create the openshift-local-storage project:

    $ oc adm new-project openshift-local-storage
  2. Optional: Allow local storage creation on infrastructure nodes.

    You might want to use the Local Storage Operator to create volumes on infrastructure nodes in support of components such as logging and monitoring.

    You must adjust the default node selector so that the Local Storage Operator includes the infrastructure nodes, and not just worker nodes.

    To block the Local Storage Operator from inheriting the cluster-wide default selector, enter the following command:

    $ oc annotate namespace openshift-local-storage openshift.io/node-selector=''
  3. Optional: Allow local storage to run on the management pool of CPUs in single-node deployment.

    Use the Local Storage Operator in single-node deployments and allow the use of CPUs that belong to the management pool. Perform this step on single-node installations that use management workload partitioning.

    To allow the Local Storage Operator to run on the management CPU pool, run the following command:

    $ oc annotate namespace openshift-local-storage workload.openshift.io/allowed='management'

From the UI

To install the Local Storage Operator from the web console, follow these steps:

  1. Log in to the OpenShift Container Platform web console.
  2. Navigate to Operators OperatorHub.
  3. Type Local Storage into the filter box to locate the Local Storage Operator.
  4. Click Install.
  5. On the Install Operator page, select A specific namespace on the cluster. Select openshift-local-storage from the drop-down menu.
  6. Adjust the values for Update Channel and Approval Strategy to the values that you want.
  7. Click Install.

Once finished, the Local Storage Operator will be listed in the Installed Operators section of the web console.

From the CLI

  1. Install the Local Storage Operator from the CLI.

    1. Create an object YAML file to define an Operator group and subscription for the Local Storage Operator, such as openshift-local-storage.yaml:

      Example openshift-local-storage.yaml

      apiVersion: operators.coreos.com/v1
      kind: OperatorGroup
      metadata:
        name: local-operator-group
        namespace: openshift-local-storage
      spec:
        targetNamespaces:
          - openshift-local-storage
      ---
      apiVersion: operators.coreos.com/v1alpha1
      kind: Subscription
      metadata:
        name: local-storage-operator
        namespace: openshift-local-storage
      spec:
        channel: stable
        installPlanApproval: Automatic 1
        name: local-storage-operator
        source: redhat-operators
        sourceNamespace: openshift-marketplace

      1
      The user approval policy for an install plan.
  2. Create the Local Storage Operator object by entering the following command:

    $ oc apply -f openshift-local-storage.yaml

    At this point, the Operator Lifecycle Manager (OLM) is now aware of the Local Storage Operator. A ClusterServiceVersion (CSV) for the Operator should appear in the target namespace, and APIs provided by the Operator should be available for creation.

  3. Verify local storage installation by checking that all pods and the Local Storage Operator have been created:

    1. Check that all the required pods have been created:

      $ oc -n openshift-local-storage get pods

      Example output

      NAME                                      READY   STATUS    RESTARTS   AGE
      local-storage-operator-746bf599c9-vlt5t   1/1     Running   0          19m

    2. Check the ClusterServiceVersion (CSV) YAML manifest to see that the Local Storage Operator is available in the openshift-local-storage project:

      $ oc get csvs -n openshift-local-storage

      Example output

      NAME                                         DISPLAY         VERSION               REPLACES   PHASE
      local-storage-operator.4.2.26-202003230335   Local Storage   4.2.26-202003230335              Succeeded

After all checks have passed, the Local Storage Operator is installed successfully.

4.12.2.2. Provisioning local volumes by using the Local Storage Operator

Local volumes cannot be created by dynamic provisioning. Instead, persistent volumes can be created by the Local Storage Operator. The local volume provisioner looks for any file system or block volume devices at the paths specified in the defined resource.

Prerequisites

  • The Local Storage Operator is installed.
  • You have a local disk that meets the following conditions:

    • It is attached to a node.
    • It is not mounted.
    • It does not contain partitions.

Procedure

  1. Create the local volume resource. This resource must define the nodes and paths to the local volumes.

    Note

    Do not use different storage class names for the same device. Doing so will create multiple persistent volumes (PVs).

    Example: Filesystem

    apiVersion: "local.storage.openshift.io/v1"
    kind: "LocalVolume"
    metadata:
      name: "local-disks"
      namespace: "openshift-local-storage" 1
    spec:
      nodeSelector: 2
        nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
              - ip-10-0-140-183
              - ip-10-0-158-139
              - ip-10-0-164-33
      storageClassDevices:
        - storageClassName: "local-sc" 3
          volumeMode: Filesystem 4
          fsType: xfs 5
          devicePaths: 6
            - /path/to/device 7

    1
    The namespace where the Local Storage Operator is installed.
    2
    Optional: A node selector containing a list of nodes where the local storage volumes are attached. This example uses the node hostnames, obtained from oc get node. If a value is not defined, then the Local Storage Operator will attempt to find matching disks on all available nodes.
    3
    The name of the storage class to use when creating persistent volume objects. The Local Storage Operator automatically creates the storage class if it does not exist. Be sure to use a storage class that uniquely identifies this set of local volumes.
    4
    The volume mode, either Filesystem or Block, that defines the type of local volumes.
    Note

    A raw block volume (volumeMode: Block) is not formatted with a file system. Use this mode only if any application running on the pod can use raw block devices.

    5
    The file system that is created when the local volume is mounted for the first time.
    6
    The path containing a list of local storage devices to choose from.
    7
    Replace this value with the filepath to your actual local disk, using the persistent by-id path, such as /dev/disk/by-id/wwn. PVs are created for these local disks when the provisioner is deployed successfully.
    Note

    If you are running OpenShift Container Platform with RHEL KVM, you must assign a serial number to your VM disk. Otherwise, the VM disk cannot be identified after reboot. You can use the virsh edit <VM> command to add the <serial>mydisk</serial> definition.

    Example: Block

    apiVersion: "local.storage.openshift.io/v1"
    kind: "LocalVolume"
    metadata:
      name: "local-disks"
      namespace: "openshift-local-storage" 1
    spec:
      nodeSelector: 2
        nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
              - ip-10-0-136-143
              - ip-10-0-140-255
              - ip-10-0-144-180
      storageClassDevices:
        - storageClassName: "local-sc" 3
          volumeMode: Block 4
          devicePaths: 5
            - /path/to/device 6

    1
    The namespace where the Local Storage Operator is installed.
    2
    Optional: A node selector containing a list of nodes where the local storage volumes are attached. This example uses the node hostnames, obtained from oc get node. If a value is not defined, then the Local Storage Operator will attempt to find matching disks on all available nodes.
    3
    The name of the storage class to use when creating persistent volume objects.
    4
    The volume mode, either Filesystem or Block, that defines the type of local volumes.
    5
    The path containing a list of local storage devices to choose from.
    6
    Replace this value with the filepath to your actual local disk, using the persistent by-id path, such as /dev/disk/by-id/wwn. PVs are created for these local disks when the provisioner is deployed successfully.
    Note

    If you are running OpenShift Container Platform with RHEL KVM, you must assign a serial number to your VM disk. Otherwise, the VM disk cannot be identified after reboot. You can use the virsh edit <VM> command to add the <serial>mydisk</serial> definition.

  2. Create the local volume resource in your OpenShift Container Platform cluster. Specify the file you just created:

    $ oc create -f <local-volume>.yaml
  3. Verify that the provisioner was created and that the corresponding daemon sets were created:

    $ oc get all -n openshift-local-storage

    Example output

    NAME                                          READY   STATUS    RESTARTS   AGE
    pod/diskmaker-manager-9wzms                   1/1     Running   0          5m43s
    pod/diskmaker-manager-jgvjp                   1/1     Running   0          5m43s
    pod/diskmaker-manager-tbdsj                   1/1     Running   0          5m43s
    pod/local-storage-operator-7db4bd9f79-t6k87   1/1     Running   0          14m
    
    NAME                                     TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)             AGE
    service/local-storage-operator-metrics   ClusterIP   172.30.135.36   <none>        8383/TCP,8686/TCP   14m
    
    NAME                               DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
    daemonset.apps/diskmaker-manager   3         3         3       3            3           <none>          5m43s
    
    NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE
    deployment.apps/local-storage-operator   1/1     1            1           14m
    
    NAME                                                DESIRED   CURRENT   READY   AGE
    replicaset.apps/local-storage-operator-7db4bd9f79   1         1         1       14m

    Note the desired and current number of daemon set processes. A desired count of 0 indicates that the label selectors were invalid.

  4. Verify that the persistent volumes were created:

    $ oc get pv

    Example output

    NAME                CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
    local-pv-1cec77cf   100Gi      RWO            Delete           Available           local-sc                88m
    local-pv-2ef7cd2a   100Gi      RWO            Delete           Available           local-sc                82m
    local-pv-3fa1c73    100Gi      RWO            Delete           Available           local-sc                48m

Important

Editing the LocalVolume object does not change the fsType or volumeMode of existing persistent volumes because doing so might result in a destructive operation.

4.12.2.3. Provisioning local volumes without the Local Storage Operator

Local volumes cannot be created by dynamic provisioning. Instead, persistent volumes can be created by defining the persistent volume (PV) in an object definition. The local volume provisioner looks for any file system or block volume devices at the paths specified in the defined resource.

Important

Manual provisioning of PVs includes the risk of potential data leaks across PV reuse when PVCs are deleted. The Local Storage Operator is recommended for automating the life cycle of devices when provisioning local PVs.

Prerequisites

  • Local disks are attached to the OpenShift Container Platform nodes.

Procedure

  1. Define the PV. Create a file, such as example-pv-filesystem.yaml or example-pv-block.yaml, with the PersistentVolume object definition. This resource must define the nodes and paths to the local volumes.

    Note

    Do not use different storage class names for the same device. Doing so will create multiple PVs.

    example-pv-filesystem.yaml

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: example-pv-filesystem
    spec:
      capacity:
        storage: 100Gi
      volumeMode: Filesystem 1
      accessModes:
      - ReadWriteOnce
      persistentVolumeReclaimPolicy: Delete
      storageClassName: local-sc 2
      local:
        path: /dev/xvdf 3
      nodeAffinity:
        required:
          nodeSelectorTerms:
          - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
              - example-node

    1
    The volume mode, either Filesystem or Block, that defines the type of PVs.
    2
    The name of the storage class to use when creating PV resources. Use a storage class that uniquely identifies this set of PVs.
    3
    The path containing a list of local storage devices to choose from, or a directory. You can only specify a directory with Filesystem volumeMode.
    Note

    A raw block volume (volumeMode: Block) is not formatted with a file system. Use this mode only if any application running on the pod can use raw block devices.

    example-pv-block.yaml

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: example-pv-block
    spec:
      capacity:
        storage: 100Gi
      volumeMode: Block 1
      accessModes:
      - ReadWriteOnce
      persistentVolumeReclaimPolicy: Delete
      storageClassName: local-sc 2
      local:
        path: /dev/xvdf 3
      nodeAffinity:
        required:
          nodeSelectorTerms:
          - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
              - example-node

    1
    The volume mode, either Filesystem or Block, that defines the type of PVs.
    2
    The name of the storage class to use when creating PV resources. Be sure to use a storage class that uniquely identifies this set of PVs.
    3
    The path containing a list of local storage devices to choose from.
  2. Create the PV resource in your OpenShift Container Platform cluster. Specify the file you just created:

    $ oc create -f <example-pv>.yaml
  3. Verify that the local PV was created:

    $ oc get pv

    Example output

    NAME                    CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                STORAGECLASS    REASON   AGE
    example-pv-filesystem   100Gi      RWO            Delete           Available                        local-sc            3m47s
    example-pv1             1Gi        RWO            Delete           Bound       local-storage/pvc1   local-sc            12h
    example-pv2             1Gi        RWO            Delete           Bound       local-storage/pvc2   local-sc            12h
    example-pv3             1Gi        RWO            Delete           Bound       local-storage/pvc3   local-sc            12h

4.12.2.4. Creating the local volume persistent volume claim

Local volumes must be statically created as a persistent volume claim (PVC) to be accessed by the pod.

Prerequisites

  • Persistent volumes have been created using the local volume provisioner.

Procedure

  1. Create the PVC using the corresponding storage class:

    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: local-pvc-name 1
    spec:
      accessModes:
      - ReadWriteOnce
      volumeMode: Filesystem 2
      resources:
        requests:
          storage: 100Gi 3
      storageClassName: local-sc 4
    1
    Name of the PVC.
    2
    The volume mode of the PVC. Defaults to Filesystem.
    3
    The amount of storage available to the PVC.
    4
    Name of the storage class required by the claim.
  2. Create the PVC in the OpenShift Container Platform cluster, specifying the file you just created:

    $ oc create -f <local-pvc>.yaml
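
Optionally, check the claim after it is created; a minimal check, assuming the claim was created in your current project. Because storage classes created by the Local Storage Operator typically use the WaitForFirstConsumer volume binding mode, the claim can remain Pending until a pod that uses it is scheduled:

    $ oc get pvc local-pvc-name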

4.12.2.5. Attach the local claim

After a local volume has been mapped to a persistent volume claim, it can be specified inside a resource.

Prerequisites

  • A persistent volume claim exists in the same namespace.

Procedure

  1. Include the defined claim in the resource spec. The following example declares the persistent volume claim inside a pod:

    apiVersion: v1
    kind: Pod
    spec:
    # ...
      containers:
      - name: <container_name>
        volumeMounts:
        - name: local-disks 1
          mountPath: /data 2
      volumes:
      - name: local-disks
        persistentVolumeClaim:
          claimName: local-pvc-name 3
    # ...
    1
    The name of the volume to mount.
    2
    The path inside the pod where the volume is mounted. Do not mount to the container root, /, or any path that is the same in the host and the container. This can corrupt your host system if the container is sufficiently privileged, such as the host /dev/pts files. It is safe to mount the host by using /host.
    3
    The name of the existing persistent volume claim to use.
  2. Create the resource in the OpenShift Container Platform cluster, specifying the file you just created:

    $ oc create -f <local-pod>.yaml
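
Optionally, confirm that the pod was scheduled onto a node that provides the local volume and that the claim is now bound; a minimal check, assuming the names used in the previous examples:

    $ oc get pods -o wide
    $ oc get pvc local-pvc-name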

4.12.2.6. Automating discovery and provisioning for local storage devices

The Local Storage Operator automates local storage discovery and provisioning. With this feature, you can simplify installation when dynamic provisioning is not available during deployment, such as with bare metal, VMware, or AWS store instances with attached devices.

Important

Automatic discovery and provisioning is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

Important

Automatic discovery and provisioning is fully supported when used to deploy Red Hat OpenShift Data Foundation on-premise or with platform-agnostic deployment.

Use the following procedure to automatically discover local devices, and to automatically provision local volumes for selected devices.

Warning

Use the LocalVolumeSet object with caution. When you automatically provision persistent volumes (PVs) from local disks, the local PVs might claim all devices that match. If you are using a LocalVolumeSet object, make sure the Local Storage Operator is the only entity managing local devices on the node. Creating multiple instances of a LocalVolumeSet that target a node more than once is not supported.

Prerequisites

  • You have cluster administrator permissions.
  • You have installed the Local Storage Operator.
  • You have attached local disks to OpenShift Container Platform nodes.
  • You have access to the OpenShift Container Platform web console and the oc command-line interface (CLI).

Procedure

  1. To enable automatic discovery of local devices from the web console:

    1. Click Operators Installed Operators.
    2. In the openshift-local-storage namespace, click Local Storage.
    3. Click the Local Volume Discovery tab.
    4. Click Create Local Volume Discovery and then select either Form view or YAML view.
    5. Configure the LocalVolumeDiscovery object parameters.
    6. Click Create.

      The Local Storage Operator creates a local volume discovery instance named auto-discover-devices.

  2. To display a continuous list of available devices on a node:

    1. Log in to the OpenShift Container Platform web console.
    2. Navigate to Compute Nodes.
    3. Click the node name that you want to open. The "Node Details" page is displayed.
    4. Select the Disks tab to display the list of the selected devices.

      The device list updates continuously as local disks are added or removed. You can filter the devices by name, status, type, model, capacity, and mode.

  3. To automatically provision local volumes for the discovered devices from the web console:

    1. Navigate to Operators Installed Operators and select Local Storage from the list of Operators.
    2. Select Local Volume Set Create Local Volume Set.
    3. Enter a volume set name and a storage class name.
    4. Choose All nodes or Select nodes to apply filters accordingly.

      Note

      Only worker nodes are available, regardless of whether you filter using All nodes or Select nodes.

    5. Select the disk type, mode, size, and limit you want to apply to the local volume set, and click Create.

      A message displays after several minutes, indicating that the "Operator reconciled successfully."

  4. Alternatively, to provision local volumes for the discovered devices from the CLI:

    1. Create an object YAML file to define the local volume set, such as local-volume-set.yaml, as shown in the following example:

      apiVersion: local.storage.openshift.io/v1alpha1
      kind: LocalVolumeSet
      metadata:
        name: example-autodetect
      spec:
        nodeSelector:
          nodeSelectorTerms:
            - matchExpressions:
                - key: kubernetes.io/hostname
                  operator: In
                  values:
                    - worker-0
                    - worker-1
        storageClassName: local-sc 1
        volumeMode: Filesystem
        fsType: ext4
        maxDeviceCount: 10
        deviceInclusionSpec:
          deviceTypes: 2
            - disk
            - part
          deviceMechanicalProperties:
            - NonRotational
          minSize: 10G
          maxSize: 100G
          models:
            - SAMSUNG
            - Crucial_CT525MX3
          vendors:
            - ATA
            - ST2000LM
      1
      Determines the storage class that is created for persistent volumes that are provisioned from discovered devices. The Local Storage Operator automatically creates the storage class if it does not exist. Be sure to use a storage class that uniquely identifies this set of local volumes.
      2
      When using the local volume set feature, the Local Storage Operator does not support the use of logical volume management (LVM) devices.
    2. Create the local volume set object:

      $ oc apply -f local-volume-set.yaml
    3. Verify that the local persistent volumes were dynamically provisioned based on the storage class:

      $ oc get pv

      Example output

      NAME                CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
      local-pv-1cec77cf   100Gi      RWO            Delete           Available           local-sc                88m
      local-pv-2ef7cd2a   100Gi      RWO            Delete           Available           local-sc                82m
      local-pv-3fa1c73    100Gi      RWO            Delete           Available           local-sc                48m

Note

Results are deleted after they are removed from the node. Symlinks must be manually removed.

4.12.2.7. Using tolerations with Local Storage Operator pods

Taints can be applied to nodes to prevent them from running general workloads. To allow the Local Storage Operator to use tainted nodes, you must add tolerations to the Pod or DaemonSet definition. This allows the created resources to run on these tainted nodes.

You apply tolerations to the Local Storage Operator pod through the LocalVolume resource and apply taints to a node through the node specification. A taint on a node instructs the node to repel all pods that do not tolerate the taint. Using a specific taint that is not on other pods ensures that the Local Storage Operator pod can also run on that node.

Important

Taints and tolerations consist of a key, value, and effect. As an argument, it is expressed as key=value:effect. An operator allows you to leave one of these parameters empty.

Prerequisites

  • The Local Storage Operator is installed.
  • Local disks are attached to OpenShift Container Platform nodes with a taint.
  • Tainted nodes are expected to provision local storage.

Procedure

To configure local volumes for scheduling on tainted nodes:

  1. Modify the YAML file that defines the LocalVolume resource and add the tolerations, as shown in the following example:

      apiVersion: "local.storage.openshift.io/v1"
      kind: "LocalVolume"
      metadata:
        name: "local-disks"
        namespace: "openshift-local-storage"
      spec:
        tolerations:
          - key: localstorage 1
            operator: Equal 2
            value: "localstorage" 3
        storageClassDevices:
            - storageClassName: "local-sc"
              volumeMode: Block 4
              devicePaths: 5
                - /dev/xvdg
    1
    Specify the key that you added to the node.
    2
    Specify the Equal operator to require the key/value parameters to match. If operator is Exists, the system checks that the key exists and ignores the value. If operator is Equal, then the key and value must match.
    3
    Specify the value of the taint that you applied to the node.
    4
    The volume mode, either Filesystem or Block, defining the type of the local volumes.
    5
    The path containing a list of local storage devices to choose from.
  2. Optional: To create local persistent volumes on only tainted nodes, modify the YAML file and add the LocalVolume spec, as shown in the following example:

    spec:
      tolerations:
        - key: node-role.kubernetes.io/master
          operator: Exists

The defined tolerations will be passed to the resulting daemon sets, allowing the diskmaker and provisioner pods to be created for nodes that contain the specified taints.
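
To confirm that the tolerations were propagated, you can optionally inspect the diskmaker daemon set that the Operator creates; a minimal check, assuming the default daemon set name shown earlier in this section:

    $ oc -n openshift-local-storage get ds diskmaker-manager -o jsonpath='{.spec.template.spec.tolerations}'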

4.12.2.8. Local Storage Operator Metrics

OpenShift Container Platform provides the following metrics for the Local Storage Operator:

  • lso_discovery_disk_count: total number of discovered devices on each node
  • lso_lvset_provisioned_PV_count: total number of PVs created by LocalVolumeSet objects
  • lso_lvset_unmatched_disk_count: total number of disks that Local Storage Operator did not select for provisioning because of mismatching criteria
  • lso_lvset_orphaned_symlink_count: number of devices with PVs that no longer match LocalVolumeSet object criteria
  • lso_lv_orphaned_symlink_count: number of devices with PVs that no longer match LocalVolume object criteria
  • lso_lv_provisioned_PV_count: total number of provisioned PVs for LocalVolume

To use these metrics, be sure to:

  • Enable support for monitoring when installing the Local Storage Operator.
  • When upgrading to OpenShift Container Platform 4.9 or later, enable metric support manually by adding the operator-metering=true label to the namespace.
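
For example, a minimal way to add this label after an upgrade, assuming the Operator is installed in the openshift-local-storage namespace, is:

    $ oc label namespace openshift-local-storage operator-metering=true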

For more information about metrics, see Managing metrics.

4.12.2.9. Deleting the Local Storage Operator resources

4.12.2.9.1. Removing a local volume or local volume set

Occasionally, local volumes and local volume sets must be deleted. While removing the entry in the resource and deleting the persistent volume is typically enough, if you want to reuse the same device path or have it managed by a different storage class, then additional steps are needed.

Note

The following procedure outlines an example for removing a local volume. The same procedure can also be used to remove symlinks for a local volume set custom resource.

Prerequisites

  • The persistent volume must be in a Released or Available state.

    Warning

    Deleting a persistent volume that is still in use can result in data loss or corruption.

Procedure

  1. Edit the previously created local volume to remove any unwanted disks.

    1. Edit the cluster resource:

      $ oc edit localvolume <name> -n openshift-local-storage
    2. Navigate to the lines under devicePaths, and delete any representing unwanted disks.
  2. Delete any persistent volumes created.

    $ oc delete pv <pv-name>
  3. Delete directory and included symlinks on the node.

    Warning

    The following step involves accessing a node as the root user. Modifying the state of the node beyond the steps in this procedure could result in cluster instability.

    $ oc debug node/<node-name> -- chroot /host rm -rf /mnt/local-storage/<sc-name> 1
    1
    The name of the storage class used to create the local volumes.
4.12.2.9.2. Uninstalling the Local Storage Operator

To uninstall the Local Storage Operator, you must remove the Operator and all created resources in the openshift-local-storage project.

Warning

Uninstalling the Local Storage Operator while local storage PVs are still in use is not recommended. While the PVs will remain after the Operator’s removal, there might be indeterminate behavior if the Operator is uninstalled and reinstalled without removing the PVs and local storage resources.

Prerequisites

  • Access to the OpenShift Container Platform web console.

Procedure

  1. Delete any local volume resources installed in the project, such as localvolume, localvolumeset, and localvolumediscovery:

    $ oc delete localvolume --all --all-namespaces
    $ oc delete localvolumeset --all --all-namespaces
    $ oc delete localvolumediscovery --all --all-namespaces
  2. Uninstall the Local Storage Operator from the web console.

    1. Log in to the OpenShift Container Platform web console.
    2. Navigate to Operators Installed Operators.
    3. Type Local Storage into the filter box to locate the Local Storage Operator.
    4. Click the Options menu at the end of the Local Storage Operator entry.
    5. Click Uninstall Operator.
    6. Click Remove in the window that appears.
  3. The PVs created by the Local Storage Operator will remain in the cluster until deleted. After these volumes are no longer in use, delete them by running the following command:

    $ oc delete pv <pv-name>
  4. Delete the openshift-local-storage project:

    $ oc delete project openshift-local-storage

4.12.3. Persistent storage using hostPath

A hostPath volume in an OpenShift Container Platform cluster mounts a file or directory from the host node’s filesystem into your pod. Most pods will not need a hostPath volume, but it does offer a quick option for testing should an application require it.

Important

The cluster administrator must configure pods to run as privileged. This grants access to pods in the same node.

4.12.3.1. Overview

OpenShift Container Platform supports hostPath mounting for development and testing on a single-node cluster.

In a production cluster, you would not use hostPath. Instead, a cluster administrator would provision a network resource, such as a GCE Persistent Disk volume, an NFS share, or an Amazon EBS volume. Network resources support the use of storage classes to set up dynamic provisioning.

A hostPath volume must be provisioned statically.

Important

Do not mount to the container root, /, or any path that is the same in the host and the container. This can corrupt your host system if the container is sufficiently privileged. It is safe to mount the host by using /host. The following example shows the / directory from the host being mounted into the container at /host.

apiVersion: v1
kind: Pod
metadata:
  name: test-host-mount
spec:
  containers:
  - image: registry.access.redhat.com/ubi9/ubi
    name: test-container
    command: ['sh', '-c', 'sleep 3600']
    volumeMounts:
    - mountPath: /host
      name: host-slash
  volumes:
   - name: host-slash
     hostPath:
       path: /
       type: ''

4.12.3.2. Statically provisioning hostPath volumes

A pod that uses a hostPath volume must be referenced by manual (static) provisioning.

Procedure

  1. Define the persistent volume (PV) by creating a pv.yaml file with the PersistentVolume object definition:

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: task-pv-volume 1
      labels:
        type: local
    spec:
      storageClassName: manual 2
      capacity:
        storage: 5Gi
      accessModes:
        - ReadWriteOnce 3
      persistentVolumeReclaimPolicy: Retain
      hostPath:
        path: "/mnt/data" 4
    1
    The name of the volume. This name is how the volume is identified by persistent volume (PV) claims or pods.
    2
    Used to bind persistent volume claim (PVC) requests to the PV.
    3
    The volume can be mounted as read-write by a single node.
    4
    The configuration file specifies that the volume is at /mnt/data on the cluster’s node. To avoid corrupting your host system, do not mount to the container root, /, or any path that is the same in the host and the container. You can safely mount the host by using /host.
  2. Create the PV from the file:

    $ oc create -f pv.yaml
  3. Define the PVC by creating a pvc.yaml file with the PersistentVolumeClaim object definition:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: task-pvc-volume
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
      storageClassName: manual
  4. Create the PVC from the file:

    $ oc create -f pvc.yaml
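
Optionally, confirm that the claim binds to the hostPath volume; a minimal check, assuming the PV and PVC were created as shown above:

    $ oc get pvc task-pvc-volume

The claim should report a STATUS of Bound, with task-pv-volume shown in the VOLUME column.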

4.12.3.3. Mounting the hostPath share in a privileged pod

After the persistent volume claim has been created, it can be used by an application inside a pod. The following example demonstrates mounting this share inside of a pod.

Prerequisites

  • A persistent volume claim exists that is mapped to the underlying hostPath share.

Procedure

  • Create a privileged pod that mounts the existing persistent volume claim:

    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-name 1
    spec:
      containers:
      - name: <container_name>
        ...
        securityContext:
          privileged: true 2
        volumeMounts:
        - mountPath: /data 3
          name: hostpath-privileged
      ...
      securityContext: {}
      volumes:
        - name: hostpath-privileged
          persistentVolumeClaim:
            claimName: task-pvc-volume 4
    1
    The name of the pod.
    2
    The pod must run as privileged to access the node’s storage.
    3
    The path to mount the host path share inside the privileged pod. Do not mount to the container root, /, or any path that is the same in the host and the container. This can corrupt your host system if the container is sufficiently privileged, such as the host /dev/pts files. It is safe to mount the host by using /host.
    4
    The name of the PersistentVolumeClaim object that has been previously created.
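
The procedure above defines the pod but does not show the creation command. As with the other examples in this chapter, you can create the pod from a file; <privileged-pod>.yaml is a placeholder for the file name that you chose:

    $ oc create -f <privileged-pod>.yaml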

4.12.4. Persistent storage using Logical Volume Manager Storage

Logical Volume Manager (LVM) Storage uses Logical Volume Manager (LVM2) through the TopoLVM Container Storage Interface (CSI) driver to dynamically provision local storage on a cluster with limited resources.

You can create volume groups, persistent volume claims (PVCs), volume snapshots, and volume clones by using LVM Storage.

4.12.4.1. Logical Volume Manager Storage installation

You can install Logical Volume Manager (LVM) Storage on a single-node OpenShift cluster and configure it to dynamically provision storage for your workloads.

You can deploy LVM Storage on single-node OpenShift clusters by using the OpenShift Container Platform CLI (oc), OpenShift Container Platform web console, or Red Hat Advanced Cluster Management (RHACM).

4.12.4.1.1. Prerequisites to install LVM Storage

The prerequisites to install LVM Storage are as follows:

  • Ensure that you have a minimum of 10 milliCPU and 100 MiB of RAM.
  • Ensure that every managed cluster has dedicated disks that are used to provision storage. LVM Storage uses only those disks that are empty and do not contain file system signatures. To ensure that the disks are empty and do not contain file system signatures, wipe the disks before using them (see the example after this list).
  • Before installing LVM Storage in a private CI environment where you can reuse the storage devices that you configured in the previous LVM Storage installation, ensure that you have wiped the disks that are not in use. If you do not wipe the disks before installing LVM Storage, you cannot reuse the disks without manual intervention.

    Note

    You cannot wipe the disks that are in use.

  • If you want to install LVM Storage by using Red Hat Advanced Cluster Management (RHACM), ensure that you have installed RHACM on an OpenShift Container Platform cluster. For more information, see "Installing LVM Storage by using RHACM".
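
The following is a minimal sketch of wiping a disk so that it contains no file system signatures, where <device> is a placeholder for the unused disk that you intend to dedicate to LVM Storage. Double-check the device name before running the command, because the operation is destructive:

    # wipefs --all /dev/<device>
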
4.12.4.1.2. Installing LVM Storage by using the CLI

As a cluster administrator, you can install Logical Volume Manager (LVM) Storage by using the OpenShift CLI (oc).

Prerequisites

  • You have installed the OpenShift CLI (oc).
  • You have logged in to OpenShift Container Platform as a user with cluster-admin and Operator installation permissions.

Procedure

  1. Create a YAML file and add the configuration for creating a namespace.

    Example YAML configuration for creating a namespace

    apiVersion: v1
    kind: Namespace
    metadata:
      labels:
        openshift.io/cluster-monitoring: "true"
        pod-security.kubernetes.io/enforce: privileged
        pod-security.kubernetes.io/audit: privileged
        pod-security.kubernetes.io/warn: privileged
      name: openshift-storage

  2. Create the namespace by running the following command:

    $ oc create -f <file_name>
  3. Create an OperatorGroup custom resource (CR) YAML file.

    Example OperatorGroup CR

    apiVersion: operators.coreos.com/v1
    kind: OperatorGroup
    metadata:
      name: openshift-storage-operatorgroup
      namespace: openshift-storage
    spec:
      targetNamespaces:
      - openshift-storage

  4. Create the OperatorGroup CR by running the following command:

    $ oc create -f <file_name>
  5. Create a Subscription CR YAML file.

    Example Subscription CR

    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: lvms
      namespace: openshift-storage
    spec:
      installPlanApproval: Automatic
      name: lvms-operator
      source: redhat-operators
      sourceNamespace: openshift-marketplace

  6. Create the Subscription CR by running the following command:

    $ oc create -f <file_name>

Verification

  1. To verify that LVM Storage is installed, run the following command:

    $ oc get csv -n openshift-storage -o custom-columns=Name:.metadata.name,Phase:.status.phase

    Example output

    Name                         Phase
    4.13.0-202301261535          Succeeded

4.12.4.1.3. Installing LVM Storage by using the web console

You can install Logical Volume Manager (LVM) Storage by using the OpenShift Container Platform web console.

Prerequisites

  • You have access to the single-node OpenShift cluster.
  • You have access to OpenShift Container Platform with cluster-admin and Operator installation permissions.

Procedure

  1. Log in to the OpenShift Container Platform web console.
  2. Click Operators OperatorHub.
  3. Click LVM Storage on the OperatorHub page.
  4. Set the following options on the Operator Installation page:

    1. Update Channel as stable-4.14.
    2. Installation Mode as A specific namespace on the cluster.
    3. Installed Namespace as Operator recommended namespace openshift-storage. If the openshift-storage namespace does not exist, it is created during the operator installation.
    4. Update approval as Automatic or Manual.

      Note

      If you select Automatic updates, the Operator Lifecycle Manager (OLM) automatically updates the running instance of LVM Storage without any intervention.

      If you select Manual updates, the OLM creates an update request. As a cluster administrator, you must manually approve the update request to update LVM Storage to a newer version.

  5. Optional: Select the Enable Operator recommended cluster monitoring on this Namespace checkbox.
  6. Click Install.

Verification steps

  • Verify that LVM Storage shows a green tick, indicating successful installation.
4.12.4.1.4. Installing LVM Storage in a disconnected environment

You can install Logical Volume Manager (LVM) Storage on OpenShift Container Platform 4.14 in a disconnected environment. All sections referenced in this procedure are linked in the "Additional resources" section.

Prerequisites

  • You read the "About disconnected installation mirroring" section.
  • You have access to the OpenShift Container Platform image repository.
  • You created a mirror registry.

Procedure

  1. Follow the steps in the "Creating the image set configuration" procedure. To create an image set configuration for LVM Storage, you can use the following example ImageSetConfiguration object configuration:

    Example ImageSetConfiguration file for LVM Storage

    kind: ImageSetConfiguration
    apiVersion: mirror.openshift.io/v1alpha2
    archiveSize: 4 1
    storageConfig: 2
      registry:
        imageURL: example.com/mirror/oc-mirror-metadata 3
        skipTLS: false
    mirror:
      platform:
        channels:
        - name: stable-4.14 4
          type: ocp
        graph: true 5
      operators:
      - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.14 6
        packages:
        - name: lvms-operator 7
          channels:
          - name: stable 8
      additionalImages:
      - name: registry.redhat.io/ubi9/ubi:latest 9
      helm: {}

    1
    Set the maximum size (in gibibytes) of each file within the image set.
    2
    Specify the location in which you want to save the image set. This location can be a registry or a local directory.
    3
    Specify the storage URL for the image stream when using a registry. For more information, see "Why use imagestreams".
    4
    Specify the channel from which you want to retrieve the OpenShift Container Platform images.
    5
    Set this field to true to generate the OpenShift Update Service (OSUS) graph image. For more information, see "About the OpenShift Update Service".
    6
    Specify the Operator catalog from which you want to retrieve the OpenShift Container Platform images.
    7
    Specify the Operator packages to include in the image set. If this field is empty, all packages in the catalog are retrieved.
    8
    Specify the channels of the Operator packages to include in the image set. You must include the default channel for the Operator package even if you do not use the bundles in that channel. You can find the default channel by running the following command: $ oc mirror list operators --catalog=<catalog_name> --package=<package_name>.
    9
    Specify any additional images to include in the image set.
  2. Follow the procedure in the "Mirroring an image set to a mirror registry" section.
  3. Follow the procedure in the "Configuring image registry repository mirroring" section.
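
The mirroring step runs the oc-mirror plugin against the image set configuration that you created in step 1. The following is a sketch only; the configuration file name and the mirror registry host are placeholders for your environment:

$ oc mirror --config=imageset-config.yaml docker://mirror.example.com:8443
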
4.12.4.1.5. Installing LVM Storage by using RHACM

To install Logical Volume Manager (LVM) Storage on the clusters by using Red Hat Advanced Cluster Management (RHACM), you must create a Policy custom resource (CR). You can also configure the criteria to select the clusters on which you want to install LVM Storage.

Note

The Policy CR that is created to install LVM Storage is also applied to the clusters that are imported or created after creating the Policy CR.

Prerequisites

  • You have access to the RHACM cluster using an account with cluster-admin and Operator installation permissions.
  • You have dedicated disks that LVM Storage can use on each cluster.
  • The cluster must be managed by RHACM.

Procedure

  1. Log in to the RHACM CLI using your OpenShift Container Platform credentials.
  2. Create a namespace by running the following command:

    $ oc create ns <namespace>
  3. Create a Policy CR YAML file.

    Example Policy CR to install and configure LVM Storage

    apiVersion: apps.open-cluster-management.io/v1
    kind: PlacementRule
    metadata:
      name: placement-install-lvms
    spec:
      clusterConditions:
      - status: "True"
        type: ManagedClusterConditionAvailable
      clusterSelector: 1
        matchExpressions:
        - key: mykey
          operator: In
          values:
          - myvalue
    ---
    apiVersion: policy.open-cluster-management.io/v1
    kind: PlacementBinding
    metadata:
      name: binding-install-lvms
    placementRef:
      apiGroup: apps.open-cluster-management.io
      kind: PlacementRule
      name: placement-install-lvms
    subjects:
    - apiGroup: policy.open-cluster-management.io
      kind: Policy
      name: install-lvms
    ---
    apiVersion: policy.open-cluster-management.io/v1
    kind: Policy
    metadata:
      annotations:
        policy.open-cluster-management.io/categories: CM Configuration Management
        policy.open-cluster-management.io/controls: CM-2 Baseline Configuration
        policy.open-cluster-management.io/standards: NIST SP 800-53
      name: install-lvms
    spec:
      disabled: false
      remediationAction: enforce
      policy-templates:
      - objectDefinition:
          apiVersion: policy.open-cluster-management.io/v1
          kind: ConfigurationPolicy
          metadata:
            name: install-lvms
          spec:
            object-templates:
            - complianceType: musthave
              objectDefinition: 2
                apiVersion: v1
                kind: Namespace
                metadata:
                  labels:
                    openshift.io/cluster-monitoring: "true"
                    pod-security.kubernetes.io/enforce: privileged
                    pod-security.kubernetes.io/audit: privileged
                    pod-security.kubernetes.io/warn: privileged
                  name: openshift-storage
            - complianceType: musthave
              objectDefinition: 3
                apiVersion: operators.coreos.com/v1
                kind: OperatorGroup
                metadata:
                  name: openshift-storage-operatorgroup
                  namespace: openshift-storage
                spec:
                  targetNamespaces:
                  - openshift-storage
            - complianceType: musthave
              objectDefinition: 4
                apiVersion: operators.coreos.com/v1alpha1
                kind: Subscription
                metadata:
                  name: lvms
                  namespace: openshift-storage
                spec:
                  installPlanApproval: Automatic
                  name: lvms-operator
                  source: redhat-operators
                  sourceNamespace: openshift-marketplace
            remediationAction: enforce
            severity: low

    1
    Set the key field and values field in PlacementRule.spec.clusterSelector to match the labels that are configured in the clusters on which you want to install LVM Storage.
    2
    The namespace configuration.
    3
    The OperatorGroup CR configuration.
    4
    The Subscription CR configuration.
  4. Create the Policy CR by running the following command:

    $ oc create -f <file_name> -n <namespace>

    Upon creating the Policy CR, the following custom resources are created on the clusters that match the selection criteria configured in the PlacementRule CR:

    • Namespace
    • OperatorGroup
    • Subscription
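
To check that the policy has been propagated and is compliant, you can run a command similar to the following against the RHACM hub cluster, in the namespace where you created the Policy CR:

$ oc get policy -n <namespace>

When the install-lvms policy reports the Compliant state, the Operator resources have been created on the selected clusters.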

4.12.4.2. Limitations to configure the size of the devices used in LVM Storage

The following limitations apply to the size of the devices that you can use to provision storage by using LVM Storage:

  • The total storage size that you can provision is limited by the size of the underlying Logical Volume Manager (LVM) thin pool and the over-provisioning factor.
  • The size of the logical volume depends on the size of the Physical Extent (PE) and the Logical Extent (LE).

    • You can define the size of PE and LE during the physical and logical device creation.
    • The default PE and LE size is 4 MB.
    • If you increase the PE size, the maximum size of a logical volume is determined by the kernel limits and your disk space (see the example check after the table).
Table 4.5. Size limits for different architectures using the default PE and LE size

Architecture   RHEL 6                  RHEL 7                  RHEL 8   RHEL 9
32-bit         16 TB                   -                       -        -
64-bit         8 EB [1], 100 TB [2]    8 EB [1], 500 TB [2]    8 EB     8 EB

  1. Theoretical size.
  2. Tested size.
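
If you need to confirm the extent size that is in use on a node, you can inspect the volume group directly. The following is a sketch only; it assumes debug access to the node and that the lvm2 command-line tools are available on the host:

$ oc debug node/<node_name> -- chroot /host vgs -o vg_name,vg_extent_size

Unless it was changed when the volume group was created, the default 4 MB extent size is reported.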

4.12.4.3. About the LVMCluster custom resource

You can configure the LVMCluster custom resource (CR) to perform the following actions:

  • Create LVM volume groups that you can use to provision persistent volume claims (PVCs).
  • Configure a list of devices that you want to add to the LVM volume groups.
  • Configure the requirements to select the nodes on which you want to create an LVM volume group, and the thin pool configuration for the volume group.

After you have installed LVM Storage, you must create an LVMCluster custom resource (CR).

Example LVMCluster CR YAML file

apiVersion: lvm.topolvm.io/v1alpha1
kind: LVMCluster
metadata:
  name: my-lvmcluster
spec:
  tolerations:
  - effect: NoSchedule
    key: xyz
    operator: Equal
    value: "true"
  storage:
    deviceClasses:
    - name: vg1
      fstype: ext4 1
      default: true
      nodeSelector: 2
        nodeSelectorTerms:
        - matchExpressions:
          - key: mykey
            operator: In
            values:
            - ssd
      deviceSelector: 3
        paths:
        - /dev/disk/by-path/pci-0000:87:00.0-nvme-1
        - /dev/disk/by-path/pci-0000:88:00.0-nvme-1
        optionalPaths:
        - /dev/disk/by-path/pci-0000:89:00.0-nvme-1
        - /dev/disk/by-path/pci-0000:90:00.0-nvme-1
      thinPoolConfig:
        name: thin-pool-1
        sizePercent: 90 4
        overprovisionRatio: 10

1 2 3 4
Optional field
Explanation of fields in the LVMCluster CR

The LVMCluster CR fields are described in the following table:

Table 4.6. LVMCluster CR fields

spec.storage.deviceClasses (array)
  Contains the configuration to assign the local storage devices to the LVM volume groups.

  LVM Storage creates a storage class and volume snapshot class for each device class that you create.

  If you add or remove a device class, the update is reflected in the cluster only after you delete and recreate the topolvm-node pod.

deviceClasses.name (string)
  Specify a name for the LVM volume group (VG).

deviceClasses.fstype (string)
  Set this field to ext4 or xfs. By default, this field is set to xfs.

deviceClasses.default (boolean)
  Set this field to true to indicate that a device class is the default. Otherwise, you can set it to false. You can only configure a single default device class.

deviceClasses.nodeSelector (object)
  Contains the configuration to choose the nodes on which you want to create the LVM volume group. If this field is empty, all nodes without no-schedule taints are considered.

  On the control-plane node, LVM Storage detects and uses the additional worker nodes when the new nodes become active in the cluster.

nodeSelector.nodeSelectorTerms (array)
  Configure the requirements that are used to select the node.

deviceClasses.deviceSelector (object)
  Contains the configuration to specify the paths to the devices that you want to add to the LVM volume group.

  For more information, see "About adding devices to a volume group".

deviceSelector.paths (array)
  Specify the device paths.

  If the device path specified in this field does not exist, the LVMCluster CR moves to the Failed state.

deviceSelector.optionalPaths (array)
  Specify the optional device paths.

  If the device path specified in this field does not exist, LVM Storage ignores the device without causing an error.

deviceClasses.thinPoolConfig (object)
  Contains the configuration to create a thin pool in the LVM volume group.

thinPoolConfig.name (string)
  Specify a name for the thin pool.

thinPoolConfig.sizePercent (integer)
  Specify the percentage of space in the LVM volume group for creating the thin pool.

  By default, this field is set to 90. The minimum value that you can set is 10, and the maximum value is 90.

thinPoolConfig.overprovisionRatio (integer)
  Specify a factor by which you can provision additional storage based on the available storage in the thin pool.

  For example, if this field is set to 10, you can provision up to 10 times the amount of available storage in the thin pool.

  To disable over-provisioning, set this field to 1.

4.12.4.3.1. About adding devices to a volume group

The deviceSelector field in the LVMCluster custom resource (CR) contains the configuration to specify the paths to the devices that you want to add to the LVM volume group.

You can specify the device paths in the deviceSelector.paths field, the deviceSelector.optionalPaths field, or both. If you do not specify the device paths in both the deviceSelector.paths field and the deviceSelector.optionalPaths field, LVM Storage adds the supported unused devices to the LVM volume group.

If you do not add the deviceSelector field in the LVMCluster CR, LVM Storage automatically adds the new devices when the devices are available.

LVM Storage adds the devices to the LVM volume group only if the device path exists.

Important

After a device is added to the LVM volume group, it cannot be removed.
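
Because LVM Storage only adds a device if its path exists, it can help to list the candidate devices and their stable by-path names before you configure the deviceSelector field. A hypothetical inspection, assuming debug access to the node:

$ oc debug node/<node_name> -- chroot /host ls -l /dev/disk/by-path/

Prefer the /dev/disk/by-path/ symlinks over names such as /dev/sdb, because by-path names remain stable across reboots.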

4.12.4.4. Ways to create an LVMCluster custom resource

You can create an LVMCluster custom resource (CR) by using the OpenShift CLI (oc) or the OpenShift Container Platform web console. If you have installed LVM Storage by using Red Hat Advanced Cluster Management (RHACM), you can also create an LVMCluster CR by using RHACM.

Upon creating the LVMCluster CR, LVM Storage creates the following system-managed CRs:

  • A storageClass and volumeSnapshotClass for each device class.

    Note

    LVM Storage configures the name of the storage class and volume snapshot class in the format lvms-<device_class_name>, where, <device_class_name> is the value of the deviceClasses.name field in the LVMCluster CR. For example, if the deviceClasses.name field is set to vg1, the name of the storage class and volume snapshot class is lvms-vg1.

  • LVMVolumeGroup: This CR is a specific type of persistent volume (PV) that is backed by an LVM volume group. It tracks the individual volume groups across multiple nodes.
  • LVMVolumeGroupNodeStatus: This CR tracks the status of the volume groups on a node.
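
If you want to inspect these system-managed CRs, you can list them by their full resource names. A sketch, assuming that LVM Storage is installed in the openshift-storage namespace:

$ oc get lvmvolumegroups.lvm.topolvm.io -n openshift-storage
$ oc get lvmvolumegroupnodestatuses.lvm.topolvm.io -n openshift-storage
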
4.12.4.4.1. Creating an LVMCluster CR by using the CLI

You can create an LVMCluster custom resource (CR) on a worker node using the OpenShift CLI (oc).

Important

You can only create a single instance of the LVMCluster custom resource (CR) on an OpenShift Container Platform cluster.

Prerequisites

  • You have installed the OpenShift CLI (oc).
  • You have logged in to OpenShift Container Platform as a user with cluster-admin privileges.
  • You have installed LVM Storage.
  • You have installed a worker node in the cluster.
  • You read the "About the LVMCluster custom resource" section.

Procedure

  1. Create an LVMCluster custom resource (CR) YAML file:

    Example LVMCluster CR YAML file

    apiVersion: lvm.topolvm.io/v1alpha1
    kind: LVMCluster
    metadata:
      name: my-lvmcluster
      namespace: openshift-storage
    spec:
    # ...
      storage:
        deviceClasses: 1
    # ...
          nodeSelector: 2
    # ...
          deviceSelector: 3
    # ...
          thinPoolConfig: 4
    # ...

    1
    Contains the configuration to assign the local storage devices to the LVM volume groups.
    2
    Contains the configuration to choose the nodes on which you want to create the LVM volume group. If this field is empty, all nodes without no-schedule taints are considered.
    3
    Contains the configuration to specify the paths to the devices that you want to add to the LVM volume group.
    4
    Contains the configuration to create a thin pool in the LVM volume group.
  2. Create the LVMCluster CR by running the following command:

    $ oc create -f <file_name>

    Example output

    lvmcluster/lvmcluster created

Verification

  1. Check that the LVMCluster CR is in the Ready state:

    $ oc get lvmclusters.lvm.topolvm.io -o jsonpath='{.items[*].status}' -n <namespace>

    Example output

    {"deviceClassStatuses": 1
    [
      {
        "name": "vg1",
        "nodeStatus": [ 2
            {
                "devices": [ 3
                    "/dev/nvme0n1",
                    "/dev/nvme1n1",
                    "/dev/nvme2n1"
                ],
                "node": "kube-node", 4
                "status": "Ready" 5
            }
        ]
      }
    ]
    "state":"Ready"} 6

    1
    The status of the device class.
    2
    The status of the LVM volume group on each node.
    3
    The list of devices used to create the LVM volume group.
    4
    The node on which the device class is created.
    5
    The status of the LVM volume group on the node.
    6
    The status of the LVMCluster CR.
    Note

    If the LVMCluster CR is in the Failed state, you can view the reason for failure in the status field.

    Example status field with the reason for failure:

    status:
      deviceClassStatuses:
        - name: vg1
          nodeStatus:
            - node: my-node-1.example.com
              reason: no available devices found for volume group
              status: Failed
      state: Failed
  2. Optional: To view the storage classes created by LVM Storage for each device class, run the following command:

    $ oc get storageclass

    Example output

    NAME          PROVISIONER          RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
    lvms-vg1      topolvm.io           Delete          WaitForFirstConsumer   true                   31m

  3. Optional: To view the volume snapshot classes created by LVM Storage for each device class, run the following command:

    $ oc get volumesnapshotclass

    Example output

    NAME          DRIVER               DELETIONPOLICY   AGE
    lvms-vg1      topolvm.io           Delete           24h

4.12.4.4.2. Creating an LVMCluster CR by using the web console

You can create an LVMCluster CR on a worker node using the OpenShift Container Platform web console.

Important

You can only create a single instance of the LVMCluster custom resource (CR) on an OpenShift Container Platform cluster.

Prerequisites

  • You have access to the OpenShift Container Platform cluster with cluster-admin privileges.
  • You have installed LVM Storage.
  • You have installed a worker node in the cluster.
  • You read the "About the LVMCluster custom resource" section.

Procedure

  1. Log in to the OpenShift Container Platform web console.
  2. Click Operators Installed Operators.
  3. In the openshift-storage namespace, click LVM Storage.
  4. Click Create LVMCluster and select either Form view or YAML view.
  5. Configure the required LVMCluster CR parameters.
  6. Click Create.
  7. Optional: If you want to edit the LVMCluster CR, perform the following actions:

    1. Click the LVMCluster tab.
    2. From the Actions menu, select Edit LVMCluster.
    3. Click YAML and edit the required LVMCluster CR parameters.
    4. Click Save.

Verification

  1. On the LVMCluster page, check that the LVMCluster CR is in the Ready state.
  2. Optional: To view the available storage classes created by LVM Storage for each device class, click Storage StorageClasses.
  3. Optional: To view the available volume snapshot classes created by LVM Storage for each device class, click Storage VolumeSnapshotClasses.
4.12.4.4.3. Creating an LVMCluster CR by using RHACM

After you have installed Logical Volume Manager (LVM) Storage by using RHACM, you must create an LVMCluster custom resource (CR).

Prerequisites

  • You have installed LVM Storage by using RHACM.
  • You have access to the RHACM cluster using an account with cluster-admin permissions.

Procedure

  1. Log in to the RHACM CLI using your OpenShift Container Platform credentials.
  2. Create a ConfigurationPolicy CR YAML file with the configuration to create an LVMCluster CR.

    Example ConfigurationPolicy CR YAML file to create an LVMCluster CR

    apiVersion: policy.open-cluster-management.io/v1
    kind: ConfigurationPolicy
    metadata:
      name: lvms
      namespace: openshift-storage
    spec:
      object-templates:
      - complianceType: musthave
        objectDefinition:
          apiVersion: lvm.topolvm.io/v1alpha1
          kind: LVMCluster
          metadata:
            name: my-lvmcluster
            namespace: openshift-storage
          spec:
            storage:
              deviceClasses: 1
    # ...
                deviceSelector: 2
    # ...
                thinPoolConfig: 3
    # ...
                nodeSelector: 4
    # ...
      remediationAction: enforce
      severity: low

    1
    Contains the configuration to assign the local storage devices to the LVM volume groups.
    2
    Contains the configuration to specify the paths to the devices that you want to add to the LVM volume group.
    3
    Contains the configuration to create a thin pool in the LVM volume group.
    4
    Contains the configuration to choose the nodes on which you want to create the LVM volume groups. If this field is empty, then all nodes without no-schedule taints are considered.
  3. Create the ConfigurationPolicy CR by running the following command:

    $ oc create -f <file_name> -n <cluster_namespace> 1
    1
    Namespace of the OpenShift Container Platform cluster on which LVM Storage is installed.

4.12.4.5. Ways to delete an LVMCluster custom resource

You can delete an LVMCluster custom resource (CR) by using the OpenShift CLI (oc) or the OpenShift Container Platform web console. If you have installed LVM Storage by using Red Hat Advanced Cluster Management (RHACM), you can also delete an LVMCluster CR by using RHACM.

Upon deleting the LVMCluster CR, LVM Storage deletes the following CRs:

  • storageClass
  • volumeSnapshotClass
  • LVMVolumeGroup
  • LVMVolumeGroupNodeStatus
4.12.4.5.1. Deleting an LVMCluster CR by using the CLI

You can delete the LVMCluster custom resource (CR) using the OpenShift CLI (oc).

Prerequisites

  • You have access to OpenShift Container Platform as a user with cluster-admin permissions.
  • You have deleted the persistent volume claims (PVCs), volume snapshots, and volume clones provisioned by LVM Storage. You have also deleted the applications that are using these resources.
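
If you are not sure whether any LVM Storage-provisioned resources remain, a check similar to the following can help. The lvms- prefix matches the storage classes and volume snapshot classes that LVM Storage creates:

$ oc get pvc --all-namespaces | grep lvms-
$ oc get volumesnapshot --all-namespaces | grep lvms-

Any rows returned indicate resources that you must delete before deleting the LVMCluster CR.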

Procedure

  1. Log in to the OpenShift CLI (oc).
  2. Delete the LVMCluster CR by running the following command:

    $ oc delete lvmcluster <lvmclustername> -n openshift-storage

Verification

  • To verify that the LVMCluster CR has been deleted, run the following command:

    $ oc get lvmcluster -n <namespace>

    Example output

    No resources found in openshift-storage namespace.

4.12.4.5.2. Deleting an LVMCluster CR by using the web console

You can delete the LVMCluster custom resource (CR) using the OpenShift Container Platform web console.

Prerequisites

  • You have access to OpenShift Container Platform as a user with cluster-admin permissions.
  • You have deleted the persistent volume claims (PVCs), volume snapshots, and volume clones provisioned by LVM Storage. You have also deleted the applications that are using these resources.

Procedure

  1. Log in to the OpenShift Container Platform web console.
  2. Click Operators Installed Operators to view all the installed Operators.
  3. Click LVM Storage in the openshift-storage namespace.
  4. Click the LVMCluster tab.
  5. From the Actions menu, select Delete LVMCluster.
  6. Click Delete.

Verification

  • On the LVMCluster page, check that the LVMCluster CR has been deleted.
4.12.4.5.3. Deleting an LVMCluster CR by using RHACM

If you have installed Logical Volume Manager (LVM) Storage by using Red Hat Advanced Cluster Management (RHACM), you can delete an LVMCluster custom resource (CR) by using RHACM.

Prerequisites

  • You have access to the RHACM cluster as a user with cluster-admin permissions.
  • You have deleted the following resources provisioned by LVM Storage:

    • Persistent volume claims (PVCs)
    • Volume snapshots
    • Volume clones

      You have also deleted any applications that are using these resources.

Procedure

  1. Log in to the RHACM CLI using your OpenShift Container Platform credentials.
  2. Delete the ConfigurationPolicy CR for the LVMCluster CR by running the following command:

    $ oc delete -f <file_name> -n <cluster_namespace> 1
    1
    Namespace of the OpenShift Container Platform cluster on which LVM Storage is installed.
  3. Create a Policy CR YAML file to delete the LVMCluster CR.

    Example Policy CR to delete the LVMCluster CR

    apiVersion: policy.open-cluster-management.io/v1
    kind: Policy
    metadata:
      name: policy-lvmcluster-delete
      annotations:
        policy.open-cluster-management.io/standards: NIST SP 800-53
        policy.open-cluster-management.io/categories: CM Configuration Management
        policy.open-cluster-management.io/controls: CM-2 Baseline Configuration
    spec:
      remediationAction: enforce
      disabled: false
      policy-templates:
        - objectDefinition:
            apiVersion: policy.open-cluster-management.io/v1
            kind: ConfigurationPolicy
            metadata:
              name: policy-lvmcluster-removal
            spec:
              remediationAction: enforce 1
              severity: low
              object-templates:
                - complianceType: mustnothave
                  objectDefinition:
                    kind: LVMCluster
                    apiVersion: lvm.topolvm.io/v1alpha1
                    metadata:
                      name: my-lvmcluster
                      namespace: openshift-storage 2
    ---
    apiVersion: policy.open-cluster-management.io/v1
    kind: PlacementBinding
    metadata:
      name: binding-policy-lvmcluster-delete
    placementRef:
      apiGroup: apps.open-cluster-management.io
      kind: PlacementRule
      name: placement-policy-lvmcluster-delete
    subjects:
      - apiGroup: policy.open-cluster-management.io
        kind: Policy
        name: policy-lvmcluster-delete
    ---
    apiVersion: apps.open-cluster-management.io/v1
    kind: PlacementRule
    metadata:
      name: placement-policy-lvmcluster-delete
    spec:
      clusterConditions:
        - status: "True"
          type: ManagedClusterConditionAvailable
      clusterSelector: 3
        matchExpressions:
          - key: mykey
            operator: In
            values:
              - myvalue

    1
    The spec.remediationAction in policy-template is overridden by the preceding parameter value for spec.remediationAction.
    2
    This namespace field must have the openshift-storage value.
    3
    Configure the requirements to select the clusters. The LVMCluster CR is deleted on the clusters that match the selection criteria.
  4. Create the Policy CR by running the following command:

    $ oc create -f <file_name> -n <namespace>
  5. Create a Policy CR YAML file to check if the LVMCluster CR has been deleted.

    Example Policy CR to check if the LVMCluster CR has been deleted

    apiVersion: policy.open-cluster-management.io/v1
    kind: Policy
    metadata:
      name: policy-lvmcluster-inform
      annotations:
        policy.open-cluster-management.io/standards: NIST SP 800-53
        policy.open-cluster-management.io/categories: CM Configuration Management
        policy.open-cluster-management.io/controls: CM-2 Baseline Configuration
    spec:
      remediationAction: inform
      disabled: false
      policy-templates:
        - objectDefinition:
            apiVersion: policy.open-cluster-management.io/v1
            kind: ConfigurationPolicy
            metadata:
              name: policy-lvmcluster-removal-inform
            spec:
              remediationAction: inform 1
              severity: low
              object-templates:
                - complianceType: mustnothave
                  objectDefinition:
                    kind: LVMCluster
                    apiVersion: lvm.topolvm.io/v1alpha1
                    metadata:
                      name: my-lvmcluster
                      namespace: openshift-storage 2
    ---
    apiVersion: policy.open-cluster-management.io/v1
    kind: PlacementBinding
    metadata:
      name: binding-policy-lvmcluster-check
    placementRef:
      apiGroup: apps.open-cluster-management.io
      kind: PlacementRule
      name: placement-policy-lvmcluster-check
    subjects:
      - apiGroup: policy.open-cluster-management.io
        kind: Policy
        name: policy-lvmcluster-inform
    ---
    apiVersion: apps.open-cluster-management.io/v1
    kind: PlacementRule
    metadata:
      name: placement-policy-lvmcluster-check
    spec:
      clusterConditions:
        - status: "True"
          type: ManagedClusterConditionAvailable
      clusterSelector:
        matchExpressions:
          - key: mykey
            operator: In
            values:
              - myvalue

    1
    The policy-template spec.remediationAction is overridden by the preceding parameter value for spec.remediationAction.
    2
    The namespace field must have the openshift-storage value.
  6. Create the Policy CR by running the following command:

    $ oc create -f <file_name> -n <namespace>

Verification

  • Check the status of the Policy CRs by running the following command:

    $ oc get policy -n <namespace>

    Example output

    NAME                       REMEDIATION ACTION   COMPLIANCE STATE   AGE
    policy-lvmcluster-delete   enforce              Compliant          15m
    policy-lvmcluster-inform   inform               Compliant          15m

    Important

    The Policy CRs must be in Compliant state.

4.12.4.6. Provisioning storage

After you have created the LVM volume groups using the LVMCluster custom resource (CR), you can provision the storage by creating persistent volume claims (PVCs).

To create a PVC, you must create a PersistentVolumeClaim object.

Prerequisites

  • You have created an LVMCluster CR.

Procedure

  1. Log in to the OpenShift CLI (oc).
  2. Create a PersistentVolumeClaim object similar to the following:

    Example PersistentVolumeClaim object

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: lvm-block-1 1
      namespace: default
    spec:
      accessModes:
        - ReadWriteOnce
      volumeMode: Block 2
      resources:
        requests:
          storage: 10Gi 3
      storageClassName: lvms-vg1 4

    1
    Specify a name for the PVC.
    2
    To create a block PVC, set this field to Block. To create a file PVC, set this field to Filesystem.
    3
    Specify the storage size. Logical Volume Manager (LVM) Storage provisions PVCs in units of 1 GiB (gibibytes). The requested storage is rounded up to the nearest GiB. The total storage size you can provision is limited by the size of the LVM thin pool and the overprovisioning factor.
    4
    The value of the storageClassName field must be in the format lvms-<device_class_name> where <device_class_name> is the value of the deviceClasses.name field in the LVMCluster CR. For example, if the deviceClasses.name field is set to vg1, you must set the storageClassName field to lvms-vg1.
    Note

    The volumeBindingMode field of the storage class is set to WaitForFirstConsumer.

  3. Create the PVC by running the following command:

    $ oc create -f <file_name> -n <application_namespace>
    Note

    The created PVCs remain in Pending state until you deploy the workloads that use them. A minimal example pod that consumes the PVC follows the verification step.

Verification

  • To verify that the PVC is created, run the following command:

    $ oc get pvc -n <namespace>

    Example output

    NAME          STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    lvm-block-1   Bound    pvc-e90169a8-fd71-4eea-93b8-817155f60e47   1Gi        RWO            lvms-vg1       5s
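
Because the storage class uses the WaitForFirstConsumer volume binding mode, a newly created PVC stays Pending until a workload consumes it. The following is a minimal, hypothetical pod that consumes the block PVC from the preceding example; the pod name and container image are placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: lvm-block-1-consumer # hypothetical pod name
  namespace: default
spec:
  containers:
  - name: app
    image: registry.access.redhat.com/ubi9/ubi-minimal:latest # placeholder image
    command: ["/bin/sh", "-c", "sleep infinity"]
    volumeDevices: # raw block device, because the PVC uses volumeMode: Block
    - name: data
      devicePath: /dev/xvda
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: lvm-block-1 # the PVC created in the preceding procedure

After the pod is scheduled, the PVC is bound and the logical volume is provisioned.
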

4.12.4.7. Ways to scale up the storage of a single-node OpenShift cluster

You can scale up the storage of a single-node OpenShift cluster by adding new devices to the existing node.

To add a new device to the existing node on a single-node OpenShift cluster, you must add the path to the new device in the deviceSelector field of the LVMCluster custom resource (CR).

Important

You can add the deviceSelector field in the LVMCluster CR only while creating the LVMCluster CR. If you have not added the deviceSelector field while creating the LVMCluster CR, you must delete the LVMCluster CR and create a new LVMCluster CR containing the deviceSelector field.

If you do not add the deviceSelector field in the LVMCluster CR, LVM Storage automatically adds the new devices when the devices are available.

4.12.4.7.1. Scaling up the storage of a single-node OpenShift cluster by using the CLI

You can scale up the storage capacity of the existing node on a single-node OpenShift cluster by using the OpenShift CLI (oc).

Prerequisites

  • You have additional unused devices on the single-node OpenShift cluster to be used by Logical Volume Manager (LVM) Storage.
  • You have installed the OpenShift CLI (oc).
  • You have created an LVMCluster custom resource (CR).

Procedure

  1. Edit the LVMCluster CR by running the following command:

    $ oc edit <lvmcluster_file_name> -n <namespace>
  2. Add the path to the new device in the deviceSelector field:

    Example LVMCluster CR

    apiVersion: lvm.topolvm.io/v1alpha1
    kind: LVMCluster
    metadata:
      name: my-lvmcluster
    spec:
      storage:
        deviceClasses:
    # ...
          deviceSelector: 1
            paths: 2
            - /dev/disk/by-path/pci-0000:87:00.0-nvme-1
            - /dev/disk/by-path/pci-0000:88:00.0-nvme-1
            optionalPaths: 3
            - /dev/disk/by-path/pci-0000:89:00.0-nvme-1
            - /dev/disk/by-path/pci-0000:90:00.0-nvme-1
    # ...

    1
    Contains the configuration to specify the paths to the devices that you want to add to the Logical Volume Manager (LVM) volume group. You can specify the device paths in the paths field, the optionalPaths field, or both. If you do not specify the device paths in both paths and optionalPaths, LVM Storage adds the supported unused devices to the LVM volume group. LVM Storage adds the devices to the LVM volume group only if the device path exists.
    2
    Specify the device paths. If the device path specified in this field does not exist, the LVMCluster CR moves to the Failed state.
    3
    Specify the optional device paths. If the device path specified in this field does not exist, LVM Storage ignores the device without causing an error.
    Important

    After a device is added to the LVM volume group, it cannot be removed.

  3. Save the LVMCluster CR.
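
After you save the CR, you can confirm that the new device was added by reading the device list from the LVMCluster status. A sketch, using the same status fields shown in the "Creating an LVMCluster CR by using the CLI" verification:

$ oc get lvmclusters.lvm.topolvm.io -n openshift-storage -o jsonpath='{.items[*].status.deviceClassStatuses[*].nodeStatus[*].devices}'
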
4.12.4.7.2. Scaling up the storage of a single-node OpenShift cluster by using the web console

You can scale up the storage capacity of the existing node on a single-node OpenShift cluster by using the OpenShift Container Platform web console.

Prerequisites

  • You have additional unused devices on the single-node OpenShift cluster to be used by Logical Volume Manager (LVM) Storage.
  • You have created an LVMCluster custom resource (CR).

Procedure

  1. Log in to the OpenShift Container Platform web console.
  2. Click Operators Installed Operators.
  3. Click LVM Storage in the openshift-storage namespace.
  4. Click the LVMCluster tab to view the LVMCluster CR created on the cluster.
  5. From the Actions menu, select Edit LVMCluster.
  6. Click the YAML tab.
  7. Edit the LVMCluster CR to add the new device path in the deviceSelector field:

    Example LVMCluster CR

    apiVersion: lvm.topolvm.io/v1alpha1
    kind: LVMCluster
    metadata:
      name: my-lvmcluster
    spec:
      storage:
        deviceClasses:
    # ...
          deviceSelector: 1
            paths: 2
            - /dev/disk/by-path/pci-0000:87:00.0-nvme-1
            - /dev/disk/by-path/pci-0000:88:00.0-nvme-1
            optionalPaths: 3
            - /dev/disk/by-path/pci-0000:89:00.0-nvme-1
            - /dev/disk/by-path/pci-0000:90:00.0-nvme-1
    # ...

    1
    Contains the configuration to specify the paths to the devices that you want to add to the Logical Volume Manager (LVM) volume group. You can specify the device paths in the paths field, the optionalPaths field, or both. If you do not specify the device paths in both paths and optionalPaths, LVM Storage adds the supported unused devices to the LVM volume group. LVM Storage adds the devices to the LVM volume group only if the device path exists.
    2
    Specify the device paths. If the device path specified in this field does not exist, the LVMCluster CR moves to the Failed state.
    3
    Specify the optional device paths. If the device path specified in this field does not exist, LVM Storage ignores the device without causing an error.
    Important

    After a device is added to the LVM volume group, it cannot be removed.

  8. Click Save.
4.12.4.7.3. Scaling up the storage of single-node OpenShift clusters by using RHACM

You can scale up the storage capacity of the existing node on single-node OpenShift clusters by using RHACM.

Prerequisites

  • You have access to the RHACM cluster using an account with cluster-admin privileges.
  • You have created an LVMCluster custom resource (CR) by using RHACM.
  • You have additional unused devices on each single-node OpenShift cluster to be used by Logical Volume Manager (LVM) Storage.

Procedure

  1. Log in to the RHACM CLI using your OpenShift Container Platform credentials.
  2. Edit the LVMCluster CR that you created using RHACM by running the following command:

    $ oc edit -f <file_name> -n <namespace> 1
    1
    Replace <file_name> with the name of the LVMCluster CR.
  3. In the LVMCluster CR, add the path to the new device in the deviceSelector field.

    Example LVMCluster CR

    apiVersion: policy.open-cluster-management.io/v1
    kind: ConfigurationPolicy
    metadata:
      name: lvms
    spec:
      object-templates:
      - complianceType: musthave
        objectDefinition:
          apiVersion: lvm.topolvm.io/v1alpha1
          kind: LVMCluster
          metadata:
            name: my-lvmcluster
            namespace: openshift-storage
          spec:
            storage:
              deviceClasses:
    # ...
                deviceSelector: 1
                  paths: 2
                  - /dev/disk/by-path/pci-0000:87:00.0-nvme-1
                  optionalPaths: 3
                  - /dev/disk/by-path/pci-0000:89:00.0-nvme-1
    # ...

    1
    Contains the configuration to specify the paths to the devices that you want to add to the Logical Volume Manager (LVM) volume group. You can specify the device paths in the paths field, the optionalPaths field, or both. If you do not specify the device paths in both paths and optionalPaths, LVM Storage adds the unused devices to the LVM volume group. LVM Storage adds the devices to the LVM volume group only if the device path exists.
    2
    Specify the device paths. If the device path specified in this field does not exist, the LVMCluster CR moves to the Failed state.
    3
    Specify the optional device paths. If the device path specified in this field does not exist, LVM Storage ignores the device without causing an error.
    Important

    After a device is added to the LVM volume group, it cannot be removed.

  4. Save the LVMCluster CR.

4.12.4.8. Expanding a persistent volume claim

After scaling up the storage of a cluster, you can expand the existing persistent volume claims (PVCs).

To expand a PVC, you must update the requests.storage field in the PVC.

Prerequisites

  • Dynamic provisioning is used.
  • The StorageClass object associated with the PVC has the allowVolumeExpansion field set to true.

Procedure

  1. Log in to the OpenShift CLI (oc).
  2. Update the value of the spec.resources.requests.storage field to a value that is greater than the current value by running the following command:

    $ oc patch pvc <pvc_name> -n <application_namespace> -p \1
    '{ "spec": { "resources": { "requests": { "storage": "<desired_size>" }}}}' --type=merge 2
    1
    Replace <pvc_name> with the name of the PVC that you want to expand.
    2
    Replace <desired_size> with the new size to expand the PVC.

Verification

  • To verify that resizing is completed, run the following command:

    $ oc get pvc <pvc_name> -n <application_namespace> -o=jsonpath={.status.capacity.storage}

    Logical Volume Manager (LVM) Storage adds the Resizing condition to the PVC during expansion. It deletes the Resizing condition after the PVC expansion.

4.12.4.9. Deleting a persistent volume claim

You can delete a persistent volume claim (PVC) by using the OpenShift CLI (oc).

Prerequisites

  • You have access to OpenShift Container Platform as a user with cluster-admin permissions.

Procedure

  1. Log in to the OpenShift CLI (oc).
  2. Delete the PVC by running the following command:

    $ oc delete pvc <pvc_name> -n <namespace>

Verification

  • To verify that the PVC is deleted, run the following command:

    $ oc get pvc -n <namespace>

    The deleted PVC must not be present in the output of this command.

4.12.4.10. About volume snapshots

You can create snapshots of persistent volume claims (PVCs) that are provisioned by LVM Storage.

You can perform the following actions using the volume snapshots:

  • Back up your application data.

    Important

    Volume snapshots are located on the same devices as the original data. To use the volume snapshots as backups, you must move the snapshots to a secure location. You can use OpenShift API for Data Protection (OADP) backup and restore solutions. For information on OADP, see "OADP features".

  • Revert to a state at which the volume snapshot was taken.
Note

You can also create volume snapshots of volume clones.

Additional resources

4.12.4.10.1. Creating volume snapshots

You can create volume snapshots based on the available capacity of the thin pool and the over-provisioning limits. To create a volume snapshot, you must create a VolumeSnapshot object.

Prerequisites

  • You have access to OpenShift Container Platform as a user with cluster-admin permissions.
  • You ensured that the persistent volume claim (PVC) is in Bound state. This is required for a consistent snapshot.
  • You stopped all the I/O to the PVC.

Procedure

  1. Log in to the OpenShift CLI (oc).
  2. Create a VolumeSnapshot object:

    Example VolumeSnapshot object

    apiVersion: snapshot.storage.k8s.io/v1
    kind: VolumeSnapshot
    metadata:
      name: lvm-block-1-snap 1
    spec:
      source:
        persistentVolumeClaimName: lvm-block-1 2
      volumeSnapshotClassName: lvms-vg1 3

    1
    Specify a name for the volume snapshot.
    2
    Specify the name of the source PVC. LVM Storage creates a snapshot of this PVC.
    3
    Set this field to the name of a volume snapshot class.
    Note

    To get the list of available volume snapshot classes, run the following command:

    $ oc get volumesnapshotclass
  3. Create the volume snapshot in the namespace where you created the source PVC by running the following command:

    $ oc create -f <file_name> -n <namespace>

    LVM Storage creates a read-only copy of the PVC as a volume snapshot.

Verification

  • To verify that the volume snapshot is created, run the following command:

    $ oc get volumesnapshot -n <namespace>

    Example output

    NAME               READYTOUSE   SOURCEPVC     SOURCESNAPSHOTCONTENT   RESTORESIZE   SNAPSHOTCLASS   SNAPSHOTCONTENT                                    CREATIONTIME   AGE
    lvm-block-1-snap   true         lvms-test-1                           1Gi           lvms-vg1        snapcontent-af409f97-55fc-40cf-975f-71e44fa2ca91   19s            19s

    The value of the READYTOUSE field for the volume snapshot that you created must be true.

4.12.4.10.2. Restoring volume snapshots

To restore a volume snapshot, you must create a persistent volume claim (PVC) with the dataSource.name field set to the name of the volume snapshot.

The restored PVC is independent of the volume snapshot and the source PVC.

Prerequisites

  • You have access to OpenShift Container Platform as a user with cluster-admin permissions.
  • You have created a volume snapshot.

Procedure

  1. Log in to the OpenShift CLI (oc).
  2. Create a PersistentVolumeClaim object with the configuration to restore the volume snapshot:

    Example PersistentVolumeClaim object to restore a volume snapshot

    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: lvm-block-1-restore
    spec:
      accessModes:
      - ReadWriteOnce
      volumeMode: Block
      resources:
        requests:
          storage: 2Gi 1
      storageClassName: lvms-vg1 2
      dataSource:
        name: lvm-block-1-snap 3
        kind: VolumeSnapshot
        apiGroup: snapshot.storage.k8s.io

    1
    Specify the storage size of the PVC. The storage size of the requested PVC must be greater than or equal to the storage size of the volume snapshot that you want to restore. If a larger PVC is required, you can also resize the PVC after restoring the volume snapshot.
    2
    Set this field to the value of the storageClassName field in the source PVC of the volume snapshot that you want to restore.
    3
    Set this field to the name of the volume snapshot that you want to restore.
  3. Create the PVC in the namespace where you created the volume snapshot by running the following command:

    $ oc create -f <file_name> -n <namespace>

Verification

  • To verify that the volume snapshot is restored, create a workload using the restored PVC and then run the following command:

    $ oc get pvc -n <namespace>

    Example output

    NAME                  STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    lvm-block-1-restore   Bound    pvc-e90169a8-fd71-4eea-93b8-817155f60e47   1Gi        RWO            lvms-vg1       5s

4.12.4.10.3. Deleting volume snapshots

You can delete the volume snapshots of the persistent volume claims (PVCs).

Important

When you delete a persistent volume claim (PVC), LVM Storage deletes only the PVC, but not the snapshots of the PVC.

Prerequisites

  • You have access to OpenShift Container Platform as a user with cluster-admin permissions.
  • You have ensured that the volume snapshot that you want to delete is not in use.

Procedure

  1. Log in to the OpenShift CLI (oc).
  2. Delete the volume snapshot by running the following command:

    $ oc delete volumesnapshot <volume_snapshot_name> -n <namespace>

Verification

  • To verify that the volume snapshot is deleted, run the following command:

    $ oc get volumesnapshot -n <namespace>

    The deleted volume snapshot must not be present in the output of this command.

4.12.4.11. About volume clones

A volume clone is a duplicate of an existing persistent volume claim (PVC). You can create a volume clone to make a point-in-time copy of the data.

4.12.4.11.1. Creating volume clones

To create a clone of a persistent volume claim (PVC), you must create a PersistentVolumeClaim object in the namespace where you created the source PVC.

Important

The cloned PVC has write access.

Prerequisites

  • You ensured that the source PVC is in Bound state. This is required for a consistent clone.

Procedure

  1. Log in to the OpenShift CLI (oc).
  2. Create a PersistentVolumeClaim object:

    Example PersistentVolumeClaim object to create a volume clone

    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: lvm-pvc-clone
    spec:
      accessModes:
      - ReadWriteOnce
      storageClassName: lvms-vg1 1
      volumeMode: Filesystem 2
      dataSource:
        kind: PersistentVolumeClaim
        name: lvm-pvc 3
      resources:
        requests:
          storage: 1Gi 4

    1
    Set this field to the value of the storageClassName field in the source PVC.
    2
    Set this field to the value of the volumeMode field in the source PVC.
    3
    Specify the name of the source PVC.
    4
    Specify the storage size for the cloned PVC. The storage size of the cloned PVC must be greater than or equal to the storage size of the source PVC.
  3. Create the PVC in the namespace where you created the source PVC by running the following command:

    $ oc create -f <file_name> -n <namespace>

Verification

  • To verify that the volume clone is created, create a workload using the cloned PVC and then run the following command:

    $ oc get pvc -n <namespace>

    Example output

    NAME                STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    lvm-block-1-clone   Bound    pvc-e90169a8-fd71-4eea-93b8-817155f60e47   1Gi        RWO            lvms-vg1       5s

4.12.4.11.2. Deleting volume clones

You can delete volume clones.

Important

When you delete a source persistent volume claim (PVC), LVM Storage deletes only the source PVC, not the clones of the PVC.

Prerequisites

  • You have access to OpenShift Container Platform as a user with cluster-admin permissions.

Procedure

  1. Log in to the OpenShift CLI (oc).
  2. Delete the cloned PVC by running the following command:

    $ oc delete pvc <clone_pvc_name> -n <namespace>

Verification

  • To verify that the volume clone is deleted, run the following command:

    $ oc get pvc -n <namespace>

    The deleted volume clone must not be present in the output of this command.

4.12.4.12. Updating LVM Storage on a single-node OpenShift cluster

You can update LVM Storage to ensure compatibility with the single-node OpenShift version.

Prerequisites

  • You have updated your single-node OpenShift cluster.
  • You have installed a previous version of LVM Storage.
  • You have installed the OpenShift CLI (oc).
  • You have access to the cluster using an account with cluster-admin permissions.
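
Before you patch the subscription, you can check which update channel it currently tracks. A sketch, assuming that LVM Storage is installed in the openshift-storage namespace:

$ oc get subscription lvms-operator -n openshift-storage -o jsonpath='{.spec.channel}'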

Procedure

  1. Log in to the OpenShift CLI (oc).
  2. Update the Subscription custom resource (CR) that you created while installing LVM Storage by running the following command:

    $ oc patch subscription lvms-operator -n openshift-storage --type merge --patch '{"spec":{"channel":"<update_channel>"}}' 1
    1
    Replace <update_channel> with the version of LVM Storage that you want to install. For example, stable-4.14.
  3. View the update events to check that the installation is complete by running the following command:

    $ oc get events -n openshift-storage

    Example output

    ...
    8m13s       Normal    RequirementsUnknown   clusterserviceversion/lvms-operator.v4.14   requirements not yet checked
    8m11s       Normal    RequirementsNotMet    clusterserviceversion/lvms-operator.v4.14   one or more requirements couldn't be found
    7m50s       Normal    AllRequirementsMet    clusterserviceversion/lvms-operator.v4.14   all requirements found, attempting install
    7m50s       Normal    InstallSucceeded      clusterserviceversion/lvms-operator.v4.14   waiting for install components to report healthy
    7m49s       Normal    InstallWaiting        clusterserviceversion/lvms-operator.v4.14   installing: waiting for deployment lvms-operator to become ready: deployment "lvms-operator" waiting for 1 outdated replica(s) to be terminated
    7m39s       Normal    InstallSucceeded      clusterserviceversion/lvms-operator.v4.14   install strategy completed with no errors
    ...

Verification

  • Verify the LVM Storage version by running the following command:

    $ oc get subscription lvms-operator -n openshift-storage -o jsonpath='{.status.installedCSV}'

    Example output

    lvms-operator.v4.14

4.12.4.13. Monitoring LVM Storage

To enable cluster monitoring, you must add the following label in the namespace where you have installed LVM Storage:

openshift.io/cluster-monitoring=true
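
For example, assuming that LVM Storage is installed in the recommended openshift-storage namespace, a command similar to the following applies the label:

$ oc label namespace openshift-storage openshift.io/cluster-monitoring=true
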
Important

For information about enabling cluster monitoring in RHACM, see Observability and Adding custom metrics.

4.12.4.13.1. Metrics

You can monitor LVM Storage by viewing the metrics.

The following table describes the topolvm metrics:

Table 4.7. topolvm metrics

topolvm_thinpool_data_percent
  Indicates the percentage of data space used in the LVM thin pool.

topolvm_thinpool_metadata_percent
  Indicates the percentage of metadata space used in the LVM thin pool.

topolvm_thinpool_size_bytes
  Indicates the size of the LVM thin pool in bytes.

topolvm_volumegroup_available_bytes
  Indicates the available space in the LVM volume group in bytes.

topolvm_volumegroup_size_bytes
  Indicates the size of the LVM volume group in bytes.

topolvm_thinpool_overprovisioned_available
  Indicates the available over-provisioned size of the LVM thin pool in bytes.

Note

Metrics are updated every 10 minutes or when there is a change, such as a new logical volume creation, in the thin pool.
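
After monitoring is enabled, you can query these metrics in the OpenShift Container Platform web console by clicking Observe Metrics. Two illustrative PromQL expressions; any label selectors that you add are environment-specific assumptions:

topolvm_thinpool_data_percent
100 * topolvm_volumegroup_available_bytes / topolvm_volumegroup_size_bytes

The second expression reports the percentage of free space remaining in each LVM volume group.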

4.12.4.13.2. Alerts

When the thin pool and volume group reach maximum storage capacity, further operations fail. This can lead to data loss.

LVM Storage sends the following alerts when the usage of the thin pool and volume group exceeds a certain value:

Table 4.8. LVM Storage alerts

VolumeGroupUsageAtThresholdNearFull
  This alert is triggered when both the volume group usage and the thin pool usage exceed 75% on a node. Data deletion or volume group expansion is required.

VolumeGroupUsageAtThresholdCritical
  This alert is triggered when both the volume group usage and the thin pool usage exceed 85% on a node. In this case, the volume group is critically full. Data deletion or volume group expansion is required.

ThinPoolDataUsageAtThresholdNearFull
  This alert is triggered when the thin pool data usage in the volume group exceeds 75% on a node. Data deletion or thin pool expansion is required.

ThinPoolDataUsageAtThresholdCritical
  This alert is triggered when the thin pool data usage in the volume group exceeds 85% on a node. Data deletion or thin pool expansion is required.

ThinPoolMetaDataUsageAtThresholdNearFull
  This alert is triggered when the thin pool metadata usage in the volume group exceeds 75% on a node. Data deletion or thin pool expansion is required.

ThinPoolMetaDataUsageAtThresholdCritical
  This alert is triggered when the thin pool metadata usage in the volume group exceeds 85% on a node. Data deletion or thin pool expansion is required.

4.12.4.14. Uninstalling LVM Storage by using the CLI

You can uninstall LVM Storage by using the OpenShift CLI (oc).

Prerequisites

  • You have logged in to oc as a user with cluster-admin permissions.
  • You deleted the persistent volume claims (PVCs), volume snapshots, and volume clones provisioned by LVM Storage. You have also deleted the applications that are using these resources.
  • You deleted the LVMCluster custom resource (CR).

Procedure

  1. Get the currentCSV value for the LVM Storage Operator by running the following command:

    $ oc get subscription.operators.coreos.com lvms-operator -n <namespace> -o yaml | grep currentCSV

    Example output

    currentCSV: lvms-operator.v4.15.3

  2. Delete the subscription by running the following command:

    $ oc delete subscription.operators.coreos.com lvms-operator -n <namespace>

    Example output

    subscription.operators.coreos.com "lvms-operator" deleted

  3. Delete the CSV for the LVM Storage Operator in the target namespace by running the following command:

    $ oc delete clusterserviceversion <currentCSV> -n <namespace> 1
    1
    Replace <currentCSV> with the currentCSV value for the LVM Storage Operator.

    Example output

    clusterserviceversion.operators.coreos.com "lvms-operator.v4.15.3" deleted

Verification

  • To verify that the LVM Storage Operator is uninstalled, run the following command:

    $ oc get csv -n <namespace>

    If the LVM Storage Operator was successfully uninstalled, it does not appear in the output of this command.

4.12.4.15. Uninstalling LVM Storage by using the web console

You can uninstall Logical Volume Manager (LVM) Storage using the OpenShift Container Platform web console.

Prerequisites

  • You have access to the single-node OpenShift cluster as a user with cluster-admin permissions.
  • You have deleted the persistent volume claims (PVCs), volume snapshots, and volume clones provisioned by LVM Storage. You have also deleted the applications that are using these resources.
  • You have deleted the LVMCluster custom resource (CR).

Procedure

  1. Log in to the OpenShift Container Platform web console.
  2. Click Operators Installed Operators.
  3. Click LVM Storage in the openshift-storage namespace.
  4. Click the Details tab.
  5. From the Actions menu, click Uninstall Operator.
  6. Optional: When prompted, select the Delete all operand instances for this operator checkbox to delete the operand instances for LVM Storage.
  7. Click Uninstall.

4.12.4.16. Uninstalling LVM Storage installed using RHACM

To uninstall Logical Volume Manager (LVM) Storage that you installed using RHACM, you must delete the RHACM Policy custom resource (CR) that you created for installing and configuring LVM Storage.

Prerequisites

  • You have access to the RHACM cluster as a user with cluster-admin permissions.
  • You have deleted the following resources provisioned by LVM Storage:

    • Persistent volume claims (PVCs)
    • Volume snapshots
    • Volume clones

      You have also deleted any applications that are using these resources.

  • You have deleted the LVMCluster CR that you created using RHACM.

Procedure

  1. Log in to the OpenShift CLI (oc).
  2. Delete the RHACM Policy CR that you created for installing and configuring LVM Storage by running the following command:

    $ oc delete -f <policy> -n <namespace> 1
    1
    Replace <policy> with the name of the Policy CR YAML file.
  3. Create a Policy CR YAML file with the configuration to uninstall LVM Storage.

    Example Policy CR to uninstall LVM Storage

    apiVersion: apps.open-cluster-management.io/v1
    kind: PlacementRule
    metadata:
      name: placement-uninstall-lvms
    spec:
      clusterConditions:
      - status: "True"
        type: ManagedClusterConditionAvailable
      clusterSelector:
        matchExpressions:
        - key: mykey
          operator: In
          values:
          - myvalue
    ---
    apiVersion: policy.open-cluster-management.io/v1
    kind: PlacementBinding
    metadata:
      name: binding-uninstall-lvms
    placementRef:
      apiGroup: apps.open-cluster-management.io
      kind: PlacementRule
      name: placement-uninstall-lvms
    subjects:
    - apiGroup: policy.open-cluster-management.io
      kind: Policy
      name: uninstall-lvms
    ---
    apiVersion: policy.open-cluster-management.io/v1
    kind: Policy
    metadata:
      annotations:
        policy.open-cluster-management.io/categories: CM Configuration Management
        policy.open-cluster-management.io/controls: CM-2 Baseline Configuration
        policy.open-cluster-management.io/standards: NIST SP 800-53
      name: uninstall-lvms
    spec:
      disabled: false
      policy-templates:
      - objectDefinition:
          apiVersion: policy.open-cluster-management.io/v1
          kind: ConfigurationPolicy
          metadata:
            name: uninstall-lvms
          spec:
            object-templates:
            - complianceType: mustnothave
              objectDefinition:
                apiVersion: v1
                kind: Namespace
                metadata:
                  name: openshift-storage
            - complianceType: mustnothave
              objectDefinition:
                apiVersion: operators.coreos.com/v1
                kind: OperatorGroup
                metadata:
                  name: openshift-storage-operatorgroup
                  namespace: openshift-storage
                spec:
                  targetNamespaces:
                  - openshift-storage
            - complianceType: mustnothave
              objectDefinition:
                apiVersion: operators.coreos.com/v1alpha1
                kind: Subscription
                metadata:
                  name: lvms-operator
                  namespace: openshift-storage
            remediationAction: enforce
            severity: low
      - objectDefinition:
          apiVersion: policy.open-cluster-management.io/v1
          kind: ConfigurationPolicy
          metadata:
            name: policy-remove-lvms-crds
          spec:
            object-templates:
            - complianceType: mustnothave
              objectDefinition:
                apiVersion: apiextensions.k8s.io/v1
                kind: CustomResourceDefinition
                metadata:
                  name: logicalvolumes.topolvm.io
            - complianceType: mustnothave
              objectDefinition:
                apiVersion: apiextensions.k8s.io/v1
                kind: CustomResourceDefinition
                metadata:
                  name: lvmclusters.lvm.topolvm.io
            - complianceType: mustnothave
              objectDefinition:
                apiVersion: apiextensions.k8s.io/v1
                kind: CustomResourceDefinition
                metadata:
                  name: lvmvolumegroupnodestatuses.lvm.topolvm.io
            - complianceType: mustnothave
              objectDefinition:
                apiVersion: apiextensions.k8s.io/v1
                kind: CustomResourceDefinition
                metadata:
                  name: lvmvolumegroups.lvm.topolvm.io
            remediationAction: enforce
            severity: high

  4. Create the Policy CR by running the following command:

    $ oc create -f <policy> -n <namespace>
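
    For example, if you saved the Policy CR shown in the previous step as uninstall-lvms-policy.yaml and your policies are managed in a namespace named lvms-policy-ns (both names are examples only), the command is:

    $ oc create -f uninstall-lvms-policy.yaml -n lvms-policy-ns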

4.12.4.17. Downloading log files and diagnostic information using must-gather

When LVM Storage is unable to automatically resolve a problem, use the must-gather tool to collect the log files and diagnostic information so that you or Red Hat Support can review the problem and determine a solution.

Procedure

  • Run the must-gather command from the client connected to the LVM Storage cluster:

    $ oc adm must-gather --image=registry.redhat.io/lvms4/lvms-must-gather-rhel9:v4.14 --dest-dir=<directory_name>
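
    For example, to collect the data into a local directory named lvms-must-gather (the directory name is only an example), run:

    $ oc adm must-gather --image=registry.redhat.io/lvms4/lvms-must-gather-rhel9:v4.14 --dest-dir=./lvms-must-gather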

4.12.4.18. Troubleshooting persistent storage

While configuring persistent storage using Logical Volume Manager (LVM) Storage, you can encounter several issues that require troubleshooting.

4.12.4.18.1. Investigating a PVC stuck in the Pending state

A persistent volume claim (PVC) can get stuck in the Pending state for the following reasons:

  • Insufficient computing resources.
  • Network problems.
  • Mismatched storage class or node selector.
  • No available persistent volumes (PVs).
  • The node with the PV is in the Not Ready state.

Prerequisites

  • You have installed the OpenShift CLI (oc).
  • You have logged in to the OpenShift CLI (oc) as a user with cluster-admin permissions.

Procedure

  1. Retrieve the list of PVCs by running the following command:

    $ oc get pvc

    Example output

    NAME        STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    lvms-test   Pending                                      lvms-vg1       11s

  2. Inspect the events associated with a PVC stuck in the Pending state by running the following command:

    $ oc describe pvc <pvc_name> 1
    1
    Replace <pvc_name> with the name of the PVC. For example, lvms-test.

    Example output

    Type     Reason              Age               From                         Message
    ----     ------              ----              ----                         -------
    Warning  ProvisioningFailed  4s (x2 over 17s)  persistentvolume-controller  storageclass.storage.k8s.io "lvms-vg1" not found
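
    In this example, the PVC references a storage class named lvms-vg1 that does not exist. You can confirm whether the storage class is present by running, for example:

    $ oc get storageclass lvms-vg1

    If the storage class is missing, see "Recovering from a missing storage class".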

4.12.4.18.2. Recovering from a missing storage class

If you encounter the storage class not found error, check the LVMCluster custom resource (CR) and ensure that all the Logical Volume Manager (LVM) Storage pods are in the Running state.

Prerequisites

  • You have installed the OpenShift CLI (oc).
  • You have logged in to the OpenShift CLI (oc) as a user with cluster-admin permissions.

Procedure

  1. Verify that the LVMCluster CR is present by running the following command:

    $ oc get lvmcluster -n openshift-storage

    Example output

    NAME            AGE
    my-lvmcluster   65m

  2. If the LVMCluster CR is not present, create an LVMCluster CR. For more information, see "Ways to create an LVMCluster custom resource". A minimal example CR is also sketched after this procedure.
  3. In the openshift-storage namespace, check that all the LVM Storage pods are in the Running state by running the following command:

    $ oc get pods -n openshift-storage

    Example output

    NAME                                  READY   STATUS    RESTARTS      AGE
    lvms-operator-7b9fb858cb-6nsml        3/3     Running   0             70m
    topolvm-controller-5dd9cf78b5-7wwr2   5/5     Running   0             66m
    topolvm-node-dr26h                    4/4     Running   0             66m
    vg-manager-r6zdv                      1/1     Running   0             66m

    The output of this command must contain a running instance of the following pods:

    • lvms-operator
    • vg-manager
    • topolvm-controller
    • topolvm-node

      If the topolvm-node pod is stuck in the Init state, it is due to a failure to locate an available disk for LVM Storage to use. To retrieve the necessary information to troubleshoot this issue, review the logs of the vg-manager pod by running the following command:

      $ oc logs -l app.kubernetes.io/component=vg-manager -n openshift-storage
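
    If you need to re-create the LVMCluster CR as described in step 2 of this procedure, the following is a minimal sketch of an LVMCluster CR that defines a single default device class named vg1. The device class name, thin pool settings, and namespace are example values; adjust them to your environment, and see "Ways to create an LVMCluster custom resource" for the full set of configuration options.

    Example LVMCluster CR (minimal sketch)

    apiVersion: lvm.topolvm.io/v1alpha1
    kind: LVMCluster
    metadata:
      name: my-lvmcluster
      namespace: openshift-storage
    spec:
      storage:
        deviceClasses:
        - name: vg1                # backs the lvms-vg1 storage class
          default: true
          thinPoolConfig:          # thin pool settings are example values
            name: thin-pool-1
            sizePercent: 90
            overprovisionRatio: 10
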
4.12.4.18.3. Recovering from node failure

A persistent volume claim (PVC) can be stuck in the Pending state due to a node failure in the cluster.

To identify the failed node, you can examine the restart count of the topolvm-node pod. An increased restart count indicates potential problems with the underlying node, which might require further investigation and troubleshooting.

Prerequisites

  • You have installed the OpenShift CLI (oc).
  • You have logged in to the OpenShift CLI (oc) as a user with cluster-admin permissions.

Procedure

  • Examine the restart count of the topolvm-node pod instances by running the following command:

    $ oc get pods -n openshift-storage

    Example output

    NAME                                  READY   STATUS    RESTARTS      AGE
    lvms-operator-7b9fb858cb-6nsml        3/3     Running   0             70m
    topolvm-controller-5dd9cf78b5-7wwr2   5/5     Running   0             66m
    topolvm-node-dr26h                    4/4     Running   0             66m
    topolvm-node-54as8                    4/4     Running   0             66m
    topolvm-node-78fft                    4/4     Running   17 (8s ago)   66m
    vg-manager-r6zdv                      1/1     Running   0             66m
    vg-manager-990ut                      1/1     Running   0             66m
    vg-manager-an118                      1/1     Running   0             66m
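
    In this example output, the increased restart count of the topolvm-node-78fft pod suggests a problem with the node that it runs on. As a follow-up, you can check which node the pod runs on and the status of that node, for example:

    $ oc get pod topolvm-node-78fft -n openshift-storage -o wide

    $ oc describe node <node_name>

    Replace <node_name> with the name of the node shown in the NODE column of the previous command.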

Next steps

  • If the PVC is stuck in the Pending state even after you have resolved any issues with the node, you must perform a forced clean-up. For more information, see "Performing a forced clean-up".

4.12.4.18.4. Recovering from disk failure

If you see a failure message while inspecting the events associated with the persistent volume claim (PVC), there can be a problem with the underlying volume or disk.

Disk and volume provisioning issues result in a generic error message, such as Failed to provision volume with storage class <storage_class_name>, followed by a specific volume failure error message.

The following table describes the volume failure error messages:

Table 4.9. Volume failure error messages

  • Failed to check volume existence: Indicates a problem in verifying whether the volume already exists. Volume verification failure can be caused by network connectivity problems or other failures.
  • Failed to bind volume: Failure to bind a volume can happen if the persistent volume (PV) that is available does not match the requirements of the PVC.
  • FailedMount or FailedAttachVolume: This error indicates problems when trying to mount the volume to a node. If the disk has failed, this error can appear when a pod tries to use the PVC.
  • FailedUnMount: This error indicates problems when trying to unmount a volume from a node. If the disk has failed, this error can appear when a pod tries to use the PVC.
  • Volume is already exclusively attached to one node and cannot be attached to another: This error can appear with storage solutions that do not support ReadWriteMany access modes.

Prerequisites

  • You have installed the OpenShift CLI (oc).
  • You have logged in to the OpenShift CLI (oc) as a user with cluster-admin permissions.

Procedure

  1. Inspect the events associated with a PVC by running the following command:

    $ oc describe pvc <pvc_name> 1
    1
    Replace <pvc_name> with the name of the PVC.
  2. Establish a direct connection to the host where the problem is occurring.
  3. Resolve the disk issue.
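
    One possible way to connect to the host and inspect the disk is to start a debug session on the affected node, for example (the node name is a placeholder):

    $ oc debug node/<node_name>
    # chroot /host
    # lsblk
    # journalctl -k | grep -i error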

Next steps

  • If the volume failure messages persist or recur even after you have resolved the issue with the disk, you must perform a forced clean-up. For more information, see "Performing a forced clean-up".

4.12.4.18.5. Performing a forced clean-up

If the disk or node-related problems persist even after you have completed the troubleshooting procedures, you must perform a forced clean-up. A forced clean-up is used to address persistent issues and ensure the proper functioning of Logical Volume Manager (LVM) Storage.

Prerequisites

  • You have installed the OpenShift CLI (oc).
  • You have logged in to the OpenShift CLI (oc) as a user with cluster-admin permissions.
  • You have deleted all the persistent volume claims (PVCs) that were created by using LVM Storage.
  • You have stopped the pods that are using the PVCs that were created by using LVM Storage.

Procedure

  1. Switch to the openshift-storage namespace by running the following command:

    $ oc project openshift-storage
  2. Check if the LogicalVolume custom resources (CRs) are present by running the following command:

    $ oc get logicalvolume
    1. If the LogicalVolume CRs are present, delete them by running the following command:

      $ oc delete logicalvolume <name> 1
      1
      Replace <name> with the name of the LogicalVolume CR.
    2. After deleting the LogicalVolume CRs, remove their finalizers by running the following command:

      $ oc patch logicalvolume <name> -p '{"metadata":{"finalizers":[]}}' --type=merge 1
      1
      Replace <name> with the name of the LogicalVolume CR.
  3. Check if the LVMVolumeGroup CRs are present by running the following command:

    $ oc get lvmvolumegroup
    1. If the LVMVolumeGroup CRs are present, delete them by running the following command:

      $ oc delete lvmvolumegroup <name> 1
      1
      Replace <name> with the name of the LVMVolumeGroup CR.
    2. After deleting the LVMVolumeGroup CRs, remove their finalizers by running the following command:

      $ oc patch lvmvolumegroup <name> -p '{"metadata":{"finalizers":[]}}' --type=merge 1
      1
      Replace <name> with the name of the LVMVolumeGroup CR.
  4. Delete any LVMVolumeGroupNodeStatus CRs by running the following command:

    $ oc delete lvmvolumegroupnodestatus --all
  5. Delete the LVMCluster CR by running the following command:

    $ oc delete lvmcluster --all
    1. After deleting the LVMCluster CR, remove its finalizer by running the following command:

      $ oc patch lvmcluster <name> -p '{"metadata":{"finalizers":[]}}' --type=merge 1
      1
      Replace <name> with the name of the LVMCluster CR.
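
Verification

  • To verify that the forced clean-up removed the LVM Storage resources, you can list the related CRs by running, for example, the following command:

    $ oc get logicalvolume,lvmvolumegroup,lvmvolumegroupnodestatus,lvmcluster -n openshift-storage

    If the forced clean-up was successful, none of these resources appear in the output of this command.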