
Chapter 4. Configuring persistent storage


4.1. Persistent storage using AWS Elastic Block Store

OpenShift Container Platform supports Amazon Elastic Block Store (EBS) volumes. You can provision your OpenShift Container Platform cluster with persistent storage by using Amazon EC2.

The Kubernetes persistent volume framework allows administrators to provision a cluster with persistent storage and gives users a way to request those resources without having any knowledge of the underlying infrastructure. You can dynamically provision Amazon EBS volumes. Persistent volumes are not bound to a single project or namespace; they can be shared across the OpenShift Container Platform cluster. Persistent volume claims are specific to a project or namespace and can be requested by users. You can define a KMS key to encrypt container-persistent volumes on AWS. By default, newly created clusters using OpenShift Container Platform version 4.10 and later use gp3 storage and the AWS EBS CSI driver.

Important

High availability of storage in the infrastructure is left to the underlying storage provider.

Important

OpenShift Container Platform 4.12 and later provides automatic migration for the AWS EBS in-tree volume plugin to its equivalent CSI driver.

CSI automatic migration should be seamless. Migration does not change how you use all existing API objects, such as persistent volumes, persistent volume claims, and storage classes. For more information about migration, see CSI automatic migration.

4.1.1. Creating the EBS storage class

Storage classes are used to differentiate and delineate storage levels and usages. By defining a storage class, users can obtain dynamically provisioned persistent volumes.
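
For reference, a minimal storage class that provisions gp3 volumes through the AWS EBS CSI driver might look like the following sketch; the class name is a placeholder and the parameters are illustrative:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: <storage-class-name>        # placeholder name
provisioner: ebs.csi.aws.com        # AWS EBS CSI driver
parameters:
  type: gp3                         # EBS volume type
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer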

4.1.2. Creating the persistent volume claim

Prerequisites

Storage must exist in the underlying infrastructure before it can be mounted as a volume in OpenShift Container Platform.

Procedure

  1. In the OpenShift Container Platform console, click Storage → Persistent Volume Claims.
  2. In the persistent volume claims overview, click Create Persistent Volume Claim.
  3. Define the desired options on the page that appears.

    1. Select the previously-created storage class from the drop-down menu.
    2. Enter a unique name for the storage claim.
    3. Select the access mode. This selection determines the read and write access for the storage claim.
    4. Define the size of the storage claim.
  4. Click Create to create the persistent volume claim and generate a persistent volume.

4.1.3. Volume format

Before OpenShift Container Platform mounts the volume and passes it to a container, it checks that the volume contains a file system as specified by the fsType parameter in the persistent volume definition. If the device is not formatted with the file system, all data from the device is erased and the device is automatically formatted with the given file system.

This verification enables you to use unformatted AWS volumes as persistent volumes, because OpenShift Container Platform formats them before the first use.

4.1.4. Maximum number of EBS volumes on a node

By default, OpenShift Container Platform supports a maximum of 39 EBS volumes attached to one node. This limit is consistent with the AWS volume limits. The volume limit depends on the instance type.

Important

As a cluster administrator, you must use either in-tree or Container Storage Interface (CSI) volumes and their respective storage classes, but never both volume types at the same time. The maximum attached EBS volume number is counted separately for in-tree and CSI volumes, which means you could have up to 39 EBS volumes of each type.

For information about accessing additional storage options, such as volume snapshots, that are not possible with in-tree volume plug-ins, see AWS Elastic Block Store CSI Driver Operator.
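
To check the attach limit that the CSI driver reports for a particular node, one option is to inspect the node's CSINode object; the node name below is a placeholder:

$ oc get csinode <node-name> -o jsonpath='{.spec.drivers[?(@.name=="ebs.csi.aws.com")].allocatable.count}'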

4.1.5. Encrypting container persistent volumes on AWS with a KMS key

Defining a KMS key to encrypt container-persistent volumes on AWS is useful when you have explicit compliance and security guidelines when deploying to AWS.

Prerequisites

  • Underlying infrastructure must contain storage.
  • You must create a customer KMS key on AWS.

Procedure

  1. Create a storage class:

    $ cat << EOF | oc create -f -
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: <storage-class-name> 1
    parameters:
      fsType: ext4 2
      encrypted: "true"
      kmsKeyId: keyvalue 3
    provisioner: ebs.csi.aws.com
    reclaimPolicy: Delete
    volumeBindingMode: WaitForFirstConsumer
    EOF
    1
    Specifies the name of the storage class.
    2
    File system that is created on provisioned volumes.
    3
    Specifies the full Amazon Resource Name (ARN) of the key to use when encrypting the container-persistent volume. If you do not provide any key, but the encrypted field is set to true, then the default KMS key is used. See Finding the key ID and key ARN on AWS in the AWS documentation.
  2. Create a persistent volume claim (PVC) with the storage class specifying the KMS key:

    $ cat << EOF | oc create -f -
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: mypvc
    spec:
      accessModes:
        - ReadWriteOnce
      volumeMode: Filesystem
      storageClassName: <storage-class-name>
      resources:
        requests:
          storage: 1Gi
    EOF
  3. Create workload containers to consume the PVC:

    $ cat << EOF | oc create -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: mypod
    spec:
      containers:
        - name: httpd
          image: quay.io/centos7/httpd-24-centos7
          ports:
            - containerPort: 80
          volumeMounts:
            - mountPath: /mnt/storage
              name: data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: mypvc
    EOF
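
To confirm that the claim bound and that a persistent volume was provisioned, a quick check such as the following can be run; this is not part of the documented procedure:

$ oc get pvc mypvc
$ oc get pv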

4.1.6. Additional resources

4.2. Persistent storage using Azure

OpenShift Container Platform supports Microsoft Azure Disk volumes. You can provision your OpenShift Container Platform cluster with persistent storage using Azure. Some familiarity with Kubernetes and Azure is assumed. The Kubernetes persistent volume framework allows administrators to provision a cluster with persistent storage and gives users a way to request those resources without having any knowledge of the underlying infrastructure. Azure Disk volumes can be provisioned dynamically. Persistent volumes are not bound to a single project or namespace; they can be shared across the OpenShift Container Platform cluster. Persistent volume claims are specific to a project or namespace and can be requested by users.

Important

OpenShift Container Platform 4.11 and later provides automatic migration for the Azure Disk in-tree volume plugin to its equivalent CSI driver.

CSI automatic migration should be seamless. Migration does not change how you use all existing API objects, such as persistent volumes, persistent volume claims, and storage classes. For more information about migration, see CSI automatic migration.

Important

High availability of storage in the infrastructure is left to the underlying storage provider.

Additional resources

4.2.1. Creating the Azure storage class

Storage classes are used to differentiate and delineate storage levels and usages. By defining a storage class, users can obtain dynamically provisioned persistent volumes.

Procedure

  1. In the OpenShift Container Platform console, click Storage → Storage Classes.
  2. In the storage class overview, click Create Storage Class.
  3. Define the desired options on the page that appears.

    1. Enter a name to reference the storage class.
    2. Enter an optional description.
    3. Select the reclaim policy.
    4. Select kubernetes.io/azure-disk from the drop-down list.

      1. Enter the storage account type. This corresponds to your Azure storage account SKU tier. Valid options are Premium_LRS, PremiumV2_LRS, Standard_LRS, StandardSSD_LRS, and UltraSSD_LRS.

        Important

        The skuname PremiumV2_LRS is not supported in all regions, and in some supported regions, not all of the availability zones are supported. For more information, see the Azure documentation.

      2. Enter the kind of account. Valid options are shared, dedicated, and managed.

        Important

        Red Hat only supports the use of kind: Managed in the storage class.

        With Shared and Dedicated, Azure creates unmanaged disks, while OpenShift Container Platform creates a managed disk for machine OS (root) disks. But because Azure Disk does not allow the use of both managed and unmanaged disks on a node, unmanaged disks created with Shared or Dedicated cannot be attached to OpenShift Container Platform nodes.

    5. Enter additional parameters for the storage class as desired.
  4. Click Create to create the storage class.
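
If you prefer to define the storage class from the CLI, an equivalent definition might look like the following sketch. The class name is a placeholder, and the parameter names assume the in-tree kubernetes.io/azure-disk provisioner selected in the procedure above:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: <storage-class-name>
provisioner: kubernetes.io/azure-disk
parameters:
  storageaccounttype: Premium_LRS   # storage account SKU tier
  kind: Managed                     # Red Hat supports only managed disks
reclaimPolicy: Delete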

Additional resources

4.2.2. Creating the persistent volume claim

Prerequisites

Storage must exist in the underlying infrastructure before it can be mounted as a volume in OpenShift Container Platform.

Procedure

  1. In the OpenShift Container Platform console, click Storage → Persistent Volume Claims.
  2. In the persistent volume claims overview, click Create Persistent Volume Claim.
  3. Define the desired options on the page that appears.

    1. Select the previously-created storage class from the drop-down menu.
    2. Enter a unique name for the storage claim.
    3. Select the access mode. This selection determines the read and write access for the storage claim.
    4. Define the size of the storage claim.
  4. Click Create to create the persistent volume claim and generate a persistent volume.

4.2.3. Volume format

Before OpenShift Container Platform mounts the volume and passes it to a container, it checks that it contains a file system as specified by the fsType parameter in the persistent volume definition. If the device is not formatted with the file system, all data from the device is erased and the device is automatically formatted with the given file system.

This allows using unformatted Azure volumes as persistent volumes, because OpenShift Container Platform formats them before the first use.

4.2.4. Machine sets that deploy machines with ultra disks using PVCs

You can create a machine set running on Azure that deploys machines with ultra disks. Ultra disks are high-performance storage that are intended for use with the most demanding data workloads.

Both the in-tree plugin and CSI driver support using PVCs to enable ultra disks. You can also deploy machines with ultra disks as data disks without creating a PVC.

4.2.4.1. Creating machines with ultra disks by using machine sets

You can deploy machines with ultra disks on Azure by editing your machine set YAML file.

Prerequisites

  • Have an existing Microsoft Azure cluster.

Procedure

  1. Copy an existing Azure MachineSet custom resource (CR) and edit it by running the following command:

    $ oc edit machineset <machine-set-name>

    where <machine-set-name> is the machine set that you want to provision machines with ultra disks.

  2. Add the following lines in the positions indicated:

    apiVersion: machine.openshift.io/v1beta1
    kind: MachineSet
    spec:
      template:
        spec:
          metadata:
            labels:
              disk: ultrassd 1
          providerSpec:
            value:
              ultraSSDCapability: Enabled 2
    1
    Specify a label to use to select a node that is created by this machine set. This procedure uses disk.ultrassd for this value.
    2
    These lines enable the use of ultra disks.
  3. Create a machine set using the updated configuration by running the following command:

    $ oc create -f <machine-set-name>.yaml
  4. Create a storage class that contains the following YAML definition:

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: ultra-disk-sc 1
    parameters:
      cachingMode: None
      diskIopsReadWrite: "2000" 2
      diskMbpsReadWrite: "320" 3
      kind: managed
      skuname: UltraSSD_LRS
    provisioner: disk.csi.azure.com 4
    reclaimPolicy: Delete
    volumeBindingMode: WaitForFirstConsumer 5
    1
    Specify the name of the storage class. This procedure uses ultra-disk-sc for this value.
    2
    Specify the number of IOPS for the storage class.
    3
    Specify the throughput in MBps for the storage class.
    4
    For Azure Kubernetes Service (AKS) version 1.21 or later, use disk.csi.azure.com. For earlier versions of AKS, use kubernetes.io/azure-disk.
    5
    Optional: Specify this parameter to wait for the creation of the pod that will use the disk.
  5. Create a persistent volume claim (PVC) to reference the ultra-disk-sc storage class that contains the following YAML definition:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: ultra-disk 1
    spec:
      accessModes:
      - ReadWriteOnce
      storageClassName: ultra-disk-sc 2
      resources:
        requests:
          storage: 4Gi 3
    1
    Specify the name of the PVC. This procedure uses ultra-disk for this value.
    2
    This PVC references the ultra-disk-sc storage class.
    3
    Specify the requested storage size for the PVC. The minimum value is 4Gi.
  6. Create a pod that contains the following YAML definition:

    apiVersion: v1
    kind: Pod
    metadata:
      name: nginx-ultra
    spec:
      nodeSelector:
        disk: ultrassd 1
      containers:
      - name: nginx-ultra
        image: alpine:latest
        command:
          - "sleep"
          - "infinity"
        volumeMounts:
        - mountPath: "/mnt/azure"
          name: volume
      volumes:
        - name: volume
          persistentVolumeClaim:
            claimName: ultra-disk 2
    1
    Specify the label of the machine set that enables the use of ultra disks. This procedure uses disk.ultrassd for this value.
    2
    This pod references the ultra-disk PVC.

Verification

  1. Validate that the machines are created by running the following command:

    $ oc get machines

    The machines should be in the Running state.

  2. For a machine that is running and has a node attached, validate the partition by running the following command:

    $ oc debug node/<node-name> -- chroot /host lsblk

    In this command, oc debug node/<node-name> starts a debugging shell on the node <node-name> and passes a command with --. The passed command chroot /host provides access to the underlying host OS binaries, and lsblk shows the block devices that are attached to the host OS machine.

Next steps

  • To use an ultra disk from within a pod, create a workload that uses the mount point. Create a YAML file similar to the following example:

    apiVersion: v1
    kind: Pod
    metadata:
      name: ssd-benchmark1
    spec:
      containers:
      - name: ssd-benchmark1
        image: nginx
        ports:
          - containerPort: 80
            name: "http-server"
        volumeMounts:
        - name: lun0p1
          mountPath: "/tmp"
      volumes:
        - name: lun0p1
          hostPath:
            path: /var/lib/lun0p1
            type: DirectoryOrCreate
      nodeSelector:
        disktype: ultrassd

4.2.4.2. Troubleshooting resources for machine sets that enable ultra disks

Use the information in this section to understand and recover from issues you might encounter.

4.2.4.2.1. Unable to mount a persistent volume claim backed by an ultra disk

If there is an issue mounting a persistent volume claim backed by an ultra disk, the pod becomes stuck in the ContainerCreating state and an alert is triggered.

For example, if the additionalCapabilities.ultraSSDEnabled parameter is not set on the machine that backs the node that hosts the pod, the following error message appears:

StorageAccountType UltraSSD_LRS can be used only when additionalCapabilities.ultraSSDEnabled is set.
  • To resolve this issue, describe the pod by running the following command:

    $ oc -n <stuck_pod_namespace> describe pod <stuck_pod_name>

4.3. Persistent storage using Azure File

OpenShift Container Platform supports Microsoft Azure File volumes. You can provision your OpenShift Container Platform cluster with persistent storage using Azure. Some familiarity with Kubernetes and Azure is assumed.

The Kubernetes persistent volume framework allows administrators to provision a cluster with persistent storage and gives users a way to request those resources without having any knowledge of the underlying infrastructure. You can provision Azure File volumes dynamically.

Persistent volumes are not bound to a single project or namespace, and you can share them across the OpenShift Container Platform cluster. Persistent volume claims are specific to a project or namespace, and can be requested by users for use in applications.

Important

High availability of storage in the infrastructure is left to the underlying storage provider.

Important

Azure File volumes use Server Message Block.

Important

OpenShift Container Platform 4.13 and later provides automatic migration for the Azure File in-tree volume plugin to its equivalent CSI driver.

CSI automatic migration should be seamless. Migration does not change how you use all existing API objects, such as persistent volumes, persistent volume claims, and storage classes. For more information about migration, see CSI automatic migration.

Additional resources

4.3.1. Create the Azure File share persistent volume claim

To create the persistent volume claim, you must first define a Secret object that contains the Azure account and key. This secret is used in the PersistentVolume definition, and will be referenced by the persistent volume claim for use in applications.

Prerequisites

  • An Azure File share exists.
  • The credentials to access this share, specifically the storage account and key, are available.

Procedure

  1. Create a Secret object that contains the Azure File credentials:

    $ oc create secret generic <secret-name> --from-literal=azurestorageaccountname=<storage-account> \ 1
      --from-literal=azurestorageaccountkey=<storage-account-key> 2
    1
    The Azure File storage account name.
    2
    The Azure File storage account key.
  2. Create a PersistentVolume object that references the Secret object you created:

    apiVersion: "v1"
    kind: "PersistentVolume"
    metadata:
      name: "pv0001" 1
    spec:
      capacity:
        storage: "5Gi" 2
      accessModes:
        - "ReadWriteOnce"
      storageClassName: azure-file-sc
      azureFile:
        secretName: <secret-name> 3
        shareName: share-1 4
        readOnly: false
    1
    The name of the persistent volume.
    2
    The size of this persistent volume.
    3
    The name of the secret that contains the Azure File share credentials.
    4
    The name of the Azure File share.
  3. Create a PersistentVolumeClaim object that maps to the persistent volume you created:

    apiVersion: "v1"
    kind: "PersistentVolumeClaim"
    metadata:
      name: "claim1" 1
    spec:
      accessModes:
        - "ReadWriteOnce"
      resources:
        requests:
          storage: "5Gi" 2
      storageClassName: azure-file-sc 3
      volumeName: "pv0001" 4
    1
    The name of the persistent volume claim.
    2
    The size of this persistent volume claim.
    3
    The name of the storage class that is used to provision the persistent volume. Specify the storage class used in the PersistentVolume definition.
    4
    The name of the existing PersistentVolume object that references the Azure File share.

4.3.2. Mount the Azure File share in a pod

After the persistent volume claim has been created, it can be used by an application. The following example demonstrates mounting this share inside of a pod.

Prerequisites

  • A persistent volume claim exists that is mapped to the underlying Azure File share.

Procedure

  • Create a pod that mounts the existing persistent volume claim:

    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-name 1
    spec:
      containers:
        ...
        volumeMounts:
        - mountPath: "/data" 2
          name: azure-file-share
      volumes:
        - name: azure-file-share
          persistentVolumeClaim:
            claimName: claim1 3
    1
    The name of the pod.
    2
    The path to mount the Azure File share inside the pod. Do not mount to the container root, /, or any path that is the same in the host and the container. This can corrupt your host system if the container is sufficiently privileged, such as the host /dev/pts files. It is safe to mount the host by using /host.
    3
    The name of the PersistentVolumeClaim object that has been previously created.

4.4. Persistent storage using Cinder

OpenShift Container Platform supports OpenStack Cinder. Some familiarity with Kubernetes and OpenStack is assumed.

Cinder volumes can be provisioned dynamically. Persistent volumes are not bound to a single project or namespace; they can be shared across the OpenShift Container Platform cluster. Persistent volume claims are specific to a project or namespace and can be requested by users.

Important

OpenShift Container Platform 4.11 and later provides automatic migration for the Cinder in-tree volume plugin to its equivalent CSI driver.

CSI automatic migration should be seamless. Migration does not change how you use all existing API objects, such as persistent volumes, persistent volume claims, and storage classes. For more information about migration, see CSI automatic migration.

Additional resources

  • For more information about how OpenStack Block Storage provides persistent block storage management for virtual hard drives, see OpenStack Cinder.

4.4.1. Manual provisioning with Cinder

Storage must exist in the underlying infrastructure before it can be mounted as a volume in OpenShift Container Platform.

Prerequisites

  • OpenShift Container Platform configured for Red Hat OpenStack Platform (RHOSP)
  • Cinder volume ID

4.4.1.1. Creating the persistent volume

You must define your persistent volume (PV) in an object definition before creating it in OpenShift Container Platform:

Procedure

  1. Save your object definition to a file.

    cinder-persistentvolume.yaml

    apiVersion: "v1"
    kind: "PersistentVolume"
    metadata:
      name: "pv0001" 1
    spec:
      capacity:
        storage: "5Gi" 2
      accessModes:
        - "ReadWriteOnce"
      cinder: 3
        fsType: "ext3" 4
        volumeID: "f37a03aa-6212-4c62-a805-9ce139fab180" 5

    1
    The name of the volume that is used by persistent volume claims or pods.
    2
    The amount of storage allocated to this volume.
    3
    Indicates cinder for Red Hat OpenStack Platform (RHOSP) Cinder volumes.
    4
    The file system that is created when the volume is mounted for the first time.
    5
    The Cinder volume to use.
    Important

    Do not change the fsType parameter value after the volume is formatted and provisioned. Changing this value can result in data loss and pod failure.

  2. Create the object definition file you saved in the previous step.

    $ oc create -f cinder-persistentvolume.yaml

4.4.1.2. Persistent volume formatting

You can use unformatted Cinder volumes as PVs because OpenShift Container Platform formats them before the first use.

Before OpenShift Container Platform mounts the volume and passes it to a container, the system checks that it contains a file system as specified by the fsType parameter in the PV definition. If the device is not formatted with the file system, all data from the device is erased and the device is automatically formatted with the given file system.

4.4.1.3. Cinder volume security

If you use Cinder PVs in your application, configure security for their deployment configurations.

Prerequisites

  • An SCC must be created that uses the appropriate fsGroup strategy.

Procedure

  1. Create a service account and add it to the SCC:

    $ oc create serviceaccount <service_account>
    $ oc adm policy add-scc-to-user <new_scc> -z <service_account> -n <project>
  2. In your application’s deployment configuration, provide the service account name and securityContext:

    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: frontend-1
    spec:
      replicas: 1  1
      selector:    2
        name: frontend
      template:    3
        metadata:
          labels:  4
            name: frontend 5
        spec:
          containers:
          - image: openshift/hello-openshift
            name: helloworld
            ports:
            - containerPort: 8080
              protocol: TCP
          restartPolicy: Always
          serviceAccountName: <service_account> 6
          securityContext:
            fsGroup: 7777 7
    1
    The number of copies of the pod to run.
    2
    The label selector of the pod to run.
    3
    A template for the pod that the controller creates.
    4
    The labels on the pod. They must include labels from the label selector.
    5
    The maximum name length after expanding any parameters is 63 characters.
    6
    Specifies the service account you created.
    7
    Specifies an fsGroup for the pods.

4.5. Persistent storage using Fibre Channel

OpenShift Container Platform supports Fibre Channel, allowing you to provision your OpenShift Container Platform cluster with persistent storage using Fibre channel volumes. Some familiarity with Kubernetes and Fibre Channel is assumed.

Important

Persistent storage using Fibre Channel is not supported on ARM architecture based infrastructures.

The Kubernetes persistent volume framework allows administrators to provision a cluster with persistent storage and gives users a way to request those resources without having any knowledge of the underlying infrastructure. Persistent volumes are not bound to a single project or namespace; they can be shared across the OpenShift Container Platform cluster. Persistent volume claims are specific to a project or namespace and can be requested by users.

Important

High availability of storage in the infrastructure is left to the underlying storage provider.

Additional resources

4.5.1. Provisioning

To provision Fibre Channel volumes using the PersistentVolume API, the following must be available:

  • The targetWWNs (array of Fibre Channel target’s World Wide Names).
  • A valid LUN number.
  • The filesystem type.

A persistent volume and a LUN have a one-to-one mapping between them.

Prerequisites

  • Fibre Channel LUNs must exist in the underlying infrastructure.

PersistentVolume object definition

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0001
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  fc:
    wwids: [scsi-3600508b400105e210000900000490000] 1
    targetWWNs: ['500a0981891b8dc5', '500a0981991b8dc5'] 2
    lun: 2 3
    fsType: ext4

1
World wide identifiers (WWIDs). Either FC wwids or a combination of FC targetWWNs and lun must be set, but not both simultaneously. The FC WWID identifier is recommended over the target WWNs because it is guaranteed to be unique for every storage device, and independent of the path that is used to access the device. The WWID identifier can be obtained by issuing a SCSI Inquiry to retrieve the Device Identification Vital Product Data (page 0x83) or Unit Serial Number (page 0x80). FC WWIDs are identified as /dev/disk/by-id/ to reference the data on the disk, even if the path to the device changes and even when accessing the device from different systems.
2 3
Fibre Channel WWNs are identified as /dev/disk/by-path/pci-<IDENTIFIER>-fc-0x<WWN>-lun-<LUN#>, but you do not need to provide any part of the path leading up to the WWN, including the 0x, and anything after, including the - (hyphen).
Important

Changing the value of the fsType parameter after the volume has been formatted and provisioned can result in data loss and pod failure.

4.5.1.1. Enforcing disk quotas

Use LUN partitions to enforce disk quotas and size constraints. Each LUN is mapped to a single persistent volume, and unique names must be used for persistent volumes.

Enforcing quotas in this way allows the end user to request persistent storage by a specific amount, such as 10Gi, and be matched with a corresponding volume of equal or greater capacity.

4.5.1.2. Fibre Channel volume security

Users request storage with a persistent volume claim. This claim only lives in the user’s namespace, and can only be referenced by a pod within that same namespace. Any attempt to access a persistent volume across a namespace causes the pod to fail.

Each Fibre Channel LUN must be accessible by all nodes in the cluster.

4.6. Persistent storage using FlexVolume

Important

FlexVolume is a deprecated feature. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments.

Out-of-tree Container Storage Interface (CSI) drivers are the recommended way to write volume drivers in OpenShift Container Platform. Maintainers of FlexVolume drivers should implement a CSI driver and move users of FlexVolume to CSI. Users of FlexVolume should move their workloads to CSI drivers.

For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes.

OpenShift Container Platform supports FlexVolume, an out-of-tree plugin that uses an executable model to interface with drivers.

To use storage from a back-end that does not have a built-in plugin, you can extend OpenShift Container Platform through FlexVolume drivers and provide persistent storage to applications.

Pods interact with FlexVolume drivers through the flexvolume in-tree plugin.

Additional resources

4.6.1. About FlexVolume drivers

A FlexVolume driver is an executable file that resides in a well-defined directory on all nodes in the cluster. OpenShift Container Platform calls the FlexVolume driver whenever it needs to mount or unmount a volume represented by a PersistentVolume object with flexVolume as the source.

Important

Attach and detach operations are not supported in OpenShift Container Platform for FlexVolume.

4.6.2. FlexVolume driver example

The first command-line argument of the FlexVolume driver is always an operation name. Other parameters are specific to each operation. Most of the operations take a JavaScript Object Notation (JSON) string as a parameter. This parameter is a complete JSON string, and not the name of a file with the JSON data.

The JSON parameter passed to the FlexVolume driver contains:

  • All flexVolume.options.
  • Some options from flexVolume prefixed by kubernetes.io/, such as fsType and readwrite.
  • The content of the referenced secret, if specified, prefixed by kubernetes.io/secret/.

FlexVolume driver JSON input example

{
  "fooServer": "192.168.0.1:1234", 1
  "fooVolumeName": "bar",
  "kubernetes.io/fsType": "ext4", 2
  "kubernetes.io/readwrite": "ro", 3
  "kubernetes.io/secret/<key name>": "<key value>", 4
  "kubernetes.io/secret/<another key name>": "<another key value>"
}

1
All options from flexVolume.options.
2
The value of flexVolume.fsType.
3
ro/rw based on flexVolume.readOnly.
4
All keys and their values from the secret referenced by flexVolume.secretRef.

OpenShift Container Platform expects JSON data on standard output of the driver. When not specified, the output describes the result of the operation.

FlexVolume driver default output example

{
	"status": "<Success/Failure/Not supported>",
	"message": "<Reason for success/failure>"
}

The exit code of the driver should be 0 for success and 1 for error.

Operations should be idempotent, which means that the mounting of an already mounted volume should result in a successful operation.
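
The following is a minimal sketch of what such a driver might look like in Bash. It implements only the node-side operations listed in the next section, declares that attach operations are not supported, and leaves the actual mount logic as comments; all names are illustrative:

#!/bin/bash
# Minimal FlexVolume driver sketch. The first argument is always the
# operation name; the remaining arguments depend on the operation.

op="$1"
shift

case "$op" in
  init)
    # Called during initialization of every node. Declaring
    # "attach": false tells the kubelet that this driver does not
    # implement attach and detach call-outs.
    echo '{"status": "Success", "capabilities": {"attach": false}}'
    ;;
  mount)
    mount_dir="$1"
    json_options="$2"
    # A real driver would parse $json_options (for example, with jq)
    # and mount the backing storage at $mount_dir here.
    echo '{"status": "Success"}'
    ;;
  unmount)
    mount_dir="$1"
    # A real driver would unmount and clean up $mount_dir here.
    echo '{"status": "Success"}'
    ;;
  *)
    echo '{"status": "Not supported"}'
    exit 1
    ;;
esac

exit 0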

4.6.3. Installing FlexVolume drivers

FlexVolume drivers that are used to extend OpenShift Container Platform are executed only on the node. To implement FlexVolumes, a list of operations to call and the installation path are all that is required.

Prerequisites

  • FlexVolume drivers must implement these operations:

    init

    Initializes the driver. It is called during initialization of all nodes.

    • Arguments: none
    • Executed on: node
    • Expected output: default JSON
    mount

    Mounts a volume to directory. This can include anything that is necessary to mount the volume, including finding the device and then mounting the device.

    • Arguments: <mount-dir> <json>
    • Executed on: node
    • Expected output: default JSON
    unmount

    Unmounts a volume from a directory. This can include anything that is necessary to clean up the volume after unmounting.

    • Arguments: <mount-dir>
    • Executed on: node
    • Expected output: default JSON
    mountdevice

    Mounts a volume’s device to a directory where individual pods can then bind mount.

    This call-out does not pass "secrets" specified in the FlexVolume spec. If your driver requires secrets, do not implement this call-out.

    • Arguments: <mount-dir> <json>
    • Executed on: node
    • Expected output: default JSON
    unmountdevice

    Unmounts a volume’s device from a directory.

    • Arguments: <mount-dir>
    • Executed on: node
    • Expected output: default JSON
  • All other operations should return JSON with {"status": "Not supported"} and exit code 1.

Procedure

To install the FlexVolume driver:

  1. Ensure that the executable file exists on all nodes in the cluster.
  2. Place the executable file at the volume plugin path: /etc/kubernetes/kubelet-plugins/volume/exec/<vendor>~<driver>/<driver>.

For example, to install the FlexVolume driver for the storage foo, place the executable file at: /etc/kubernetes/kubelet-plugins/volume/exec/openshift.com~foo/foo.
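
On a node, placing the executable might look like the following sketch; the vendor and driver names follow the foo example above, and the commands assume shell access to the node:

# mkdir -p /etc/kubernetes/kubelet-plugins/volume/exec/openshift.com~foo
# install -m 0755 foo /etc/kubernetes/kubelet-plugins/volume/exec/openshift.com~foo/foo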

4.6.4. Consuming storage using FlexVolume drivers

Each PersistentVolume object in OpenShift Container Platform represents one storage asset in the storage back-end, such as a volume.

Procedure

  • Use the PersistentVolume object to reference the installed storage.

Persistent volume object definition using FlexVolume drivers example

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0001 1
spec:
  capacity:
    storage: 1Gi 2
  accessModes:
    - ReadWriteOnce
  flexVolume:
    driver: openshift.com/foo 3
    fsType: "ext4" 4
    secretRef: foo-secret 5
    readOnly: true 6
    options: 7
      fooServer: 192.168.0.1:1234
      fooVolumeName: bar

1
The name of the volume. This is how it is identified through persistent volume claims or from pods. This name can be different from the name of the volume on back-end storage.
2
The amount of storage allocated to this volume.
3
The name of the driver. This field is mandatory.
4
The file system that is present on the volume. This field is optional.
5
The reference to a secret. Keys and values from this secret are provided to the FlexVolume driver on invocation. This field is optional.
6
The read-only flag. This field is optional.
7
The additional options for the FlexVolume driver. In addition to the flags specified by the user in the options field, the following flags are also passed to the executable:
"fsType":"<FS type>",
"readwrite":"<rw>",
"secret/key1":"<secret1>"
...
"secret/keyN":"<secretN>"
Note

Secrets are passed only to mount or unmount call-outs.

4.7. Persistent storage using GCE Persistent Disk

OpenShift Container Platform supports GCE Persistent Disk volumes (gcePD). You can provision your OpenShift Container Platform cluster with persistent storage using GCE. Some familiarity with Kubernetes and GCE is assumed.

The Kubernetes persistent volume framework allows administrators to provision a cluster with persistent storage and gives users a way to request those resources without having any knowledge of the underlying infrastructure.

GCE Persistent Disk volumes can be provisioned dynamically.

Persistent volumes are not bound to a single project or namespace; they can be shared across the OpenShift Container Platform cluster. Persistent volume claims are specific to a project or namespace and can be requested by users.

Important

OpenShift Container Platform 4.12 and later provides automatic migration for the GCE Persistent Disk in-tree volume plugin to its equivalent CSI driver.

CSI automatic migration should be seamless. Migration does not change how you use all existing API objects, such as persistent volumes, persistent volume claims, and storage classes.

For more information about migration, see CSI automatic migration.

Important

High availability of storage in the infrastructure is left to the underlying storage provider.

Additional resources

4.7.1. Creating the GCE storage class

Storage classes are used to differentiate and delineate storage levels and usages. By defining a storage class, users can obtain dynamically provisioned persistent volumes.
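
For reference, a minimal storage class backed by SSD persistent disks might look like the following sketch; it assumes the GCP PD CSI provisioner that newer clusters use, and the class name and parameter values are illustrative:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: <storage-class-name>
provisioner: pd.csi.storage.gke.io
parameters:
  type: pd-ssd              # pd-standard, pd-balanced, and pd-ssd are common disk types
  replication-type: none    # use regional-pd for regional persistent disks
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer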

4.7.2. Creating the persistent volume claim

Prerequisites

Storage must exist in the underlying infrastructure before it can be mounted as a volume in OpenShift Container Platform.

Procedure

  1. In the OpenShift Container Platform console, click Storage → Persistent Volume Claims.
  2. In the persistent volume claims overview, click Create Persistent Volume Claim.
  3. Define the desired options on the page that appears.

    1. Select the previously-created storage class from the drop-down menu.
    2. Enter a unique name for the storage claim.
    3. Select the access mode. This selection determines the read and write access for the storage claim.
    4. Define the size of the storage claim.
  4. Click Create to create the persistent volume claim and generate a persistent volume.

4.7.3. Volume format

Before OpenShift Container Platform mounts the volume and passes it to a container, it checks that the volume contains a file system as specified by the fsType parameter in the persistent volume definition. If the device is not formatted with the file system, all data from the device is erased and the device is automatically formatted with the given file system.

This verification enables you to use unformatted GCE volumes as persistent volumes, because OpenShift Container Platform formats them before the first use.

4.8. Persistent storage using iSCSI

You can provision your OpenShift Container Platform cluster with persistent storage using iSCSI. Some familiarity with Kubernetes and iSCSI is assumed.

The Kubernetes persistent volume framework allows administrators to provision a cluster with persistent storage and gives users a way to request those resources without having any knowledge of the underlying infrastructure.

Important

High availability of storage in the infrastructure is left to the underlying storage provider.

Important

When you use iSCSI on Amazon Web Services, you must update the default security policy to include TCP traffic between nodes on the iSCSI ports. By default, they are ports 860 and 3260.

Important

Users must ensure that the iSCSI initiator is already configured on all OpenShift Container Platform nodes by installing the iscsi-initiator-utils package and configuring their initiator name in /etc/iscsi/initiatorname.iscsi. The iscsi-initiator-utils package is already installed on deployments that use Red Hat Enterprise Linux CoreOS (RHCOS).

For more information, see Managing Storage Devices.
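
To confirm that an initiator name is configured on a node, you can read the file directly; the output should contain a single line of the form InitiatorName=<IQN>:

$ cat /etc/iscsi/initiatorname.iscsi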

4.8.1. Provisioning

Verify that the storage exists in the underlying infrastructure before mounting it as a volume in OpenShift Container Platform. All that is required for the iSCSI is the iSCSI target portal, a valid iSCSI Qualified Name (IQN), a valid LUN number, the filesystem type, and the PersistentVolume API.

PersistentVolume object definition

apiVersion: v1
kind: PersistentVolume
metadata:
  name: iscsi-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  iscsi:
     targetPortal: 10.16.154.81:3260
     iqn: iqn.2014-12.example.server:storage.target00
     lun: 0
     fsType: 'ext4'

4.8.2. Enforcing disk quotas

Use LUN partitions to enforce disk quotas and size constraints. Each LUN is one persistent volume. Kubernetes enforces unique names for persistent volumes.

Enforcing quotas in this way allows the end user to request persistent storage by a specific amount (for example, 10Gi) and be matched with a corresponding volume of equal or greater capacity.

4.8.3. iSCSI volume security

Users request storage with a PersistentVolumeClaim object. This claim only lives in the user’s namespace and can only be referenced by a pod within that same namespace. Any attempt to access a persistent volume claim across a namespace causes the pod to fail.

Each iSCSI LUN must be accessible by all nodes in the cluster.

4.8.3.1. Challenge Handshake Authentication Protocol (CHAP) configuration

Optionally, OpenShift Container Platform can use CHAP to authenticate itself to iSCSI targets:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: iscsi-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  iscsi:
    targetPortal: 10.0.0.1:3260
    iqn: iqn.2016-04.test.com:storage.target00
    lun: 0
    fsType: ext4
    chapAuthDiscovery: true 1
    chapAuthSession: true 2
    secretRef:
      name: chap-secret 3
1
Enable CHAP authentication of iSCSI discovery.
2
Enable CHAP authentication of iSCSI session.
3
Specify the name of the Secret object that contains the user name and password. This Secret object must be available in all namespaces that can use the referenced volume.

4.8.4. iSCSI multipathing

For iSCSI-based storage, you can configure multiple paths by using the same IQN for more than one target portal IP address. Multipathing ensures access to the persistent volume when one or more of the components in a path fail.

To specify multiple paths in the PersistentVolume definition, use the portals field. For example:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: iscsi-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  iscsi:
    targetPortal: 10.0.0.1:3260
    portals: ['10.0.2.16:3260', '10.0.2.17:3260', '10.0.2.18:3260'] 1
    iqn: iqn.2016-04.test.com:storage.target00
    lun: 0
    fsType: ext4
    readOnly: false
1
Add additional target portals using the portals field.

4.8.5. iSCSI custom initiator IQN

Configure the custom initiator iSCSI Qualified Name (IQN) if the iSCSI targets are restricted to certain IQNs, but the nodes that the iSCSI PVs are attached to are not guaranteed to have these IQNs.

To specify a custom initiator IQN, use the initiatorName field.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: iscsi-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  iscsi:
    targetPortal: 10.0.0.1:3260
    portals: ['10.0.2.16:3260', '10.0.2.17:3260', '10.0.2.18:3260']
    iqn: iqn.2016-04.test.com:storage.target00
    lun: 0
    initiatorName: iqn.2016-04.test.com:custom.iqn 1
    fsType: ext4
    readOnly: false
1
Specify the name of the initiator.

4.9. Persistent storage using NFS

OpenShift Container Platform clusters can be provisioned with persistent storage using NFS. Persistent volumes (PVs) and persistent volume claims (PVCs) provide a convenient method for sharing a volume across a project. While the NFS-specific information contained in a PV definition could also be defined directly in a Pod definition, doing so does not create the volume as a distinct cluster resource, making the volume more susceptible to conflicts.

Additional resources

4.9.1. Provisioning

Storage must exist in the underlying infrastructure before it can be mounted as a volume in OpenShift Container Platform. To provision NFS volumes, a list of NFS servers and export paths is all that is required.

Procedure

  1. Create an object definition for the PV:

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: pv0001 1
    spec:
      capacity:
        storage: 5Gi 2
      accessModes:
      - ReadWriteOnce 3
      nfs: 4
        path: /tmp 5
        server: 172.17.0.2 6
      persistentVolumeReclaimPolicy: Retain 7
    1
    The name of the volume. This is the PV identity in various oc <command> pod commands.
    2
    The amount of storage allocated to this volume.
    3
    Though this appears to be related to controlling access to the volume, it is actually used similarly to labels and used to match a PVC to a PV. Currently, no access rules are enforced based on the accessModes.
    4
    The volume type being used, in this case the nfs plugin.
    5
    The path that is exported by the NFS server.
    6
    The hostname or IP address of the NFS server.
    7
    The reclaim policy for the PV. This defines what happens to a volume when released.
    Note

    Each NFS volume must be mountable by all schedulable nodes in the cluster.

  2. Verify that the PV was created:

    $ oc get pv

    Example output

    NAME     LABELS    CAPACITY     ACCESSMODES   STATUS      CLAIM  REASON    AGE
    pv0001   <none>    5Gi          RWO           Available                    31s

  3. Create a persistent volume claim that binds to the new PV:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: nfs-claim1
    spec:
      accessModes:
        - ReadWriteOnce 1
      resources:
        requests:
          storage: 5Gi 2
      volumeName: pv0001
      storageClassName: ""
    1
    The access modes do not enforce security, but rather act as labels to match a PV to a PVC.
    2
    This claim looks for PVs offering 5Gi or greater capacity.
  4. Verify that the persistent volume claim was created:

    $ oc get pvc

    Example output

    NAME         STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    nfs-claim1   Bound    pv0001   5Gi        RWO                           2m

4.9.2. Enforcing disk quotas

You can use disk partitions to enforce disk quotas and size constraints. Each partition can be its own export. Each export is one PV. OpenShift Container Platform enforces unique names for PVs, but the uniqueness of the NFS volume’s server and path is up to the administrator.

Enforcing quotas in this way allows the developer to request persistent storage by a specific amount, such as 10Gi, and be matched with a corresponding volume of equal or greater capacity.

4.9.3. NFS volume security

This section covers NFS volume security, including matching permissions and SELinux considerations. The user is expected to understand the basics of POSIX permissions, process UIDs, supplemental groups, and SELinux.

Developers request NFS storage by referencing either a PVC by name or the NFS volume plugin directly in the volumes section of their Pod definition.

The /etc/exports file on the NFS server contains the accessible NFS directories. The target NFS directory has POSIX owner and group IDs. The OpenShift Container Platform NFS plugin mounts the container’s NFS directory with the same POSIX ownership and permissions found on the exported NFS directory. However, the container is not run with its effective UID equal to the owner of the NFS mount, which is the desired behavior.

As an example, if the target NFS directory appears on the NFS server as:

$ ls -lZ /opt/nfs -d

Example output

drwxrws---. nfsnobody 5555 unconfined_u:object_r:usr_t:s0   /opt/nfs

$ id nfsnobody

Example output

uid=65534(nfsnobody) gid=65534(nfsnobody) groups=65534(nfsnobody)

Then the container must match SELinux labels, and either run with a UID of 65534, the nfsnobody owner, or with 5555 in its supplemental groups to access the directory.

Note

The owner ID of 65534 is used as an example. Even though NFS’s root_squash maps root, uid 0, to nfsnobody, uid 65534, NFS exports can have arbitrary owner IDs. Owner 65534 is not required for NFS exports.

4.9.3.1. Group IDs

The recommended way to handle NFS access, assuming it is not an option to change permissions on the NFS export, is to use supplemental groups. Supplemental groups in OpenShift Container Platform are used for shared storage, of which NFS is an example. In contrast, block storage such as iSCSI uses the fsGroup SCC strategy and the fsGroup value in the securityContext of the pod.

Note

To gain access to persistent storage, it is generally preferable to use supplemental group IDs versus user IDs.

Because the group ID on the example target NFS directory is 5555, the pod can define that group ID using supplementalGroups under the securityContext definition of the pod. For example:

spec:
  containers:
    - name:
    ...
  securityContext: 1
    supplementalGroups: [5555] 2
1
securityContext must be defined at the pod level, not under a specific container.
2
An array of GIDs defined for the pod. In this case, there is one element in the array. Additional GIDs would be comma-separated.

Assuming there are no custom SCCs that might satisfy the pod requirements, the pod likely matches the restricted SCC. This SCC has the supplementalGroups strategy set to RunAsAny, meaning that any supplied group ID is accepted without range checking.

As a result, the above pod passes admissions and is launched. However, if group ID range checking is desired, a custom SCC is the preferred solution. A custom SCC can be created such that minimum and maximum group IDs are defined, group ID range checking is enforced, and a group ID of 5555 is allowed.
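
A minimal sketch of such a custom SCC follows; the name and group ID range are illustrative, and the other strategies are left unrestricted so that only supplemental group range checking is enforced:

apiVersion: security.openshift.io/v1
kind: SecurityContextConstraints
metadata:
  name: nfs-supplemental-groups    # illustrative name
runAsUser:
  type: RunAsAny
seLinuxContext:
  type: RunAsAny
fsGroup:
  type: RunAsAny
supplementalGroups:
  type: MustRunAs                  # enforce group ID range checking
  ranges:
  - min: 5000
    max: 6000                      # range that includes the example GID 5555
users: []
groups: []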

Note

To use a custom SCC, you must first add it to the appropriate service account. For example, use the default service account in the given project unless another has been specified on the Pod specification.

4.9.3.2. User IDs

User IDs can be defined in the container image or in the Pod definition.

Note

It is generally preferable to use supplemental group IDs to gain access to persistent storage versus using user IDs.

In the example target NFS directory shown above, the container needs its UID set to 65534, ignoring group IDs for the moment, so the following can be added to the Pod definition:

spec:
  containers: 1
  - name:
  ...
    securityContext:
      runAsUser: 65534 2
1
Pods contain a securityContext definition specific to each container and a pod’s securityContext which applies to all containers defined in the pod.
2
65534 is the nfsnobody user.

Assuming that the project is default and the SCC is restricted, the user ID of 65534 as requested by the pod is not allowed. Therefore, the pod fails for the following reasons:

  • It requests 65534 as its user ID.
  • All SCCs available to the pod are examined to see which SCC allows a user ID of 65534. While all policies of the SCCs are checked, the focus here is on user ID.
  • Because all available SCCs use MustRunAsRange for their runAsUser strategy, UID range checking is required.
  • 65534 is not included in the SCC or project’s user ID range.

It is generally considered a good practice not to modify the predefined SCCs. The preferred way to fix this situation is to create a custom SCC. A custom SCC can be created such that minimum and maximum user IDs are defined, UID range checking is still enforced, and the UID of 65534 is allowed.

Note

To use a custom SCC, you must first add it to the appropriate service account. For example, use the default service account in the given project unless another has been specified on the Pod specification.

4.9.3.3. SELinux

Red Hat Enterprise Linux (RHEL) and Red Hat Enterprise Linux CoreOS (RHCOS) systems are configured to use SELinux on remote NFS servers by default.

For non-RHEL and non-RHCOS systems, SELinux does not allow writing from a pod to a remote NFS server. The NFS volume mounts correctly but it is read-only. You will need to enable the correct SELinux permissions by using the following procedure.

Prerequisites

  • The container-selinux package must be installed. This package provides the virt_use_nfs SELinux boolean.

Procedure

  • Enable the virt_use_nfs boolean using the following command. The -P option makes this boolean persistent across reboots.

    # setsebool -P virt_use_nfs 1

4.9.3.4. Export settings

To enable arbitrary container users to read and write the volume, each exported volume on the NFS server should conform to the following conditions:

  • Every export must be exported using the following format:

    /<example_fs> *(rw,root_squash)
  • The firewall must be configured to allow traffic to the mount point.

    • For NFSv4, configure the default port 2049 (nfs).

      NFSv4

      # iptables -I INPUT 1 -p tcp --dport 2049 -j ACCEPT

    • For NFSv3, there are three ports to configure: 2049 (nfs), 20048 (mountd), and 111 (portmapper).

      NFSv3

      # iptables -I INPUT 1 -p tcp --dport 2049 -j ACCEPT

      # iptables -I INPUT 1 -p tcp --dport 20048 -j ACCEPT
      # iptables -I INPUT 1 -p tcp --dport 111 -j ACCEPT
  • The NFS export and directory must be set up so that they are accessible by the target pods. Either set the export to be owned by the container’s primary UID, or supply the pod group access using supplementalGroups, as shown in the group IDs above.

4.9.4. Reclaiming resources

NFS implements the OpenShift Container Platform Recyclable plugin interface. Automatic processes handle reclamation tasks based on policies set on each persistent volume.

By default, PVs are set to Retain.

After the claim bound to a PV is deleted and the PV is released, the PV object should not be reused. Instead, a new PV should be created with the same basic volume details as the original.

For example, the administrator creates a PV named nfs1:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs1
spec:
  capacity:
    storage: 1Mi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 192.168.1.1
    path: "/"

The user creates PVC1, which binds to nfs1. The user then deletes PVC1, releasing the claim on nfs1. This results in nfs1 being Released. If the administrator wants to make the same NFS share available, they should create a new PV with the same NFS server details, but a different PV name:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs2
spec:
  capacity:
    storage: 1Mi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 192.168.1.1
    path: "/"

Deleting the original PV and re-creating it with the same name is discouraged. Attempting to manually change the status of a PV from Released to Available causes errors and potential data loss.

4.9.5. Additional configuration and troubleshooting

Depending on what version of NFS is being used and how it is configured, there may be additional configuration steps needed for proper export and security mapping. The following are some that may apply:

NFSv4 mount incorrectly shows all files with ownership of nobody:nobody

  • Could be attributed to the ID mapping settings, found in /etc/idmapd.conf on your NFS server.
  • See this Red Hat Solution.

Disabling ID mapping on NFSv4

  • On both the NFS client and server, run:

    # echo 'Y' > /sys/module/nfsd/parameters/nfs4_disable_idmapping

4.10. Red Hat OpenShift Data Foundation

Red Hat OpenShift Data Foundation is a provider of agnostic persistent storage for OpenShift Container Platform supporting file, block, and object storage, either in-house or in hybrid clouds. As a Red Hat storage solution, Red Hat OpenShift Data Foundation is completely integrated with OpenShift Container Platform for deployment, management, and monitoring. For more information, see the Red Hat OpenShift Data Foundation documentation.

Important

OpenShift Data Foundation on top of Red Hat Hyperconverged Infrastructure (RHHI) for Virtualization, which uses hyperconverged nodes that host virtual machines installed with OpenShift Container Platform, is not a supported configuration. For more information about supported platforms, see the Red Hat OpenShift Data Foundation Supportability and Interoperability Guide.

4.11. Persistent storage using VMware vSphere volumes

OpenShift Container Platform allows use of VMware vSphere’s Virtual Machine Disk (VMDK) volumes. You can provision your OpenShift Container Platform cluster with persistent storage using VMware vSphere. Some familiarity with Kubernetes and VMware vSphere is assumed.

VMware vSphere volumes can be provisioned dynamically. OpenShift Container Platform creates the disk in vSphere and attaches this disk to the correct image.

Note

OpenShift Container Platform provisions new volumes as independent persistent disks that can freely attach and detach the volume on any node in the cluster. Consequently, you cannot back up volumes that use snapshots, or restore volumes from snapshots. See Snapshot Limitations for more information.

The Kubernetes persistent volume framework allows administrators to provision a cluster with persistent storage and gives users a way to request those resources without having any knowledge of the underlying infrastructure.

Persistent volumes are not bound to a single project or namespace; they can be shared across the OpenShift Container Platform cluster. Persistent volume claims are specific to a project or namespace and can be requested by users.

Important

For new installations, OpenShift Container Platform 4.13 and later provides automatic migration for the vSphere in-tree volume plugin to its equivalent CSI driver. Updating to OpenShift Container Platform 4.15 and later also provides automatic migration. For more information about updating and migration, see CSI automatic migration.

CSI automatic migration should be seamless. Migration does not change how you use all existing API objects, such as persistent volumes, persistent volume claims, and storage classes.

4.11.1. Dynamically provisioning VMware vSphere volumes

Dynamically provisioning VMware vSphere volumes is the recommended method.

4.11.2. Prerequisites

  • An OpenShift Container Platform cluster installed on a VMware vSphere version that meets the requirements for the components that you use. See Installing a cluster on vSphere for information about vSphere version support.

You can use either of the following procedures to dynamically provision these volumes using the default storage class.

4.11.2.1. Dynamically provisioning VMware vSphere volumes using the UI

OpenShift Container Platform installs a default storage class, named thin, that uses the thin disk format for provisioning volumes.

Prerequisites

  • Storage must exist in the underlying infrastructure before it can be mounted as a volume in OpenShift Container Platform.

Procedure

  1. In the OpenShift Container Platform console, click Storage Persistent Volume Claims.
  2. In the persistent volume claims overview, click Create Persistent Volume Claim.
  3. Define the required options on the resulting page.

    1. Select the thin storage class.
    2. Enter a unique name for the storage claim.
    3. Select the access mode to determine the read and write access for the created storage claim.
    4. Define the size of the storage claim.
  4. Click Create to create the persistent volume claim and generate a persistent volume.

4.11.2.2. Dynamically provisioning VMware vSphere volumes using the CLI

OpenShift Container Platform installs a default StorageClass, named thin, that uses the thin disk format for provisioning volumes.

Prerequisites

  • Storage must exist in the underlying infrastructure before it can be mounted as a volume in OpenShift Container Platform.

Procedure (CLI)

  1. You can define a VMware vSphere PersistentVolumeClaim by creating a file, pvc.yaml, with the following contents:

    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: pvc 1
    spec:
      accessModes:
      - ReadWriteOnce 2
      resources:
        requests:
          storage: 1Gi 3
    1
    A unique name that represents the persistent volume claim.
    2
    The access mode of the persistent volume claim. With ReadWriteOnce, the volume can be mounted with read and write permissions by a single node.
    3
    The size of the persistent volume claim.
  2. Enter the following command to create the PersistentVolumeClaim object from the file:

    $ oc create -f pvc.yaml
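
After the claim is created, you can confirm that it is bound to a dynamically provisioned volume; the claim name matches the metadata.name value in pvc.yaml:

$ oc get pvc pvc

Because no storageClassName is set, the claim uses the default storage class. To request a specific storage class instead, such as the thin class described above, add spec.storageClassName to the claim definition.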

4.11.3. Statically provisioning VMware vSphere volumes

To statically provision VMware vSphere volumes you must create the virtual machine disks for reference by the persistent volume framework.

Prerequisites

  • Storage must exist in the underlying infrastructure before it can be mounted as a volume in OpenShift Container Platform.

Procedure

  1. Create the virtual machine disks. Virtual machine disks (VMDKs) must be created manually before statically provisioning VMware vSphere volumes. Use either of the following methods:

    • Create using vmkfstools. Access ESX through Secure Shell (SSH) and then use the following command to create a VMDK volume:

      $ vmkfstools -c <size> /vmfs/volumes/<datastore-name>/volumes/<disk-name>.vmdk
    • Create using vmware-vdiskmanager:

      $ vmware-vdiskmanager -c -t 0 -s <size> -a lsilogic <disk-name>.vmdk
  2. Create a persistent volume that references the VMDKs. Create a file, pv1.yaml, with the PersistentVolume object definition:

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: pv1 1
    spec:
      capacity:
        storage: 1Gi 2
      accessModes:
        - ReadWriteOnce
      persistentVolumeReclaimPolicy: Retain
      vsphereVolume: 3
        volumePath: "[datastore1] volumes/myDisk"  4
        fsType: ext4  5
    1
    The name of the volume. This name is how it is identified by persistent volume claims or pods.
    2
    The amount of storage allocated to this volume.
    3
    The volume type used, with vsphereVolume for vSphere volumes. The label is used to mount a vSphere VMDK volume into pods. The contents of a volume are preserved when it is unmounted. The volume type supports VMFS and VSAN datastores.
    4
    The existing VMDK volume to use. If you used vmkfstools, you must enclose the datastore name in square brackets, [], in the volume definition, as shown previously.
    5
    The file system type to mount. For example, ext4, xfs, or other file systems.
    Important

    Changing the value of the fsType parameter after the volume is formatted and provisioned can result in data loss and pod failure.

  3. Create the PersistentVolume object from the file:

    $ oc create -f pv1.yaml
  4. Create a persistent volume claim that maps to the persistent volume you created in the previous step. Create a file, pvc1.yaml, with the PersistentVolumeClaim object definition:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: pvc1 1
    spec:
      accessModes:
        - ReadWriteOnce 2
      resources:
        requests:
          storage: "1Gi" 3
      volumeName: pv1 4
    1
    A unique name that represents the persistent volume claim.
    2
    The access mode of the persistent volume claim. With ReadWriteOnce, the volume can be mounted with read and write permissions by a single node.
    3
    The size of the persistent volume claim.
    4
    The name of the existing persistent volume.
  5. Create the PersistentVolumeClaim object from the file:

    $ oc create -f pvc1.yaml
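
Because the claim sets volumeName: pv1, it binds directly to that persistent volume, provided the requested size and access mode can be satisfied. You can confirm the binding with the following commands:

$ oc get pv pv1
$ oc get pvc pvc1

Both objects report a Bound status once binding completes.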

4.11.3.1. Formatting VMware vSphere volumes

Before OpenShift Container Platform mounts the volume and passes it to a container, it checks that the volume contains a file system that is specified by the fsType parameter value in the PersistentVolume (PV) definition. If the device is not formatted with the file system, all data from the device is erased, and the device is automatically formatted with the specified file system.

Because OpenShift Container Platform formats them before the first use, you can use unformatted vSphere volumes as PVs.

4.12. Persistent storage using local storage

4.12.1. Local storage overview

You can use any of the following solutions to provision local storage:

  • HostPath Provisioner (HPP)
  • Local Storage Operator (LSO)
  • Logical Volume Manager (LVM) Storage
Warning

These solutions support provisioning only node-local storage. The workloads are bound to the nodes that provide the storage. If the node becomes unavailable, the workload also becomes unavailable. To maintain workload availability despite node failures, you must ensure storage data replication through active or passive replication mechanisms.

4.12.1.1. Overview of HostPath Provisioner functionality

You can perform the following actions using HostPath Provisioner (HPP):

  • Map the host filesystem paths to storage classes for provisioning local storage.
  • Statically create storage classes to configure filesystem paths on a node for storage consumption.
  • Statically provision Persistent Volumes (PVs) based on the storage class.
  • Create workloads and PersistentVolumeClaims (PVCs) while being aware of the underlying storage topology.
Note

HPP is available in upstream Kubernetes. However, it is not recommended to use HPP from upstream Kubernetes.
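
To illustrate the mapping of host filesystem paths to storage classes described above, the following is a minimal sketch of a storage class backed by HPP. The provisioner name and the storagePool parameter are assumptions that depend on how HPP is deployed (for example, through the hostpath provisioner Operator used with OpenShift Virtualization), so verify them against your deployment before use:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hostpath-csi                            # illustrative name
provisioner: kubevirt.io.hostpath-provisioner   # assumption: the provisioner name depends on how HPP is deployed
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer         # bind only after a pod is scheduled, so the target node is known
parameters:
  storagePool: local                            # assumption: must match a storage pool (host path) configured for HPP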

4.12.1.2. Overview of Local Storage Operator functionality

You can perform the following actions using Local Storage Operator (LSO):

  • Assign the storage devices (disks or partitions) to the storage classes without modifying the device configuration.
  • Statically provision PVs and storage classes by configuring the LocalVolume custom resource (CR).
  • Create workloads and PVCs while being aware of the underlying storage topology.
Note

LSO is developed and delivered by Red Hat.

4.12.1.3. Overview of LVM Storage functionality

You can perform the following actions using Logical Volume Manager (LVM) Storage:

  • Configure storage devices (disks or partitions) as lvm2 volume groups and expose the volume groups as storage classes.
  • Create workloads and request storage by using PVCs without considering the node topology.

LVM Storage uses the TopoLVM CSI driver to dynamically allocate storage space to the nodes in the topology and provision PVs.

Note

LVM Storage is developed and maintained by Red Hat. The CSI driver provided with LVM Storage is the upstream project "topolvm".
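
As an illustration of how storage devices are grouped into volume groups and exposed as storage classes, the following is a minimal sketch of an LVMCluster custom resource. The API version, field names, and values are assumptions based on the lvms-operator and should be verified against the LVM Storage documentation for your release:

apiVersion: lvm.topolvm.io/v1alpha1
kind: LVMCluster
metadata:
  name: my-lvmcluster
  namespace: openshift-storage
spec:
  storage:
    deviceClasses:
    - name: vg1                  # each device class becomes an lvm2 volume group and a storage class
      default: true              # assumption: marks the device class used when no storage class is specified
      thinPoolConfig:
        name: thin-pool-1        # thin pool created inside the volume group
        sizePercent: 90          # assumption: percentage of the volume group reserved for the thin pool
        overprovisionRatio: 10   # assumption: allowed thin over-provisioning factor

LVM Storage then creates a storage class for each device class, which workloads can reference in their PVCs without considering the node topology.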

4.12.1.4. Comparison of LVM Storage, LSO, and HPP

The following sections compare the functionalities provided by LVM Storage, Local Storage Operator (LSO), and HostPath Provisioner (HPP) to provision local storage.

4.12.1.4.1. Comparison of the support for storage types and filesystems

The following table compares the support for storage types and filesystems provided by LVM Storage, Local Storage Operator (LSO), and HostPath Provisioner (HPP) to provision local storage:

Table 4.1. Comparison of the support for storage types and filesystems

Functionality | LVM Storage | LSO | HPP
Support for block storage | Yes | Yes | No
Support for file storage | Yes | Yes | Yes
Support for object storage [1] | No | No | No
Available filesystems | ext4, xfs | ext4, xfs | Any mounted system available on the node is supported.

  1. None of the solutions (LVM Storage, LSO, and HPP) provide support for object storage. Therefore, if you want to use object storage, you need an S3 object storage solution, such as MultiClusterGateway from the Red Hat OpenShift Data Foundation. All of the solutions can serve as underlying storage providers for the S3 object storage solutions.
4.12.1.4.2. Comparison of the support for core functionalities

The following table compares how LVM Storage, Local Storage Operator (LSO), and HostPath Provisioner (HPP) support core functionalities for provisioning local storage:

Table 4.2. Comparison of the support for core functionalities

Functionality | LVM Storage | LSO | HPP
Support for automatic file system formatting | Yes | Yes | N/A
Support for dynamic provisioning | Yes | No | No
Support for using software Redundant Array of Independent Disks (RAID) arrays | Yes. Supported on 4.15 and later. | Yes | Yes
Support for transparent disk encryption | Yes. Supported on 4.16 and later. | Yes | Yes
Support for volume based disk encryption | No | No | No
Support for disconnected installation | Yes | Yes | Yes
Support for PVC expansion | Yes | No | No
Support for volume snapshots and volume clones | Yes | No | No
Support for thin provisioning | Yes. Devices are thin-provisioned by default. | Yes. You can configure the devices to point to the thin-provisioned volumes. | Yes. You can configure a path to point to the thin-provisioned volumes.
Support for automatic disk discovery and setup | Yes. Automatic disk discovery is available during installation and runtime. You can also dynamically add the disks to the LVMCluster custom resource (CR) to increase the storage capacity of the existing storage classes. | Technology Preview. Automatic disk discovery is available during installation. | No

4.12.1.4.3. Comparison of performance and isolation capabilities

The following table compares the performance and isolation capabilities of LVM Storage, Local Storage Operator (LSO), and HostPath Provisioner (HPP) in provisioning local storage.

Table 4.3. Comparison of performance and isolation capabilities

Functionality | LVM Storage | LSO | HPP
Performance | I/O speed is shared for all workloads that use the same storage class. Block storage allows direct I/O operations. Thin provisioning can affect the performance. | I/O depends on the LSO configuration. Block storage allows direct I/O operations. | I/O speed is shared for all workloads that use the same storage class. The restrictions imposed by the underlying filesystem can affect the I/O speed.
Isolation boundary [1] | LVM Logical Volume (LV). It provides a higher level of isolation compared to HPP. | LVM Logical Volume (LV). It provides a higher level of isolation compared to HPP. | Filesystem path. It provides a lower level of isolation compared to LSO and LVM Storage.

  1. Isolation boundary refers to the level of separation between different workloads or applications that use local storage resources.
4.12.1.4.4. Comparison of the support for additional functionalities

The following table compares the additional features provided by LVM Storage, Local Storage Operator (LSO), and HostPath Provisioner (HPP) to provision local storage:

Table 4.4. Comparison of the support for additional functionalities

Functionality | LVM Storage | LSO | HPP
Support for generic ephemeral volumes | Yes | No | No
Support for CSI inline ephemeral volumes | No | No | No
Support for storage topology | Yes. Supports CSI node topology. | Yes. LSO provides partial support for storage topology through node tolerations. | No
Support for ReadWriteMany (RWX) access mode [1] | No | No | No

  1. All of the solutions (LVM Storage, LSO, and HPP) have the ReadWriteOnce (RWO) access mode. RWO access mode allows access from multiple pods on the same node.

4.12.2. Persistent storage using local volumes

OpenShift Container Platform can be provisioned with persistent storage by using local volumes. Local persistent volumes allow you to access local storage devices, such as a disk or partition, by using the standard persistent volume claim interface.

Local volumes can be used without manually scheduling pods to nodes because the system is aware of the volume node constraints. However, local volumes are still subject to the availability of the underlying node and are not suitable for all applications.

Note

Local volumes can only be used as a statically created persistent volume.

4.12.2.1. Installing the Local Storage Operator

The Local Storage Operator is not installed in OpenShift Container Platform by default. Use the following procedure to install and configure this Operator to enable local volumes in your cluster.

Prerequisites

  • Access to the OpenShift Container Platform web console or command-line interface (CLI).

Procedure

  1. Create the openshift-local-storage project:

    $ oc adm new-project openshift-local-storage
  2. Optional: Allow local storage creation on infrastructure nodes.

    You might want to use the Local Storage Operator to create volumes on infrastructure nodes in support of components such as logging and monitoring.

    You must adjust the default node selector so that the Local Storage Operator includes the infrastructure nodes, and not just worker nodes.

    To block the Local Storage Operator from inheriting the cluster-wide default selector, enter the following command:

    $ oc annotate namespace openshift-local-storage openshift.io/node-selector=''
  3. Optional: Allow local storage to run on the management pool of CPUs in single-node deployments.

    Use the Local Storage Operator in single-node deployments and allow the use of CPUs that belong to the management pool. Perform this step on single-node installations that use management workload partitioning.

    To allow the Local Storage Operator to run on the management CPU pool, run the following command:

    $ oc annotate namespace openshift-local-storage workload.openshift.io/allowed='management'

From the UI

To install the Local Storage Operator from the web console, follow these steps:

  1. Log in to the OpenShift Container Platform web console.
  2. Navigate to Operators OperatorHub.
  3. Type Local Storage into the filter box to locate the Local Storage Operator.
  4. Click Install.
  5. On the Install Operator page, select A specific namespace on the cluster. Select openshift-local-storage from the drop-down menu.
  6. Adjust the values for Update Channel and Approval Strategy to the values that you want.
  7. Click Install.

Once finished, the Local Storage Operator will be listed in the Installed Operators section of the web console.

From the CLI

  1. Install the Local Storage Operator from the CLI.

    1. Create an object YAML file to define an Operator group and subscription for the Local Storage Operator, such as openshift-local-storage.yaml:

      Example openshift-local-storage.yaml

      apiVersion: operators.coreos.com/v1
      kind: OperatorGroup
      metadata:
        name: local-operator-group
        namespace: openshift-local-storage
      spec:
        targetNamespaces:
          - openshift-local-storage
      ---
      apiVersion: operators.coreos.com/v1alpha1
      kind: Subscription
      metadata:
        name: local-storage-operator
        namespace: openshift-local-storage
      spec:
        channel: stable
        installPlanApproval: Automatic 1
        name: local-storage-operator
        source: redhat-operators
        sourceNamespace: openshift-marketplace

      1
      The user approval policy for an install plan.
  2. Create the Local Storage Operator object by entering the following command:

    $ oc apply -f openshift-local-storage.yaml

    At this point, the Operator Lifecycle Manager (OLM) is now aware of the Local Storage Operator. A ClusterServiceVersion (CSV) for the Operator should appear in the target namespace, and APIs provided by the Operator should be available for creation.

  3. Verify local storage installation by checking that all pods and the Local Storage Operator have been created:

    1. Check that all the required pods have been created:

      $ oc -n openshift-local-storage get pods

      Example output

      NAME                                      READY   STATUS    RESTARTS   AGE
      local-storage-operator-746bf599c9-vlt5t   1/1     Running   0          19m

    2. Check the ClusterServiceVersion (CSV) YAML manifest to see that the Local Storage Operator is available in the openshift-local-storage project:

      $ oc get csvs -n openshift-local-storage

      Example output

      NAME                                         DISPLAY         VERSION               REPLACES   PHASE
      local-storage-operator.4.2.26-202003230335   Local Storage   4.2.26-202003230335              Succeeded

After all checks have passed, the Local Storage Operator is installed successfully.

4.12.2.2. Provisioning local volumes by using the Local Storage Operator

Local volumes cannot be created by dynamic provisioning. Instead, persistent volumes can be created by the Local Storage Operator. The local volume provisioner looks for any file system or block volume devices at the paths specified in the defined resource.

Prerequisites

  • The Local Storage Operator is installed.
  • You have a local disk that meets the following conditions:

    • It is attached to a node.
    • It is not mounted.
    • It does not contain partitions.

Procedure

  1. Create the local volume resource. This resource must define the nodes and paths to the local volumes.

    Note

    Do not use different storage class names for the same device. Doing so will create multiple persistent volumes (PVs).

    Example: Filesystem

    apiVersion: "local.storage.openshift.io/v1"
    kind: "LocalVolume"
    metadata:
      name: "local-disks"
      namespace: "openshift-local-storage" 1
    spec:
      nodeSelector: 2
        nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
              - ip-10-0-140-183
              - ip-10-0-158-139
              - ip-10-0-164-33
      storageClassDevices:
        - storageClassName: "local-sc" 3
          forceWipeDevicesAndDestroyAllData: false 4
          volumeMode: Filesystem 5
          fsType: xfs 6
          devicePaths: 7
            - /path/to/device 8

    1
    The namespace where the Local Storage Operator is installed.
    2
    Optional: A node selector containing a list of nodes where the local storage volumes are attached. This example uses the node hostnames, obtained from oc get node. If a value is not defined, then the Local Storage Operator will attempt to find matching disks on all available nodes.
    3
    The name of the storage class to use when creating persistent volume objects. The Local Storage Operator automatically creates the storage class if it does not exist. Be sure to use a storage class that uniquely identifies this set of local volumes.
    4
    This setting defines whether or not to call wipefs, which removes partition table signatures (magic strings) making the disk ready to use for Local Storage Operator (LSO) provisioning. No other data besides signatures is erased. The default is "false" (wipefs is not invoked). Setting forceWipeDevicesAndDestroyAllData to "true" can be useful in scenarios where previous data can remain on disks that need to be re-used. In these scenarios, setting this field to true eliminates the need for administrators to erase the disks manually. Such cases can include single-node OpenShift (SNO) cluster environments where a node can be redeployed multiple times or when using OpenShift Data Foundation (ODF), where previous data can remain on the disks planned to be consumed as object storage devices (OSDs).
    5
    The volume mode, either Filesystem or Block, that defines the type of local volumes.
    Note

    A raw block volume (volumeMode: Block) is not formatted with a file system. Use this mode only if any application running on the pod can use raw block devices.

    6
    The file system that is created when the local volume is mounted for the first time.
    7
    The path containing a list of local storage devices to choose from.
    8
    Replace this value with the actual file path to your local disk. It is recommended to use the stable device path under /dev/disk/by-id, such as /dev/disk/by-id/wwn. PVs are created for these local disks when the provisioner is deployed successfully.
    Note

    If you are running OpenShift Container Platform with RHEL KVM, you must assign a serial number to your VM disk. Otherwise, the VM disk cannot be identified after reboot. You can use the virsh edit <VM> command to add the <serial>mydisk</serial> definition.

    Example: Block

    apiVersion: "local.storage.openshift.io/v1"
    kind: "LocalVolume"
    metadata:
      name: "local-disks"
      namespace: "openshift-local-storage" 1
    spec:
      nodeSelector: 2
        nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
              - ip-10-0-136-143
              - ip-10-0-140-255
              - ip-10-0-144-180
      storageClassDevices:
        - storageClassName: "local-sc" 3
          forceWipeDevicesAndDestroyAllData: false 4
          volumeMode: Block 5
          devicePaths: 6
            - /path/to/device 7

    1
    The namespace where the Local Storage Operator is installed.
    2
    Optional: A node selector containing a list of nodes where the local storage volumes are attached. This example uses the node hostnames, obtained from oc get node. If a value is not defined, then the Local Storage Operator will attempt to find matching disks on all available nodes.
    3
    The name of the storage class to use when creating persistent volume objects.
    4
    This setting defines whether or not to call wipefs, which removes partition table signatures (magic strings) making the disk ready to use for Local Storage Operator (LSO) provisioning. No other data besides signatures is erased. The default is "false" (wipefs is not invoked). Setting forceWipeDevicesAndDestroyAllData to "true" can be useful in scenarios where previous data can remain on disks that need to be re-used. In these scenarios, setting this field to true eliminates the need for administrators to erase the disks manually. Such cases can include single-node OpenShift (SNO) cluster environments where a node can be redeployed multiple times or when using OpenShift Data Foundation (ODF), where previous data can remain on the disks planned to be consumed as object storage devices (OSDs).
    5
    The volume mode, either Filesystem or Block, that defines the type of local volumes.
    6
    The path containing a list of local storage devices to choose from.
    7
    Replace this value with the actual file path to your local disk. It is recommended to use the stable device path under /dev/disk/by-id, such as /dev/disk/by-id/wwn. PVs are created for these local disks when the provisioner is deployed successfully.
    Note

    If you are running OpenShift Container Platform with RHEL KVM, you must assign a serial number to your VM disk. Otherwise, the VM disk cannot be identified after reboot. You can use the virsh edit <VM> command to add the <serial>mydisk</serial> definition.

  2. Create the local volume resource in your OpenShift Container Platform cluster. Specify the file you just created:

    $ oc create -f <local-volume>.yaml
  3. Verify that the provisioner was created and that the corresponding daemon sets were created:

    $ oc get all -n openshift-local-storage

    Example output

    NAME                                          READY   STATUS    RESTARTS   AGE
    pod/diskmaker-manager-9wzms                   1/1     Running   0          5m43s
    pod/diskmaker-manager-jgvjp                   1/1     Running   0          5m43s
    pod/diskmaker-manager-tbdsj                   1/1     Running   0          5m43s
    pod/local-storage-operator-7db4bd9f79-t6k87   1/1     Running   0          14m
    
    NAME                                     TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)             AGE
    service/local-storage-operator-metrics   ClusterIP   172.30.135.36   <none>        8383/TCP,8686/TCP   14m
    
    NAME                               DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
    daemonset.apps/diskmaker-manager   3         3         3       3            3           <none>          5m43s
    
    NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE
    deployment.apps/local-storage-operator   1/1     1            1           14m
    
    NAME                                                DESIRED   CURRENT   READY   AGE
    replicaset.apps/local-storage-operator-7db4bd9f79   1         1         1       14m

    Note the desired and current number of daemon set processes. A desired count of 0 indicates that the label selectors were invalid.

  4. Verify that the persistent volumes were created:

    $ oc get pv

    Example output

    NAME                CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
    local-pv-1cec77cf   100Gi      RWO            Delete           Available           local-sc                88m
    local-pv-2ef7cd2a   100Gi      RWO            Delete           Available           local-sc                82m
    local-pv-3fa1c73    100Gi      RWO            Delete           Available           local-sc                48m

Important

Editing the LocalVolume object does not change the fsType or volumeMode of existing persistent volumes because doing so might result in a destructive operation.

4.12.2.3. Provisioning local volumes without the Local Storage Operator

Local volumes cannot be created by dynamic provisioning. Instead, persistent volumes can be created by defining the persistent volume (PV) in an object definition. The local volume provisioner looks for any file system or block volume devices at the paths specified in the defined resource.

Important

Manual provisioning of PVs includes the risk of potential data leaks across PV reuse when PVCs are deleted. The Local Storage Operator is recommended for automating the life cycle of devices when provisioning local PVs.

Prerequisites

  • Local disks are attached to the OpenShift Container Platform nodes.

Procedure

  1. Define the PV. Create a file, such as example-pv-filesystem.yaml or example-pv-block.yaml, with the PersistentVolume object definition. This resource must define the nodes and paths to the local volumes.

    Note

    Do not use different storage class names for the same device. Doing so will create multiple PVs.

    example-pv-filesystem.yaml

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: example-pv-filesystem
    spec:
      capacity:
        storage: 100Gi
      volumeMode: Filesystem 1
      accessModes:
      - ReadWriteOnce
      persistentVolumeReclaimPolicy: Delete
      storageClassName: local-sc 2
      local:
        path: /dev/xvdf 3
      nodeAffinity:
        required:
          nodeSelectorTerms:
          - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
              - example-node

    1
    The volume mode, either Filesystem or Block, that defines the type of PVs.
    2
    The name of the storage class to use when creating PV resources. Use a storage class that uniquely identifies this set of PVs.
    3
    The path containing a list of local storage devices to choose from, or a directory. You can only specify a directory with Filesystem volumeMode.
    Note

    A raw block volume (volumeMode: block) is not formatted with a file system. Use this mode only if any application running on the pod can use raw block devices.

    example-pv-block.yaml

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: example-pv-block
    spec:
      capacity:
        storage: 100Gi
      volumeMode: Block 1
      accessModes:
      - ReadWriteOnce
      persistentVolumeReclaimPolicy: Delete
      storageClassName: local-sc 2
      local:
        path: /dev/xvdf 3
      nodeAffinity:
        required:
          nodeSelectorTerms:
          - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
              - example-node

    1
    The volume mode, either Filesystem or Block, that defines the type of PVs.
    2
    The name of the storage class to use when creating PV resources. Be sure to use a storage class that uniquely identifies this set of PVs.
    3
    The path containing a list of local storage devices to choose from.
  2. Create the PV resource in your OpenShift Container Platform cluster. Specify the file you just created:

    $ oc create -f <example-pv>.yaml
  3. Verify that the local PV was created:

    $ oc get pv

    Example output

    NAME                    CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                STORAGECLASS    REASON   AGE
    example-pv-filesystem   100Gi      RWO            Delete           Available                        local-sc            3m47s
    example-pv1             1Gi        RWO            Delete           Bound       local-storage/pvc1   local-sc            12h
    example-pv2             1Gi        RWO            Delete           Bound       local-storage/pvc2   local-sc            12h
    example-pv3             1Gi        RWO            Delete           Bound       local-storage/pvc3   local-sc            12h

4.12.2.4. Creating the local volume persistent volume claim

Local volumes must be statically created as a persistent volume claim (PVC) to be accessed by the pod.

Prerequisites

  • Persistent volumes have been created using the local volume provisioner.

Procedure

  1. Create the PVC using the corresponding storage class:

    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: local-pvc-name 1
    spec:
      accessModes:
      - ReadWriteOnce
      volumeMode: Filesystem 2
      resources:
        requests:
          storage: 100Gi 3
      storageClassName: local-sc 4
    1
    Name of the PVC.
    2
    The type of the PVC. Defaults to Filesystem.
    3
    The amount of storage available to the PVC.
    4
    Name of the storage class required by the claim.
  2. Create the PVC in the OpenShift Container Platform cluster, specifying the file you just created:

    $ oc create -f <local-pvc>.yaml
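
Because storage classes created by the Local Storage Operator typically use the WaitForFirstConsumer volume binding mode, the claim can remain in a Pending state until a pod that uses it is scheduled. You can check the claim status by running the following command:

$ oc get pvc local-pvc-name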

4.12.2.5. Attach the local claim

After a local volume has been mapped to a persistent volume claim, it can be specified inside of a resource.

Prerequisites

  • A persistent volume claim exists in the same namespace.

Procedure

  1. Include the defined claim in the resource spec. The following example declares the persistent volume claim inside a pod:

    apiVersion: v1
    kind: Pod
    spec:
    # ...
      containers:
      - name: <container_name>
        # ...
        volumeMounts:
        - name: local-disks 1
          mountPath: /data 2
      volumes:
      - name: local-disks
        persistentVolumeClaim:
          claimName: local-pvc-name 3
    # ...
    1
    The name of the volume to mount.
    2
    The path inside the pod where the volume is mounted. Do not mount to the container root, /, or any path that is the same in the host and the container. This can corrupt your host system if the container is sufficiently privileged, such as the host /dev/pts files. It is safe to mount the host by using /host.
    3
    The name of the existing persistent volume claim to use.
  2. Create the resource in the OpenShift Container Platform cluster, specifying the file you just created:

    $ oc create -f <local-pod>.yaml

4.12.2.6. Automating discovery and provisioning for local storage devices

The Local Storage Operator automates local storage discovery and provisioning. With this feature, you can simplify installation when dynamic provisioning is not available during deployment, such as with bare metal, VMware, or AWS store instances with attached devices.

Important

Automatic discovery and provisioning is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

Important

Automatic discovery and provisioning is fully supported when used to deploy Red Hat OpenShift Data Foundation on-premise or with platform-agnostic deployment.

Use the following procedure to automatically discover local devices, and to automatically provision local volumes for selected devices.

Warning

Use the LocalVolumeSet object with caution. When you automatically provision persistent volumes (PVs) from local disks, the local PVs might claim all devices that match. If you are using a LocalVolumeSet object, make sure the Local Storage Operator is the only entity managing local devices on the node. Creating multiple instances of a LocalVolumeSet that target a node more than once is not supported.

Prerequisites

  • You have cluster administrator permissions.
  • You have installed the Local Storage Operator.
  • You have attached local disks to OpenShift Container Platform nodes.
  • You have access to the OpenShift Container Platform web console and the oc command-line interface (CLI).

Procedure

  1. To enable automatic discovery of local devices from the web console:

    1. Click Operators Installed Operators.
    2. In the openshift-local-storage namespace, click Local Storage.
    3. Click the Local Volume Discovery tab.
    4. Click Create Local Volume Discovery and then select either Form view or YAML view.
    5. Configure the LocalVolumeDiscovery object parameters.
    6. Click Create.

      The Local Storage Operator creates a local volume discovery instance named auto-discover-devices.

  2. To display a continuous list of available devices on a node:

    1. Log in to the OpenShift Container Platform web console.
    2. Navigate to Compute Nodes.
    3. Click the node name that you want to open. The "Node Details" page is displayed.
    4. Select the Disks tab to display the list of the selected devices.

      The device list updates continuously as local disks are added or removed. You can filter the devices by name, status, type, model, capacity, and mode.

  3. To automatically provision local volumes for the discovered devices from the web console:

    1. Navigate to Operators Installed Operators and select Local Storage from the list of Operators.
    2. Select Local Volume Set Create Local Volume Set.
    3. Enter a volume set name and a storage class name.
    4. Choose All nodes or Select nodes to apply filters accordingly.

      Note

      Only worker nodes are available, regardless of whether you filter using All nodes or Select nodes.

    5. Select the disk type, mode, size, and limit you want to apply to the local volume set, and click Create.

      A message displays after several minutes, indicating that the "Operator reconciled successfully."

  4. Alternatively, to provision local volumes for the discovered devices from the CLI:

    1. Create an object YAML file to define the local volume set, such as local-volume-set.yaml, as shown in the following example:

      apiVersion: local.storage.openshift.io/v1alpha1
      kind: LocalVolumeSet
      metadata:
        name: example-autodetect
      spec:
        nodeSelector:
          nodeSelectorTerms:
            - matchExpressions:
                - key: kubernetes.io/hostname
                  operator: In
                  values:
                    - worker-0
                    - worker-1
        storageClassName: local-sc 1
        volumeMode: Filesystem
        fsType: ext4
        maxDeviceCount: 10
        deviceInclusionSpec:
          deviceTypes: 2
            - disk
            - part
          deviceMechanicalProperties:
            - NonRotational
          minSize: 10G
          maxSize: 100G
          models:
            - SAMSUNG
            - Crucial_CT525MX3
          vendors:
            - ATA
            - ST2000LM
      1
      Determines the storage class that is created for persistent volumes that are provisioned from discovered devices. The Local Storage Operator automatically creates the storage class if it does not exist. Be sure to use a storage class that uniquely identifies this set of local volumes.
      2
      When using the local volume set feature, the Local Storage Operator does not support the use of logical volume management (LVM) devices.
    2. Create the local volume set object:

      $ oc apply -f local-volume-set.yaml
    3. Verify that the local persistent volumes were dynamically provisioned based on the storage class:

      $ oc get pv

      Example output

      NAME                CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
      local-pv-1cec77cf   100Gi      RWO            Delete           Available           local-sc                88m
      local-pv-2ef7cd2a   100Gi      RWO            Delete           Available           local-sc                82m
      local-pv-3fa1c73    100Gi      RWO            Delete           Available           local-sc                48m

Note

Results are deleted after they are removed from the node. Symlinks must be manually removed.

4.12.2.7. Using tolerations with Local Storage Operator pods

Taints can be applied to nodes to prevent them from running general workloads. To allow the Local Storage Operator to use tainted nodes, you must add tolerations to the Pod or DaemonSet definition. This allows the created resources to run on these tainted nodes.

You apply tolerations to the Local Storage Operator pod through the LocalVolume resource and apply taints to a node through the node specification. A taint on a node instructs the node to repel all pods that do not tolerate the taint. Using a specific taint that is not on other pods ensures that the Local Storage Operator pod can also run on that node.

Important

Taints and tolerations consist of a key, value, and effect. As an argument, it is expressed as key=value:effect. An operator allows you to leave one of these parameters empty.

Prerequisites

  • The Local Storage Operator is installed.
  • Local disks are attached to OpenShift Container Platform nodes with a taint.
  • Tainted nodes are expected to provision local storage.

Procedure

To configure local volumes for scheduling on tainted nodes:

  1. Add the toleration to the YAML file that defines the LocalVolume resource, as shown in the following example:

      apiVersion: "local.storage.openshift.io/v1"
      kind: "LocalVolume"
      metadata:
        name: "local-disks"
        namespace: "openshift-local-storage"
      spec:
        tolerations:
          - key: localstorage 1
            operator: Equal 2
            value: "localstorage" 3
        storageClassDevices:
            - storageClassName: "local-sc"
              volumeMode: Block 4
              devicePaths: 5
                - /dev/xvdg
    1
    Specify the key that you added to the node.
    2
    Specify the Equal operator to require the key/value parameters to match. If operator is Exists, the system checks that the key exists and ignores the value. If operator is Equal, then the key and value must match.
    3
    Specify the value of the taint applied to the node, which is localstorage in this example.
    4
    The volume mode, either Filesystem or Block, defining the type of the local volumes.
    5
    The path containing a list of local storage devices to choose from.
  2. Optional: To create local persistent volumes on only tainted nodes, modify the YAML file and add the LocalVolume spec, as shown in the following example:

    spec:
      tolerations:
        - key: node-role.kubernetes.io/master
          operator: Exists

The defined tolerations will be passed to the resulting daemon sets, allowing the diskmaker and provisioner pods to be created for nodes that contain the specified taints.
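
For reference, a taint that matches the example toleration can be applied to a node with a command such as the following; the NoSchedule effect is illustrative, and the key and value must match what the toleration expects:

$ oc adm taint nodes <node-name> localstorage=localstorage:NoSchedule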

4.12.2.8. Local Storage Operator Metrics

OpenShift Container Platform provides the following metrics for the Local Storage Operator:

  • lso_discovery_disk_count: total number of discovered devices on each node
  • lso_lvset_provisioned_PV_count: total number of PVs created by LocalVolumeSet objects
  • lso_lvset_unmatched_disk_count: total number of disks that Local Storage Operator did not select for provisioning because of mismatching criteria
  • lso_lvset_orphaned_symlink_count: number of devices with PVs that no longer match LocalVolumeSet object criteria
  • lso_lv_orphaned_symlink_count: number of devices with PVs that no longer match LocalVolume object criteria
  • lso_lv_provisioned_PV_count: total number of provisioned PVs for LocalVolume

To use these metrics, be sure to:

  • Enable support for monitoring when installing the Local Storage Operator.
  • When upgrading to OpenShift Container Platform 4.9 or later, enable metric support manually by adding the operator-metering=true label to the namespace.
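
For example, assuming the Operator is installed in the default openshift-local-storage namespace, the label can be applied with the following command:

$ oc label namespace openshift-local-storage operator-metering=true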

For more information about metrics, see Managing metrics.

4.12.2.9. Deleting the Local Storage Operator resources

4.12.2.9.1. Removing a local volume or local volume set

Occasionally, local volumes and local volume sets must be deleted. While removing the entry in the resource and deleting the persistent volume is typically enough, if you want to reuse the same device path or have it managed by a different storage class, then additional steps are needed.

Note

The following procedure outlines an example for removing a local volume. The same procedure can also be used to remove symlinks for a local volume set custom resource.

Prerequisites

  • The persistent volume must be in a Released or Available state.

    Warning

    Deleting a persistent volume that is still in use can result in data loss or corruption.

Procedure

  1. Edit the previously created local volume to remove any unwanted disks.

    1. Edit the cluster resource:

      $ oc edit localvolume <name> -n openshift-local-storage
    2. Navigate to the lines under devicePaths, and delete any representing unwanted disks.
  2. Delete any persistent volumes created.

    $ oc delete pv <pv-name>
  3. Delete directory and included symlinks on the node.

    Warning

    The following step involves accessing a node as the root user. Modifying the state of the node beyond the steps in this procedure could result in cluster instability.

    $ oc debug node/<node-name> -- chroot /host rm -rf /mnt/local-storage/<sc-name> 1
    1
    The name of the storage class used to create the local volumes.
4.12.2.9.2. Uninstalling the Local Storage Operator

To uninstall the Local Storage Operator, you must remove the Operator and all created resources in the openshift-local-storage project.

Warning

Uninstalling the Local Storage Operator while local storage PVs are still in use is not recommended. While the PVs will remain after the Operator’s removal, there might be indeterminate behavior if the Operator is uninstalled and reinstalled without removing the PVs and local storage resources.

Prerequisites

  • Access to the OpenShift Container Platform web console.

Procedure

  1. Delete any local volume resources installed in the project, such as localvolume, localvolumeset, and localvolumediscovery:

    $ oc delete localvolume --all --all-namespaces
    $ oc delete localvolumeset --all --all-namespaces
    $ oc delete localvolumediscovery --all --all-namespaces
  2. Uninstall the Local Storage Operator from the web console.

    1. Log in to the OpenShift Container Platform web console.
    2. Navigate to Operators Installed Operators.
    3. Type Local Storage into the filter box to locate the Local Storage Operator.
    4. Click the Options menu kebab at the end of the Local Storage Operator.
    5. Click Uninstall Operator.
    6. Click Remove in the window that appears.
  3. The PVs created by the Local Storage Operator will remain in the cluster until deleted. After these volumes are no longer in use, delete them by running the following command:

    $ oc delete pv <pv-name>
  4. Delete the openshift-local-storage project:

    $ oc delete project openshift-local-storage

4.12.3. Persistent storage using hostPath

A hostPath volume in an OpenShift Container Platform cluster mounts a file or directory from the host node’s filesystem into your pod. Most pods will not need a hostPath volume, but it does offer a quick option for testing should an application require it.

Important

The cluster administrator must configure pods to run as privileged. This grants access to pods in the same node.

4.12.3.1. Overview

OpenShift Container Platform supports hostPath mounting for development and testing on a single-node cluster.

In a production cluster, you would not use hostPath. Instead, a cluster administrator would provision a network resource, such as a GCE Persistent Disk volume, an NFS share, or an Amazon EBS volume. Network resources support the use of storage classes to set up dynamic provisioning.

A hostPath volume must be provisioned statically.

Important

Do not mount to the container root, /, or any path that is the same in the host and the container. This can corrupt your host system if the container is sufficiently privileged. It is safe to mount the host by using /host. The following example shows the / directory from the host being mounted into the container at /host.

apiVersion: v1
kind: Pod
metadata:
  name: test-host-mount
spec:
  containers:
  - image: registry.access.redhat.com/ubi9/ubi
    name: test-container
    command: ['sh', '-c', 'sleep 3600']
    volumeMounts:
    - mountPath: /host
      name: host-slash
  volumes:
   - name: host-slash
     hostPath:
       path: /
       type: ''

4.12.3.2. Statically provisioning hostPath volumes

A pod that uses a hostPath volume must be referenced by manual (static) provisioning.

Procedure

  1. Define the persistent volume (PV) by creating a pv.yaml file with the PersistentVolume object definition:

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: task-pv-volume 1
      labels:
        type: local
    spec:
      storageClassName: manual 2
      capacity:
        storage: 5Gi
      accessModes:
        - ReadWriteOnce 3
      persistentVolumeReclaimPolicy: Retain
      hostPath:
        path: "/mnt/data" 4
    1
    The name of the volume. This name is how the volume is identified by persistent volume (PV) claims or pods.
    2
    Used to bind persistent volume claim (PVC) requests to the PV.
    3
    The volume can be mounted as read-write by a single node.
    4
    The configuration file specifies that the volume is at /mnt/data on the cluster’s node. To avoid corrupting your host system, do not mount to the container root, /, or any path that is the same in the host and the container. You can safely mount the host by using /host.
  2. Create the PV from the file:

    $ oc create -f pv.yaml
  3. Define the PVC by creating a pvc.yaml file with the PersistentVolumeClaim object definition:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: task-pvc-volume
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
      storageClassName: manual
  4. Create the PVC from the file:

    $ oc create -f pvc.yaml
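
Because the claim requests the manual storage class with a size of 1Gi and the ReadWriteOnce access mode, it binds to a matching persistent volume such as task-pv-volume created earlier. You can confirm the binding by running the following command:

$ oc get pvc task-pvc-volume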

4.12.3.3. Mounting the hostPath share in a privileged pod

After the persistent volume claim has been created, it can be used by an application. The following example demonstrates mounting this share inside of a pod.

Prerequisites

  • A persistent volume claim exists that is mapped to the underlying hostPath share.

Procedure

  • Create a privileged pod that mounts the existing persistent volume claim:

    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-name 1
    spec:
      containers:
        ...
        securityContext:
          privileged: true 2
        volumeMounts:
        - mountPath: /data 3
          name: hostpath-privileged
      ...
      securityContext: {}
      volumes:
        - name: hostpath-privileged
          persistentVolumeClaim:
            claimName: task-pvc-volume 4
    1
    The name of the pod.
    2
    The pod must run as privileged to access the node’s storage.
    3
    The path to mount the host path share inside the privileged pod. Do not mount to the container root, /, or any path that is the same in the host and the container. This can corrupt your host system if the container is sufficiently privileged, such as the host /dev/pts files. It is safe to mount the host by using /host.
    4
    The name of the PersistentVolumeClaim object that has been previously created.

4.12.4. Persistent storage using Logical Volume Manager Storage

Logical Volume Manager (LVM) Storage uses LVM2 through the TopoLVM CSI driver to dynamically provision local storage on a cluster with limited resources.

You can create volume groups, persistent volume claims (PVCs), volume snapshots, and volume clones by using LVM Storage.

4.12.4.1. Logical Volume Manager Storage installation

You can install Logical Volume Manager (LVM) Storage on an OpenShift Container Platform cluster and configure it to dynamically provision storage for your workloads.

You can install LVM Storage by using the OpenShift Container Platform CLI (oc), OpenShift Container Platform web console, or Red Hat Advanced Cluster Management (RHACM).

Warning

When using LVM Storage on multi-node clusters, LVM Storage only supports provisioning local storage. LVM Storage does not support storage data replication mechanisms across nodes. You must ensure storage data replication through active or passive replication mechanisms to avoid a single point of failure.

4.12.4.1.1. Prerequisites to install LVM Storage

The prerequisites to install LVM Storage are as follows:

  • Ensure that you have a minimum of 10 milliCPU and 100 MiB of RAM.
  • Ensure that every managed cluster has dedicated disks that are used to provision storage. LVM Storage uses only those disks that are empty and do not contain file system signatures. To ensure that the disks are empty and do not contain file system signatures, wipe the disks before using them.
  • Before installing LVM Storage in a private CI environment where you can reuse the storage devices that you configured in the previous LVM Storage installation, ensure that you have wiped the disks that are not in use. If you do not wipe the disks before installing LVM Storage, you cannot reuse the disks without manual intervention.

    Note

    You cannot wipe the disks that are in use.

  • If you want to install LVM Storage by using Red Hat Advanced Cluster Management (RHACM), ensure that you have installed RHACM on an OpenShift Container Platform cluster. See the "Installing LVM Storage using RHACM" section.
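
A common way to wipe file system signatures from a disk before using or reusing it, as described in the preceding prerequisites, is the wipefs utility. The device path is illustrative, and the command must be run only against disks that are not in use:

$ oc debug node/<node_name> -- chroot /host wipefs --all --force /dev/<device>
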
4.12.4.1.2. Installing LVM Storage by using the CLI

As a cluster administrator, you can install LVM Storage by using the OpenShift CLI.

Prerequisites

  • You have installed the OpenShift CLI (oc).
  • You have logged in to OpenShift Container Platform as a user with cluster-admin and Operator installation permissions.

Procedure

  1. Create a YAML file with the configuration for creating a namespace:

    Example YAML configuration for creating a namespace

    apiVersion: v1
    kind: Namespace
    metadata:
      labels:
        openshift.io/cluster-monitoring: "true"
        pod-security.kubernetes.io/enforce: privileged
        pod-security.kubernetes.io/audit: privileged
        pod-security.kubernetes.io/warn: privileged
      name: openshift-storage

  2. Create the namespace by running the following command:

    $ oc create -f <file_name>
  3. Create an OperatorGroup CR YAML file:

    Example OperatorGroup CR

    apiVersion: operators.coreos.com/v1
    kind: OperatorGroup
    metadata:
      name: openshift-storage-operatorgroup
      namespace: openshift-storage
    spec:
      targetNamespaces:
      - openshift-storage

  4. Create the OperatorGroup CR by running the following command:

    $ oc create -f <file_name>
  5. Create a Subscription CR YAML file:

    Example Subscription CR

    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: lvms
      namespace: openshift-storage
    spec:
      installPlanApproval: Automatic
      name: lvms-operator
      source: redhat-operators
      sourceNamespace: openshift-marketplace

  6. Create the Subscription CR by running the following command:

    $ oc create -f <file_name>

Verification

  1. To verify that LVM Storage is installed, run the following command:

    $ oc get csv -n openshift-storage -o custom-columns=Name:.metadata.name,Phase:.status.phase

    Example output

    Name                         Phase
    4.13.0-202301261535          Succeeded

4.12.4.1.3. Installing LVM Storage by using the web console

You can install LVM Storage by using the OpenShift Container Platform web console.

Prerequisites

  • You have access to the cluster.
  • You have access to OpenShift Container Platform with cluster-admin and Operator installation permissions.

Procedure

  1. Log in to the OpenShift Container Platform web console.
  2. Click Operators OperatorHub.
  3. Click LVM Storage on the OperatorHub page.
  4. Set the following options on the Operator Installation page:

    1. Update Channel as stable-4.17.
    2. Installation Mode as A specific namespace on the cluster.
    3. Installed Namespace as Operator recommended namespace openshift-storage. If the openshift-storage namespace does not exist, it is created during the operator installation.
    4. Update approval as Automatic or Manual.

      Note

      If you select Automatic updates, the Operator Lifecycle Manager (OLM) automatically updates the running instance of LVM Storage without any intervention.

      If you select Manual updates, the OLM creates an update request. As a cluster administrator, you must manually approve the update request to update LVM Storage to a newer version.

  5. Optional: Select the Enable Operator recommended cluster monitoring on this Namespace checkbox.
  6. Click Install.

Verification steps

  • Verify that LVM Storage shows a green tick, indicating successful installation.
4.12.4.1.4. Installing LVM Storage in a disconnected environment

You can install LVM Storage on OpenShift Container Platform in a disconnected environment. All sections referenced in this procedure are linked in the "Additional resources" section.

Prerequisites

  • You read the "About disconnected installation mirroring" section.
  • You have access to the OpenShift Container Platform image repository.
  • You created a mirror registry.

Procedure

  1. Follow the steps in the "Creating the image set configuration" procedure. To create an ImageSetConfiguration custom resource (CR) for LVM Storage, you can use the following example ImageSetConfiguration CR configuration:

    Example ImageSetConfiguration CR for LVM Storage

    kind: ImageSetConfiguration
    apiVersion: mirror.openshift.io/v1alpha2
    archiveSize: 4 1
    storageConfig: 2
      registry:
        imageURL: example.com/mirror/oc-mirror-metadata 3
        skipTLS: false
    mirror:
      platform:
        channels:
        - name: stable-4.17 4
          type: ocp
        graph: true 5
      operators:
      - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.17 6
        packages:
        - name: lvms-operator 7
          channels:
          - name: stable 8
      additionalImages:
      - name: registry.redhat.io/ubi9/ubi:latest 9
      helm: {}

    1
    Set the maximum size (in GiB) of each file within the image set.
    2
    Specify the location in which you want to save the image set. This location can be a registry or a local directory. You must configure the storageConfig field unless you are using the Technology Preview OCI feature.
    3
    Specify the storage URL for the image stream when using a registry. For more information, see Why use imagestreams.
    4
    Specify the channel from which you want to retrieve the OpenShift Container Platform images.
    5
    Set this field to true to generate the OpenShift Update Service (OSUS) graph image. For more information, see About the OpenShift Update Service.
    6
    Specify the Operator catalog from which you want to retrieve the OpenShift Container Platform images.
    7
    Specify the Operator packages to include in the image set. If this field is empty, all packages in the catalog are retrieved.
    8
    Specify the channels of the Operator packages to include in the image set. You must include the default channel for the Operator package even if you do not use the bundles in that channel. You can find the default channel by running the following command: $ oc mirror list operators --catalog=<catalog_name> --package=<package_name>.
    9
    Specify any additional images to include in the image set.
  2. Follow the procedure in the "Mirroring an image set to a mirror registry" section.
  3. Follow the procedure in the "Configuring image registry repository mirroring" section.
4.12.4.1.5. Installing LVM Storage by using RHACM

To install LVM Storage on the clusters by using Red Hat Advanced Cluster Management (RHACM), you must create a Policy custom resource (CR). You can also configure the criteria to select the clusters on which you want to install LVM Storage.

Note

The Policy CR that is created to install LVM Storage is also applied to the clusters that are imported or created after creating the Policy CR.

Prerequisites

  • You have access to the RHACM cluster using an account with cluster-admin and Operator installation permissions.
  • You have dedicated disks that LVM Storage can use on each cluster.
  • The cluster must be managed by RHACM.

Procedure

  1. Log in to the RHACM CLI using your OpenShift Container Platform credentials.
  2. Create a namespace.

    $ oc create ns <namespace>
  3. Create a Policy CR YAML file:

    Example Policy CR to install and configure LVM Storage

    apiVersion: apps.open-cluster-management.io/v1
    kind: PlacementRule
    metadata:
      name: placement-install-lvms
    spec:
      clusterConditions:
      - status: "True"
        type: ManagedClusterConditionAvailable
      clusterSelector: 1
        matchExpressions:
        - key: mykey
          operator: In
          values:
          - myvalue
    ---
    apiVersion: policy.open-cluster-management.io/v1
    kind: PlacementBinding
    metadata:
      name: binding-install-lvms
    placementRef:
      apiGroup: apps.open-cluster-management.io
      kind: PlacementRule
      name: placement-install-lvms
    subjects:
    - apiGroup: policy.open-cluster-management.io
      kind: Policy
      name: install-lvms
    ---
    apiVersion: policy.open-cluster-management.io/v1
    kind: Policy
    metadata:
      annotations:
        policy.open-cluster-management.io/categories: CM Configuration Management
        policy.open-cluster-management.io/controls: CM-2 Baseline Configuration
        policy.open-cluster-management.io/standards: NIST SP 800-53
      name: install-lvms
    spec:
      disabled: false
      remediationAction: enforce
      policy-templates:
      - objectDefinition:
          apiVersion: policy.open-cluster-management.io/v1
          kind: ConfigurationPolicy
          metadata:
            name: install-lvms
          spec:
            object-templates:
            - complianceType: musthave
              objectDefinition: 2
                apiVersion: v1
                kind: Namespace
                metadata:
                  labels:
                    openshift.io/cluster-monitoring: "true"
                    pod-security.kubernetes.io/enforce: privileged
                    pod-security.kubernetes.io/audit: privileged
                    pod-security.kubernetes.io/warn: privileged
                  name: openshift-storage
            - complianceType: musthave
              objectDefinition: 3
                apiVersion: operators.coreos.com/v1
                kind: OperatorGroup
                metadata:
                  name: openshift-storage-operatorgroup
                  namespace: openshift-storage
                spec:
                  targetNamespaces:
                  - openshift-storage
            - complianceType: musthave
              objectDefinition: 4
                apiVersion: operators.coreos.com/v1alpha1
                kind: Subscription
                metadata:
                  name: lvms
                  namespace: openshift-storage
                spec:
                  installPlanApproval: Automatic
                  name: lvms-operator
                  source: redhat-operators
                  sourceNamespace: openshift-marketplace
            remediationAction: enforce
            severity: low

    1
    Set the key field and values field in PlacementRule.spec.clusterSelector to match the labels that are configured in the clusters on which you want to install LVM Storage.
    2
    Namespace configuration.
    3
    The OperatorGroup CR configuration.
    4
    The Subscription CR configuration.
  4. Create the Policy CR by running the following command:

    $ oc create -f <file_name> -n <namespace>

    Upon creating the Policy CR, the following custom resources are created on the clusters that match the selection criteria configured in the PlacementRule CR:

    • Namespace
    • OperatorGroup
    • Subscription
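
     To check whether the Policy CR is propagated and compliant on the hub cluster, you can run the following command in the namespace in which you created the Policy CR:

     $ oc get policy -n <namespace>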

4.12.4.2. About the LVMCluster custom resource

You can configure the LVMCluster CR to perform the following actions:

  • Create LVM volume groups that you can use to provision persistent volume claims (PVCs).
  • Configure a list of devices that you want to add to the LVM volume groups.
  • Configure the requirements to select the nodes on which you want to create an LVM volume group, and the thin pool configuration for the volume group.
  • Force wipe the selected devices.

After you have installed LVM Storage, you must create an LVMCluster custom resource (CR).

Example LVMCluster CR YAML file

apiVersion: lvm.topolvm.io/v1alpha1
kind: LVMCluster
metadata:
  name: my-lvmcluster
spec:
  tolerations:
  - effect: NoSchedule
    key: xyz
    operator: Equal
    value: "true"
  storage:
    deviceClasses:
    - name: vg1
      fstype: ext4 1
      default: true
      nodeSelector: 2
        nodeSelectorTerms:
        - matchExpressions:
          - key: mykey
            operator: In
            values:
            - ssd
      deviceSelector: 3
        paths:
        - /dev/disk/by-path/pci-0000:87:00.0-nvme-1
        - /dev/disk/by-path/pci-0000:88:00.0-nvme-1
        optionalPaths:
        - /dev/disk/by-path/pci-0000:89:00.0-nvme-1
        - /dev/disk/by-path/pci-0000:90:00.0-nvme-1
        forceWipeDevicesAndDestroyAllData: true
      thinPoolConfig:
        name: thin-pool-1
        sizePercent: 90 4
        overprovisionRatio: 10
        chunkSize: 128Ki 5
        chunkSizeCalculationPolicy: Static 6

1 2 3 4 5 6
Optional field
Explanation of fields in the LVMCluster CR

The LVMCluster CR fields are described in the following table:

Table 4.5. LVMCluster CR fields
FieldTypeDescription

spec.storage.deviceClasses

array

Contains the configuration to assign the local storage devices to the LVM volume groups.

LVM Storage creates a storage class and volume snapshot class for each device class that you create.

deviceClasses.name

string

Specify a name for the LVM volume group (VG).

You can also configure this field to reuse a volume group that you created in the previous installation. For more information, see "Reusing a volume group from the previous LVM Storage installation".

deviceClasses.fstype

string

Set this field to ext4 or xfs. By default, this field is set to xfs.

deviceClasses.default

boolean

Set this field to true to indicate that a device class is the default. Otherwise, you can set it to false. You can only configure a single default device class.

deviceClasses.nodeSelector

object

Contains the configuration to choose the nodes on which you want to create the LVM volume group. If this field is empty, all nodes without no-schedule taints are considered.

On the control-plane node, LVM Storage detects and uses the additional worker nodes when the new nodes become active in the cluster.

nodeSelector.nodeSelectorTerms

array

Configure the requirements that are used to select the node.

deviceClasses.deviceSelector

object

Contains the configuration to perform the following actions:

  • Specify the paths to the devices that you want to add to the LVM volume group.
  • Force wipe the devices that are added to the LVM volume group.

For more information, see "About adding devices to a volume group".

deviceSelector.paths

array

Specify the device paths.

If the device path specified in this field does not exist, or the device is not supported by LVM Storage, the LVMCluster CR moves to the Failed state.

deviceSelector.optionalPaths

array

Specify the optional device paths.

If the device path specified in this field does not exist, or the device is not supported by LVM Storage, LVM Storage ignores the device without causing an error.

deviceSelector.forceWipeDevicesAndDestroyAllData

boolean

LVM Storage uses only those disks that are empty and do not contain file system signatures. To ensure that the disks are empty and do not contain file system signatures, wipe the disks before using them.

To force wipe the selected devices, set this field to true. By default, this field is set to false.

Warning

If this field is set to true, LVM Storage wipes all previous data on the devices. Use this feature with caution.

Wiping the device can lead to inconsistencies in data integrity if any of the following conditions are met:

  • The device is being used as swap space.
  • The device is part of a RAID array.
  • The device is mounted.

If any of these conditions are true, do not force wipe the disk. Instead, you must manually wipe the disk.

deviceClasses.thinPoolConfig

object

Contains the configuration to create a thin pool in the LVM volume group.

If you exclude this field, logical volumes are thick provisioned.

Using thick-provisioned storage includes the following limitations:

  • No copy-on-write support for volume cloning.
  • No support for snapshot class.
  • No support for over-provisioning. As a result, the provisioned capacity of PersistentVolumeClaims (PVCs) is immediately reduced from the volume group.
  • No support for thin metrics. Thick-provisioned devices only support volume group metrics.

thinPoolConfig.name

string

Specify a name for the thin pool.

thinPoolConfig.sizePercent

integer

Specify the percentage of space in the LVM volume group for creating the thin pool.

By default, this field is set to 90. The minimum value that you can set is 10, and the maximum value is 90.

thinPoolConfig.overprovisionRatio

integer

Specify a factor by which you can provision additional storage based on the available storage in the thin pool.

For example, if this field is set to 10, you can provision up to 10 times the amount of available storage in the thin pool.

To disable over-provisioning, set this field to 1.

thinPoolConfig.chunkSize

integer

Specifies the statically calculated chunk size for the thin pool. This field is used only when the chunkSizeCalculationPolicy field is set to Static. The value for this field must be configured in the range of 64 KiB to 1 GiB because of the underlying limitations of lvm2.

If you do not configure this field and the chunkSizeCalculationPolicy field is set to Static, the default chunk size is set to 128 KiB.

For more information, see "Overview of chunk size".

thinPoolConfig.chunkSizeCalculationPolicy

string

Specifies the policy to calculate the chunk size for the underlying volume group. You can set this field to either Static or Host. By default, this field is set to Static.

If this field is set to Static, the chunk size is set to the value of the chunkSize field. If the chunkSize field is not configured, chunk size is set to 128 KiB.

If this field is set to Host, the chunk size is calculated based on the configuration in the lvm.conf file.

For more information, see "Limitations to configure the size of the devices used in LVM Storage".

4.12.4.2.1. Limitations to configure the size of the devices used in LVM Storage

The limitations to configure the size of the devices that you can use to provision storage using LVM Storage are as follows:

  • The total storage size that you can provision is limited by the size of the underlying Logical Volume Manager (LVM) thin pool and the over-provisioning factor.
  • The size of the logical volume depends on the size of the Physical Extent (PE) and the Logical Extent (LE).

    • You can define the size of PE and LE during the physical and logical device creation.
    • The default PE and LE size is 4 MB.
    • If the size of the PE is increased, the maximum size of the LVM is determined by the kernel limits and your disk space.

The following tables describe the chunk size and volume size limits for static and host configurations:

Table 4.6. Tested configuration
ParameterValue

Chunk size

128 KiB

Maximum volume size

32 TiB

Table 4.7. Theoretical size limits for static configuration
ParameterMinimum valueMaximum value

Chunk size

64 KiB

1 GiB

Volume size

Minimum size of the underlying Red Hat Enterprise Linux CoreOS (RHCOS) system.

Maximum size of the underlying RHCOS system.

Table 4.8. Theoretical size limits for a host configuration
ParameterValue

Chunk size

This value is based on the configuration in the lvm.conf file. By default, this value is set to 128 KiB.

Maximum volume size

Equal to the maximum volume size of the underlying RHCOS system.

Minimum volume size

Equal to the minimum volume size of the underlying RHCOS system.

4.12.4.2.2. About adding devices to a volume group

The deviceSelector field in the LVMCluster CR contains the configuration to specify the paths to the devices that you want to add to the Logical Volume Manager (LVM) volume group.

You can specify the device paths in the deviceSelector.paths field, the deviceSelector.optionalPaths field, or both. If you do not specify the device paths in either the deviceSelector.paths field or the deviceSelector.optionalPaths field, LVM Storage adds the supported unused devices to the volume group (VG).

You can add the path to the Redundant Array of Independent Disks (RAID) arrays in the deviceSelector field to integrate the RAID arrays with LVM Storage. You can create the RAID array by using the mdadm utility. LVM Storage does not support creating a software RAID.

Note

You can create a RAID array only during an OpenShift Container Platform installation. For information on creating a RAID array, see the following sections:

You can also add encrypted devices to the volume group. You can enable disk encryption on the cluster nodes during an OpenShift Container Platform installation. After encrypting a device, you can specify the path to the LUKS encrypted device in the deviceSelector field. For information on disk encryption, see "About disk encryption" and "Configuring disk encryption and mirroring".

The devices that you want to add to the VG must be supported by LVM Storage. For information about unsupported devices, see "Devices not supported by LVM Storage".

LVM Storage adds the devices to the VG only if the following conditions are met:

  • The device path exists.
  • The device is supported by LVM Storage.
Important

After a device is added to the VG, you cannot remove the device.

LVM Storage supports dynamic device discovery. If you do not add the deviceSelector field in the LVMCluster CR, LVM Storage automatically adds the new devices to the VG when the devices are available.

Warning

Adding devices to the VG through dynamic device discovery is not recommended for the following reasons:

  • When you attach a new device that you do not intend to add to the VG, LVM Storage automatically adds the device to the VG through dynamic device discovery.
  • If LVM Storage adds a device to the VG through dynamic device discovery, LVM Storage does not restrict you from removing the device from the node. Removing or updating devices that are already added to the VG can disrupt the VG. This can also lead to data loss and necessitate manual node remediation.
4.12.4.2.3. Devices not supported by LVM Storage

When you are adding the device paths in the deviceSelector field of the LVMCluster custom resource (CR), ensure that the devices are supported by LVM Storage. If you add paths to the unsupported devices, LVM Storage excludes the devices to avoid complexity in managing logical volumes.

If you do not specify any device path in the deviceSelector field, LVM Storage adds only the unused devices that it supports.

Note

To get information about the devices, run the following command:

$ lsblk --paths --json -o \
NAME,ROTA,TYPE,SIZE,MODEL,VENDOR,RO,STATE,KNAME,SERIAL,PARTLABEL,FSTYPE

LVM Storage does not support the following devices:

Read-only devices
Devices with the ro parameter set to true.
Suspended devices
Devices with the state parameter set to suspended.
ROM devices
Devices with the type parameter set to rom.
LVM partition devices
Devices with the type parameter set to lvm.
Devices with invalid partition labels
Devices with the partlabel parameter set to bios, boot, or reserved.
Devices with an invalid filesystem

Devices with the fstype parameter set to any value other than null or LVM2_member.

Important

LVM Storage supports devices with the fstype parameter set to LVM2_member only if the devices do not contain child devices.

Devices that are part of another volume group

To get the information about the volume groups of the device, run the following command:

$ pvs <device-name> 1
1
Replace <device-name> with the device name.
Devices with bind mounts

To get the mount points of a device, run the following command:

$ cat /proc/1/mountinfo | grep <device-name> 1
1
Replace <device-name> with the device name.
Devices that contain child devices
Note

It is recommended to wipe the device before using it in LVM Storage to prevent unexpected behavior.
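
For example, one way to remove existing file system signatures from a device before adding it is to run the wipefs utility on the node. The device path is a placeholder, and this command permanently destroys any data on the device:

$ wipefs --all --force <device_path>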

4.12.4.3. Ways to create an LVMCluster custom resource

You can create an LVMCluster custom resource (CR) by using the OpenShift CLI (oc) or the OpenShift Container Platform web console. If you have installed LVM Storage by using Red Hat Advanced Cluster Management (RHACM), you can also create an LVMCluster CR by using RHACM.

Upon creating the LVMCluster CR, LVM Storage creates the following system-managed CRs:

  • A storageClass and volumeSnapshotClass for each device class.

    Note

    LVM Storage configures the name of the storage class and volume snapshot class in the format lvms-<device_class_name>, where, <device_class_name> is the value of the deviceClasses.name field in the LVMCluster CR. For example, if the deviceClasses.name field is set to vg1, the name of the storage class and volume snapshot class is lvms-vg1.

  • LVMVolumeGroup: This CR is a specific type of persistent volume (PV) that is backed by an LVM volume group. It tracks the individual volume groups across multiple nodes.
  • LVMVolumeGroupNodeStatus: This CR tracks the status of the volume groups on a node.
4.12.4.3.1. Reusing a volume group from the previous LVM Storage installation

You can reuse an existing volume group (VG) from the previous LVM Storage installation instead of creating a new VG.

You can only reuse a VG but not the logical volume associated with the VG.

Important

You can perform this procedure only while creating an LVMCluster custom resource (CR).

Prerequisites

  • The VG that you want to reuse must not be corrupted.
  • The VG that you want to reuse must have the lvms tag. For more information on adding tags to LVM objects, see Grouping LVM objects with tags.

Procedure

  1. Open the LVMCluster CR YAML file.
  2. Configure the LVMCluster CR parameters as described in the following example:

    Example LVMCluster CR YAML file

    apiVersion: lvm.topolvm.io/v1alpha1
    kind: LVMCluster
    metadata:
      name: my-lvmcluster
    spec:
    # ...
      storage:
        deviceClasses:
        - name: vg1  1
          fstype: ext4 2
          default: true
          deviceSelector: 3
    # ...
            forceWipeDevicesAndDestroyAllData: false 4
          thinPoolConfig: 5
    # ...
          nodeSelector: 6
    # ...

    1
    Set this field to the name of a VG from the previous LVM Storage installation.
    2
    Set this field to ext4 or xfs. By default, this field is set to xfs.
    3
    You can add new devices to the VG that you want to reuse by specifying the new device paths in the deviceSelector field. If you do not want to add new devices to the VG, ensure that the deviceSelector configuration in the current LVM Storage installation is same as that of the previous LVM Storage installation.
    4
    If this field is set to true, LVM Storage wipes all the data on the devices that are added to the VG.
    5
    To retain the thinPoolConfig configuration of the VG that you want to reuse, ensure that the thinPoolConfig configuration in the current LVM Storage installation is same as that of the previous LVM Storage installation. Otherwise, you can configure the thinPoolConfig field as required.
    6
    Configure the requirements to choose the nodes on which you want to create the LVM volume group. If this field is empty, all nodes without no-schedule taints are considered.
  3. Save the LVMCluster CR YAML file.
Note

To view the devices that are part of a volume group, run the following command:

$ pvs -S vgname=<vg_name> 1
1
Replace <vg_name> with the name of the volume group.
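
To confirm that the volume group that you want to reuse has the lvms tag, you can also list the volume group tags. The volume group name is a placeholder:

$ vgs -o vg_name,vg_tags <vg_name>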
4.12.4.3.2. Creating an LVMCluster CR by using the CLI

You can create an LVMCluster custom resource (CR) on a worker node using the OpenShift CLI (oc).

Important

You can only create a single instance of the LVMCluster custom resource (CR) on an OpenShift Container Platform cluster.

Prerequisites

  • You have installed the OpenShift CLI (oc).
  • You have logged in to OpenShift Container Platform as a user with cluster-admin privileges.
  • You have installed LVM Storage.
  • You have installed a worker node in the cluster.
  • You read the "About the LVMCluster custom resource" section.

Procedure

  1. Create an LVMCluster custom resource (CR) YAML file:

    Example LVMCluster CR YAML file

    apiVersion: lvm.topolvm.io/v1alpha1
    kind: LVMCluster
    metadata:
      name: my-lvmcluster
    spec:
    # ...
      storage:
        deviceClasses: 1
    # ...
          nodeSelector: 2
    # ...
          deviceSelector: 3
    # ...
          thinPoolConfig: 4
    # ...

    1
    Contains the configuration to assign the local storage devices to the LVM volume groups.
    2
    Contains the configuration to choose the nodes on which you want to create the LVM volume group. If this field is empty, all nodes without no-schedule taints are considered.
    3
    Contains the configuration to specify the paths to the devices that you want to add to the LVM volume group, and force wipe the devices that are added to the LVM volume group.
    4
    Contains the configuration to create a thin pool in the LVM volume group. If you exclude this field, logical volumes are thick provisioned.
  2. Create the LVMCluster CR by running the following command:

    $ oc create -f <file_name>

    Example output

    lvmcluster/lvmcluster created

Verification

  1. Check that the LVMCluster CR is in the Ready state:

    $ oc get lvmclusters.lvm.topolvm.io -o jsonpath='{.items[*].status}' -n <namespace>

    Example output

    {"deviceClassStatuses": 1
    [
      {
        "name": "vg1",
        "nodeStatus": [ 2
            {
                "devices": [ 3
                    "/dev/nvme0n1",
                    "/dev/nvme1n1",
                    "/dev/nvme2n1"
                ],
                "node": "kube-node", 4
                "status": "Ready" 5
            }
        ]
      }
    ]
    "state":"Ready"} 6

    1
    The status of the device class.
    2
    The status of the LVM volume group on each node.
    3
    The list of devices used to create the LVM volume group.
    4
    The node on which the device class is created.
    5
    The status of the LVM volume group on the node.
    6
    The status of the LVMCluster CR.
    Note

    If the LVMCluster CR is in the Failed state, you can view the reason for failure in the status field.

    Example of status field with the reason for failure:

    status:
      deviceClassStatuses:
        - name: vg1
          nodeStatus:
            - node: my-node-1.example.com
              reason: no available devices found for volume group
              status: Failed
      state: Failed
  2. Optional: To view the storage classes created by LVM Storage for each device class, run the following command:

    $ oc get storageclass

    Example output

    NAME          PROVISIONER          RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
    lvms-vg1      topolvm.io           Delete          WaitForFirstConsumer   true                   31m

  3. Optional: To view the volume snapshot classes created by LVM Storage for each device class, run the following command:

    $ oc get volumesnapshotclass

    Example output

    NAME          DRIVER               DELETIONPOLICY   AGE
    lvms-vg1      topolvm.io           Delete           24h

4.12.4.3.3. Creating an LVMCluster CR by using the web console

You can create an LVMCluster CR on a worker node using the OpenShift Container Platform web console.

Important

You can only create a single instance of the LVMCluster custom resource (CR) on an OpenShift Container Platform cluster.

Prerequisites

  • You have access to the OpenShift Container Platform cluster with cluster-admin privileges.
  • You have installed LVM Storage.
  • You have installed a worker node in the cluster.
  • You read the "About the LVMCluster custom resource" section.

Procedure

  1. Log in to the OpenShift Container Platform web console.
  2. Click Operators Installed Operators.
  3. In the openshift-storage namespace, click LVM Storage.
  4. Click Create LVMCluster and select either Form view or YAML view.
  5. Configure the required LVMCluster CR parameters.
  6. Click Create.
  7. Optional: If you want to edit the LVMCluster CR, perform the following actions:

    1. Click the LVMCluster tab.
    2. From the Actions menu, select Edit LVMCluster.
    3. Click YAML and edit the required LVMCluster CR parameters.
    4. Click Save.

Verification

  1. On the LVMCluster page, check that the LVMCluster CR is in the Ready state.
  2. Optional: To view the available storage classes created by LVM Storage for each device class, click Storage StorageClasses.
  3. Optional: To view the available volume snapshot classes created by LVM Storage for each device class, click Storage VolumeSnapshotClasses.
4.12.4.3.4. Creating an LVMCluster CR by using RHACM

After you have installed LVM Storage by using RHACM, you must create an LVMCluster custom resource (CR).

Prerequisites

  • You have installed LVM Storage by using RHACM.
  • You have access to the RHACM cluster using an account with cluster-admin permissions.
  • You read the "About the LVMCluster custom resource" section.

Procedure

  1. Log in to the RHACM CLI using your OpenShift Container Platform credentials.
  2. Create a ConfigurationPolicy CR YAML file with the configuration to create an LVMCluster CR:

    Example ConfigurationPolicy CR YAML file to create an LVMCluster CR

    apiVersion: policy.open-cluster-management.io/v1
    kind: ConfigurationPolicy
    metadata:
      name: lvms
    spec:
      object-templates:
      - complianceType: musthave
        objectDefinition:
          apiVersion: lvm.topolvm.io/v1alpha1
          kind: LVMCluster
          metadata:
            name: my-lvmcluster
            namespace: openshift-storage
          spec:
            storage:
              deviceClasses: 1
    # ...
                deviceSelector: 2
    # ...
                thinPoolConfig: 3
    # ...
                nodeSelector: 4
    # ...
      remediationAction: enforce
      severity: low

    1
    Contains the configuration to assign the local storage devices to the LVM volume groups.
    2
    Contains the configuration to specify the paths to the devices that you want to add to the LVM volume group, and force wipe the devices that are added to the LVM volume group.
    3
    Contains the configuration to create a thin pool in the LVM volume group. If you exclude this field, logical volumes are thick provisioned.
    4
    Contains the configuration to choose the nodes on which you want to create the LVM volume groups. If this field is empty, then all nodes without no-schedule taints are considered.
  3. Create the ConfigurationPolicy CR by running the following command:

    $ oc create -f <file_name> -n <cluster_namespace> 1
    1
    Namespace of the OpenShift Container Platform cluster on which LVM Storage is installed.

4.12.4.4. Ways to delete an LVMCluster custom resource

You can delete an LVMCluster custom resource (CR) by using the OpenShift CLI (oc) or the OpenShift Container Platform web console. If you have installed LVM Storage by using Red Hat Advanced Cluster Management (RHACM), you can also delete an LVMCluster CR by using RHACM.

Upon deleting the LVMCluster CR, LVM Storage deletes the following CRs:

  • storageClass
  • volumeSnapshotClass
  • LVMVolumeGroup
  • LVMVolumeGroupNodeStatus
4.12.4.4.1. Deleting an LVMCluster CR by using the CLI

You can delete the LVMCluster custom resource (CR) using the OpenShift CLI (oc).

Prerequisites

  • You have access to OpenShift Container Platform as a user with cluster-admin permissions.
  • You have deleted the persistent volume claims (PVCs), volume snapshots, and volume clones provisioned by LVM Storage. You have also deleted the applications that are using these resources.

Procedure

  1. Log in to the OpenShift CLI (oc).
  2. Delete the LVMCluster CR by running the following command:

    $ oc delete lvmcluster <lvmclustername> -n openshift-storage

Verification

  • To verify that the LVMCluster CR has been deleted, run the following command:

    $ oc get lvmcluster -n <namespace>

    Example output

    No resources found in openshift-storage namespace.

4.12.4.4.2. Deleting an LVMCluster CR by using the web console

You can delete the LVMCluster custom resource (CR) using the OpenShift Container Platform web console.

Prerequisites

  • You have access to OpenShift Container Platform as a user with cluster-admin permissions.
  • You have deleted the persistent volume claims (PVCs), volume snapshots, and volume clones provisioned by LVM Storage. You have also deleted the applications that are using these resources.

Procedure

  1. Log in to the OpenShift Container Platform web console.
  2. Click Operators Installed Operators to view all the installed Operators.
  3. Click LVM Storage in the openshift-storage namespace.
  4. Click the LVMCluster tab.
  5. From the Actions menu, select Delete LVMCluster.
  6. Click Delete.

Verification

  • On the LVMCluster page, check that the LVMCluster CR has been deleted.
4.12.4.4.3. Deleting an LVMCluster CR by using RHACM

If you have installed LVM Storage by using Red Hat Advanced Cluster Management (RHACM), you can delete an LVMCluster CR by using RHACM.

Prerequisites

  • You have access to the RHACM cluster as a user with cluster-admin permissions.
  • You have deleted the persistent volume claims (PVCs), volume snapshots, and volume clones provisioned by LVM Storage. You have also deleted the applications that are using these resources.

Procedure

  1. Log in to the RHACM CLI using your OpenShift Container Platform credentials.
  2. Delete the ConfigurationPolicy CR YAML file that was created for the LVMCluster CR:

    $ oc delete -f <file_name> -n <cluster_namespace> 1
    1
    Namespace of the OpenShift Container Platform cluster on which LVM Storage is installed.
  3. Create a Policy CR YAML file to delete the LVMCluster CR:

    Example Policy CR to delete the LVMCluster CR

    apiVersion: policy.open-cluster-management.io/v1
    kind: Policy
    metadata:
      name: policy-lvmcluster-delete
      annotations:
        policy.open-cluster-management.io/standards: NIST SP 800-53
        policy.open-cluster-management.io/categories: CM Configuration Management
        policy.open-cluster-management.io/controls: CM-2 Baseline Configuration
    spec:
      remediationAction: enforce
      disabled: false
      policy-templates:
        - objectDefinition:
            apiVersion: policy.open-cluster-management.io/v1
            kind: ConfigurationPolicy
            metadata:
              name: policy-lvmcluster-removal
            spec:
              remediationAction: enforce 1
              severity: low
              object-templates:
                - complianceType: mustnothave
                  objectDefinition:
                    kind: LVMCluster
                    apiVersion: lvm.topolvm.io/v1alpha1
                    metadata:
                      name: my-lvmcluster
                      namespace: openshift-storage 2
    ---
    apiVersion: policy.open-cluster-management.io/v1
    kind: PlacementBinding
    metadata:
      name: binding-policy-lvmcluster-delete
    placementRef:
      apiGroup: apps.open-cluster-management.io
      kind: PlacementRule
      name: placement-policy-lvmcluster-delete
    subjects:
      - apiGroup: policy.open-cluster-management.io
        kind: Policy
        name: policy-lvmcluster-delete
    ---
    apiVersion: apps.open-cluster-management.io/v1
    kind: PlacementRule
    metadata:
      name: placement-policy-lvmcluster-delete
    spec:
      clusterConditions:
        - status: "True"
          type: ManagedClusterConditionAvailable
      clusterSelector: 3
        matchExpressions:
          - key: mykey
            operator: In
            values:
              - myvalue

    1
    The spec.remediationAction in policy-template is overridden by the preceding parameter value for spec.remediationAction.
    2
    This namespace field must have the openshift-storage value.
    3
    Configure the requirements to select the clusters. LVM Storage is uninstalled on the clusters that match the selection criteria.
  4. Create the Policy CR by running the following command:

    $ oc create -f <file_name> -n <namespace>
  5. Create a Policy CR YAML file to check if the LVMCluster CR has been deleted:

    Example Policy CR to check if the LVMCluster CR has been deleted

    apiVersion: policy.open-cluster-management.io/v1
    kind: Policy
    metadata:
      name: policy-lvmcluster-inform
      annotations:
        policy.open-cluster-management.io/standards: NIST SP 800-53
        policy.open-cluster-management.io/categories: CM Configuration Management
        policy.open-cluster-management.io/controls: CM-2 Baseline Configuration
    spec:
      remediationAction: inform
      disabled: false
      policy-templates:
        - objectDefinition:
            apiVersion: policy.open-cluster-management.io/v1
            kind: ConfigurationPolicy
            metadata:
              name: policy-lvmcluster-removal-inform
            spec:
              remediationAction: inform 1
              severity: low
              object-templates:
                - complianceType: mustnothave
                  objectDefinition:
                    kind: LVMCluster
                    apiVersion: lvm.topolvm.io/v1alpha1
                    metadata:
                      name: my-lvmcluster
                      namespace: openshift-storage 2
    ---
    apiVersion: policy.open-cluster-management.io/v1
    kind: PlacementBinding
    metadata:
      name: binding-policy-lvmcluster-check
    placementRef:
      apiGroup: apps.open-cluster-management.io
      kind: PlacementRule
      name: placement-policy-lvmcluster-check
    subjects:
      - apiGroup: policy.open-cluster-management.io
        kind: Policy
        name: policy-lvmcluster-inform
    ---
    apiVersion: apps.open-cluster-management.io/v1
    kind: PlacementRule
    metadata:
      name: placement-policy-lvmcluster-check
    spec:
      clusterConditions:
        - status: "True"
          type: ManagedClusterConditionAvailable
      clusterSelector:
        matchExpressions:
          - key: mykey
            operator: In
            values:
              - myvalue

    1
    The policy-template spec.remediationAction is overridden by the preceding parameter value for spec.remediationAction.
    2
    The namespace field must have the openshift-storage value.
  6. Create the Policy CR by running the following command:

    $ oc create -f <file_name> -n <namespace>

Verification

  • Check the status of the Policy CRs by running the following command:

    $ oc get policy -n <namespace>

    Example output

    NAME                       REMEDIATION ACTION   COMPLIANCE STATE   AGE
    policy-lvmcluster-delete   enforce              Compliant          15m
    policy-lvmcluster-inform   inform               Compliant          15m

    Important

    The Policy CRs must be in Compliant state.

4.12.4.5. Provisioning storage

After you have created the LVM volume groups using the LVMCluster custom resource (CR), you can provision the storage by creating persistent volume claims (PVCs).

The following are the minimum storage sizes that you can request for each file system type:

  • block: 8 MiB
  • xfs: 300 MiB
  • ext4: 32 MiB

To create a PVC, you must create a PersistentVolumeClaim object.

Prerequisites

  • You have created an LVMCluster CR.

Procedure

  1. Log in to the OpenShift CLI (oc).
  2. Create a PersistentVolumeClaim object:

    Example PersistentVolumeClaim object

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: lvm-block-1 1
      namespace: default
    spec:
      accessModes:
        - ReadWriteOnce
      volumeMode: Block 2
      resources:
        requests:
          storage: 10Gi 3
        limits:
          storage: 20Gi 4
      storageClassName: lvms-vg1 5

    1
    Specify a name for the PVC.
    2
    To create a block PVC, set this field to Block. To create a file PVC, set this field to Filesystem.
    3
    Specify the storage size. If the value is less than the minimum storage size, the requested storage size is rounded to the minimum storage size. The total storage size you can provision is limited by the size of the Logical Volume Manager (LVM) thin pool and the over-provisioning factor.
    4
    Optional: Specify the storage limit. Set this field to a value that is greater than or equal to the minimum storage size. Otherwise, PVC creation fails with an error.
    5
    The value of the storageClassName field must be in the format lvms-<device_class_name> where <device_class_name> is the value of the deviceClasses.name field in the LVMCluster CR. For example, if the deviceClasses.name field is set to vg1, you must set the storageClassName field to lvms-vg1.
    Note

    The volumeBindingMode field of the storage class is set to WaitForFirstConsumer.

  3. Create the PVC by running the following command:

    $ oc create -f <file_name> -n <application_namespace>
    Note

    The created PVCs remain in Pending state until you deploy the pods that use them.
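
    The following is a minimal sketch of a pod that consumes the block PVC from the preceding example. The pod name, container image, and device path are illustrative placeholders:

    Example pod that uses the block PVC

    apiVersion: v1
    kind: Pod
    metadata:
      name: lvm-block-1-app
      namespace: default
    spec:
      containers:
      - name: app
        image: registry.redhat.io/ubi9/ubi:latest
        command: ["sleep", "infinity"]
        volumeDevices:
        - name: data
          devicePath: /dev/xvda
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: lvm-block-1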

Verification

  • To verify that the PVC is created, run the following command:

    $ oc get pvc -n <namespace>

    Example output

    NAME          STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    lvm-block-1   Bound    pvc-e90169a8-fd71-4eea-93b8-817155f60e47   1Gi        RWO            lvms-vg1       5s

4.12.4.6. Ways to scale up the storage of clusters

OpenShift Container Platform supports additional worker nodes for clusters on bare metal user-provisioned infrastructure. You can scale up the storage of clusters either by adding new worker nodes with available storage or by adding new devices to the existing worker nodes.

Logical Volume Manager (LVM) Storage detects and uses additional worker nodes when the nodes become active.

To add a new device to the existing worker nodes on a cluster, you must add the path to the new device in the deviceSelector field of the LVMCluster custom resource (CR).

Important

You can add the deviceSelector field in the LVMCluster CR only while creating the LVMCluster CR. If you have not added the deviceSelector field while creating the LVMCluster CR, you must delete the LVMCluster CR and create a new LVMCluster CR containing the deviceSelector field.

If you do not add the deviceSelector field in the LVMCluster CR, LVM Storage automatically adds the new devices when the devices are available.

Note

LVM Storage adds only the supported devices. For information about unsupported devices, see "Devices not supported by LVM Storage".

4.12.4.6.1. Scaling up the storage of clusters by using the CLI

You can scale up the storage capacity of the worker nodes on a cluster by using the OpenShift CLI (oc).

Prerequisites

  • You have additional unused devices on each cluster to be used by Logical Volume Manager (LVM) Storage.
  • You have installed the OpenShift CLI (oc).
  • You have created an LVMCluster custom resource (CR).

Procedure

  1. Edit the LVMCluster CR by running the following command:

    $ oc edit lvmcluster <lvmcluster_name> -n <namespace>
  2. Add the path to the new device in the deviceSelector field.

    Example LVMCluster CR

    apiVersion: lvm.topolvm.io/v1alpha1
    kind: LVMCluster
    metadata:
      name: my-lvmcluster
    spec:
      storage:
        deviceClasses:
    # ...
          deviceSelector: 1
            paths: 2
            - /dev/disk/by-path/pci-0000:87:00.0-nvme-1
            - /dev/disk/by-path/pci-0000:88:00.0-nvme-1
            optionalPaths: 3
            - /dev/disk/by-path/pci-0000:89:00.0-nvme-1
            - /dev/disk/by-path/pci-0000:90:00.0-nvme-1
    # ...

    1
    Contains the configuration to specify the paths to the devices that you want to add to the LVM volume group. You can specify the device paths in the paths field, the optionalPaths field, or both. If you do not specify the device paths in either the paths field or the optionalPaths field, Logical Volume Manager (LVM) Storage adds the supported unused devices to the LVM volume group. LVM Storage adds the devices to the LVM volume group only if the following conditions are met:
    • The device path exists.
    • The device is supported by LVM Storage. For information about unsupported devices, see "Devices not supported by LVM Storage".
    2
    Specify the device paths. If the device path specified in this field does not exist, or the device is not supported by LVM Storage, the LVMCluster CR moves to the Failed state.
    3
    Specify the optional device paths. If the device path specified in this field does not exist, or the device is not supported by LVM Storage, LVM Storage ignores the device without causing an error.
    Important

    After a device is added to the LVM volume group, it cannot be removed.

  3. Save the LVMCluster CR.
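
You can optionally confirm that LVM Storage added the new device to the LVM volume group by checking the device list in the LVMCluster CR status, using the same command as in "Creating an LVMCluster CR by using the CLI":

$ oc get lvmclusters.lvm.topolvm.io -o jsonpath='{.items[*].status}' -n <namespace>
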
4.12.4.6.2. Scaling up the storage of clusters by using the web console

You can scale up the storage capacity of the worker nodes on a cluster by using the OpenShift Container Platform web console.

Prerequisites

  • You have additional unused devices on each cluster to be used by Logical Volume Manager (LVM) Storage.
  • You have created an LVMCluster custom resource (CR).

Procedure

  1. Log in to the OpenShift Container Platform web console.
  2. Click Operators Installed Operators.
  3. Click LVM Storage in the openshift-storage namespace.
  4. Click the LVMCluster tab to view the LVMCluster CR created on the cluster.
  5. From the Actions menu, select Edit LVMCluster.
  6. Click the YAML tab.
  7. Edit the LVMCluster CR to add the new device path in the deviceSelector field:

    Example LVMCluster CR

    apiVersion: lvm.topolvm.io/v1alpha1
    kind: LVMCluster
    metadata:
      name: my-lvmcluster
    spec:
      storage:
        deviceClasses:
    # ...
          deviceSelector: 1
            paths: 2
            - /dev/disk/by-path/pci-0000:87:00.0-nvme-1
            - /dev/disk/by-path/pci-0000:88:00.0-nvme-1
            optionalPaths: 3
            - /dev/disk/by-path/pci-0000:89:00.0-nvme-1
            - /dev/disk/by-path/pci-0000:90:00.0-nvme-1
    # ...

    1
    Contains the configuration to specify the paths to the devices that you want to add to the LVM volume group. You can specify the device paths in the paths field, the optionalPaths field, or both. If you do not specify the device paths in either the paths field or the optionalPaths field, Logical Volume Manager (LVM) Storage adds the supported unused devices to the LVM volume group. LVM Storage adds the devices to the LVM volume group only if the following conditions are met:
    • The device path exists.
    • The device is supported by LVM Storage. For information about unsupported devices, see "Devices not supported by LVM Storage".
    2
    Specify the device paths. If the device path specified in this field does not exist, or the device is not supported by LVM Storage, the LVMCluster CR moves to the Failed state.
    3
    Specify the optional device paths. If the device path specified in this field does not exist, or the device is not supported by LVM Storage, LVM Storage ignores the device without causing an error.
    Important

    After a device is added to the LVM volume group, it cannot be removed.

  8. Click Save.
4.12.4.6.3. Scaling up the storage of clusters by using RHACM

You can scale up the storage capacity of worker nodes on the clusters by using RHACM.

Prerequisites

  • You have access to the RHACM cluster using an account with cluster-admin privileges.
  • You have created an LVMCluster custom resource (CR) by using RHACM.
  • You have additional unused devices on each cluster to be used by Logical Volume Manager (LVM) Storage.

Procedure

  1. Log in to the RHACM CLI using your OpenShift Container Platform credentials.
  2. Edit the LVMCluster CR that you created using RHACM by running the following command:

    $ oc edit -f <file_name> -n <namespace> 1
    1
    Replace <file_name> with the name of the file that contains the LVMCluster CR.
  3. In the LVMCluster CR, add the path to the new device in the deviceSelector field.

    Example LVMCluster CR

    apiVersion: policy.open-cluster-management.io/v1
          kind: ConfigurationPolicy
          metadata:
            name: lvms
          spec:
            object-templates:
               - complianceType: musthave
                 objectDefinition:
                   apiVersion: lvm.topolvm.io/v1alpha1
                   kind: LVMCluster
                   metadata:
                     name: my-lvmcluster
                     namespace: openshift-storage
                   spec:
                     storage:
                       deviceClasses:
    # ...
                         deviceSelector: 1
                           paths: 2
                           - /dev/disk/by-path/pci-0000:87:00.0-nvme-1
                           optionalPaths: 3
                           - /dev/disk/by-path/pci-0000:89:00.0-nvme-1
    # ...

    1
    Contains the configuration to specify the paths to the devices that you want to add to the LVM volume group. You can specify the device paths in the paths field, the optionalPaths field, or both. If you do not specify the device paths in either the paths field or the optionalPaths field, Logical Volume Manager (LVM) Storage adds the supported unused devices to the LVM volume group. LVM Storage adds the devices to the LVM volume group only if the following conditions are met:
    • The device path exists.
    • The device is supported by LVM Storage. For information about unsupported devices, see "Devices not supported by LVM Storage".
    2
    Specify the device paths. If the device path specified in this field does not exist, or the device is not supported by LVM Storage, the LVMCluster CR moves to the Failed state.
    3
    Specify the optional device paths. If the device path specified in this field does not exist, or the device is not supported by LVM Storage, LVM Storage ignores the device without causing an error.
    Important

    After a device is added to the LVM volume group, it cannot be removed.

  4. Save the LVMCluster CR.

4.12.4.7. Expanding a persistent volume claim

After scaling up the storage of a cluster, you can expand the existing persistent volume claims (PVCs).

To expand a PVC, you must update the storage field in the PVC.

Prerequisites

  • Dynamic provisioning is used.
  • The StorageClass object associated with the PVC has the allowVolumeExpansion field set to true.

Procedure

  1. Log in to the OpenShift CLI (oc).
  2. Update the value of the spec.resources.requests.storage field to a value that is greater than the current value by running the following command:

    $ oc patch pvc <pvc_name> -n <application_namespace> --type=merge -p \ 1
    '{ "spec": { "resources": { "requests": { "storage": "<desired_size>" }}}}' 2
    1
    Replace <pvc_name> with the name of the PVC that you want to expand.
    2
    Replace <desired_size> with the new size to expand the PVC.
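
    For example, to expand the lvm-block-1 PVC from the earlier provisioning example to 20Gi, the command might look like the following. The PVC name, namespace, and size are illustrative:

    $ oc patch pvc lvm-block-1 -n default --type=merge -p \
      '{ "spec": { "resources": { "requests": { "storage": "20Gi" }}}}'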

Verification

  • To verify that resizing is completed, run the following command:

    $ oc get pvc <pvc_name> -n <application_namespace> -o=jsonpath={.status.capacity.storage}

    LVM Storage adds the Resizing condition to the PVC during expansion. It deletes the Resizing condition after the PVC expansion.
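
     To inspect the expansion progress, you can also describe the PVC, which shows the conditions that are set during resizing:

     $ oc describe pvc <pvc_name> -n <application_namespace>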

4.12.4.8. Deleting a persistent volume claim

You can delete a persistent volume claim (PVC) by using the OpenShift CLI (oc).

Prerequisites

  • You have access to OpenShift Container Platform as a user with cluster-admin permissions.

Procedure

  1. Log in to the OpenShift CLI (oc).
  2. Delete the PVC by running the following command:

    $ oc delete pvc <pvc_name> -n <namespace>

Verification

  • To verify that the PVC is deleted, run the following command:

    $ oc get pvc -n <namespace>

    The deleted PVC must not be present in the output of this command.

4.12.4.9. About volume snapshots

You can create snapshots of persistent volume claims (PVCs) that are provisioned by LVM Storage.

You can perform the following actions using the volume snapshots:

  • Back up your application data.

    Important

    Volume snapshots are located on the same devices as the original data. To use the volume snapshots as backups, you must move the snapshots to a secure location. You can use OpenShift API for Data Protection (OADP) backup and restore solutions. For information about OADP, see "OADP features".

  • Revert to a state at which the volume snapshot was taken.
Note

You can also create volume snapshots of the volume clones.

4.12.4.9.1. Limitations for creating volume snapshots in multi-node topology

LVM Storage has the following limitations for creating volume snapshots in multi-node topology:

  • Creating volume snapshots is based on the LVM thin pool capabilities.
  • After creating a volume snapshot, the node must have additional storage space for further updating the original data source.
  • You can create volume snapshots only on the node where you have deployed the original data source.
  • Pods relying on the PVC that uses the snapshot data can be scheduled only on the node where you have deployed the original data source.

4.12.4.9.2. Creating volume snapshots

You can create volume snapshots based on the available capacity of the thin pool and the over-provisioning limits. To create a volume snapshot, you must create a VolumeSnapshotClass object.

Prerequisites

  • You have access to OpenShift Container Platform as a user with cluster-admin permissions.
  • You ensured that the persistent volume claim (PVC) is in Bound state. This is required for a consistent snapshot.
  • You stopped all the I/O to the PVC.
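
    For example, one way to stop all the I/O to the PVC is to scale down the workload that uses the PVC before you create the volume snapshot. The deployment name and namespace in the following sketch are placeholders:

    $ oc scale deployment <deployment_name> --replicas=0 -n <namespace>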

Procedure

  1. Log in to the OpenShift CLI (oc).
  2. Create a VolumeSnapshot object:

    Example VolumeSnapshot object

    apiVersion: snapshot.storage.k8s.io/v1
    kind: VolumeSnapshot
    metadata:
      name: lvm-block-1-snap 1
    spec:
      source:
        persistentVolumeClaimName: lvm-block-1 2
      volumeSnapshotClassName: lvms-vg1 3

    1
    Specify a name for the volume snapshot.
    2
    Specify the name of the source PVC. LVM Storage creates a snapshot of this PVC.
    3
    Set this field to the name of a volume snapshot class.
    Note

    To get the list of available volume snapshot classes, run the following command:

    $ oc get volumesnapshotclass
  3. Create the volume snapshot in the namespace where you created the source PVC by running the following command:

    $ oc create -f <file_name> -n <namespace>

    LVM Storage creates a read-only copy of the PVC as a volume snapshot.

Verification

  • To verify that the volume snapshot is created, run the following command:

    $ oc get volumesnapshot -n <namespace>

    Example output

    NAME               READYTOUSE   SOURCEPVC     SOURCESNAPSHOTCONTENT   RESTORESIZE   SNAPSHOTCLASS   SNAPSHOTCONTENT                                    CREATIONTIME   AGE
    lvm-block-1-snap   true         lvms-test-1                           1Gi           lvms-vg1        snapcontent-af409f97-55fc-40cf-975f-71e44fa2ca91   19s            19s

    The value of the READYTOUSE field for the volume snapshot that you created must be true.

4.12.4.9.3. Restoring volume snapshots

To restore a volume snapshot, you must create a persistent volume claim (PVC) with the dataSource.name field set to the name of the volume snapshot.

The restored PVC is independent of the volume snapshot and the source PVC.

Prerequisites

  • You have access to OpenShift Container Platform as a user with cluster-admin permissions.
  • You have created a volume snapshot.

Procedure

  1. Log in to the OpenShift CLI (oc).
  2. Create a PersistentVolumeClaim object with the configuration to restore the volume snapshot:

    Example PersistentVolumeClaim object to restore a volume snapshot

    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: lvm-block-1-restore
    spec:
      accessModes:
      - ReadWriteOnce
      volumeMode: Block
      resources:
        requests:
          storage: 2Gi 1
      storageClassName: lvms-vg1 2
      dataSource:
        name: lvm-block-1-snap 3
        kind: VolumeSnapshot
        apiGroup: snapshot.storage.k8s.io

    1
    Specify the storage size of the restored PVC. The storage size of the requested PVC must be greater than or equal to the storage size of the volume snapshot that you want to restore. If a larger PVC is required, you can also resize the PVC after restoring the volume snapshot.
    2
    Set this field to the value of the storageClassName field in the source PVC of the volume snapshot that you want to restore.
    3
    Set this field to the name of the volume snapshot that you want to restore.
  3. Create the PVC in the namespace where you created the volume snapshot by running the following command:

    $ oc create -f <file_name> -n <namespace>
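
If you later need a larger restored PVC, and the storage class allows volume expansion, you can increase the requested size with a patch similar to the following sketch. The 3Gi value is illustrative:

$ oc patch pvc lvm-block-1-restore -n <namespace> -p '{"spec":{"resources":{"requests":{"storage":"3Gi"}}}}'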

Verification

  • To verify that the volume snapshot is restored, run the following command:

    $ oc get pvc -n <namespace>

    Example output

    NAME                  STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    lvm-block-1-restore   Bound    pvc-e90169a8-fd71-4eea-93b8-817155f60e47   2Gi        RWO            lvms-vg1       5s

4.12.4.9.4. Deleting volume snapshots

You can delete the volume snapshots of the persistent volume claims (PVCs).

Important

When you delete a persistent volume claim (PVC), LVM Storage deletes only the PVC, but not the snapshots of the PVC.

Prerequisites

  • You have access to OpenShift Container Platform as a user with cluster-admin permissions.
  • You have ensured that the volume snapshot that you want to delete is not in use.

Procedure

  1. Log in to the OpenShift CLI (oc).
  2. Delete the volume snapshot by running the following command:

    $ oc delete volumesnapshot <volume_snapshot_name> -n <namespace>

Verification

  • To verify that the volume snapshot is deleted, run the following command:

    $ oc get volumesnapshot -n <namespace>

    The deleted volume snapshot must not be present in the output of this command.

4.12.4.10. About volume clones

A volume clone is a duplicate of an existing persistent volume claim (PVC). You can create a volume clone to make a point-in-time copy of the data.

4.12.4.10.1. Limitations for creating volume clones in multi-node topology

LVM Storage has the following limitations for creating volume clones in multi-node topology:

  • Creating volume clones is based on the LVM thin pool capabilities.
  • After you create a volume clone, the node must have additional storage available so that the original data source can continue to be updated.
  • You can create volume clones only on the node where you have deployed the original data source.
  • Pods relying on the PVC that uses the clone data can be scheduled only on the node where you have deployed the original data source.

4.12.4.10.2. Creating volume clones

To create a clone of a persistent volume claim (PVC), you must create a PersistentVolumeClaim object in the namespace where you created the source PVC.

Important

The cloned PVC has write access.

Prerequisites

  • You ensured that the source PVC is in Bound state. This is required for a consistent clone.

Procedure

  1. Log in to the OpenShift CLI (oc).
  2. Create a PersistentVolumeClaim object:

    Example PersistentVolumeClaim object to create a volume clone

    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: lvm-pvc-clone
    spec:
      accessModes:
      - ReadWriteOnce
      storageClassName: lvms-vg1 1
      volumeMode: Filesystem 2
      dataSource:
        kind: PersistentVolumeClaim
        name: lvm-pvc 3
      resources:
        requests:
          storage: 1Gi 4

    1
    Set this field to the value of the storageClassName field in the source PVC.
    2
    Set this field to the value of the volumeMode field in the source PVC.
    3
    Specify the name of the source PVC.
    4
    Specify the storage size for the cloned PVC. The storage size of the cloned PVC must be greater than or equal to the storage size of the source PVC.
  3. Create the PVC in the namespace where you created the source PVC by running the following command:

    $ oc create -f <file_name> -n <namespace>

Verification

  • To verify that the volume clone is created, run the following command:

    $ oc get pvc -n <namespace>

    Example output

    NAME                STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    lvm-pvc-clone       Bound    pvc-e90169a8-fd71-4eea-93b8-817155f60e47   1Gi        RWO            lvms-vg1       5s

4.12.4.10.3. Deleting volume clones

You can delete volume clones.

Important

When you delete a persistent volume claim (PVC), LVM Storage deletes only the source persistent volume claim (PVC) but not the clones of the PVC.

Prerequisites

  • You have access to OpenShift Container Platform as a user with cluster-admin permissions.

Procedure

  1. Log in to the OpenShift CLI (oc).
  2. Delete the cloned PVC by running the following command:

    $ oc delete pvc <clone_pvc_name> -n <namespace>

Verification

  • To verify that the volume clone is deleted, run the following command:

    $ oc get pvc -n <namespace>

    The deleted volume clone must not be present in the output of this command.

4.12.4.11. Updating LVM Storage

You can update LVM Storage to ensure compatibility with the OpenShift Container Platform version.

Prerequisites

  • You have updated your OpenShift Container Platform cluster.
  • You have installed a previous version of LVM Storage.
  • You have installed the OpenShift CLI (oc).
  • You have access to the cluster using an account with cluster-admin permissions.

Procedure

  1. Log in to the OpenShift CLI (oc).
  2. Update the Subscription custom resource (CR) that you created while installing LVM Storage by running the following command:

    $ oc patch subscription lvms-operator -n openshift-storage --type merge --patch '{"spec":{"channel":"<update_channel>"}}' 1
    1
    Replace <update_channel> with the version of LVM Storage that you want to install. For example, stable-4.17.
  3. View the update events to check that the installation is complete by running the following command:

    $ oc get events -n openshift-storage

    Example output

    ...
    8m13s       Normal    RequirementsUnknown   clusterserviceversion/lvms-operator.v4.17   requirements not yet checked
    8m11s       Normal    RequirementsNotMet    clusterserviceversion/lvms-operator.v4.17   one or more requirements couldn't be found
    7m50s       Normal    AllRequirementsMet    clusterserviceversion/lvms-operator.v4.17   all requirements found, attempting install
    7m50s       Normal    InstallSucceeded      clusterserviceversion/lvms-operator.v4.17   waiting for install components to report healthy
    7m49s       Normal    InstallWaiting        clusterserviceversion/lvms-operator.v4.17   installing: waiting for deployment lvms-operator to become ready: deployment "lvms-operator" waiting for 1 outdated replica(s) to be terminated
    7m39s       Normal    InstallSucceeded      clusterserviceversion/lvms-operator.v4.17   install strategy completed with no errors
    ...

Verification

  • Verify the LVM Storage version by running the following command:

    $ oc get subscription lvms-operator -n openshift-storage -o jsonpath='{.status.installedCSV}'

    Example output

    lvms-operator.v4.17

4.12.4.12. Monitoring LVM Storage

To enable cluster monitoring, you must add the following label in the namespace where you have installed LVM Storage:

openshift.io/cluster-monitoring=true
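
For example, if LVM Storage is installed in the openshift-storage namespace, as in the other examples in this section, you can add the label with a command such as the following:

$ oc label namespace openshift-storage openshift.io/cluster-monitoring=true
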
Important

For information about enabling cluster monitoring in RHACM, see Observability and Adding custom metrics.

4.12.4.12.1. Metrics

You can monitor LVM Storage by viewing the metrics.

The following table describes the topolvm metrics:

Table 4.9. topolvm metrics

  • topolvm_thinpool_data_percent: Indicates the percentage of data space used in the LVM thin pool.
  • topolvm_thinpool_metadata_percent: Indicates the percentage of metadata space used in the LVM thin pool.
  • topolvm_thinpool_size_bytes: Indicates the size of the LVM thin pool in bytes.
  • topolvm_volumegroup_available_bytes: Indicates the available space in the LVM volume group in bytes.
  • topolvm_volumegroup_size_bytes: Indicates the size of the LVM volume group in bytes.
  • topolvm_thinpool_overprovisioned_available: Indicates the available over-provisioned size of the LVM thin pool in bytes.

Note

Metrics are updated every 10 minutes, or when there is a change in the thin pool, such as the creation of a new logical volume.
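
Because the namespace is included in cluster monitoring, you can also define your own alerting rules on top of these metrics. The following PrometheusRule object is only a sketch: the rule name, threshold, and severity are illustrative assumptions, and the built-in alerts described in the next section already cover similar conditions.

Example PrometheusRule object (sketch)

apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: lvms-thin-pool-example-alert   # illustrative name
  namespace: openshift-storage
spec:
  groups:
  - name: lvms-example.rules
    rules:
    - alert: ExampleThinPoolDataHigh
      expr: topolvm_thinpool_data_percent > 70   # illustrative threshold
      for: 5m
      labels:
        severity: warning
      annotations:
        summary: LVM Storage thin pool data usage is above 70%.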

4.12.4.12.2. Alerts

When the thin pool and volume group reach maximum storage capacity, further operations fail. This can lead to data loss.

LVM Storage sends the following alerts when the usage of the thin pool and volume group exceeds the thresholds described in the following table:

Table 4.10. LVM Storage alerts

  • VolumeGroupUsageAtThresholdNearFull: This alert is triggered when the usage of both the volume group and the thin pool exceeds 75% on nodes. Data deletion or volume group expansion is required.
  • VolumeGroupUsageAtThresholdCritical: This alert is triggered when the usage of both the volume group and the thin pool exceeds 85% on nodes. In this case, the volume group is critically full. Data deletion or volume group expansion is required.
  • ThinPoolDataUsageAtThresholdNearFull: This alert is triggered when the thin pool data usage in the volume group exceeds 75% on nodes. Data deletion or thin pool expansion is required.
  • ThinPoolDataUsageAtThresholdCritical: This alert is triggered when the thin pool data usage in the volume group exceeds 85% on nodes. Data deletion or thin pool expansion is required.
  • ThinPoolMetaDataUsageAtThresholdNearFull: This alert is triggered when the thin pool metadata usage in the volume group exceeds 75% on nodes. Data deletion or thin pool expansion is required.
  • ThinPoolMetaDataUsageAtThresholdCritical: This alert is triggered when the thin pool metadata usage in the volume group exceeds 85% on nodes. Data deletion or thin pool expansion is required.

4.12.4.13. Uninstalling LVM Storage by using the CLI

You can uninstall LVM Storage by using the OpenShift CLI (oc).

Prerequisites

  • You have logged in to oc as a user with cluster-admin permissions.
  • You have deleted the persistent volume claims (PVCs), volume snapshots, and volume clones provisioned by LVM Storage. You have also deleted the applications that are using these resources.
  • You have deleted the LVMCluster custom resource (CR).

Procedure

  1. Get the currentCSV value for the LVM Storage Operator by running the following command:

    $ oc get subscription.operators.coreos.com lvms-operator -n <namespace> -o yaml | grep currentCSV

    Example output

    currentCSV: lvms-operator.v4.15.3

  2. Delete the subscription by running the following command:

    $ oc delete subscription.operators.coreos.com lvms-operator -n <namespace>

    Example output

    subscription.operators.coreos.com "lvms-operator" deleted

  3. Delete the CSV for the LVM Storage Operator in the target namespace by running the following command:

    $ oc delete clusterserviceversion <currentCSV> -n <namespace> 1
    1
    Replace <currentCSV> with the currentCSV value for the LVM Storage Operator.

    Example output

    clusterserviceversion.operators.coreos.com "lvms-operator.v4.15.3" deleted

Verification

  • To verify that the LVM Storage Operator is uninstalled, run the following command:

    $ oc get csv -n <namespace>

    If the LVM Storage Operator was successfully uninstalled, it does not appear in the output of this command.

4.12.4.14. Uninstalling LVM Storage by using the web console

You can uninstall LVM Storage using the OpenShift Container Platform web console.

Prerequisites

  • You have access to OpenShift Container Platform as a user with cluster-admin permissions.
  • You have deleted the persistent volume claims (PVCs), volume snapshots, and volume clones provisioned by LVM Storage. You have also deleted the applications that are using these resources.
  • You have deleted the LVMCluster custom resource (CR).

Procedure

  1. Log in to the OpenShift Container Platform web console.
  2. Click Operators Installed Operators.
  3. Click LVM Storage in the openshift-storage namespace.
  4. Click the Details tab.
  5. From the Actions menu, select Uninstall Operator.
  6. Optional: When prompted, select the Delete all operand instances for this operator checkbox to delete the operand instances for LVM Storage.
  7. Click Uninstall.

4.12.4.15. Uninstalling LVM Storage installed using RHACM

To uninstall LVM Storage that you installed using RHACM, you must delete the RHACM Policy custom resource (CR) that you created for installing and configuring LVM Storage.

Prerequisites

  • You have access to the RHACM cluster as a user with cluster-admin permissions.
  • You have deleted the persistent volume claims (PVCs), volume snapshots, and volume clones provisioned by LVM Storage. You have also deleted the applications that are using these resources.
  • You have deleted the LVMCluster CR that you created using RHACM.

Procedure

  1. Log in to the OpenShift CLI (oc).
  2. Delete the RHACM Policy CR that you created for installing and configuring LVM Storage by using the following command:

    $ oc delete -f <policy> -n <namespace> 1
    1
    Replace <policy> with the name of the Policy CR YAML file.
  3. Create a Policy CR YAML file with the configuration to uninstall LVM Storage:

    Example Policy CR to uninstall LVM Storage

    apiVersion: apps.open-cluster-management.io/v1
    kind: PlacementRule
    metadata:
      name: placement-uninstall-lvms
    spec:
      clusterConditions:
      - status: "True"
        type: ManagedClusterConditionAvailable
      clusterSelector:
        matchExpressions:
        - key: mykey
          operator: In
          values:
          - myvalue
    ---
    apiVersion: policy.open-cluster-management.io/v1
    kind: PlacementBinding
    metadata:
      name: binding-uninstall-lvms
    placementRef:
      apiGroup: apps.open-cluster-management.io
      kind: PlacementRule
      name: placement-uninstall-lvms
    subjects:
    - apiGroup: policy.open-cluster-management.io
      kind: Policy
      name: uninstall-lvms
    ---
    apiVersion: policy.open-cluster-management.io/v1
    kind: Policy
    metadata:
      annotations:
        policy.open-cluster-management.io/categories: CM Configuration Management
        policy.open-cluster-management.io/controls: CM-2 Baseline Configuration
        policy.open-cluster-management.io/standards: NIST SP 800-53
      name: uninstall-lvms
    spec:
      disabled: false
      policy-templates:
      - objectDefinition:
          apiVersion: policy.open-cluster-management.io/v1
          kind: ConfigurationPolicy
          metadata:
            name: uninstall-lvms
          spec:
            object-templates:
            - complianceType: mustnothave
              objectDefinition:
                apiVersion: v1
                kind: Namespace
                metadata:
                  name: openshift-storage
            - complianceType: mustnothave
              objectDefinition:
                apiVersion: operators.coreos.com/v1
                kind: OperatorGroup
                metadata:
                  name: openshift-storage-operatorgroup
                  namespace: openshift-storage
                spec:
                  targetNamespaces:
                  - openshift-storage
            - complianceType: mustnothave
              objectDefinition:
                apiVersion: operators.coreos.com/v1alpha1
                kind: Subscription
                metadata:
                  name: lvms-operator
                  namespace: openshift-storage
            remediationAction: enforce
            severity: low
      - objectDefinition:
          apiVersion: policy.open-cluster-management.io/v1
          kind: ConfigurationPolicy
          metadata:
            name: policy-remove-lvms-crds
          spec:
            object-templates:
            - complianceType: mustnothave
              objectDefinition:
                apiVersion: apiextensions.k8s.io/v1
                kind: CustomResourceDefinition
                metadata:
                  name: logicalvolumes.topolvm.io
            - complianceType: mustnothave
              objectDefinition:
                apiVersion: apiextensions.k8s.io/v1
                kind: CustomResourceDefinition
                metadata:
                  name: lvmclusters.lvm.topolvm.io
            - complianceType: mustnothave
              objectDefinition:
                apiVersion: apiextensions.k8s.io/v1
                kind: CustomResourceDefinition
                metadata:
                  name: lvmvolumegroupnodestatuses.lvm.topolvm.io
            - complianceType: mustnothave
              objectDefinition:
                apiVersion: apiextensions.k8s.io/v1
                kind: CustomResourceDefinition
                metadata:
                  name: lvmvolumegroups.lvm.topolvm.io
            remediationAction: enforce
            severity: high

  4. Create the Policy CR by running the following command:

    $ oc create -f <policy> -n <namespace>

4.12.4.16. Downloading log files and diagnostic information using must-gather

When LVM Storage is unable to automatically resolve a problem, use the must-gather tool to collect the log files and diagnostic information so that you or Red Hat Support can review the problem and determine a solution.

Procedure

  • Run the must-gather command from the client connected to the LVM Storage cluster:

    $ oc adm must-gather --image=registry.redhat.io/lvms4/lvms-must-gather-rhel9:v4.17 --dest-dir=<directory_name>

4.12.4.17. Troubleshooting persistent storage

While configuring persistent storage using Logical Volume Manager (LVM) Storage, you might encounter issues that require troubleshooting.

4.12.4.17.1. Investigating a PVC stuck in the Pending state

A persistent volume claim (PVC) can get stuck in the Pending state for the following reasons:

  • Insufficient computing resources.
  • Network problems.
  • Mismatched storage class or node selector.
  • No available persistent volumes (PVs).
  • The node with the PV is in the Not Ready state.

Prerequisites

  • You have installed the OpenShift CLI (oc).
  • You have logged in to the OpenShift CLI (oc) as a user with cluster-admin permissions.

Procedure

  1. Retrieve the list of PVCs by running the following command:

    $ oc get pvc

    Example output

    NAME        STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    lvms-test   Pending                                      lvms-vg1       11s

  2. Inspect the events associated with a PVC stuck in the Pending state by running the following command:

    $ oc describe pvc <pvc_name> 1
    1
    Replace <pvc_name> with the name of the PVC. For example, lvms-test.

    Example output

    Type     Reason              Age               From                         Message
    ----     ------              ----              ----                         -------
    Warning  ProvisioningFailed  4s (x2 over 17s)  persistentvolume-controller  storageclass.storage.k8s.io "lvms-vg1" not found

4.12.4.17.2. Recovering from a missing storage class

If you encounter the storage class not found error, check the LVMCluster custom resource (CR) and ensure that all the Logical Volume Manager (LVM) Storage pods are in the Running state.

Prerequisites

  • You have installed the OpenShift CLI (oc).
  • You have logged in to the OpenShift CLI (oc) as a user with cluster-admin permissions.

Procedure

  1. Verify that the LVMCluster CR is present by running the following command:

    $ oc get lvmcluster -n openshift-storage

    Example output

    NAME            AGE
    my-lvmcluster   65m

  2. If the LVMCluster CR is not present, create an LVMCluster CR. For more information, see "Ways to create an LVMCluster custom resource".
  3. In the openshift-storage namespace, check that all the LVM Storage pods are in the Running state by running the following command:

    $ oc get pods -n openshift-storage

    Example output

    NAME                                  READY   STATUS    RESTARTS      AGE
    lvms-operator-7b9fb858cb-6nsml        3/3     Running   0             70m
    topolvm-controller-5dd9cf78b5-7wwr2   5/5     Running   0             66m
    topolvm-node-dr26h                    4/4     Running   0             66m
    vg-manager-r6zdv                      1/1     Running   0             66m

    The output of this command must contain a running instance of the following pods:

    • lvms-operator
    • vg-manager

      If the vg-manager pod is stuck while loading a configuration file, it is due to a failure to locate an available disk for LVM Storage to use. To retrieve the necessary information to troubleshoot this issue, review the logs of the vg-manager pod by running the following command:

      $ oc logs -l app.kubernetes.io/component=vg-manager -n openshift-storage
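
If the pods are running but the storage class not found error persists, you can also confirm that the expected storage class exists. The class name lvms-vg1 matches the examples in this section:

$ oc get storageclass lvms-vg1
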
4.12.4.17.3. Recovering from node failure

A persistent volume claim (PVC) can be stuck in the Pending state due to a node failure in the cluster.

To identify the failed node, you can examine the restart count of the topolvm-node pod. An increased restart count indicates potential problems with the underlying node, which might require further investigation and troubleshooting.

Prerequisites

  • You have installed the OpenShift CLI (oc).
  • You have logged in to the OpenShift CLI (oc) as a user with cluster-admin permissions.

Procedure

  • Examine the restart count of the topolvm-node pod instances by running the following command:

    $ oc get pods -n openshift-storage

    Example output

    NAME                                  READY   STATUS    RESTARTS      AGE
    lvms-operator-7b9fb858cb-6nsml        3/3     Running   0             70m
    topolvm-controller-5dd9cf78b5-7wwr2   5/5     Running   0             66m
    topolvm-node-dr26h                    4/4     Running   0             66m
    topolvm-node-54as8                    4/4     Running   0             66m
    topolvm-node-78fft                    4/4     Running   17 (8s ago)   66m
    vg-manager-r6zdv                      1/1     Running   0             66m
    vg-manager-990ut                      1/1     Running   0             66m
    vg-manager-an118                      1/1     Running   0             66m

Next steps

  • If the PVC is stuck in the Pending state even after you have resolved any issues with the node, you must perform a forced clean-up. For more information, see "Performing a forced clean-up".

4.12.4.17.4. Recovering from disk failure

If you see a failure message while inspecting the events associated with the persistent volume claim (PVC), there might be a problem with the underlying volume or disk.

Disk and volume provisioning issues result in a generic error message, such as Failed to provision volume with storage class <storage_class_name>, followed by a specific volume failure error message.

The following table describes the volume failure error messages:

Table 4.11. Volume failure error messages

  • Failed to check volume existence: Indicates a problem in verifying whether the volume already exists. Volume verification failure can be caused by network connectivity problems or other failures.
  • Failed to bind volume: Failure to bind a volume can happen if the persistent volume (PV) that is available does not match the requirements of the PVC.
  • FailedMount or FailedAttachVolume: This error indicates problems when trying to mount the volume to a node. If the disk has failed, this error can appear when a pod tries to use the PVC.
  • FailedUnMount: This error indicates problems when trying to unmount a volume from a node. If the disk has failed, this error can appear when a pod tries to use the PVC.
  • Volume is already exclusively attached to one node and cannot be attached to another: This error can appear with storage solutions that do not support ReadWriteMany access modes.

Prerequisites

  • You have installed the OpenShift CLI (oc).
  • You have logged in to the OpenShift CLI (oc) as a user with cluster-admin permissions.

Procedure

  1. Inspect the events associated with a PVC by running the following command:

    $ oc describe pvc <pvc_name> 1
    1
    Replace <pvc_name> with the name of the PVC.
  2. Establish a direct connection to the host where the problem is occurring, as shown in the example after this procedure.
  3. Resolve the disk issue.
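
One way to establish the direct connection described in step 2 is to start a debug pod on the affected node and change root into the host file system. This is a sketch; <node_name> is a placeholder for the node that reports the failure:

$ oc debug node/<node_name>
# chroot /host

From the host shell, you can inspect the affected disk with standard tools such as lsblk before resolving the issue in step 3.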

Next steps

  • If the volume failure messages persist or recur even after you have resolved the issue with the disk, you must perform a forced clean-up. For more information, see "Performing a forced clean-up".

4.12.4.17.5. Performing a forced clean-up

If the disk or node-related problems persist even after you have completed the troubleshooting procedures, you must perform a forced clean-up. A forced clean-up is used to address persistent issues and ensure the proper functioning of Logical Volume Manager (LVM) Storage.

Prerequisites

  • You have installed the OpenShift CLI (oc).
  • You have logged in to the OpenShift CLI (oc) as a user with cluster-admin permissions.
  • You have deleted all the persistent volume claims (PVCs) that were created by using LVM Storage.
  • You have stopped the pods that are using the PVCs that were created by using LVM Storage.

Procedure

  1. Switch to the openshift-storage namespace by running the following command:

    $ oc project openshift-storage
  2. Check if the LogicalVolume custom resources (CRs) are present by running the following command:

    $ oc get logicalvolume
    1. If the LogicalVolume CRs are present, delete them by running the following command:

      $ oc delete logicalvolume <name> 1
      1
      Replace <name> with the name of the LogicalVolume CR.
    2. After deleting the LogicalVolume CRs, remove their finalizers by running the following command:

      $ oc patch logicalvolume <name> -p '{"metadata":{"finalizers":[]}}' --type=merge 1
      1
      Replace <name> with the name of the LogicalVolume CR.
  3. Check if the LVMVolumeGroup CRs are present by running the following command:

    $ oc get lvmvolumegroup
    1. If the LVMVolumeGroup CRs are present, delete them by running the following command:

      $ oc delete lvmvolumegroup <name> 1
      1
      Replace <name> with the name of the LVMVolumeGroup CR.
    2. After deleting the LVMVolumeGroup CRs, remove their finalizers by running the following command:

      $ oc patch lvmvolumegroup <name> -p '{"metadata":{"finalizers":[]}}' --type=merge 1
      1
      Replace <name> with the name of the LVMVolumeGroup CR.
  4. Delete any LVMVolumeGroupNodeStatus CRs by running the following command:

    $ oc delete lvmvolumegroupnodestatus --all
  5. Delete the LVMCluster CR by running the following command:

    $ oc delete lvmcluster --all
    1. After deleting the LVMCluster CR, remove its finalizer by running the following command:

      $ oc patch lvmcluster <name> -p '{"metadata":{"finalizers":[]}}' --type=merge 1
      1
      Replace <name> with the name of the LVMCluster CR.