Chapter 2. Using dynamic storage with the LVMS plugin


By using dynamic provisioning, you can create storage volumes on-demand to eliminate the need for pre-provisioned storage.

MicroShift enables dynamic storage provisioning that is ready for immediate use with the logical volume manager storage (LVMS) Container Storage Interface (CSI) provider. The LVMS plugin is the Red Hat downstream version of TopoLVM, a CSI plugin for managing logical volume management (LVM) logical volumes (LVs) for Kubernetes.

LVMS provisions new LVM logical volumes for container workloads with appropriately configured persistent volume claims (PVCs). Each PVC references a storage class that represents an LVM Volume Group (VG) on the host node. LVs are only provisioned for scheduled pods.

2.1. LVMS system requirements

To prepare your infrastructure for storage operations, review the system specifications for using LVMS in MicroShift. Verifying these requirements ensures your environment meets the necessary resource standards for a successful deployment.

2.1.1. Volume group name

If you did not configure LVMS in an lvmd.yaml file placed in the /etc/microshift/ directory, MicroShift attempts to assign a default volume group (VG) dynamically by running the vgs command.

  • MicroShift assigns a default VG when only one VG is found.
  • If more than one VG is present, the VG named microshift is assigned as the default.
  • If more than one VG is present but none is named microshift, LVMS is not deployed.

If there are no volume groups on the MicroShift host, LVMS is disabled.

If you want to use a specific VG, LVMS must be configured to select that VG. You can change the default name of the VG in the configuration file. For details, read the "Configuring the LVMS" section of this document.

After MicroShift starts, you can update the lvmd.yaml to include or remove VGs. To implement changes, you must restart MicroShift. If the lvmd.yaml is deleted, MicroShift attempts to find a default VG again.
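
For example, to check which VGs are present on the host before MicroShift starts, you can list them with the vgs command. The output shown here is illustrative:

$ sudo vgs -o vg_name,vg_free

Example output

  VG         VFree
  microshift 19.50g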

2.1.2. Volume size increments

The LVMS provisions storage in increments of 1 gigabyte (GB). Storage requests are rounded up to the nearest GB. When the capacity of a VG is less than 1 GB, the PersistentVolumeClaim registers a ProvisioningFailed event, for example:

Example output

Warning  ProvisioningFailed    3s (x2 over 5s)  topolvm.cybozu.com_topolvm-controller-858c78d96c-xttzp_0fa83aef-2070-4ae2-bcb9-163f818dcd9f failed to provision volume with
StorageClass "topolvm-provisioner": rpc error: code = ResourceExhausted desc = no enough space left on VG: free=(BYTES_INT), requested=(BYTES_INT)
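
For example, a PersistentVolumeClaim that requests 1500Mi is rounded up and provisioned as a 2 GB logical volume. The following is a minimal sketch with a hypothetical claim name:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rounding-example # hypothetical claim name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1500Mi # LVMS rounds this request up to a full 2 GB volume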

2.2. Disabling the LVMS CSI provider and CSI snapshot deployments

To reduce the use of runtime resources, such as RAM, CPU, and storage, remove or disable the LVMS CSI provider and CSI snapshot deployments. This configuration optimizes system performance by eliminating storage components that are not required for your specific workload.

Note

You can configure MicroShift to disable the CSI provider and CSI snapshot implementations only before installing and running MicroShift. After MicroShift is installed and running, you must update the configuration file and then uninstall the components.

To reduce the use of runtime resources, you can remove or disable the following storage components:

  • You can configure MicroShift to disable the built-in logical volume manager storage (LVMS) Container Storage Interface (CSI) provider.
  • You can configure MicroShift to disable the Container Storage Interface (CSI) snapshot capabilities.
  • You can uninstall the installed CSI implementations using oc commands.
Important

Automated uninstallation is not supported because it can orphan provisioned volumes. Without the LVMS CSI driver, the node does not detect the underlying storage interface and cannot perform provisioning, deprovisioning, mounting, or unmounting operations.

2.3. Disabling the CSI snapshot implementation

To prevent the installation of CSI snapshot implementation pods, disable the deployments that run them. This configuration conserves system resources by ensuring that snapshot components are not deployed when they are not required.

Important

Use this procedure if you are defining the configuration file before installing and running MicroShift. If MicroShift has already started, the CSI snapshot implementation is running and you must remove it manually by following the uninstallation instructions.

Note

MicroShift does not delete CSI snapshot implementation pods. You must configure MicroShift to disable installation of the CSI snapshot implementation pods during the startup process.

Procedure

  1. Disable installation of the CSI snapshot controller by entering the optionalCsiComponents value under the storage section of the MicroShift configuration file in /etc/microshift/config.yaml:

    # ...
    storage:
      optionalCsiComponents:
        - none
    # ...

    where:

    storage
    Specifies the storage details. You can choose not to define optionalCsiComponents. If you do specify the optionalCsiComponents field, valid values are an empty value ([]), a single empty string element ([""]), snapshot-controller, or none. The none value is mutually exclusive with all other values.

    Note

    If the optionalCsiComponents value is empty or null, MicroShift defaults to deploying snapshot-controller.

  2. After the optionalCsiComponents field is specified with a supported value in the config.yaml, start MicroShift by running the following command:

    $ sudo systemctl start microshift
    Note

    MicroShift does not redeploy the disabled components after a restart.
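
Verification

  • One way to confirm that the snapshot controller was not deployed is to check for its deployment in the kube-system namespace. When the component is disabled, the following command returns a NotFound error:

    $ oc get deployment -n kube-system snapshot-controller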

2.4. Disabling the CSI driver implementation

You can disable installation of the CSI driver implementation pods. MicroShift does not delete CSI driver implementation pods that are already running. You must configure MicroShift to disable installation of the CSI driver implementation pods during the startup process.

Important

Use this procedure if you are defining the configuration file before installing and running MicroShift. If MicroShift has already started, the CSI driver implementation is running and you must remove it manually by following the uninstallation instructions.

Procedure

  1. Disable installation of the CSI driver by entering the driver value under the storage section of the MicroShift configuration file in /etc/microshift/config.yaml:

    # ...
    storage:
      driver:
        - "none"
    # ...

    where:

    storage.driver

    Specifies whether the CSI driver is deployed. Valid values are none or lvms. Setting none disables deployment of the LVMS CSI driver.

    Note

    By default, the driver value is empty or null and LVMS is deployed.

  2. Start MicroShift after the driver field is specified with a supported value in the /etc/microshift/config.yaml file by running the following command:

    $ sudo systemctl enable --now microshift
    Note

    MicroShift does not redeploy the disabled components after a restart operation.
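
Verification

  • One way to confirm that the LVMS CSI driver was not deployed is to list the pods in the openshift-storage namespace. When the driver is disabled, no LVMS pods are present:

    $ oc get pods -n openshift-storage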

2.5. Uninstalling the CSI snapshot implementation

To remove the Container Storage Interface (CSI) snapshot capability from your cluster, uninstall the CSI snapshot implementation.

Prerequisites

  • MicroShift is installed and running.
  • The CSI snapshot implementation is deployed on the MicroShift node.

Procedure

  • Uninstall the CSI snapshot implementation by running the following command:

    $ oc delete -n kube-system deployment.apps/snapshot-controller

    Example output

    deployment.apps "snapshot-controller" deleted

2.6. Uninstalling the CSI driver implementation

To remove the Container Storage Interface (CSI) integration from your cluster, uninstall the CSI driver implementation.

Prerequisites

  • MicroShift is installed and running.
  • The CSI driver implementation is deployed on the MicroShift node.

Procedure

  1. Delete the lvmclusters object by running the following command:

    $ oc delete -n openshift-storage lvmclusters.lvm.topolvm.io/lvms

    Example output

    lvmcluster.lvm.topolvm.io "lvms" deleted

  2. Delete the lvms-operator by running the following command:

    $ oc delete -n openshift-storage deployment.apps/lvms-operator

    Example output

    deployment.apps "lvms-operator" deleted

  3. Delete the topolvm-provisioner StorageClass by running the following command:

    $ oc delete storageclasses.storage.k8s.io/topolvm-provisioner

    Example output

    storageclass.storage.k8s.io "topolvm-provisioner" deleted
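
Verification

  • One way to confirm the removal is to check that no LVMS pods remain and that the topolvm-provisioner StorageClass is gone:

    $ oc get pods -n openshift-storage
    $ oc get storageclasses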

2.7. LVMS deployment

To ensure local storage is ready for use, MicroShift automatically deploys LVMS into the openshift-storage namespace at startup. This automated process prepares the node for storage operations immediately, eliminating the need for manual installation.

LVMS uses StorageCapacity tracking to ensure that pods with an LVMS PVC are not scheduled if the requested storage is greater than the free storage of the volume group. For more information about StorageCapacity tracking, see "Storage Capacity".
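
For example, you can list the CSIStorageCapacity objects that the cluster publishes for the scheduler; the reported capacity reflects the free space of the backing volume group:

$ oc get csistoragecapacities -A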

Additional resources

  • Storage Capacity (Kubernetes documentation)

2.8. Limitations to configure the size of the devices used in LVM Storage

To ensure your devices are compatible with storage operations, review the size configuration limitations in LVM Storage. Adhering to these constraints prevents provisioning failures by ensuring that selected devices meet the required capacity specifications.

When provisioning storage by using LVM Storage, the following factors limit device size:

  • The total storage size that you can provision is limited by the size of the underlying Logical Volume Manager (LVM) thin pool and the over-provisioning factor.
  • The size of the logical volume depends on the size of the Physical Extent (PE) and the Logical Extent (LE).

    • You can define the size of PE and LE during the physical and logical device creation.
    • The default PE and LE size is 4 MiB.
    • If the size of the PE is increased, the maximum size of an LVM logical volume is determined by the kernel limits and your disk space. You can inspect the PE size of your volume groups as shown in the example after this list.
    • With the default PE and LE sizes, the size limit on Red Hat Enterprise Linux (RHEL) 9 is 8 EB.
    • The following are the minimum storage sizes that you can request for each file system type:

      • block: 8 MiB
      • xfs: 300 MiB
      • ext4: 32 MiB
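
The following command is one way to inspect the PE size and free capacity of the volume groups on the host. The output depends on your storage layout:

$ sudo vgs -o vg_name,vg_size,vg_free,vg_extent_size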

2.9. Creating an LVMS configuration file

To customize storage settings, create an LVMS configuration file named lvmd.yaml. You must place this file in the /etc/microshift/ directory to ensure MicroShift detects and applies your configuration at startup.

Procedure

  • To create the lvmd.yaml configuration file, run the following command:

    $ sudo cp /etc/microshift/lvmd.yaml.default /etc/microshift/lvmd.yaml

2.10. Basic LVMS configuration example

To customize storage operations, pass through your LVM configuration to MicroShift. With this flexibility, you can define custom volume groups, thin volume provisioning parameters, and reserved unallocated space by editing the LVMS configuration file.

You must restart MicroShift to deploy configuration changes after editing the file.
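
For example, restart the service after saving your changes to the lvmd.yaml file:

$ sudo systemctl restart microshift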

Note

If you need to take volume snapshots, you must use thin provisioning in your lvmd.yaml file. If you do not need to take volume snapshots, you can use thick volumes.

The following lvmd.yaml example file shows a basic LVMS configuration:

LVMS configuration example

socket-name: /run/lvmd/lvmd.socket
device-classes:
  - name: "default"
    volume-group: "VGNAMEHERE"
    spare-gb: 0
    default: true

where:

socket-name
Specifies the UNIX domain socket endpoint of gRPC. Defaults to /run/lvmd/lvmd.socket. Takes a string value.
device-classes
Specifies a list of maps for the settings for each device-class.
device-classes.name
Specifies the name of the device-class. Takes a string value.
device-classes.volume-group
Specifies the volume group in which the device-class creates the logical volumes. Takes a string value.
device-classes.spare-gb
Specifies the storage capacity in GB to be left unallocated in the volume group. Defaults to 0. Takes an unsigned 64-bit integer.
device-classes.default
Specifies that the device-class is used by default. Defaults to false. At least one device-class in the YAML file must set this value to true. Takes a boolean value.
Important

A race condition prevents LVMS from accurately tracking the allocated space and preserving the spare-gb for a device class when multiple PVCs are created simultaneously. Use separate volume groups and device classes to protect the storage of highly dynamic workloads from each other.

2.11. Using the LVMS

To automatically provision and mount a logical volume to a pod, use the LVMS default StorageClass. When you create a PersistentVolumeClaim object without defining the .spec.storageClassName field, LVMS dynamically provisions a PersistentVolume from this default resource.

Procedure

  • To provision and mount a logical volume to a pod, run the following command:

    $ cat <<EOF | oc apply -f -
    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: my-lv-pvc
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 1G
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: my-pod
    spec:
      containers:
      - name: nginx
        image: nginx
        command: ["/usr/bin/sh", "-c"]
        args: ["sleep", "1h"]
        volumeMounts:
        - mountPath: /mnt
          name: my-volume
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop:
              - ALL
          runAsNonRoot: true
          seccompProfile:
            type: RuntimeDefault
      volumes:
        - name: my-volume
          persistentVolumeClaim:
            claimName: my-lv-pvc
    EOF
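
Verification

  • Confirm that the claim is bound and that the pod starts with the volume mounted by running the following commands:

    $ oc get pvc my-lv-pvc
    $ oc get pod my-pod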

2.11.1. Device classes

To define custom storage groups, create custom device classes by adding a device-classes array to your logical volume manager storage (LVMS) configuration. With this configuration, you can enable MicroShift to categorize devices based on your specific storage requirements.

Add the array to the /etc/microshift/lvmd.yaml configuration file. A single device class must be set as the default. You must restart MicroShift for configuration changes to take effect.

Warning

Removing a device class while there are still persistent volumes or VolumeSnapshotContent objects connected to that device class breaks both thick and thin provisioning.

You can define multiple device classes in the device-classes array. These classes can be a mix of thick and thin volume configurations.

Example of a mixed device-class array

socket-name: /run/topolvm/lvmd.sock
device-classes:
  - name: ssd
    volume-group: ssd-vg
    spare-gb: 0
    default: true
  - name: hdd
    volume-group: hdd-vg
    spare-gb: 0
  - name: thin
    spare-gb: 0
    thin-pool:
      name: thin
      overprovision-ratio: 10
    type: thin
    volume-group: ssd
  - name: striped
    volume-group: multi-pv-vg
    spare-gb: 0
    stripe: 2
    stripe-size: "64"
    lvcreate-options:

  • device-classes.spare-gb: Specifies the spare capacity. When you set this value to anything other than 0, more space can be allocated than expected.
  • device-classes.lvcreate-options: Specifies extra arguments to pass to the lvcreate command, such as --type=<type>. Neither MicroShift nor the LVMS verifies lvcreate-options values. These optional values are passed as is to the lvcreate command. Ensure that the options specified here are correct.
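
To direct a PVC at a device class other than the default, upstream TopoLVM selects the device class through a StorageClass parameter. The following sketch assumes the upstream topolvm.io provisioner name and the topolvm.io/device-class parameter; check the default topolvm-provisioner StorageClass on your node for the exact provisioner string that your MicroShift version deploys:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: topolvm-provisioner-hdd # hypothetical name
provisioner: topolvm.io # assumption: upstream TopoLVM provisioner name
parameters:
  "topolvm.io/device-class": "hdd" # selects the hdd device class from lvmd.yaml
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true

With a StorageClass like this, PVCs that set .spec.storageClassName to topolvm-provisioner-hdd are provisioned from the hdd-vg volume group instead of the default device class.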