Chapter 2. Using dynamic storage with the LVMS plugin
By using dynamic provisioning, you can create storage volumes on-demand to eliminate the need for pre-provisioned storage.
MicroShift enables dynamic storage provisioning that is ready for immediate use with the logical volume manager storage (LVMS) Container Storage Interface (CSI) provider. The LVMS plugin is the Red Hat downstream version of TopoLVM, a CSI plugin for managing logical volume management (LVM) logical volumes (LVs) for Kubernetes.
LVMS provisions new LVM logical volumes for container workloads with appropriately configured persistent volume claims (PVCs). Each PVC references a storage class that represents an LVM volume group (VG) on the host node. LVs are provisioned only for scheduled pods.
2.1. LVMS system requirements
To prepare your infrastructure for storage operations, review the system specifications for using LVMS in MicroShift. Verifying these requirements ensures your environment meets the necessary resource standards for a successful deployment.
2.1.1. Volume group name
If you did not configure LVMS in an lvmd.yaml file placed in the /etc/microshift/ directory, MicroShift attempts to assign a default volume group (VG) dynamically by running the vgs command.
- MicroShift assigns a default VG when only one VG is found.
- If more than one VG is present, the VG named microshift is assigned as the default.
- If a VG named microshift does not exist, LVMS is not deployed.
- If there are no volume groups on the MicroShift host, LVMS is disabled.
If you want to use a specific VG, LVMS must be configured to select that VG. You can change the default name of the VG in the configuration file. For details, read the "Configuring the LVMS" section of this document.
After MicroShift starts, you can update the lvmd.yaml to include or remove VGs. To implement changes, you must restart MicroShift. If the lvmd.yaml is deleted, MicroShift attempts to find a default VG again.
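For example, to pin LVMS to a specific VG, a minimal /etc/microshift/lvmd.yaml sketch could look like the following. This is an assumption-laden illustration: the volume group name vg-data is a placeholder, and the fields follow the configuration schema described later in this chapter.

```yaml
device-classes:
  - name: default
    volume-group: vg-data   # placeholder: replace with the name of your VG
    default: true           # at least one device class must be the default
```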
2.1.2. Volume size increments
The LVMS provisions storage in increments of 1 gigabyte (GB). Storage requests are rounded up to the nearest GB. When the capacity of a VG is less than 1 GB, the PersistentVolumeClaim registers a ProvisioningFailed event, for example:
Example output
Warning ProvisioningFailed 3s (x2 over 5s) topolvm.cybozu.com_topolvm-controller-858c78d96c-xttzp_0fa83aef-2070-4ae2-bcb9-163f818dcd9f failed to provision volume with StorageClass "topolvm-provisioner": rpc error: code = ResourceExhausted desc = no enough space left on VG: free=(BYTES_INT), requested=(BYTES_INT)
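To illustrate the rounding behavior, a claim sketched like the following requests 1500Mi and is rounded up to 2 GB by LVMS. The claim name is illustrative; topolvm-provisioner is the storage class name used elsewhere in this chapter.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-claim              # illustrative name
spec:
  storageClassName: topolvm-provisioner
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1500Mi           # rounded up to the nearest GB, so 2 GB is provisioned
```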
2.2. Disabling and uninstalling LVMS CSI provider and CSI snapshot deployments
To reduce the use of runtime resources, such as RAM, CPU, and storage, remove or disable the LVMS CSI provider and CSI snapshot deployments. This configuration optimizes system performance by eliminating storage components that are not required for your specific workload.
You can configure MicroShift to disable the CSI provider and CSI snapshot components only before installing and running MicroShift. After MicroShift is installed and running, you must update the configuration file and uninstall the components.
To reduce the use of runtime resources, you can remove or disable the following storage components:
- You can configure MicroShift to disable the built-in logical volume manager storage (LVMS) Container Storage Interface (CSI) provider.
- You can configure MicroShift to disable the Container Storage Interface (CSI) snapshot capabilities.
- You can uninstall the installed CSI implementations by using oc commands.
Automated uninstallation is not supported as this can cause orphaning of the provisioned volumes. Without the LVMS CSI driver, the node does not detect the underlying storage interface and cannot perform provisioning and deprovisioning or mounting and unmounting operations.
2.3. Disabling deployments that run CSI snapshot implementations
To prevent the installation of CSI implementation pods, disable the deployments that run CSI snapshot implementations. This configuration conserves system resources by ensuring that snapshot components are not deployed when they are not required.
Use this procedure if you are defining the configuration file before installing and running MicroShift. If MicroShift is already started, the CSI snapshot implementation is running; you must remove it manually by following the uninstallation instructions.
MicroShift does not delete CSI snapshot implementation pods. You must configure MicroShift to disable installation of the CSI snapshot implementation pods during the startup process.
Procedure
Disable installation of the CSI snapshot controller by entering the optionalCsiComponents value under the storage section of the MicroShift configuration file in /etc/microshift/config.yaml:

# ...
storage:
  optionalCsiComponents:
    - none
# ...

where:

storage: Specifies the storage details. You can choose not to define optionalCsiComponents. If you do specify the optionalCsiComponents field, valid values are an empty value ([]), a single empty string element ([""]), snapshot-controller, or none. A value of none is mutually exclusive with all other values.

Note: If the optionalCsiComponents value is empty or null, MicroShift defaults to deploying snapshot-controller.

After the optionalCsiComponents field is specified with a supported value in config.yaml, start MicroShift by running the following command:

$ sudo systemctl start microshift

Note: MicroShift does not redeploy the disabled components after a restart.
2.4. Disabling deployments that run the CSI driver implementations
You can disable installation of the CSI implementation pods. MicroShift does not delete CSI driver implementation pods. You must configure MicroShift to disable installation of the CSI driver implementation pods during the startup process.
This procedure is for defining the configuration file before installing and running MicroShift. If MicroShift is already started, then the CSI driver implementation is running. You must manually remove it by following the uninstallation instructions.
Procedure
Disable installation of the CSI driver by entering the driver value under the storage section of the MicroShift configuration file in /etc/microshift/config.yaml:

# ...
storage:
  driver: "none"
# ...

where:

storage.driver.none: Specifies the driver to disable. Valid values are none or lvms.

Note: By default, the driver value is empty or null and LVMS is deployed.

Start MicroShift after the driver field is specified with a supported value in the /etc/microshift/config.yaml file by running the following command:

$ sudo systemctl enable --now microshift

Note: MicroShift does not redeploy the disabled components after a restart operation.
2.5. Uninstalling the CSI snapshot implementation
To remove the Container Storage Interface (CSI) snapshot capability from your cluster, uninstall the CSI snapshot implementation.
Prerequisites
- MicroShift is installed and running.
- The CSI snapshot implementation is deployed on the MicroShift node.
Procedure
Uninstall the CSI snapshot implementation by running the following command:

$ oc delete -n kube-system deployment.apps/snapshot-controller

Example output

deployment.apps "snapshot-controller" deleted
2.6. Uninstalling the CSI driver implementation
To remove the Container Storage Interface (CSI) integration from your cluster, uninstall the CSI driver implementation.
Prerequisites
- MicroShift is installed and running.
- The CSI driver implementation is deployed on the MicroShift node.
Procedure
1. Delete the lvmclusters object by running the following command:

   $ oc delete -n openshift-storage lvmclusters.lvm.topolvm.io/lvms

   Example output

   lvmcluster.lvm.topolvm.io "lvms" deleted

2. Delete the lvms-operator by running the following command:

   $ oc delete -n openshift-storage deployment.apps/lvms-operator

   Example output

   deployment.apps "lvms-operator" deleted

3. Delete the topolvm-provisioner StorageClass by running the following command:

   $ oc delete storageclasses.storage.k8s.io/topolvm-provisioner

   Example output

   storageclass.storage.k8s.io "topolvm-provisioner" deleted
2.7. LVMS deployment
To ensure local storage is ready for use, MicroShift automatically deploys LVMS into the openshift-storage namespace at startup. This automated process prepares the node for storage operations immediately, eliminating the need for manual installation.
LVMS uses StorageCapacity tracking to ensure that pods with an LVMS PVC are not scheduled if the requested storage is greater than the free storage of the volume group. For more information about StorageCapacity tracking, see "Storage Capacity".
Additional resources
2.8. Limitations to configure the size of the devices used in LVM Storage
To ensure your devices are compatible with storage operations, review the size configuration limitations in LVM Storage. Adhering to these constraints prevents provisioning failures by ensuring selected devices meet the required capacity specifications.
When provisioning storage by using LVM Storage, the following factors limit device size:
- The total storage size that you can provision is limited by the size of the underlying Logical Volume Manager (LVM) thin pool and the over-provisioning factor.
The size of the logical volume depends on the size of the Physical Extent (PE) and the Logical Extent (LE).
- You can define the size of PE and LE during the physical and logical device creation.
- The default PE and LE size is 4 MiB.
- If the size of the PE is increased, the maximum size of the LVs is determined by the kernel limits and your disk space.
- With the default PE and LE size, the size limit for Red Hat Enterprise Linux (RHEL) 9 is 8 EB.
The following are the minimum storage sizes that you can request for each file system type:
- block: 8 MiB
- xfs: 300 MiB
- ext4: 32 MiB
2.9. Creating an LVMS configuration file
To customize storage settings, create an LVMS configuration file named lvmd.yaml. You must place this file in the /etc/microshift/ directory to ensure MicroShift detects and applies your configuration at startup.
Procedure
To create the lvmd.yaml configuration file, run the following command:

$ sudo cp /etc/microshift/lvmd.yaml.default /etc/microshift/lvmd.yaml
2.10. Basic LVMS configuration example
To customize storage operations, pass through your LVM configuration to MicroShift. With this flexibility, you can define custom volume groups, thin volume provisioning parameters, and reserved unallocated space by editing the LVMS configuration file.
You must restart MicroShift to deploy configuration changes after editing the file.
If you need to take volume snapshots, you must use thin provisioning in your lvmd.yaml file. If you do not need to take volume snapshots, you can use thick volumes.
The following lvmd.yaml example file shows a basic LVMS configuration:
LVMS configuration example
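A basic lvmd.yaml along these lines is one possibility, reconstructed from the field descriptions below as a hedged sketch: the socket-name value is the documented default, and the volume group name microshift mirrors the default VG name described earlier in this chapter.

```yaml
socket-name: /run/lvmd/lvmd.socket   # documented default gRPC socket endpoint
device-classes:
  - name: default
    volume-group: microshift         # placeholder: use the name of your VG
    spare-gb: 0                      # capacity in GB left unallocated in the VG
    default: true                    # this device class is used by default
```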
where:

socket-name: Specifies the UNIX domain socket endpoint of gRPC. Defaults to /run/lvmd/lvmd.socket. Takes a string value.

device-classes: Specifies a list of maps for the settings of each device-class.

device-classes.name: Specifies the name of the device-class. Takes a string value.

device-classes.volume-group: Specifies the volume group in which the device-class creates the logical volumes. Takes a string value.

device-classes.spare-gb: Specifies the storage capacity in GB to be left unallocated in the volume group. Defaults to 0. Takes an unsigned 64-bit integer.

device-classes.default: Specifies that the device-class is used by default. Defaults to false. At least one device class in the YAML file must have this value set to true. Takes a boolean value.
A race condition prevents LVMS from accurately tracking the allocated space and preserving the spare-gb for a device class when multiple PVCs are created simultaneously. Use separate volume groups and device classes to protect the storage of highly dynamic workloads from each other.
2.11. Using the LVMS
To automatically provision and mount a logical volume to a pod, use the LVMS default StorageClass. By creating a PersistentVolumeClaim object without defining the .spec.storageClassName field, you trigger the dynamic provisioning of a PersistentVolume from this default resource.
Use the following procedure to provision and mount a logical volume to a pod.
Procedure
To provision and mount a logical volume to a pod, run the following command:
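A minimal sketch of such a manifest is shown below, assuming the default LVMS storage class: a PersistentVolumeClaim with no .spec.storageClassName, plus a pod that mounts the claimed volume. The object names and container image are illustrative, not prescribed by MicroShift; apply the manifest with oc apply -f.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-lv-pvc                 # illustrative name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi                # LVMS provisions in 1 GB increments
---
apiVersion: v1
kind: Pod
metadata:
  name: my-pod                    # illustrative name
spec:
  containers:
    - name: app
      image: registry.access.redhat.com/ubi9/ubi-minimal   # example image
      command: ["sleep", "infinity"]
      volumeMounts:
        - name: my-volume
          mountPath: /mnt         # the logical volume is mounted here
  volumes:
    - name: my-volume
      persistentVolumeClaim:
        claimName: my-lv-pvc      # must match the PVC name above
```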
2.11.1. Device classes
To define custom storage groups, create custom device classes by adding a device-classes array to your logical volume manager storage (LVMS) configuration. With this configuration, you can enable MicroShift to categorize devices based on your specific storage requirements.
Add the array to the /etc/microshift/lvmd.yaml configuration file. A single device class must be set as the default. You must restart MicroShift for configuration changes to take effect.
Removing a device class while there are still persistent volumes or VolumeSnapshotContent objects connected to that device class breaks both thick and thin provisioning.
You can define multiple device classes in the device-classes array. These classes can be a mix of thick and thin volume configurations.
Example of a mixed device-class array
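A mixed array might be sketched as follows. The device-class and volume group names are placeholders, and the type and thin-pool fields follow the upstream TopoLVM lvmd schema; verify them against your LVMS version.

```yaml
socket-name: /run/lvmd/lvmd.socket
device-classes:
  # Thick device class: thick-provisioned LVs, no snapshot support
  - name: ssd-thick                # placeholder name
    volume-group: ssd-vg           # placeholder VG
    spare-gb: 0
    default: true
  # Thin device class: required for volume snapshots
  - name: ssd-thin                 # placeholder name
    volume-group: ssd-vg
    spare-gb: 0
    type: thin
    thin-pool:
      name: thinpool               # placeholder thin pool name
      overprovision-ratio: 10
```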
- device-classes.spare-gb: Specifies the spare capacity. When you set this value to anything other than 0, more space can be allocated than expected.
- device-classes.lvcreate-options: Specifies extra arguments to pass to the lvcreate command, such as --type=<type>. Neither MicroShift nor the LVMS verifies lvcreate-options values. These optional values are passed as is to the lvcreate command. Ensure that the options specified here are correct.