Chapter 6. Dynamic storage using the LVMS plugin
MicroShift enables dynamic storage provisioning that is ready for immediate use with the logical volume manager storage (LVMS) Container Storage Interface (CSI) provider. The LVMS plugin is the Red Hat downstream version of TopoLVM, a CSI plugin for managing logical volume management (LVM) logical volumes (LVs) for Kubernetes.
LVMS provisions new LVM logical volumes for container workloads with appropriately configured persistent volume claims (PVCs). Each PVC references a storage class that represents an LVM Volume Group (VG) on the host node. LVs are only provisioned for scheduled pods.
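For example, a PVC served by LVMS could look like the following sketch; the claim name my-claim is illustrative, and topolvm-provisioner is the storage class that LVMS deploys:

```yaml
# Sketch of a PVC served by LVMS; "my-claim" is an illustrative name
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: topolvm-provisioner
```

Because logical volumes are only provisioned for scheduled pods, the claim remains pending until a pod that uses it is scheduled.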
6.1. LVMS system requirements
Using LVMS in MicroShift requires the following system specifications.
6.1.1. Volume group name
If you did not configure LVMS in an lvmd.yaml file placed in the /etc/microshift/ directory, MicroShift attempts to assign a default volume group (VG) dynamically by running the vgs command.
				
- MicroShift assigns a default VG when only one VG is found.
- If more than one VG is present, the VG named microshift is assigned as the default.
- If a VG named microshift does not exist, LVMS is not deployed.
If there are no volume groups on the MicroShift host, LVMS is disabled.
If you want to use a specific VG, you must configure LVMS to select that VG. You can change the default name of the VG in the configuration file. For details, read the "Configuring the LVMS" section of this document.
After MicroShift starts, you can update the lvmd.yaml file to include or remove VGs. To implement changes, you must restart MicroShift. If the lvmd.yaml file is deleted, MicroShift attempts to find a default VG again.
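For example, a minimal lvmd.yaml that points LVMS at a specific VG might look like the following sketch; the VG name my-vg is illustrative:

```yaml
# /etc/microshift/lvmd.yaml -- minimal sketch; "my-vg" is an
# illustrative volume group name
device-classes:
  - name: default
    volume-group: my-vg
    default: true
```

Restart MicroShift after saving the file so that the change takes effect.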
				
6.1.2. Volume size increments
					The LVMS provisions storage in increments of 1 gigabyte (GB). Storage requests are rounded up to the nearest GB. When the capacity of a VG is less than 1 GB, the PersistentVolumeClaim registers a ProvisioningFailed event, for example:
				
Example output
Warning  ProvisioningFailed    3s (x2 over 5s)  topolvm.cybozu.com_topolvm-controller-858c78d96c-xttzp_0fa83aef-2070-4ae2-bcb9-163f818dcd9f failed to provision volume with StorageClass "topolvm-provisioner": rpc error: code = ResourceExhausted desc = no enough space left on VG: free=(BYTES_INT), requested=(BYTES_INT)
6.2. Disabling and uninstalling LVMS CSI provider and CSI snapshot deployments
You can reduce the use of runtime resources such as RAM, CPU, and storage by removing or disabling the following storage components:
- You can configure MicroShift to disable the built-in logical volume manager storage (LVMS) Container Storage Interface (CSI) provider.
- You can configure MicroShift to disable the Container Storage Interface (CSI) snapshot capabilities.
- You can uninstall the installed CSI implementations by using oc commands.
Automated uninstallation is not supported because it can cause orphaning of the provisioned volumes. Without the LVMS CSI driver, the node does not detect the underlying storage interface and cannot perform provisioning and deprovisioning or mounting and unmounting operations.
You can configure MicroShift to disable the CSI provider and CSI snapshot capabilities only before installing and running MicroShift. After MicroShift is installed and running, you must update the configuration file and uninstall the components.
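For orientation, a config.yaml that disables both components before MicroShift first starts might contain the following sketch; the individual settings are explained in the procedures that follow:

```yaml
# /etc/microshift/config.yaml -- sketch that disables the LVMS CSI
# driver and the optional CSI snapshot components
storage:
  driver: none
  optionalCsiComponents:
    - none
```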
6.3. Disabling deployments that run CSI snapshot implementations
Use the following procedure to disable installation of the CSI implementation pods.
This procedure is for users who define the configuration file before installing and running MicroShift. If MicroShift is already started, the CSI snapshot implementation is running and you must remove it manually by following the uninstallation instructions.
MicroShift does not delete CSI snapshot implementation pods. You must configure MicroShift to disable installation of the CSI snapshot implementation pods during the startup process.
Procedure
- Disable installation of the CSI snapshot controller by entering the optionalCsiComponents value under the storage section of the MicroShift configuration file in /etc/microshift/config.yaml:
  # ...
  storage:
    optionalCsiComponents:
      - none 1
  # ...
  1 Accepted values are:
    - Not defining optionalCsiComponents.
    - Specifying the optionalCsiComponents field with an empty value ([]) or a single empty string element ([""]).
    - Specifying optionalCsiComponents with one of the accepted values, which are snapshot-controller or none. A value of none is mutually exclusive with all other values.
  Note: If the optionalCsiComponents value is empty or null, MicroShift defaults to deploying the snapshot-controller.
- After the optionalCsiComponents field is specified with a supported value in the config.yaml file, start MicroShift by running the following command:
  $ sudo systemctl start microshift
  Note: MicroShift does not redeploy the disabled components after a restart.
6.4. Disabling deployments that run the CSI driver implementations
Use the following procedure to disable installation of the CSI implementation pods. MicroShift does not delete CSI driver implementation pods. You must configure MicroShift to disable installation of the CSI driver implementation pods during the startup process.
This procedure is for defining the configuration file before installing and running MicroShift. If MicroShift is already started, then the CSI driver implementation is running. You must manually remove it by following the uninstallation instructions.
Procedure
- Disable installation of the CSI driver by entering the driver value under the storage section of the MicroShift configuration file in /etc/microshift/config.yaml:
  # ...
  storage:
    driver: none 1
  # ...
  1 Valid values are none or lvms.
  Note: By default, the driver value is empty or null and LVMS is deployed.
- Start MicroShift after the driver field is specified with a supported value in the /etc/microshift/config.yaml file by running the following command:
  $ sudo systemctl enable --now microshift
  Note: MicroShift does not redeploy the disabled components after a restart operation.
6.5. Uninstalling the CSI snapshot implementation
To uninstall the installed CSI snapshot implementation, use the following procedure.
Prerequisites
- MicroShift is installed and running.
- The CSI snapshot implementation is deployed on the MicroShift node.
Procedure
- Uninstall the CSI snapshot implementation by running the following command:
  $ oc delete -n kube-system deployment.apps/snapshot-controller
  Example output
  deployment.apps "snapshot-controller" deleted
6.6. Uninstalling the CSI driver implementation
To uninstall the installed CSI driver implementation, use the following procedure.
Prerequisites
- MicroShift is installed and running.
- The CSI driver implementation is deployed on the MicroShift node.
Procedure
- Delete the lvmclusters object by running the following command:
  $ oc delete -n openshift-storage lvmclusters.lvm.topolvm.io/lvms
  Example output
  lvmcluster.lvm.topolvm.io "lvms" deleted
- Delete the lvms-operator by running the following command:
  $ oc delete -n openshift-storage deployment.apps/lvms-operator
  Example output
  deployment.apps "lvms-operator" deleted
- Delete the topolvm-provisioner StorageClass by running the following command:
  $ oc delete storageclasses.storage.k8s.io/topolvm-provisioner
  Example output
  storageclass.storage.k8s.io "topolvm-provisioner" deleted
6.7. LVMS deployment
LVMS is automatically deployed onto the node in the openshift-storage namespace after MicroShift starts.
				LVMS uses StorageCapacity tracking to ensure that pods with an LVMS PVC are not scheduled if the requested storage is greater than the free storage of the volume group. For more information about StorageCapacity tracking, read Storage Capacity.
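StorageCapacity tracking works through CSIStorageCapacity objects that report, per storage class and node, how much capacity remains. A hypothetical object, with an illustrative node name and capacity value, might look like:

```yaml
# Hypothetical CSIStorageCapacity object; node name and capacity
# are illustrative values
apiVersion: storage.k8s.io/v1
kind: CSIStorageCapacity
metadata:
  name: example-capacity
  namespace: openshift-storage
storageClassName: topolvm-provisioner
capacity: 30Gi
nodeTopology:
  matchLabels:
    topology.topolvm.io/node: example-node
```

The scheduler skips nodes whose reported capacity is smaller than the PVC request.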
			
6.8. Limitations to configure the size of the devices used in LVM Storage
The limitations to configure the size of the devices that you can use to provision storage using LVM Storage are as follows:
- The total storage size that you can provision is limited by the size of the underlying Logical Volume Manager (LVM) thin pool and the over-provisioning factor.
- The size of the logical volume depends on the size of the Physical Extent (PE) and the Logical Extent (LE). You can define the size of the PE and LE during the physical and logical device creation.
- The default PE and LE size is 4 MB.
- If the size of the PE is increased, the maximum size of the LVM is determined by the kernel limits and your disk space.
- The size limit for Red Hat Enterprise Linux (RHEL) 9 using the default PE and LE size is 8 EB.
- The following are the minimum storage sizes that you can request for each file system type:
  - block: 8 MiB
  - xfs: 300 MiB
  - ext4: 32 MiB
6.9. Creating an LVMS configuration file
				When MicroShift runs, it uses LVMS configuration from /etc/microshift/lvmd.yaml, if provided. You must place any configuration files that you create into the /etc/microshift/ directory.
			
Procedure
- To create the lvmd.yaml configuration file, run the following command:
  $ sudo cp /etc/microshift/lvmd.yaml.default /etc/microshift/lvmd.yaml
6.10. Basic LVMS configuration example
MicroShift supports passing through your LVM configuration and allows you to specify custom volume groups, thin volume provisioning parameters, and reserved unallocated volume group space. You can edit the LVMS configuration file you created at any time. You must restart MicroShift to deploy configuration changes after editing the file.
If you need to take volume snapshots, you must use thin provisioning in your lvmd.yaml file. If you do not need to take volume snapshots, you can use thick volumes.
				The following lvmd.yaml example file shows a basic LVMS configuration:
			
LVMS configuration example
socket-name: /run/lvmd/lvmd.socket 1
device-classes: 2
  - name: default 3
    volume-group: microshift 4
    spare-gb: 0 5
    default: true 6
1 String. The UNIX domain socket endpoint of gRPC. Defaults to '/run/lvmd/lvmd.socket'.
2 A list of maps for the settings for each device-class.
3 String. The name of the device-class.
4 String. The group where the device-class creates the logical volumes.
5 Unsigned 64-bit integer. Storage capacity in GB to be left unallocated in the volume group. Defaults to 0.
6 Boolean. Indicates that the device-class is used by default. Defaults to false. At least one device-class must set this to true.
A race condition prevents LVMS from accurately tracking the allocated space and preserving the spare-gb for a device class when multiple PVCs are created simultaneously. Use separate volume groups and device classes to protect the storage of highly dynamic workloads from each other.
6.11. Using the LVMS
LVMS is deployed with a default StorageClass. Any PersistentVolumeClaim object without a .spec.storageClassName defined automatically has a PersistentVolume provisioned from the default StorageClass. Use the following procedure to provision and mount a logical volume to a pod.
Procedure
- To provision and mount a logical volume to a pod, create a PersistentVolumeClaim and a pod that mounts the claimed volume, then apply them with the oc apply command.
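A minimal sketch of such a manifest pair, assuming illustrative names (my-claim, my-pod) and an image choice that is not prescribed by MicroShift:

```yaml
# Sketch: a PVC plus a pod that mounts it; names and image are
# illustrative. The claim omits .spec.storageClassName, so the
# default LVMS StorageClass provisions the volume.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
    - name: app
      image: registry.access.redhat.com/ubi9/ubi-minimal
      command: ["sleep", "infinity"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: my-claim
```

Apply both objects with oc apply -f <file>, then verify that the claim is bound with oc get pvc my-claim.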
6.11.1. Device classes
					You can create custom device classes by adding a device-classes array to your logical volume manager storage (LVMS) configuration. Add the array to the /etc/microshift/lvmd.yaml configuration file. A single device class must be set as the default. You must restart MicroShift for configuration changes to take effect.
				
						Removing a device class while there are still persistent volumes or VolumeSnapshotContent objects connected to that device class breaks both thick and thin provisioning.
					
					You can define multiple device classes in the device-classes array. These classes can be a mix of thick and thin volume configurations.
				
Example of a mixed device-class array
socket-name: /run/lvmd/lvmd.socket
device-classes:
  - name: ssd-thick
    volume-group: ssd
    spare-gb: 0 1
    default: true
  - name: ssd-thin
    volume-group: ssd
    type: thin
    spare-gb: 0
    thin-pool:
      name: pool0
      overprovision-ratio: 10
  - name: hdd
    volume-group: hdd
    spare-gb: 0
    lvcreate-options: 2
      - --type=raid1
1 When you set the spare capacity to anything other than 0, more space can be allocated than expected.
2 Extra arguments to pass to the lvcreate command, such as --type=<type>. Neither MicroShift nor LVMS verifies lvcreate-options values. These optional values are passed as is to the lvcreate command. Ensure that the options specified here are correct.