Chapter 5. Persistent storage using local storage
5.1. Local storage overview
You can use any of the following solutions to provision local storage:
- HostPath Provisioner (HPP)
- Local Storage Operator (LSO)
- Logical Volume Manager (LVM) Storage
These solutions support provisioning only node-local storage. The workloads are bound to the nodes that provide the storage. If the node becomes unavailable, the workload also becomes unavailable. To maintain workload availability despite node failures, you must ensure storage data replication through active or passive replication mechanisms.
5.1.1. Overview of HostPath Provisioner functionality
You can perform the following actions using HostPath Provisioner (HPP):
- Map the host filesystem paths to storage classes for provisioning local storage.
- Statically create storage classes to configure filesystem paths on a node for storage consumption.
- Statically provision Persistent Volumes (PVs) based on the storage class.
- Create workloads and PersistentVolumeClaims (PVCs) while being aware of the underlying storage topology.
HPP is available in upstream Kubernetes. However, it is not recommended to use HPP from upstream Kubernetes.
5.1.2. Overview of Local Storage Operator functionality
You can perform the following actions using Local Storage Operator (LSO):
- Assign the storage devices (disks or partitions) to the storage classes without modifying the device configuration.
- Statically provision PVs and storage classes by configuring the LocalVolume custom resource (CR).
- Create workloads and PVCs while being aware of the underlying storage topology.
LSO is developed and delivered by Red Hat.
5.1.3. Overview of LVM Storage functionality
You can perform the following actions using Logical Volume Manager (LVM) Storage:
- Configure storage devices (disks or partitions) as lvm2 volume groups and expose the volume groups as storage classes.
- Create workloads and request storage by using PVCs without considering the node topology.
LVM Storage uses the TopoLVM CSI driver to dynamically allocate storage space to the nodes in the topology and provision PVs.
LVM Storage is developed and maintained by Red Hat. The CSI driver provided with LVM Storage is the upstream project "topolvm".
5.1.4. Comparison of LVM Storage, LSO, and HPP
The following sections compare the functionalities provided by LVM Storage, Local Storage Operator (LSO), and HostPath Provisioner (HPP) to provision local storage.
5.1.4.1. Comparison of the support for storage types and filesystems
The following table compares the support for storage types and filesystems provided by LVM Storage, Local Storage Operator (LSO), and HostPath Provisioner (HPP) to provision local storage:
| Functionality | LVM Storage | LSO | HPP |
|---|---|---|---|
| Support for block storage | Yes | Yes | No |
| Support for file storage | Yes | Yes | Yes |
| Support for object storage [1] | No | No | No |
| Available filesystems | ext4, xfs | ext4, xfs | Any mounted filesystem available on the node is supported. |
1. None of the solutions (LVM Storage, LSO, and HPP) provide support for object storage. Therefore, if you want to use object storage, you need an S3 object storage solution, such as MultiClusterGateway from Red Hat OpenShift Data Foundation. All of the solutions can serve as underlying storage providers for the S3 object storage solutions.
5.1.4.2. Comparison of the support for core functionalities
The following table compares how LVM Storage, Local Storage Operator (LSO), and HostPath Provisioner (HPP) support core functionalities for provisioning local storage:
| Functionality | LVM Storage | LSO | HPP |
|---|---|---|---|
| Support for automatic file system formatting | Yes | Yes | N/A |
| Support for dynamic provisioning | Yes | No | No |
| Support for using software Redundant Array of Independent Disks (RAID) arrays | Yes Supported on 4.15 and later. | Yes | Yes |
| Support for transparent disk encryption | Yes Supported on 4.16 and later. | Yes | Yes |
| Support for volume based disk encryption | No | No | No |
| Support for disconnected installation | Yes | Yes | Yes |
| Support for PVC expansion | Yes | No | No |
| Support for volume snapshots and volume clones | Yes | No | No |
| Support for thin provisioning | Yes. Devices are thin-provisioned by default. | Yes. You can configure the devices to point to the thin-provisioned volumes. | Yes. You can configure a path to point to the thin-provisioned volumes. |
| Support for automatic disk discovery and setup | Yes. Automatic disk discovery is available during installation and runtime. You can also dynamically add the disks to the LVMCluster custom resource (CR). | Technology Preview. Automatic disk discovery is available during installation. | No |
5.1.4.3. Comparison of performance and isolation capabilities
The following table compares the performance and isolation capabilities of LVM Storage, Local Storage Operator (LSO), and HostPath Provisioner (HPP) in provisioning local storage.
| Functionality | LVM Storage | LSO | HPP |
|---|---|---|---|
| Performance | I/O speed is shared for all workloads that use the same storage class. Block storage allows direct I/O operations. Thin provisioning can affect the performance. | I/O depends on the LSO configuration. Block storage allows direct I/O operations. | I/O speed is shared for all workloads that use the same storage class. The restrictions imposed by the underlying filesystem can affect the I/O speed. |
| Isolation boundary [1] | LVM Logical Volume (LV). It provides a higher level of isolation compared to HPP. | LVM Logical Volume (LV). It provides a higher level of isolation compared to HPP. | Filesystem path. It provides a lower level of isolation compared to LSO and LVM Storage. |
- Isolation boundary refers to the level of separation between different workloads or applications that use local storage resources.
5.1.4.4. Comparison of the support for additional functionalities
The following table compares the additional features provided by LVM Storage, Local Storage Operator (LSO), and HostPath Provisioner (HPP) to provision local storage:
| Functionality | LVM Storage | LSO | HPP |
|---|---|---|---|
| Support for generic ephemeral volumes | Yes | No | No |
| Support for CSI inline ephemeral volumes | No | No | No |
| Support for storage topology | Yes Supports CSI node topology | Yes LSO provides partial support for storage topology through node tolerations. | No |
| Support for ReadWriteMany (RWX) access mode [1] | No | No | No |
1. All of the solutions (LVM Storage, LSO, and HPP) have the ReadWriteOnce (RWO) access mode. RWO access mode allows access from multiple pods on the same node.
5.2. Persistent storage using local volumes
OpenShift Container Platform can be provisioned with persistent storage by using local volumes. Local persistent volumes allow you to access local storage devices, such as a disk or partition, by using the standard persistent volume claim interface.
Local volumes can be used without manually scheduling pods to nodes because the system is aware of the volume node constraints. However, local volumes are still subject to the availability of the underlying node and are not suitable for all applications.
Local volumes can only be used as a statically created persistent volume.
5.2.1. Installing the Local Storage Operator
The Local Storage Operator is not installed in OpenShift Container Platform by default. Use the following procedure to install and configure this Operator to enable local volumes in your cluster.
Prerequisites
- Access to the OpenShift Container Platform web console or command-line interface (CLI).
Procedure
Create the openshift-local-storage project:

$ oc adm new-project openshift-local-storage

Optional: Allow local storage creation on infrastructure nodes.
You might want to use the Local Storage Operator to create volumes on infrastructure nodes in support of components such as logging and monitoring.
You must adjust the default node selector so that the Local Storage Operator includes the infrastructure nodes, and not just worker nodes.
To block the Local Storage Operator from inheriting the cluster-wide default selector, enter the following command:
$ oc annotate namespace openshift-local-storage openshift.io/node-selector=''

Optional: Allow local storage to run on the management pool of CPUs in single-node deployments.
Use the Local Storage Operator in single-node deployments and allow the use of CPUs that belong to the management pool. Perform this step on single-node installations that use management workload partitioning. To allow the Local Storage Operator to run on the management CPU pool, run the following command:

$ oc annotate namespace openshift-local-storage workload.openshift.io/allowed='management'
From the UI
To install the Local Storage Operator from the web console, follow these steps:
- Log in to the OpenShift Container Platform web console.
- Navigate to Operators → OperatorHub.
- Type Local Storage into the filter box to locate the Local Storage Operator.
- Click Install.
- On the Install Operator page, select A specific namespace on the cluster. Select openshift-local-storage from the drop-down menu.
- Adjust the values for Update Channel and Approval Strategy to the values that you want.
- Click Install.
Once finished, the Local Storage Operator will be listed in the Installed Operators section of the web console.
From the CLI
Install the Local Storage Operator from the CLI.
Create an object YAML file to define an Operator group and subscription for the Local Storage Operator, such as openshift-local-storage.yaml:

Example openshift-local-storage.yaml
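A minimal sketch of such a file follows; the object names, update channel, and catalog source are assumptions and should be adjusted for your cluster:

```yaml
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: local-operator-group          # assumed object name
  namespace: openshift-local-storage
spec:
  targetNamespaces:
    - openshift-local-storage
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: local-storage-operator
  namespace: openshift-local-storage
spec:
  channel: stable                     # assumed update channel
  installPlanApproval: Automatic      # 1
  name: local-storage-operator
  source: redhat-operators            # assumed catalog source
  sourceNamespace: openshift-marketplace
```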
1. The user approval policy for an install plan.
Create the Local Storage Operator object by entering the following command:
$ oc apply -f openshift-local-storage.yaml

At this point, the Operator Lifecycle Manager (OLM) is aware of the Local Storage Operator. A ClusterServiceVersion (CSV) for the Operator should appear in the target namespace, and APIs provided by the Operator should be available for creation.
Verify local storage installation by checking that all pods and the Local Storage Operator have been created:
Check that all the required pods have been created:
$ oc -n openshift-local-storage get pods

Example output

NAME                                      READY   STATUS    RESTARTS   AGE
local-storage-operator-746bf599c9-vlt5t   1/1     Running   0          19m

Check the ClusterServiceVersion (CSV) YAML manifest to see that the Local Storage Operator is available in the openshift-local-storage project:

$ oc get csvs -n openshift-local-storage

Example output

NAME                                         DISPLAY         VERSION               REPLACES   PHASE
local-storage-operator.4.2.26-202003230335   Local Storage   4.2.26-202003230335              Succeeded
After all checks have passed, the Local Storage Operator is installed successfully.
5.2.2. Provisioning local volumes by using the Local Storage Operator
Local volumes cannot be created by dynamic provisioning. Instead, persistent volumes can be created by the Local Storage Operator. The local volume provisioner looks for any file system or block volume devices at the paths specified in the defined resource.
Prerequisites
- The Local Storage Operator is installed.
You have a local disk that meets the following conditions:
- It is attached to a node.
- It is not mounted.
- It does not contain partitions.
Procedure
Create the local volume resource. This resource must define the nodes and paths to the local volumes.
Note: Do not use different storage class names for the same device. Doing so will create multiple persistent volumes (PVs).
Example: Filesystem
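The following is a minimal sketch of a LocalVolume resource that corresponds to the numbered callouts below; the node hostname and device identifier are placeholders:

```yaml
apiVersion: local.storage.openshift.io/v1
kind: LocalVolume
metadata:
  name: example-localvolume
  namespace: openshift-local-storage               # 1
spec:
  nodeSelector:                                    # 2
    nodeSelectorTerms:
      - matchExpressions:
          - key: kubernetes.io/hostname
            operator: In
            values:
              - worker-0.example.com               # placeholder hostname
  storageClassDevices:
    - storageClassName: local-sc                   # 3
      forceWipeDevicesAndDestroyAllData: false     # 4
      volumeMode: Filesystem                       # 5
      fsType: xfs                                  # 6
      devicePaths:                                 # 7
        - /dev/disk/by-id/wwn-0x5000000000000001   # 8 (placeholder device ID)
```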
1. The namespace where the Local Storage Operator is installed.
2. Optional: A node selector containing a list of nodes where the local storage volumes are attached. This example uses the node hostnames, obtained from oc get node. If a value is not defined, then the Local Storage Operator will attempt to find matching disks on all available nodes.
3. The name of the storage class to use when creating persistent volume objects. The Local Storage Operator automatically creates the storage class if it does not exist. Be sure to use a storage class that uniquely identifies this set of local volumes.
4. This setting defines whether or not to call wipefs, which removes partition table signatures (magic strings), making the disk ready to use for Local Storage Operator (LSO) provisioning. No data other than signatures is erased. The default is "false" (wipefs is not invoked). Setting forceWipeDevicesAndDestroyAllData to "true" can be useful in scenarios where previous data can remain on disks that need to be reused. In these scenarios, setting this field to true eliminates the need for administrators to erase the disks manually. Such cases can include single-node OpenShift (SNO) cluster environments where a node can be redeployed multiple times, or when using OpenShift Data Foundation (ODF), where previous data can remain on the disks planned to be consumed as object storage devices (OSDs).
5. The volume mode, either Filesystem or Block, that defines the type of local volumes.
   Note: A raw block volume (volumeMode: Block) is not formatted with a file system. Use this mode only if any application running on the pod can use raw block devices.
6. The file system that is created when the local volume is mounted for the first time.
7. The path containing a list of local storage devices to choose from.
8. Replace this value with the filepath to your actual local disks in the LocalVolume resource by-id, such as /dev/disk/by-id/wwn. PVs are created for these local disks when the provisioner is deployed successfully.
   Note: If you are running OpenShift Container Platform with RHEL KVM, you must assign a serial number to your VM disk. Otherwise, the VM disk cannot be identified after reboot. You can use the virsh edit <VM> command to add the <serial>mydisk</serial> definition.
Example: Block
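The following is a minimal sketch of a LocalVolume resource with block-mode devices that corresponds to the numbered callouts below; the node hostname and device identifier are placeholders:

```yaml
apiVersion: local.storage.openshift.io/v1
kind: LocalVolume
metadata:
  name: example-localvolume-block
  namespace: openshift-local-storage               # 1
spec:
  nodeSelector:                                    # 2
    nodeSelectorTerms:
      - matchExpressions:
          - key: kubernetes.io/hostname
            operator: In
            values:
              - worker-0.example.com               # placeholder hostname
  storageClassDevices:
    - storageClassName: local-block-sc             # 3
      forceWipeDevicesAndDestroyAllData: false     # 4
      volumeMode: Block                            # 5
      devicePaths:                                 # 6
        - /dev/disk/by-id/wwn-0x5000000000000002   # 7 (placeholder device ID)
```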
1. The namespace where the Local Storage Operator is installed.
2. Optional: A node selector containing a list of nodes where the local storage volumes are attached. This example uses the node hostnames, obtained from oc get node. If a value is not defined, then the Local Storage Operator will attempt to find matching disks on all available nodes.
3. The name of the storage class to use when creating persistent volume objects.
4. This setting defines whether or not to call wipefs, which removes partition table signatures (magic strings), making the disk ready to use for Local Storage Operator (LSO) provisioning. No data other than signatures is erased. The default is "false" (wipefs is not invoked). Setting forceWipeDevicesAndDestroyAllData to "true" can be useful in scenarios where previous data can remain on disks that need to be reused. In these scenarios, setting this field to true eliminates the need for administrators to erase the disks manually. Such cases can include single-node OpenShift (SNO) cluster environments where a node can be redeployed multiple times, or when using OpenShift Data Foundation (ODF), where previous data can remain on the disks planned to be consumed as object storage devices (OSDs).
5. The volume mode, either Filesystem or Block, that defines the type of local volumes.
6. The path containing a list of local storage devices to choose from.
7. Replace this value with the filepath to your actual local disks in the LocalVolume resource by-id, such as /dev/disk/by-id/wwn. PVs are created for these local disks when the provisioner is deployed successfully.
Note: If you are running OpenShift Container Platform with RHEL KVM, you must assign a serial number to your VM disk. Otherwise, the VM disk cannot be identified after reboot. You can use the virsh edit <VM> command to add the <serial>mydisk</serial> definition.

Create the local volume resource in your OpenShift Container Platform cluster. Specify the file you just created:
$ oc create -f <local-volume>.yaml

Verify that the provisioner was created and that the corresponding daemon sets were created:
$ oc get all -n openshift-local-storage

Note the desired and current number of daemon set processes in the output. A desired count of 0 indicates that the label selectors were invalid.

Verify that the persistent volumes were created:
$ oc get pv

Example output

NAME                CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
local-pv-1cec77cf   100Gi      RWO            Delete           Available           local-sc                88m
local-pv-2ef7cd2a   100Gi      RWO            Delete           Available           local-sc                82m
local-pv-3fa1c73    100Gi      RWO            Delete           Available           local-sc                48m
Editing the LocalVolume object does not change the fsType or volumeMode of existing persistent volumes because doing so might result in a destructive operation.
5.2.3. Provisioning local volumes without the Local Storage Operator
Local volumes cannot be created by dynamic provisioning. Instead, persistent volumes can be created by defining the persistent volume (PV) in an object definition. The local volume provisioner looks for any file system or block volume devices at the paths specified in the defined resource.
Manual provisioning of PVs includes the risk of potential data leaks across PV reuse when PVCs are deleted. The Local Storage Operator is recommended for automating the life cycle of devices when provisioning local PVs.
Prerequisites
- Local disks are attached to the OpenShift Container Platform nodes.
Procedure
Define the PV. Create a file, such as example-pv-filesystem.yaml or example-pv-block.yaml, with the PersistentVolume object definition. This resource must define the nodes and paths to the local volumes.

Note: Do not use different storage class names for the same device. Doing so will create multiple PVs.
example-pv-filesystem.yaml
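The following is a minimal sketch of such a PersistentVolume definition that corresponds to the numbered callouts below; the capacity, device path, and node name are placeholders:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv-filesystem
spec:
  capacity:
    storage: 100Gi
  volumeMode: Filesystem                 # 1
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-sc             # 2
  local:
    path: /dev/xvdf                      # 3 (placeholder device path)
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - example-node           # placeholder node name
```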
1. The volume mode, either Filesystem or Block, that defines the type of PVs.
2. The name of the storage class to use when creating PV resources. Use a storage class that uniquely identifies this set of PVs.
3. The path containing a list of local storage devices to choose from, or a directory. You can only specify a directory with Filesystem volumeMode.
Note: A raw block volume (volumeMode: Block) is not formatted with a file system. Use this mode only if any application running on the pod can use raw block devices.

example-pv-block.yaml
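A minimal sketch of the block-mode variant, with placeholder capacity, device path, and node name:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv-block
spec:
  capacity:
    storage: 100Gi
  volumeMode: Block
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-sc
  local:
    path: /dev/xvdf                      # placeholder device path
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - example-node           # placeholder node name
```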
Create the PV resource in your OpenShift Container Platform cluster. Specify the file you just created:

$ oc create -f <example-pv>.yaml

Verify that the local PV was created:

$ oc get pv

Example output

NAME                    CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                STORAGECLASS   REASON   AGE
example-pv-filesystem   100Gi      RWO            Delete           Available                        local-sc                3m47s
example-pv1             1Gi        RWO            Delete           Bound       local-storage/pvc1   local-sc                12h
example-pv2             1Gi        RWO            Delete           Bound       local-storage/pvc2   local-sc                12h
example-pv3             1Gi        RWO            Delete           Bound       local-storage/pvc3   local-sc                12h
5.2.4. Creating the local volume persistent volume claim
Local volumes must be statically created as a persistent volume claim (PVC) to be accessed by the pod.
Prerequisites
- Persistent volumes have been created using the local volume provisioner.
Procedure
Create the PVC using the corresponding storage class:
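The following is a minimal sketch of such a claim; the claim name, requested size, and storage class name are assumptions and must match your environment:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-local-claim              # placeholder claim name
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 100Gi                     # must fit within the backing local PV
  storageClassName: local-sc             # the storage class created for the local volumes
```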
Create the PVC in the OpenShift Container Platform cluster, specifying the file you just created:

$ oc create -f <local-pvc>.yaml
5.2.5. Attach the local claim
After a local volume has been mapped to a persistent volume claim, it can be specified inside a resource.
Prerequisites
- A persistent volume claim exists in the same namespace.
Procedure
Include the defined claim in the resource spec. The following example declares the persistent volume claim inside a pod:
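The following is a minimal sketch of such a pod that corresponds to the numbered callouts below; the pod name, image, and claim name are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-local-pod
spec:
  containers:
    - name: app
      image: registry.example.com/app:latest    # placeholder image
      volumeMounts:
        - name: local-disks                      # 1
          mountPath: /data                       # 2
  volumes:
    - name: local-disks
      persistentVolumeClaim:
        claimName: example-local-claim           # 3
```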
1. The name of the volume to mount.
2. The path inside the pod where the volume is mounted. Do not mount to the container root, /, or any path that is the same in the host and the container. This can corrupt your host system if the container is sufficiently privileged, such as the host /dev/pts files. It is safe to mount the host by using /host.
3. The name of the existing persistent volume claim to use.
Create the resource in the OpenShift Container Platform cluster, specifying the file you just created:
$ oc create -f <local-pod>.yaml
5.2.6. Automating discovery and provisioning for local storage devices
The Local Storage Operator automates local storage discovery and provisioning. With this feature, you can simplify installation when dynamic provisioning is not available during deployment, such as with bare metal, VMware, or AWS store instances with attached devices.
Automatic discovery and provisioning is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
Automatic discovery and provisioning is fully supported when used to deploy Red Hat OpenShift Data Foundation on-premise or with platform-agnostic deployment.
Use the following procedure to automatically discover local devices, and to automatically provision local volumes for selected devices.
Use the LocalVolumeSet object with caution. When you automatically provision persistent volumes (PVs) from local disks, the local PVs might claim all devices that match. If you are using a LocalVolumeSet object, make sure the Local Storage Operator is the only entity managing local devices on the node. Creating multiple instances of a LocalVolumeSet that target a node more than once is not supported.
Prerequisites
- You have cluster administrator permissions.
- You have installed the Local Storage Operator.
- You have attached local disks to OpenShift Container Platform nodes.
-
You have access to the OpenShift Container Platform web console and the
occommand-line interface (CLI).
Procedure
To enable automatic discovery of local devices from the web console:
- Click Operators → Installed Operators.
- In the openshift-local-storage namespace, click Local Storage.
- Click the Local Volume Discovery tab.
- Click Create Local Volume Discovery and then select either Form view or YAML view.
- Configure the LocalVolumeDiscovery object parameters.
- Click Create.
The Local Storage Operator creates a local volume discovery instance named auto-discover-devices.
To display a continuous list of available devices on a node:
- Log in to the OpenShift Container Platform web console.
- Navigate to Compute → Nodes.
- Click the node name that you want to open. The "Node Details" page is displayed.
- Select the Disks tab to display the list of the selected devices.
The device list updates continuously as local disks are added or removed. You can filter the devices by name, status, type, model, capacity, and mode.
To automatically provision local volumes for the discovered devices from the web console:
- Navigate to Operators → Installed Operators and select Local Storage from the list of Operators.
- Select Local Volume Set → Create Local Volume Set.
- Enter a volume set name and a storage class name.
- Choose All nodes or Select nodes to apply filters accordingly.
  Note: Only worker nodes are available, regardless of whether you filter using All nodes or Select nodes.
- Select the disk type, mode, size, and limit you want to apply to the local volume set, and click Create.
A message displays after several minutes, indicating that the "Operator reconciled successfully."
Alternatively, to provision local volumes for the discovered devices from the CLI:
Create an object YAML file to define the local volume set, such as local-volume-set.yaml, as shown in the following example:
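The following is a minimal sketch of such a LocalVolumeSet object that corresponds to the numbered callouts below; the object name, node hostname, and device filters are assumptions:

```yaml
apiVersion: local.storage.openshift.io/v1alpha1
kind: LocalVolumeSet
metadata:
  name: example-autodetect
  namespace: openshift-local-storage
spec:
  nodeSelector:
    nodeSelectorTerms:
      - matchExpressions:
          - key: kubernetes.io/hostname
            operator: In
            values:
              - worker-0.example.com       # placeholder hostname
  storageClassName: example-storageclass   # 1
  volumeMode: Filesystem
  fsType: xfs
  maxDeviceCount: 10
  deviceInclusionSpec:
    deviceTypes:                           # 2
      - disk
      - part
    deviceMechanicalProperties:
      - NonRotational
    minSize: 10G
```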
1. Determines the storage class that is created for persistent volumes that are provisioned from discovered devices. The Local Storage Operator automatically creates the storage class if it does not exist. Be sure to use a storage class that uniquely identifies this set of local volumes.
2. When using the local volume set feature, the Local Storage Operator does not support the use of logical volume management (LVM) devices.
Create the local volume set object:
$ oc apply -f local-volume-set.yaml

Verify that the local persistent volumes were dynamically provisioned based on the storage class:

$ oc get pv

Example output

NAME                CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
local-pv-1cec77cf   100Gi      RWO            Delete           Available           local-sc                88m
local-pv-2ef7cd2a   100Gi      RWO            Delete           Available           local-sc                82m
local-pv-3fa1c73    100Gi      RWO            Delete           Available           local-sc                48m
Results are deleted after they are removed from the node. Symlinks must be manually removed.
5.2.7. Using tolerations with Local Storage Operator pods
Taints can be applied to nodes to prevent them from running general workloads. To allow the Local Storage Operator to use tainted nodes, you must add tolerations to the Pod or DaemonSet definition. This allows the created resources to run on these tainted nodes.
You apply tolerations to the Local Storage Operator pod through the LocalVolume resource and apply taints to a node through the node specification. A taint on a node instructs the node to repel all pods that do not tolerate the taint. Using a specific taint that is not on other pods ensures that the Local Storage Operator pod can also run on that node.
Taints and tolerations consist of a key, value, and effect. As an argument, it is expressed as key=value:effect. An operator allows you to leave one of these parameters empty.
Prerequisites
- The Local Storage Operator is installed.
- Local disks are attached to OpenShift Container Platform nodes with a taint.
- Tainted nodes are expected to provision local storage.
Procedure
To configure local volumes for scheduling on tainted nodes:
Modify the YAML file that defines the Pod and add the LocalVolume spec, as shown in the following example:
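The following is a minimal sketch of a LocalVolume resource with tolerations that corresponds to the numbered callouts below; the taint key, value, and device path are placeholders:

```yaml
apiVersion: local.storage.openshift.io/v1
kind: LocalVolume
metadata:
  name: example-localvolume
  namespace: openshift-local-storage
spec:
  tolerations:
    - key: localstorage       # 1 (placeholder taint key)
      operator: Equal         # 2
      value: "local"          # 3
  storageClassDevices:
    - storageClassName: local-sc
      volumeMode: Block       # 4
      devicePaths:            # 5
        - /dev/xvdg           # placeholder device path
```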
1. Specify the key that you added to the node.
2. Specify the Equal operator to require the key/value parameters to match. If the operator is Exists, the system checks that the key exists and ignores the value. If the operator is Equal, then the key and value must match.
3. Specify the value local of the tainted node.
4. The volume mode, either Filesystem or Block, defining the type of the local volumes.
5. The path containing a list of local storage devices to choose from.
Optional: To create local persistent volumes on only tainted nodes, modify the YAML file and add the LocalVolume spec, as shown in the following example:

spec:
  tolerations:
    - key: node-role.kubernetes.io/master
      operator: Exists
The defined tolerations will be passed to the resulting daemon sets, allowing the diskmaker and provisioner pods to be created for nodes that contain the specified taints.
5.2.8. Local Storage Operator Metrics
OpenShift Container Platform provides the following metrics for the Local Storage Operator:
- lso_discovery_disk_count: total number of discovered devices on each node
- lso_lvset_provisioned_PV_count: total number of PVs created by LocalVolumeSet objects
- lso_lvset_unmatched_disk_count: total number of disks that the Local Storage Operator did not select for provisioning because of mismatching criteria
- lso_lvset_orphaned_symlink_count: number of devices with PVs that no longer match LocalVolumeSet object criteria
- lso_lv_orphaned_symlink_count: number of devices with PVs that no longer match LocalVolume object criteria
- lso_lv_provisioned_PV_count: total number of provisioned PVs for LocalVolume objects
To use these metrics, enable them by doing one of the following:
- When installing the Local Storage Operator from OperatorHub in the web console, select the Enable Operator recommended cluster monitoring on this Namespace checkbox.
- Manually add the openshift.io/cluster-monitoring=true label to the Operator namespace by running the following command:

  $ oc label ns/openshift-local-storage openshift.io/cluster-monitoring=true
For more information about metrics, see Accessing metrics as an administrator.
5.2.9. Deleting the Local Storage Operator resources
5.2.9.1. Removing a local volume or local volume set
Occasionally, local volumes and local volume sets must be deleted. While removing the entry in the resource and deleting the persistent volume is typically enough, if you want to reuse the same device path or have it managed by a different storage class, then additional steps are needed.
The following procedure outlines an example for removing a local volume. The same procedure can also be used to remove symlinks for a local volume set custom resource.
Prerequisites
- The persistent volume must be in a Released or Available state.
  Warning: Deleting a persistent volume that is still in use can result in data loss or corruption.
Procedure
Edit the previously created local volume to remove any unwanted disks.
Edit the cluster resource:
$ oc edit localvolume <local_volume_name> -n openshift-local-storage

- Navigate to the lines under devicePaths, and delete any representing unwanted disks.
Delete any persistent volumes created.
$ oc delete pv <pv_name>

Delete the directory and included symlinks on the node.

Warning: The following step involves accessing a node as the root user. Modifying the state of the node beyond the steps in this procedure could result in cluster instability.

$ oc debug node/<node_name> -- chroot /host rm -rf /mnt/local-storage/<sc_name>

Replace <sc_name> with the name of the storage class used to create the local volumes.
5.2.9.2. Uninstalling the Local Storage Operator
To uninstall the Local Storage Operator, you must remove the Operator and all created resources in the openshift-local-storage project.
Uninstalling the Local Storage Operator while local storage PVs are still in use is not recommended. While the PVs will remain after the Operator’s removal, there might be indeterminate behavior if the Operator is uninstalled and reinstalled without removing the PVs and local storage resources.
Prerequisites
- Access to the OpenShift Container Platform web console.
Procedure
Delete any local volume resources installed in the project, such as localvolume, localvolumeset, and localvolumediscovery, by running the following commands:

$ oc delete localvolume --all --all-namespaces
$ oc delete localvolumeset --all --all-namespaces
$ oc delete localvolumediscovery --all --all-namespaces

Uninstall the Local Storage Operator from the web console.
- Log in to the OpenShift Container Platform web console.
- Navigate to Operators → Installed Operators.
- Type Local Storage into the filter box to locate the Local Storage Operator.
- Click the Options menu at the end of the Local Storage Operator.
- Click Uninstall Operator.
- Click Remove in the window that appears.
The PVs created by the Local Storage Operator will remain in the cluster until deleted. After these volumes are no longer in use, delete them by running the following command:
$ oc delete pv <pv-name>

Delete the openshift-local-storage project by running the following command:

$ oc delete project openshift-local-storage
5.3. Persistent storage using hostPath
A hostPath volume in an OpenShift Container Platform cluster mounts a file or directory from the host node’s filesystem into your pod. Most pods will not need a hostPath volume, but it does offer a quick option for testing should an application require it.
The cluster administrator must configure pods to run as privileged. This grants access to pods in the same node.
5.3.1. Overview
OpenShift Container Platform supports hostPath mounting for development and testing on a single-node cluster.
In a production cluster, you would not use hostPath. Instead, a cluster administrator would provision a network resource, such as a GCE Persistent Disk volume, an NFS share, or an Amazon EBS volume. Network resources support the use of storage classes to set up dynamic provisioning.
A hostPath volume must be provisioned statically.
Do not mount to the container root, /, or any path that is the same in the host and the container. This can corrupt your host system if the container is sufficiently privileged. It is safe to mount the host by using /host. The following example shows the / directory from the host being mounted into the container at /host.
5.3.2. Statically provisioning hostPath volumes
A pod that uses a hostPath volume must be referenced by manual (static) provisioning.
Procedure
Define the persistent volume (PV) by creating a pv.yaml file with the PersistentVolume object definition:
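The following is a minimal sketch of such a definition that corresponds to the numbered callouts below; the volume name, capacity, and storage class name are assumptions:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume        # 1
  labels:
    type: local
spec:
  storageClassName: manual    # 2
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce           # 3
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /mnt/data           # 4
```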
1. The name of the volume. This name is how the volume is identified by persistent volume (PV) claims or pods.
2. Used to bind persistent volume claim (PVC) requests to the PV.
3. The volume can be mounted as read-write by a single node.
4. The configuration file specifies that the volume is at /mnt/data on the cluster's node. To avoid corrupting your host system, do not mount to the container root, /, or any path that is the same in the host and the container. You can safely mount the host by using /host.
Create the PV from the file:
$ oc create -f pv.yaml

Define the PVC by creating a pvc.yaml file with the PersistentVolumeClaim object definition:
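A minimal sketch of such a claim, assuming the manual storage class used by the PV above; the claim name and requested size are placeholders:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: task-pvc-volume       # placeholder claim name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: manual    # must match the storage class of the PV
```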
Create the PVC from the file:

$ oc create -f pvc.yaml
5.3.3. Mounting the hostPath share in a privileged pod
After the persistent volume claim has been created, it can be used by an application. The following example demonstrates mounting this share inside of a pod.
Prerequisites
- A persistent volume claim exists that is mapped to the underlying hostPath share.
Procedure
Create a privileged pod that mounts the existing persistent volume claim:
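The following is a minimal sketch of such a pod that corresponds to the numbered callouts below; the pod name, image, and claim name are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-name                              # 1
spec:
  containers:
    - name: app
      image: registry.example.com/app:latest  # placeholder image
      securityContext:
        privileged: true                      # 2
      volumeMounts:
        - mountPath: /data                    # 3
          name: local-storage
  volumes:
    - name: local-storage
      persistentVolumeClaim:
        claimName: task-pvc-volume            # 4 (placeholder claim name)
```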
1. The name of the pod.
2. The pod must run as privileged to access the node's storage.
3. The path to mount the host path share inside the privileged pod. Do not mount to the container root, /, or any path that is the same in the host and the container. This can corrupt your host system if the container is sufficiently privileged, such as the host /dev/pts files. It is safe to mount the host by using /host.
4. The name of the PersistentVolumeClaim object that has been previously created.
5.4. Persistent storage using Logical Volume Manager Storage
Logical Volume Manager (LVM) Storage uses LVM2 through the TopoLVM CSI driver to dynamically provision local storage on a cluster with limited resources.
You can create volume groups, persistent volume claims (PVCs), volume snapshots, and volume clones by using LVM Storage.
5.4.1. Logical Volume Manager Storage installation
You can install Logical Volume Manager (LVM) Storage on an OpenShift Container Platform cluster and configure it to dynamically provision storage for your workloads.
You can install LVM Storage by using the OpenShift Container Platform CLI (oc), OpenShift Container Platform web console, or Red Hat Advanced Cluster Management (RHACM).
When using LVM Storage on multi-node clusters, LVM Storage only supports provisioning local storage. LVM Storage does not support storage data replication mechanisms across nodes. You must ensure storage data replication through active or passive replication mechanisms to avoid a single point of failure.
5.4.1.1. Prerequisites to install LVM Storage
The prerequisites to install LVM Storage are as follows:
- Ensure that you have a minimum of 10 milliCPU and 100 MiB of RAM.
- Ensure that every managed cluster has dedicated disks that are used to provision storage. LVM Storage uses only those disks that are empty and do not contain file system signatures. To ensure that the disks are empty and do not contain file system signatures, wipe the disks before using them.
Before installing LVM Storage in a private CI environment where you can reuse the storage devices that you configured in the previous LVM Storage installation, ensure that you have wiped the disks that are not in use. If you do not wipe the disks before installing LVM Storage, you cannot reuse the disks without manual intervention.
Note: You cannot wipe the disks that are in use.
- If you want to install LVM Storage by using Red Hat Advanced Cluster Management (RHACM), ensure that you have installed RHACM on an OpenShift Container Platform cluster. See the "Installing LVM Storage using RHACM" section.
5.4.1.2. Installing LVM Storage by using the CLI
As a cluster administrator, you can install LVM Storage by using the OpenShift CLI.
Prerequisites
- You have installed the OpenShift CLI (oc).
- You have logged in to OpenShift Container Platform as a user with cluster-admin and Operator installation permissions.
Procedure
Create a YAML file with the configuration for creating a namespace:
Example YAML configuration for creating a namespace
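A minimal sketch of such a namespace manifest follows; the cluster-monitoring label shown is an assumption and may not be required in every environment:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-storage
  labels:
    openshift.io/cluster-monitoring: "true"   # assumption: enables cluster monitoring for this namespace
```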
Create the namespace by running the following command:

$ oc create -f <file_name>

Create an OperatorGroup CR YAML file:

Example OperatorGroup CR
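A minimal sketch of such an OperatorGroup CR, assuming the openshift-storage namespace; the object name is a placeholder:

```yaml
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: openshift-storage-operatorgroup   # placeholder object name
  namespace: openshift-storage
spec:
  targetNamespaces:
    - openshift-storage
```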
Create the OperatorGroup CR by running the following command:

$ oc create -f <file_name>

Create a Subscription CR YAML file:

Example Subscription CR
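A minimal sketch of such a Subscription CR; the subscription name, channel, and catalog source are assumptions and should be adjusted for your cluster:

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: lvms
  namespace: openshift-storage
spec:
  installPlanApproval: Automatic
  name: lvms-operator
  channel: stable-4.17                    # assumed update channel
  source: redhat-operators                # assumed catalog source
  sourceNamespace: openshift-marketplace
```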
Create the Subscription CR by running the following command:

$ oc create -f <file_name>
Verification
To verify that LVM Storage is installed, run the following command:
$ oc get csv -n openshift-storage -o custom-columns=Name:.metadata.name,Phase:.status.phase

Example output

Name                  Phase
4.13.0-202301261535   Succeeded
5.4.1.3. Installing LVM Storage by using the web console
You can install LVM Storage by using the OpenShift Container Platform web console.
Prerequisites
- You have access to the cluster.
- You have access to OpenShift Container Platform with cluster-admin and Operator installation permissions.
Procedure
- Log in to the OpenShift Container Platform web console.
- Click Operators → OperatorHub.
- Click LVM Storage on the OperatorHub page.
Set the following options on the Operator Installation page:
- Update Channel as stable-4.17.
- Installation Mode as A specific namespace on the cluster.
- Installed Namespace as Operator recommended namespace openshift-storage. If the openshift-storage namespace does not exist, it is created during the operator installation.
- Update approval as Automatic or Manual.
Note: If you select Automatic updates, the Operator Lifecycle Manager (OLM) automatically updates the running instance of LVM Storage without any intervention.
If you select Manual updates, the OLM creates an update request. As a cluster administrator, you must manually approve the update request to update LVM Storage to a newer version.
- Optional: Select the Enable Operator recommended cluster monitoring on this Namespace checkbox.
- Click Install.
Verification steps
- Verify that LVM Storage shows a green tick, indicating successful installation.
5.4.1.4. Installing LVM Storage in a disconnected environment
You can install LVM Storage on OpenShift Container Platform in a disconnected environment. All sections referenced in this procedure are linked in the "Additional resources" section.
Prerequisites
- You read the "About disconnected installation mirroring" section.
- You have access to the OpenShift Container Platform image repository.
- You created a mirror registry.
Procedure
Follow the steps in the "Creating the image set configuration" procedure. To create an ImageSetConfiguration custom resource (CR) for LVM Storage, you can use the following example ImageSetConfiguration CR configuration:

Example ImageSetConfiguration CR for LVM Storage
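The following is a minimal sketch of such an ImageSetConfiguration CR that corresponds to the numbered callouts below; the registry URL, channel names, and additional image are placeholders:

```yaml
kind: ImageSetConfiguration
apiVersion: mirror.openshift.io/v1alpha2
archiveSize: 4                                                        # 1
storageConfig:                                                        # 2
  registry:
    imageURL: example.com/mirror/oc-mirror-metadata                   # 3 (placeholder registry)
    skipTLS: false
mirror:
  platform:
    channels:
      - name: stable-4.17                                             # 4
        type: ocp
    graph: true                                                       # 5
  operators:
    - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.17  # 6
      packages:
        - name: lvms-operator                                         # 7
          channels:
            - name: stable                                            # 8 (assumed channel)
  additionalImages:
    - name: registry.redhat.io/ubi9/ubi:latest                        # 9 (placeholder image)
```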
1. Set the maximum size (in GiB) of each file within the image set.
2. Specify the location in which you want to save the image set. This location can be a registry or a local directory. You must configure the storageConfig field unless you are using the Technology Preview OCI feature.
3. Specify the storage URL for the image stream when using a registry. For more information, see Why use imagestreams.
4. Specify the channel from which you want to retrieve the OpenShift Container Platform images.
5. Set this field to true to generate the OpenShift Update Service (OSUS) graph image. For more information, see About the OpenShift Update Service.
6. Specify the Operator catalog from which you want to retrieve the OpenShift Container Platform images.
7. Specify the Operator packages to include in the image set. If this field is empty, all packages in the catalog are retrieved.
8. Specify the channels of the Operator packages to include in the image set. You must include the default channel for the Operator package even if you do not use the bundles in that channel. You can find the default channel by running the following command: $ oc mirror list operators --catalog=<catalog_name> --package=<package_name>.
9. Specify any additional images to include in the image set.
- Follow the procedure in the "Mirroring an image set to a mirror registry" section.
- Follow the procedure in the "Configuring image registry repository mirroring" section.
5.4.1.5. Installing LVM Storage by using RHACM
To install LVM Storage on the clusters by using Red Hat Advanced Cluster Management (RHACM), you must create a Policy custom resource (CR). You can also configure the criteria to select the clusters on which you want to install LVM Storage.
The Policy CR that is created to install LVM Storage is also applied to the clusters that are imported or created after creating the Policy CR.
Prerequisites
- You have access to the RHACM cluster using an account with cluster-admin and Operator installation permissions.
- You have dedicated disks that LVM Storage can use on each cluster.
- The cluster must be managed by RHACM.
Procedure
- Log in to the RHACM CLI using your OpenShift Container Platform credentials.
Create a namespace.
$ oc create ns <namespace>

Create a Policy CR YAML file:

Example Policy CR to install and configure LVM Storage
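The following is a condensed sketch of such a Policy CR, together with the PlacementRule and PlacementBinding it relies on. The object names, cluster selector labels, namespace, and subscription details are assumptions and should be adjusted for your environment:

```yaml
apiVersion: apps.open-cluster-management.io/v1
kind: PlacementRule
metadata:
  name: placement-install-lvms
spec:
  clusterConditions:
    - status: "True"
      type: ManagedClusterConditionAvailable
  clusterSelector:
    matchExpressions:
      - key: mykey          # placeholder label used to select managed clusters
        operator: In
        values:
          - myvalue
---
apiVersion: policy.open-cluster-management.io/v1
kind: PlacementBinding
metadata:
  name: binding-install-lvms
placementRef:
  apiGroup: apps.open-cluster-management.io
  kind: PlacementRule
  name: placement-install-lvms
subjects:
  - apiGroup: policy.open-cluster-management.io
    kind: Policy
    name: install-lvms
---
apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
  name: install-lvms
spec:
  disabled: false
  remediationAction: enforce
  policy-templates:
    - objectDefinition:
        apiVersion: policy.open-cluster-management.io/v1
        kind: ConfigurationPolicy
        metadata:
          name: install-lvms
        spec:
          remediationAction: enforce
          severity: low
          object-templates:
            - complianceType: musthave
              objectDefinition:
                apiVersion: v1
                kind: Namespace
                metadata:
                  name: openshift-storage
            - complianceType: musthave
              objectDefinition:
                apiVersion: operators.coreos.com/v1
                kind: OperatorGroup
                metadata:
                  name: openshift-storage-operatorgroup
                  namespace: openshift-storage
                spec:
                  targetNamespaces:
                    - openshift-storage
            - complianceType: musthave
              objectDefinition:
                apiVersion: operators.coreos.com/v1alpha1
                kind: Subscription
                metadata:
                  name: lvms
                  namespace: openshift-storage
                spec:
                  installPlanApproval: Automatic
                  name: lvms-operator
                  source: redhat-operators          # assumed catalog source
                  sourceNamespace: openshift-marketplace
```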
Create the Policy CR by running the following command:

$ oc create -f <file_name> -n <namespace>

Upon creating the Policy CR, the following custom resources are created on the clusters that match the selection criteria configured in the PlacementRule CR:

- Namespace
- OperatorGroup
- Subscription
5.4.2. About the LVMCluster custom resource
You can configure the LVMCluster CR to perform the following actions:
- Create LVM volume groups that you can use to provision persistent volume claims (PVCs).
- Configure a list of devices that you want to add to the LVM volume groups.
- Configure the requirements to select the nodes on which you want to create an LVM volume group, and the thin pool configuration for the volume group.
- Force wipe the selected devices.
After you have installed LVM Storage, you must create an LVMCluster custom resource (CR).
Example LVMCluster CR YAML file
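The following is a minimal sketch of an LVMCluster CR; the device class name, device paths, node selector, and thin pool values are assumptions and should be adjusted for your cluster:

```yaml
apiVersion: lvm.topolvm.io/v1alpha1
kind: LVMCluster
metadata:
  name: my-lvmcluster
  namespace: openshift-storage
spec:
  storage:
    deviceClasses:
      - name: vg1                       # placeholder volume group name
        default: true
        fstype: xfs
        deviceSelector:
          paths:
            - /dev/disk/by-path/pci-0000:87:00.0-nvme-1   # placeholder device path
          optionalPaths:
            - /dev/disk/by-path/pci-0000:88:00.0-nvme-1   # placeholder device path
          forceWipeDevicesAndDestroyAllData: false
        nodeSelector:
          nodeSelectorTerms:
            - matchExpressions:
                - key: app              # placeholder node label
                  operator: In
                  values:
                    - storage
        thinPoolConfig:
          name: thin-pool-1
          sizePercent: 90
          overprovisionRatio: 10
          chunkSizeCalculationPolicy: Static
          chunkSize: 128Ki
```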
5.4.2.1. Explanation of fields in the LVMCluster CR
The LVMCluster CR fields are described in the following table:
| Field | Type | Description |
|---|---|---|
| lvmCluster.spec.storage.deviceClasses | array | Contains the configuration to assign the local storage devices to the LVM volume groups. LVM Storage creates a storage class and volume snapshot class for each device class that you create. |
| deviceClasses.name | string | Specify a name for the LVM volume group (VG). You can also configure this field to reuse a volume group that you created in the previous installation. For more information, see "Reusing a volume group from the previous LVM Storage installation". |
| deviceClasses.fstype | string | Set this field to ext4 or xfs. By default, this field is set to xfs. |
| deviceClasses.default | boolean | Set this field to true to indicate that the device class is the default. Otherwise, set it to false. You can configure only a single default device class. |
| deviceClasses.nodeSelector | object | Contains the configuration to choose the nodes on which you want to create the LVM volume group. If this field is empty, all nodes without no-schedule taints are considered. On the control-plane node, LVM Storage detects and uses the additional worker nodes when the new nodes become active in the cluster. |
| nodeSelector.nodeSelectorTerms | array | Configure the requirements that are used to select the node. |
| deviceClasses.deviceSelector | object | Contains the configuration to specify the paths to the devices that you want to add to the LVM volume group, and to force wipe the devices that are added to the LVM volume group. For more information, see "About adding devices to a volume group". |
| deviceSelector.paths | array | Specify the device paths. If the device path specified in this field does not exist, or the device is not supported by LVM Storage, the LVMCluster CR moves to the Failed state. |
| deviceSelector.optionalPaths | array | Specify the optional device paths. If the device path specified in this field does not exist, or the device is not supported by LVM Storage, LVM Storage ignores the device without causing an error. |
| deviceSelector.forceWipeDevicesAndDestroyAllData | boolean | LVM Storage uses only those disks that are empty and do not contain file system signatures. To ensure that the disks are empty and do not contain file system signatures, wipe the disks before using them. To force wipe the selected devices, set this field to true. By default, this field is set to false. Warning: If this field is set to true, LVM Storage wipes all previous data on the devices. Use this field with caution. Wiping the device can lead to inconsistencies in data integrity if any of the following conditions are met: the device is being used as swap space, the device is part of a RAID array, or the device is mounted. If any of these conditions are true, do not force wipe the disk. Instead, you must manually wipe the disk. |
| deviceClasses.thinPoolConfig | object | Contains the configuration to create a thin pool in the LVM volume group. If you exclude this field, logical volumes are thick provisioned. Using thick-provisioned storage includes limitations such as no copy-on-write support for volume cloning, no support for volume snapshots, no support for over-provisioning, and no support for thin metrics. |
| thinPoolConfig.name | string | Specify a name for the thin pool. |
| thinPoolConfig.sizePercent | integer | Specify the percentage of space in the LVM volume group for creating the thin pool. By default, this field is set to 90. The minimum value that you can set is 10, and the maximum value is 90. |
| thinPoolConfig.overprovisionRatio | integer | Specify a factor by which you can provision additional storage based on the available storage in the thin pool. For example, if this field is set to 10, you can provision up to 10 times the amount of available storage in the thin pool. To disable over-provisioning, set this field to 1. |
| thinPoolConfig.chunkSize | string | Specifies the statically calculated chunk size for the thin pool. This field is only used when the chunkSizeCalculationPolicy field is set to Static. If you do not configure this field and the chunkSizeCalculationPolicy field is set to Static, the default chunk size is 128 KiB. For more information, see "Overview of chunk size". |
| thinPoolConfig.chunkSizeCalculationPolicy | string | Specifies the policy to calculate the chunk size for the underlying volume group. You can set this field to either Static or Host. By default, this field is set to Static. If this field is set to Static, the chunk size is set to the value of the chunkSize field, or to 128 KiB if chunkSize is not configured. If this field is set to Host, the chunk size is calculated based on the configuration in the lvm.conf file. For more information, see "Limitations to configure the size of the devices used in LVM Storage". |
5.4.2.2. Limitations to configure the size of the devices used in LVM Storage
The limitations to configure the size of the devices that you can use to provision storage using LVM Storage are as follows:
- The total storage size that you can provision is limited by the size of the underlying Logical Volume Manager (LVM) thin pool and the over-provisioning factor.
The size of the logical volume depends on the size of the Physical Extent (PE) and the Logical Extent (LE).
- You can define the size of PE and LE during the physical and logical device creation.
- The default PE and LE size is 4 MB.
- If the size of the PE is increased, the maximum size of the LVM is determined by the kernel limits and your disk space.
The following tables describe the chunk size and volume size limits for static and host configurations:
| Parameter | Value |
|---|---|
| Chunk size | 128 KiB |
| Maximum volume size | 32 TiB |
| Parameter | Minimum value | Maximum value |
|---|---|---|
| Chunk size | 64 KiB | 1 GiB |
| Volume size | Minimum size of the underlying Red Hat Enterprise Linux CoreOS (RHCOS) system. | Maximum size of the underlying RHCOS system. |
| Parameter | Value |
|---|---|
| Chunk size | This value is based on the configuration in the lvm.conf file. |
| Maximum volume size | Equal to the maximum volume size of the underlying RHCOS system. |
| Minimum volume size | Equal to the minimum volume size of the underlying RHCOS system. |
5.4.2.3. About adding devices to a volume group
The deviceSelector field in the LVMCluster CR contains the configuration to specify the paths to the devices that you want to add to the Logical Volume Manager (LVM) volume group.
You can specify the device paths in the deviceSelector.paths field, the deviceSelector.optionalPaths field, or both. If you do not specify the device paths in both the deviceSelector.paths field and the deviceSelector.optionalPaths field, LVM Storage adds the supported unused devices to the volume group (VG).
It is recommended to avoid referencing disks using symbolic naming, such as /dev/sdX, as these names may change across reboots within RHCOS. Instead, you must use stable naming schemes, such as /dev/disk/by-path/ or /dev/disk/by-id/, to ensure consistent disk identification.
With this change, you might need to adjust existing automation workflows in the cases where monitoring collects information about the install device for each node.
For more information, see the RHEL documentation.
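For example, a deviceSelector that references devices by stable identifiers rather than /dev/sdX names might look like the following sketch; the identifiers shown are placeholders:

```yaml
deviceSelector:
  paths:
    - /dev/disk/by-id/wwn-0x5000000000000003        # placeholder stable device ID
  optionalPaths:
    - /dev/disk/by-path/pci-0000:87:00.0-nvme-1     # placeholder by-path identifier
```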
You can add the path to the Redundant Array of Independent Disks (RAID) arrays in the deviceSelector field to integrate the RAID arrays with LVM Storage. You can create the RAID array by using the mdadm utility. LVM Storage does not support creating a software RAID.
You can create a RAID array only during an OpenShift Container Platform installation. For information on creating a RAID array, see the following sections:
- "Configuring a RAID-enabled data volume" in "Additional resources".
- Creating a software RAID on an installed system
- Replacing a failed disk in RAID
- Repairing RAID disks
You can also add encrypted devices to the volume group. You can enable disk encryption on the cluster nodes during an OpenShift Container Platform installation. After encrypting a device, you can specify the path to the LUKS encrypted device in the deviceSelector field. For information on disk encryption, see "About disk encryption" and "Configuring disk encryption and mirroring".
The devices that you want to add to the VG must be supported by LVM Storage. For information about unsupported devices, see "Devices not supported by LVM Storage".
LVM Storage adds the devices to the VG only if the following conditions are met:
- The device path exists.
- The device is supported by LVM Storage.
After a device is added to the VG, you cannot remove the device.
LVM Storage supports dynamic device discovery. If you do not add the deviceSelector field in the LVMCluster CR, LVM Storage automatically adds the new devices to the VG when the devices are available.
It is not recommended to add the devices to the VG through dynamic device discovery due to the following reasons:
- When you attach a new device that you do not intend to add to the VG, LVM Storage automatically adds this device to the VG through dynamic device discovery.
- If LVM Storage adds a device to the VG through dynamic device discovery, LVM Storage does not restrict you from removing the device from the node. Removing or updating the devices that are already added to the VG can disrupt the VG. This can also lead to data loss and necessitate manual node remediation.
5.4.2.4. Devices not supported by LVM Storage
When you are adding the device paths in the deviceSelector field of the LVMCluster custom resource (CR), ensure that the devices are supported by LVM Storage. If you add paths to the unsupported devices, LVM Storage excludes the devices to avoid complexity in managing logical volumes.
If you do not specify any device path in the deviceSelector field, LVM Storage adds only the unused devices that it supports.
To get information about the devices, run the following command:
$ lsblk --paths --json -o \
  NAME,ROTA,TYPE,SIZE,MODEL,VENDOR,RO,STATE,KNAME,SERIAL,PARTLABEL,FSTYPE
LVM Storage does not support the following devices:
- Read-only devices: Devices with the ro parameter set to true.
- Suspended devices: Devices with the state parameter set to suspended.
- ROM devices: Devices with the type parameter set to rom.
- LVM partition devices: Devices with the type parameter set to lvm.
- Devices with invalid partition labels: Devices with the partlabel parameter set to bios, boot, or reserved.
- Devices with an invalid filesystem: Devices with the fstype parameter set to any value other than null or LVM2_member.
  Important: LVM Storage supports devices with the fstype parameter set to LVM2_member only if the devices do not contain children devices.
- Devices that are part of another volume group: To get the information about the volume groups of the device, run the following command:
  $ pvs <device-name>
  Replace <device-name> with the device name.
- Devices with bind mounts: To get the mount points of a device, run the following command:
  $ cat /proc/1/mountinfo | grep <device-name>
  Replace <device-name> with the device name.
- Devices that contain children devices
It is recommended to wipe the device before using it in LVM Storage to prevent unexpected behavior.
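One way to wipe a device before adding it, assuming you have confirmed that the device holds no data you need, is the wipefs utility on the node. This removes existing filesystem and partition-table signatures so that LVM Storage treats the device as unused:

$ wipefs --all <device_path>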
5.4.3. Ways to create an LVMCluster custom resource
You can create an LVMCluster custom resource (CR) by using the OpenShift CLI (oc) or the OpenShift Container Platform web console. If you have installed LVM Storage by using Red Hat Advanced Cluster Management (RHACM), you can also create an LVMCluster CR by using RHACM.
You must create the LVMCluster CR in the same namespace where you installed the LVM Storage Operator, which is openshift-storage by default.
Upon creating the LVMCluster CR, LVM Storage creates the following system-managed CRs:
- A storageClass and a volumeSnapshotClass for each device class.
  Note: LVM Storage configures the name of the storage class and volume snapshot class in the format lvms-<device_class_name>, where <device_class_name> is the value of the deviceClasses.name field in the LVMCluster CR. For example, if the deviceClasses.name field is set to vg1, the name of the storage class and volume snapshot class is lvms-vg1.
- LVMVolumeGroup: This CR is a specific type of persistent volume (PV) that is backed by an LVM volume group. It tracks the individual volume groups across multiple nodes.
- LVMVolumeGroupNodeStatus: This CR tracks the status of the volume groups on a node.
5.4.3.1. Reusing a volume group from the previous LVM Storage installation
You can reuse an existing volume group (VG) from the previous LVM Storage installation instead of creating a new VG.
You can reuse only the VG, not the logical volumes associated with the VG.
You can perform this procedure only while creating an LVMCluster custom resource (CR).
Prerequisites
- The VG that you want to reuse must not be corrupted.
- The VG that you want to reuse must have the lvms tag. For more information on adding tags to LVM objects, see Grouping LVM objects with tags.
Procedure
- Open the LVMCluster CR YAML file.
- Configure the LVMCluster CR parameters as described in the following example:
  Example LVMCluster CR YAML file
  - 1: Set this field to the name of a VG from the previous LVM Storage installation.
  - 2: Set this field to ext4 or xfs. By default, this field is set to xfs.
  - 3: You can add new devices to the VG that you want to reuse by specifying the new device paths in the deviceSelector field. If you do not want to add new devices to the VG, ensure that the deviceSelector configuration in the current LVM Storage installation is the same as that of the previous LVM Storage installation.
  - 4: If this field is set to true, LVM Storage wipes all the data on the devices that are added to the VG.
  - 5: To retain the thinPoolConfig configuration of the VG that you want to reuse, ensure that the thinPoolConfig configuration in the current LVM Storage installation is the same as that of the previous LVM Storage installation. Otherwise, you can configure the thinPoolConfig field as required.
  - 6: Configure the requirements to choose the nodes on which you want to create the LVM volume group. If this field is empty, all nodes without no-schedule taints are considered.
- Save the LVMCluster CR YAML file.
To view the devices that are part of a volume group, run the following command:
$ pvs -S vgname=<vg_name>
Replace <vg_name> with the name of the volume group.
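If you are reusing a VG from a previous LVM Storage installation, you can also confirm that the VG carries the required lvms tag. The following commands are a sketch that uses standard LVM tooling on the node; add the tag only if it is missing:

$ vgs -o vg_name,vg_tags
$ vgchange --addtag lvms <vg_name>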
5.4.3.2. Creating an LVMCluster CR by using the CLI
You can create an LVMCluster custom resource (CR) on a worker node using the OpenShift CLI (oc).
You can only create a single instance of the LVMCluster custom resource (CR) on an OpenShift Container Platform cluster.
Prerequisites
- You have installed the OpenShift CLI (oc).
- You have logged in to OpenShift Container Platform as a user with cluster-admin privileges.
- You have installed LVM Storage.
- You have installed a worker node in the cluster.
- You read the "About the LVMCluster custom resource" section.
Procedure
Create an LVMCluster custom resource (CR) YAML file:
Example LVMCluster CR YAML file
- 1: Contains the configuration to assign the local storage devices to the LVM volume groups.
- 2: Contains the configuration to choose the nodes on which you want to create the LVM volume group. If this field is empty, all nodes without no-schedule taints are considered.
- 3: Contains the configuration to specify the paths to the devices that you want to add to the LVM volume group, and force wipe the devices that are added to the LVM volume group.
- 4: Contains the configuration to create a thin pool in the LVM volume group. If you exclude this field, logical volumes are thick provisioned.
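The example file itself is not reproduced here. The following minimal sketch illustrates the fields described in callouts 1 to 4; the device class name, device path, and thin pool values are placeholders, and field names such as forceWipeDevicesAndDestroyAllData, sizePercent, and overprovisionRatio are assumptions that you should verify against the LVMCluster CRD installed in your cluster:

apiVersion: lvm.topolvm.io/v1alpha1
kind: LVMCluster
metadata:
  name: my-lvmcluster
  namespace: openshift-storage
spec:
  storage:
    deviceClasses:                                        # 1: assigns local devices to an LVM volume group
      - name: vg1
        default: true
        fstype: xfs
        nodeSelector:                                     # 2: selects the nodes for the volume group
          nodeSelectorTerms:
            - matchExpressions:
                - key: app
                  operator: In
                  values:
                    - test1
        deviceSelector:                                   # 3: device paths and optional force wipe
          paths:
            - /dev/disk/by-path/pci-0000:87:00.0-nvme-1
          forceWipeDevicesAndDestroyAllData: false
        thinPoolConfig:                                   # 4: thin pool; omit for thick provisioning
          name: thin-pool-1
          sizePercent: 90
          overprovisionRatio: 10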
Create the LVMCluster CR by running the following command:
$ oc create -f <file_name>
Example output
lvmcluster/lvmcluster created
Verification
Check that the LVMCluster CR is in the Ready state:
$ oc get lvmclusters.lvm.topolvm.io -o jsonpath='{.items[*].status}' -n <namespace>
Note: If the LVMCluster CR is in the Failed state, you can view the reason for the failure in the status field.
Optional: To view the storage classes created by LVM Storage for each device class, run the following command:
$ oc get storageclass
Example output
NAME       PROVISIONER   RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
lvms-vg1   topolvm.io    Delete          WaitForFirstConsumer   true                   31m
Optional: To view the volume snapshot classes created by LVM Storage for each device class, run the following command:
$ oc get volumesnapshotclass
Example output
NAME       DRIVER       DELETIONPOLICY   AGE
lvms-vg1   topolvm.io   Delete           24h
5.4.3.3. Creating an LVMCluster CR by using the web console
You can create an LVMCluster CR on a worker node using the OpenShift Container Platform web console.
You can only create a single instance of the LVMCluster custom resource (CR) on an OpenShift Container Platform cluster.
Prerequisites
- You have access to the OpenShift Container Platform cluster with cluster-admin privileges.
- You have installed LVM Storage.
- You have installed a worker node in the cluster.
- You read the "About the LVMCluster custom resource" section.
Procedure
- Log in to the OpenShift Container Platform web console.
- Click Operators → Installed Operators.
- In the openshift-storage namespace, click LVM Storage.
- Click Create LVMCluster and select either Form view or YAML view.
- Configure the required LVMCluster CR parameters.
- Click Create.
- Optional: If you want to edit the LVMCluster CR, perform the following actions:
  - Click the LVMCluster tab.
  - From the Actions menu, select Edit LVMCluster.
  - Click YAML and edit the required LVMCluster CR parameters.
  - Click Save.

Verification

- On the LVMCluster page, check that the LVMCluster CR is in the Ready state.
- Optional: To view the available storage classes created by LVM Storage for each device class, click Storage → StorageClasses.
- Optional: To view the available volume snapshot classes created by LVM Storage for each device class, click Storage → VolumeSnapshotClasses.
5.4.3.4. Creating an LVMCluster CR by using RHACM
After you have installed LVM Storage by using RHACM, you must create an LVMCluster custom resource (CR).
Prerequisites
- You have installed LVM Storage by using RHACM.
- You have access to the RHACM cluster using an account with cluster-admin permissions.
- You read the "About the LVMCluster custom resource" section.
Procedure
- Log in to the RHACM CLI using your OpenShift Container Platform credentials.
Create a ConfigurationPolicy CR YAML file with the configuration to create an LVMCluster CR:
Example ConfigurationPolicy CR YAML file to create an LVMCluster CR
- 1: Contains the configuration to assign the local storage devices to the LVM volume groups.
- 2: Contains the configuration to specify the paths to the devices that you want to add to the LVM volume group, and force wipe the devices that are added to the LVM volume group.
- 3: Contains the configuration to create a thin pool in the LVM volume group. If you exclude this field, logical volumes are thick provisioned.
- 4: Contains the configuration to choose the nodes on which you want to create the LVM volume groups. If this field is empty, all nodes without no-schedule taints are considered.
Create the ConfigurationPolicy CR by running the following command:
$ oc create -f <file_name> -n <cluster_namespace>
<cluster_namespace> is the namespace of the OpenShift Container Platform cluster on which LVM Storage is installed.
5.4.4. Ways to delete an LVMCluster custom resource
You can delete an LVMCluster custom resource (CR) by using the OpenShift CLI (oc) or the OpenShift Container Platform web console. If you have installed LVM Storage by using Red Hat Advanced Cluster Management (RHACM), you can also delete an LVMCluster CR by using RHACM.
Upon deleting the LVMCluster CR, LVM Storage deletes the following CRs:
- storageClass
- volumeSnapshotClass
- LVMVolumeGroup
- LVMVolumeGroupNodeStatus
5.4.4.1. Deleting an LVMCluster CR by using the CLI
You can delete the LVMCluster custom resource (CR) using the OpenShift CLI (oc).
Prerequisites
- You have access to OpenShift Container Platform as a user with cluster-admin permissions.
- You have deleted the persistent volume claims (PVCs), volume snapshots, and volume clones provisioned by LVM Storage. You have also deleted the applications that are using these resources.
Procedure
- Log in to the OpenShift CLI (oc).
- Delete the LVMCluster CR by running the following command:
  $ oc delete lvmcluster <lvm_cluster_name> -n openshift-storage
Verification
To verify that the LVMCluster CR has been deleted, run the following command:
$ oc get lvmcluster -n <namespace>
Example output
No resources found in openshift-storage namespace.
5.4.4.2. Deleting an LVMCluster CR by using the web console
You can delete the LVMCluster custom resource (CR) using the OpenShift Container Platform web console.
Prerequisites
- You have access to OpenShift Container Platform as a user with cluster-admin permissions.
- You have deleted the persistent volume claims (PVCs), volume snapshots, and volume clones provisioned by LVM Storage. You have also deleted the applications that are using these resources.
Procedure
- Log in to the OpenShift Container Platform web console.
- Click Operators → Installed Operators to view all the installed Operators.
- Click LVM Storage in the openshift-storage namespace.
- Click the LVMCluster tab.
- From the Actions menu, select Delete LVMCluster.
- Click Delete.
Verification
- On the LVMCluster page, check that the LVMCluster CR has been deleted.
5.4.4.3. Deleting an LVMCluster CR by using RHACM
If you have installed LVM Storage by using Red Hat Advanced Cluster Management (RHACM), you can delete an LVMCluster CR by using RHACM.
Prerequisites
- You have access to the RHACM cluster as a user with cluster-admin permissions.
- You have deleted the persistent volume claims (PVCs), volume snapshots, and volume clones provisioned by LVM Storage. You have also deleted the applications that are using these resources.
Procedure
- Log in to the RHACM CLI using your OpenShift Container Platform credentials.
Delete the ConfigurationPolicy CR that was created for the LVMCluster CR by running the following command:
$ oc delete -f <file_name> -n <cluster_namespace>
<cluster_namespace> is the namespace of the OpenShift Container Platform cluster on which LVM Storage is installed.
Create a Policy CR YAML file to delete the LVMCluster CR:
Example Policy CR to delete the LVMCluster CR
- 1: The spec.remediationAction in policy-template is overridden by the preceding parameter value for spec.remediationAction.
- 2: This namespace field must have the openshift-storage value.
- 3: Configure the requirements to select the clusters. LVM Storage is uninstalled on the clusters that match the selection criteria.
Create the Policy CR by running the following command:
$ oc create -f <file_name> -n <namespace>
Create a Policy CR YAML file to check if the LVMCluster CR has been deleted:
Example Policy CR to check if the LVMCluster CR has been deleted
Create the Policy CR by running the following command:
$ oc create -f <file_name> -n <namespace>
Verification
Check the status of the Policy CRs by running the following command:
$ oc get policy -n <namespace>
Example output
NAME                       REMEDIATION ACTION   COMPLIANCE STATE   AGE
policy-lvmcluster-delete   enforce              Compliant          15m
policy-lvmcluster-inform   inform               Compliant          15m
Important: The Policy CRs must be in the Compliant state.
5.4.5. Provisioning storage
After you have created the LVM volume groups using the LVMCluster custom resource (CR), you can provision the storage by creating persistent volume claims (PVCs).
The following are the minimum storage sizes that you can request for each file system type:
- block: 8 MiB
- xfs: 300 MiB
- ext4: 32 MiB
To create a PVC, you must create a PersistentVolumeClaim object.
Prerequisites
- You have created an LVMCluster CR.
Procedure
- Log in to the OpenShift CLI (oc).
- Create a PersistentVolumeClaim object:
  Example PersistentVolumeClaim object
  - 1: Specify a name for the PVC.
  - 2: To create a block PVC, set this field to Block. To create a file PVC, set this field to Filesystem.
  - 3: Specify the storage size. If the value is less than the minimum storage size, the requested storage size is rounded to the minimum storage size. The total storage size you can provision is limited by the size of the Logical Volume Manager (LVM) thin pool and the over-provisioning factor.
  - 4: Optional: Specify the storage limit. Set this field to a value that is greater than or equal to the minimum storage size. Otherwise, PVC creation fails with an error.
  - 5: The value of the storageClassName field must be in the format lvms-<device_class_name>, where <device_class_name> is the value of the deviceClasses.name field in the LVMCluster CR. For example, if the deviceClasses.name field is set to vg1, you must set the storageClassName field to lvms-vg1.
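The example object is not reproduced here; the following is a minimal sketch of a PVC that matches the callouts above. The PVC name, sizes, and storage class are placeholders:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: lvm-block-1                 # 1: PVC name (placeholder)
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Block                 # 2: Block for a block PVC, Filesystem for a file PVC
  resources:
    requests:
      storage: 1Gi                  # 3: requested size; rounded up to the minimum if smaller
    limits:
      storage: 2Gi                  # 4: optional storage limit
  storageClassName: lvms-vg1        # 5: lvms-<device_class_name>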
Note: The volumeBindingMode field of the storage class is set to WaitForFirstConsumer.
Create the PVC by running the following command:
$ oc create -f <file_name> -n <application_namespace>
Note: The created PVCs remain in the Pending state until you deploy the pods that use them.
Verification
To verify that the PVC is created, run the following command:
$ oc get pvc -n <namespace>
Example output
NAME          STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
lvm-block-1   Bound    pvc-e90169a8-fd71-4eea-93b8-817155f60e47   1Gi        RWO            lvms-vg1       5s
5.4.6. Ways to scale up the storage of clusters
OpenShift Container Platform supports additional worker nodes for clusters on bare metal user-provisioned infrastructure. You can scale up the storage of clusters either by adding new worker nodes with available storage or by adding new devices to the existing worker nodes.
Logical Volume Manager (LVM) Storage detects and uses additional worker nodes when the nodes become active.
To add a new device to the existing worker nodes on a cluster, you must add the path to the new device in the deviceSelector field of the LVMCluster custom resource (CR).
You can add the deviceSelector field in the LVMCluster CR only while creating the LVMCluster CR. If you have not added the deviceSelector field while creating the LVMCluster CR, you must delete the LVMCluster CR and create a new LVMCluster CR containing the deviceSelector field.
If you do not add the deviceSelector field in the LVMCluster CR, LVM Storage automatically adds the new devices when the devices are available.
LVM Storage adds only the supported devices. For information about unsupported devices, see "Devices not supported by LVM Storage".
5.4.6.1. Scaling up the storage of clusters by using the CLI
You can scale up the storage capacity of the worker nodes on a cluster by using the OpenShift CLI (oc).
Prerequisites
- You have additional unused devices on each cluster to be used by Logical Volume Manager (LVM) Storage.
- You have installed the OpenShift CLI (oc).
- You have created an LVMCluster custom resource (CR).
Procedure
Edit the LVMCluster CR by running the following command:
$ oc edit <lvmcluster_file_name> -n <namespace>
Add the path to the new device in the deviceSelector field.
Example LVMCluster CR
- 1: Contains the configuration to specify the paths to the devices that you want to add to the LVM volume group. You can specify the device paths in the paths field, the optionalPaths field, or both. If you do not specify the device paths in both paths and optionalPaths, Logical Volume Manager (LVM) Storage adds the supported unused devices to the LVM volume group. LVM Storage adds the devices to the LVM volume group only if the following conditions are met:
  - The device path exists.
  - The device is supported by LVM Storage. For information about unsupported devices, see "Devices not supported by LVM Storage".
- 2: Specify the device paths. If the device path specified in this field does not exist, or the device is not supported by LVM Storage, the LVMCluster CR moves to the Failed state.
- 3: Specify the optional device paths. If the device path specified in this field does not exist, or the device is not supported by LVM Storage, LVM Storage ignores the device without causing an error.
Important: After a device is added to the LVM volume group, it cannot be removed.
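The example CR is not reproduced here; the following minimal sketch shows the deviceSelector stanza described by the callouts, with placeholder device paths:

spec:
  storage:
    deviceClasses:
      - name: vg1
        deviceSelector:
          paths:                                           # 2: must exist and be supported
            - /dev/disk/by-path/pci-0000:87:00.0-nvme-1
            - /dev/disk/by-path/pci-0000:88:00.0-nvme-1    # new device being added
          optionalPaths:                                   # 3: ignored if absent
            - /dev/disk/by-path/pci-0000:89:00.0-nvme-1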
- Save the LVMCluster CR.
5.4.6.2. Scaling up the storage of clusters by using the web console
You can scale up the storage capacity of the worker nodes on a cluster by using the OpenShift Container Platform web console.
Prerequisites
- You have additional unused devices on each cluster to be used by Logical Volume Manager (LVM) Storage.
- You have created an LVMCluster custom resource (CR).
Procedure
- Log in to the OpenShift Container Platform web console.
- Click Operators → Installed Operators.
- Click LVM Storage in the openshift-storage namespace.
- Click the LVMCluster tab to view the LVMCluster CR created on the cluster.
- From the Actions menu, select Edit LVMCluster.
- Click the YAML tab.
- Edit the LVMCluster CR to add the new device path in the deviceSelector field:
  Example LVMCluster CR
  - 1: Contains the configuration to specify the paths to the devices that you want to add to the LVM volume group. You can specify the device paths in the paths field, the optionalPaths field, or both. If you do not specify the device paths in both paths and optionalPaths, Logical Volume Manager (LVM) Storage adds the supported unused devices to the LVM volume group. LVM Storage adds the devices to the LVM volume group only if the following conditions are met:
    - The device path exists.
    - The device is supported by LVM Storage. For information about unsupported devices, see "Devices not supported by LVM Storage".
  - 2: Specify the device paths. If the device path specified in this field does not exist, or the device is not supported by LVM Storage, the LVMCluster CR moves to the Failed state.
  - 3: Specify the optional device paths. If the device path specified in this field does not exist, or the device is not supported by LVM Storage, LVM Storage ignores the device without causing an error.
  Important: After a device is added to the LVM volume group, it cannot be removed.
- Click Save.
5.4.6.3. Scaling up the storage of clusters by using RHACM
You can scale up the storage capacity of worker nodes on the clusters by using RHACM.
Prerequisites
- You have access to the RHACM cluster using an account with cluster-admin privileges.
- You have created an LVMCluster custom resource (CR) by using RHACM.
- You have additional unused devices on each cluster to be used by Logical Volume Manager (LVM) Storage.
Procedure
- Log in to the RHACM CLI using your OpenShift Container Platform credentials.
Edit the LVMCluster CR that you created using RHACM by running the following command:
$ oc edit -f <file_name> -n <namespace>
Replace <file_name> with the name of the LVMCluster CR.
In the LVMCluster CR, add the path to the new device in the deviceSelector field.
Example LVMCluster CR
- 1: Contains the configuration to specify the paths to the devices that you want to add to the LVM volume group. You can specify the device paths in the paths field, the optionalPaths field, or both. If you do not specify the device paths in both paths and optionalPaths, Logical Volume Manager (LVM) Storage adds the supported unused devices to the LVM volume group. LVM Storage adds the devices to the LVM volume group only if the following conditions are met:
  - The device path exists.
  - The device is supported by LVM Storage. For information about unsupported devices, see "Devices not supported by LVM Storage".
- 2: Specify the device paths. If the device path specified in this field does not exist, or the device is not supported by LVM Storage, the LVMCluster CR moves to the Failed state.
- 3: Specify the optional device paths. If the device path specified in this field does not exist, or the device is not supported by LVM Storage, LVM Storage ignores the device without causing an error.
Important: After a device is added to the LVM volume group, it cannot be removed.
- Save the LVMCluster CR.
5.4.7. Expanding a persistent volume claim
After scaling up the storage of a cluster, you can expand the existing persistent volume claims (PVCs).
To expand a PVC, you must update the storage field in the PVC.
Prerequisites
- Dynamic provisioning is used.
- The StorageClass object associated with the PVC has the allowVolumeExpansion field set to true.
Procedure
- Log in to the OpenShift CLI (oc).
- Update the value of the spec.resources.requests.storage field to a value that is greater than the current value by running the following command:
  $ oc patch pvc <pvc_name> -n <application_namespace> \
      --type=merge -p \
      '{ "spec": { "resources": { "requests": { "storage": "<desired_size>" }}}}'
  Replace <pvc_name> with the name of the PVC and <desired_size> with the new storage size.
Verification
To verify that resizing is completed, run the following command:
$ oc get pvc <pvc_name> -n <application_namespace> -o=jsonpath={.status.capacity.storage}
LVM Storage adds the Resizing condition to the PVC during expansion. It deletes the Resizing condition after the PVC expansion.
5.4.8. Deleting a persistent volume claim
You can delete a persistent volume claim (PVC) by using the OpenShift CLI (oc).
Prerequisites
- You have access to OpenShift Container Platform as a user with cluster-admin permissions.
Procedure
- Log in to the OpenShift CLI (oc).
- Delete the PVC by running the following command:
  $ oc delete pvc <pvc_name> -n <namespace>
Verification
To verify that the PVC is deleted, run the following command:
$ oc get pvc -n <namespace>
The deleted PVC must not be present in the output of this command.
5.4.9. About volume snapshots
You can create snapshots of persistent volume claims (PVCs) that are provisioned by LVM Storage.
You can perform the following actions using the volume snapshots:
- Back up your application data.
  Important: Volume snapshots are located on the same devices as the original data. To use the volume snapshots as backups, you must move the snapshots to a secure location. You can use OpenShift API for Data Protection (OADP) backup and restore solutions. For information about OADP, see "OADP features".
- Revert to a state at which the volume snapshot was taken.
You can also create volume snapshots of the volume clones.
5.4.9.1. Limitations for creating volume snapshots in multi-node topology
LVM Storage has the following limitations for creating volume snapshots in multi-node topology:
- Creating volume snapshots is based on the LVM thin pool capabilities.
- After creating a volume snapshot, the node must have additional storage space for further updating the original data source.
- You can create volume snapshots only on the node where you have deployed the original data source.
- Pods relying on the PVC that uses the snapshot data can be scheduled only on the node where you have deployed the original data source.
5.4.9.2. Creating volume snapshots
You can create volume snapshots based on the available capacity of the thin pool and the over-provisioning limits. To create a volume snapshot, you must create a VolumeSnapshot object.
Prerequisites
- You have access to OpenShift Container Platform as a user with cluster-admin permissions.
- You ensured that the persistent volume claim (PVC) is in the Bound state. This is required for a consistent snapshot.
- You stopped all the I/O to the PVC.
Procedure
- Log in to the OpenShift CLI (oc).
- Create a VolumeSnapshot object:
  Example VolumeSnapshot object
  Note: To get the list of available volume snapshot classes, run the following command:
  $ oc get volumesnapshotclass
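The example object is not reproduced here; the following is a minimal sketch that uses placeholder names for the snapshot, the snapshot class, and the source PVC:

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: lvm-block-1-snap                      # placeholder snapshot name
spec:
  volumeSnapshotClassName: lvms-vg1           # a class returned by 'oc get volumesnapshotclass'
  source:
    persistentVolumeClaimName: lvm-block-1    # placeholder name of the source PVC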
Create the volume snapshot in the namespace where you created the source PVC by running the following command:
$ oc create -f <file_name> -n <namespace>
LVM Storage creates a read-only copy of the PVC as a volume snapshot.
Verification
To verify that the volume snapshot is created, run the following command:
$ oc get volumesnapshot -n <namespace>
Example output
NAME               READYTOUSE   SOURCEPVC     SOURCESNAPSHOTCONTENT   RESTORESIZE   SNAPSHOTCLASS   SNAPSHOTCONTENT                                     CREATIONTIME   AGE
lvm-block-1-snap   true         lvms-test-1                           1Gi           lvms-vg1        snapcontent-af409f97-55fc-40cf-975f-71e44fa2ca91    19s            19s
The value of the READYTOUSE field for the volume snapshot that you created must be true.
5.4.9.3. Restoring volume snapshots
To restore a volume snapshot, you must create a persistent volume claim (PVC) with the dataSource.name field set to the name of the volume snapshot.
The restored PVC is independent of the volume snapshot and the source PVC.
Prerequisites
- You have access to OpenShift Container Platform as a user with cluster-admin permissions.
- You have created a volume snapshot.
Procedure
- Log in to the OpenShift CLI (oc).
- Create a PersistentVolumeClaim object with the configuration to restore the volume snapshot:
  Example PersistentVolumeClaim object to restore a volume snapshot
  - 1: Specify the storage size of the restored PVC. The storage size of the requested PVC must be greater than or equal to the storage size of the volume snapshot that you want to restore. If a larger PVC is required, you can also resize the PVC after restoring the volume snapshot.
  - 2: Set this field to the value of the storageClassName field in the source PVC of the volume snapshot that you want to restore.
  - 3: Set this field to the name of the volume snapshot that you want to restore.
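The example object is not reproduced here; the following minimal sketch matches the callouts above and uses placeholder names and sizes:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: lvm-block-1-restore
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi                        # 1: greater than or equal to the snapshot size
  storageClassName: lvms-vg1              # 2: storage class of the source PVC
  dataSource:
    apiGroup: snapshot.storage.k8s.io
    kind: VolumeSnapshot
    name: lvm-block-1-snap                # 3: name of the volume snapshot to restore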
Create the PVC in the namespace where you created the volume snapshot by running the following command:
$ oc create -f <file_name> -n <namespace>
Verification
To verify that the volume snapshot is restored, run the following command:
$ oc get pvc -n <namespace>
Example output
NAME                  STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
lvm-block-1-restore   Bound    pvc-e90169a8-fd71-4eea-93b8-817155f60e47   1Gi        RWO            lvms-vg1       5s
5.4.9.4. Deleting volume snapshots
You can delete the volume snapshots of the persistent volume claims (PVCs).
When you delete a persistent volume claim (PVC), LVM Storage deletes only the PVC, but not the snapshots of the PVC.
Prerequisites
- You have access to OpenShift Container Platform as a user with cluster-admin permissions.
- You have ensured that the volume snapshot that you want to delete is not in use.
Procedure
- Log in to the OpenShift CLI (oc).
- Delete the volume snapshot by running the following command:
  $ oc delete volumesnapshot <volume_snapshot_name> -n <namespace>
Verification
To verify that the volume snapshot is deleted, run the following command:
$ oc get volumesnapshot -n <namespace>
The deleted volume snapshot must not be present in the output of this command.
5.4.10. About volume clones
A volume clone is a duplicate of an existing persistent volume claim (PVC). You can create a volume clone to make a point-in-time copy of the data.
5.4.10.1. Limitations for creating volume clones in multi-node topology
LVM Storage has the following limitations for creating volume clones in multi-node topology:
- Creating volume clones is based on the LVM thin pool capabilities.
- After creating a volume clone, the node must have additional storage space for further updates to the original data source.
- You can create volume clones only on the node where you have deployed the original data source.
- Pods relying on the PVC that uses the clone data can be scheduled only on the node where you have deployed the original data source.
5.4.10.2. Creating volume clones
To create a clone of a persistent volume claim (PVC), you must create a PersistentVolumeClaim object in the namespace where you created the source PVC.
The cloned PVC has write access.
Prerequisites
- You ensured that the source PVC is in the Bound state. This is required for a consistent clone.
Procedure
- Log in to the OpenShift CLI (oc).
- Create a PersistentVolumeClaim object:
  Example PersistentVolumeClaim object to create a volume clone
  - 1: Set this field to the value of the storageClassName field in the source PVC.
  - 2: Set this field to the value of the volumeMode field in the source PVC.
  - 3: Specify the name of the source PVC.
  - 4: Specify the storage size for the cloned PVC. The storage size of the cloned PVC must be greater than or equal to the storage size of the source PVC.
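The example object is not reproduced here; the following minimal sketch matches the callouts above, with placeholder names and sizes:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: lvm-block-1-clone
spec:
  storageClassName: lvms-vg1          # 1: storage class of the source PVC
  volumeMode: Block                   # 2: volume mode of the source PVC
  accessModes:
    - ReadWriteOnce
  dataSource:
    kind: PersistentVolumeClaim
    name: lvm-block-1                 # 3: name of the source PVC
  resources:
    requests:
      storage: 1Gi                    # 4: greater than or equal to the source PVC size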
Create the PVC in the namespace where you created the source PVC by running the following command:
$ oc create -f <file_name> -n <namespace>
Verification
To verify that the volume clone is created, run the following command:
$ oc get pvc -n <namespace>
Example output
NAME                STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
lvm-block-1-clone   Bound    pvc-e90169a8-fd71-4eea-93b8-817155f60e47   1Gi        RWO            lvms-vg1       5s
5.4.10.3. Deleting volume clones
You can delete volume clones.
When you delete a persistent volume claim (PVC), LVM Storage deletes only the source PVC, not the clones of the PVC.
Prerequisites
- You have access to OpenShift Container Platform as a user with cluster-admin permissions.
Procedure
- Log in to the OpenShift CLI (oc).
- Delete the cloned PVC by running the following command:
  $ oc delete pvc <clone_pvc_name> -n <namespace>
Verification
To verify that the volume clone is deleted, run the following command:
$ oc get pvc -n <namespace>
The deleted volume clone must not be present in the output of this command.
5.4.11. Updating LVM Storage
You can update LVM Storage to ensure compatibility with the OpenShift Container Platform version.
Prerequisites
- You have updated your OpenShift Container Platform cluster.
- You have installed a previous version of LVM Storage.
- You have installed the OpenShift CLI (oc).
- You have access to the cluster using an account with cluster-admin permissions.
Procedure
- Log in to the OpenShift CLI (oc).
- Update the Subscription custom resource (CR) that you created while installing LVM Storage by running the following command:
  $ oc patch subscription lvms-operator -n openshift-storage --type merge --patch '{"spec":{"channel":"<update_channel>"}}'
  Replace <update_channel> with the version of LVM Storage that you want to install. For example, stable-4.17.
View the update events to check that the installation is complete by running the following command:
$ oc get events -n openshift-storage
Verification
Verify the LVM Storage version by running the following command:
$ oc get subscription lvms-operator -n openshift-storage -o jsonpath='{.status.installedCSV}'
Example output
lvms-operator.v4.17
5.4.12. Monitoring LVM Storage
To enable cluster monitoring, you must add the following label in the namespace where you have installed LVM Storage:
openshift.io/cluster-monitoring=true
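For example, assuming LVM Storage is installed in the default openshift-storage namespace, you can apply the label with the following command:

$ oc label namespace openshift-storage openshift.io/cluster-monitoring=true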
For information about enabling cluster monitoring in RHACM, see Observability and Adding custom metrics.
5.4.12.1. Metrics
You can monitor LVM Storage by viewing the metrics.
The following table describes the topolvm metrics:
| Metric | Description |
|---|---|
| | Indicates the percentage of data space used in the LVM thin pool. |
| | Indicates the percentage of metadata space used in the LVM thin pool. |
| | Indicates the size of the LVM thin pool in bytes. |
| | Indicates the available space in the LVM volume group in bytes. |
| | Indicates the size of the LVM volume group in bytes. |
| | Indicates the available over-provisioned size of the LVM thin pool in bytes. |
Metrics are updated every 10 minutes or when there is a change, such as a new logical volume creation, in the thin pool.
5.4.12.2. Alerts
When the thin pool and volume group reach maximum storage capacity, further operations fail. This can lead to data loss.
LVM Storage sends the following alerts when the usage of the thin pool and volume group exceeds a certain value:
| Alert | Description |
|---|---|
| | This alert is triggered when both the volume group and thin pool usage exceeds 75% on nodes. Data deletion or volume group expansion is required. |
| | This alert is triggered when both the volume group and thin pool usage exceeds 85% on nodes. In this case, the volume group is critically full. Data deletion or volume group expansion is required. |
| | This alert is triggered when the thin pool data usage in the volume group exceeds 75% on nodes. Data deletion or thin pool expansion is required. |
| | This alert is triggered when the thin pool data usage in the volume group exceeds 85% on nodes. Data deletion or thin pool expansion is required. |
| | This alert is triggered when the thin pool metadata usage in the volume group exceeds 75% on nodes. Data deletion or thin pool expansion is required. |
| | This alert is triggered when the thin pool metadata usage in the volume group exceeds 85% on nodes. Data deletion or thin pool expansion is required. |
5.4.13. Uninstalling LVM Storage by using the CLI
You can uninstall LVM Storage by using the OpenShift CLI (oc).
Prerequisites
- You have logged in to oc as a user with cluster-admin permissions.
- You deleted the persistent volume claims (PVCs), volume snapshots, and volume clones provisioned by LVM Storage. You have also deleted the applications that are using these resources.
- You deleted the LVMCluster custom resource (CR).
Procedure
Get the currentCSV value for the LVM Storage Operator by running the following command:
$ oc get subscription.operators.coreos.com lvms-operator -n <namespace> -o yaml | grep currentCSV
Example output
currentCSV: lvms-operator.v4.15.3
Delete the subscription by running the following command:
$ oc delete subscription.operators.coreos.com lvms-operator -n <namespace>
Example output
subscription.operators.coreos.com "lvms-operator" deleted
Delete the CSV for the LVM Storage Operator in the target namespace by running the following command:
$ oc delete clusterserviceversion <currentCSV> -n <namespace>
Replace <currentCSV> with the currentCSV value for the LVM Storage Operator.
Example output
clusterserviceversion.operators.coreos.com "lvms-operator.v4.15.3" deleted
Verification
To verify that the LVM Storage Operator is uninstalled, run the following command:
$ oc get csv -n <namespace>
If the LVM Storage Operator was successfully uninstalled, it does not appear in the output of this command.
5.4.14. Uninstalling LVM Storage by using the web console
You can uninstall LVM Storage using the OpenShift Container Platform web console.
Prerequisites
- You have access to OpenShift Container Platform as a user with cluster-admin permissions.
- You have deleted the persistent volume claims (PVCs), volume snapshots, and volume clones provisioned by LVM Storage. You have also deleted the applications that are using these resources.
- You have deleted the LVMCluster custom resource (CR).
Procedure
- Log in to the OpenShift Container Platform web console.
- Click Operators → Installed Operators.
- Click LVM Storage in the openshift-storage namespace.
- Click the Details tab.
- From the Actions menu, select Uninstall Operator.
- Optional: When prompted, select the Delete all operand instances for this operator checkbox to delete the operand instances for LVM Storage.
- Click Uninstall.
5.4.15. Uninstalling LVM Storage installed using RHACM
To uninstall LVM Storage that you installed using RHACM, you must delete the RHACM Policy custom resource (CR) that you created for installing and configuring LVM Storage.
Prerequisites
- You have access to the RHACM cluster as a user with cluster-admin permissions.
- You have deleted the persistent volume claims (PVCs), volume snapshots, and volume clones provisioned by LVM Storage. You have also deleted the applications that are using these resources.
- You have deleted the LVMCluster CR that you created using RHACM.
Procedure
- Log in to the OpenShift CLI (oc).
- Delete the RHACM Policy CR that you created for installing and configuring LVM Storage by running the following command:
  $ oc delete -f <policy> -n <namespace>
  Replace <policy> with the name of the Policy CR YAML file.
Create a Policy CR YAML file with the configuration to uninstall LVM Storage:
Example Policy CR to uninstall LVM Storage
Create the Policy CR by running the following command:
$ oc create -f <policy> -n <namespace>
5.4.16. Downloading log files and diagnostic information using must-gather
When LVM Storage is unable to automatically resolve a problem, use the must-gather tool to collect the log files and diagnostic information so that you or Red Hat Support can review the problem and determine a solution.
Procedure
Run the must-gather command from the client connected to the LVM Storage cluster:
$ oc adm must-gather --image=registry.redhat.io/lvms4/lvms-must-gather-rhel9:v4.17 --dest-dir=<directory_name>
5.4.17. Troubleshooting persistent storage
While configuring persistent storage using Logical Volume Manager (LVM) Storage, you can encounter several issues that require troubleshooting.
5.4.17.1. Investigating a PVC stuck in the Pending state
A persistent volume claim (PVC) can get stuck in the Pending state for the following reasons:
- Insufficient computing resources.
- Network problems.
- Mismatched storage class or node selector.
- No available persistent volumes (PVs).
- The node with the PV is in the Not Ready state.
Prerequisites
- You have installed the OpenShift CLI (oc).
- You have logged in to the OpenShift CLI (oc) as a user with cluster-admin permissions.
Procedure
Retrieve the list of PVCs by running the following command:
$ oc get pvc
Example output
NAME        STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
lvms-test   Pending                                      lvms-vg1       11s
Pendingstate by running the following command:oc describe pvc <pvc_name>
$ oc describe pvc <pvc_name>1 Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- Replace
<pvc_name>with the name of the PVC. For example,lvms-vg1.
Example output
Type Reason Age From Message ---- ------ ---- ---- ------- Warning ProvisioningFailed 4s (x2 over 17s) persistentvolume-controller storageclass.storage.k8s.io "lvms-vg1" not found
Type Reason Age From Message ---- ------ ---- ---- ------- Warning ProvisioningFailed 4s (x2 over 17s) persistentvolume-controller storageclass.storage.k8s.io "lvms-vg1" not foundCopy to Clipboard Copied! Toggle word wrap Toggle overflow
5.4.17.2. Recovering from a missing storage class
If you encounter the storage class not found error, check the LVMCluster custom resource (CR) and ensure that all the Logical Volume Manager (LVM) Storage pods are in the Running state.
Prerequisites
- You have installed the OpenShift CLI (oc).
- You have logged in to the OpenShift CLI (oc) as a user with cluster-admin permissions.
Procedure
Verify that the LVMCluster CR is present by running the following command:
$ oc get lvmcluster -n openshift-storage
Example output
NAME            AGE
my-lvmcluster   65m
If the LVMCluster CR is not present, create an LVMCluster CR. For more information, see "Ways to create an LVMCluster custom resource".
In the openshift-storage namespace, check that all the LVM Storage pods are in the Running state by running the following command:
$ oc get pods -n openshift-storage
Example output
NAME                                  READY   STATUS    RESTARTS   AGE
lvms-operator-7b9fb858cb-6nsml        3/3     Running   0          70m
topolvm-controller-5dd9cf78b5-7wwr2   5/5     Running   0          66m
topolvm-node-dr26h                    4/4     Running   0          66m
vg-manager-r6zdv                      1/1     Running   0          66m
The output of this command must contain a running instance of the following pods:
- lvms-operator
- vg-manager
If the vg-manager pod is stuck while loading a configuration file, it is due to a failure to locate an available disk for LVM Storage to use. To retrieve the necessary information to troubleshoot this issue, review the logs of the vg-manager pod by running the following command:
$ oc logs -l app.kubernetes.io/component=vg-manager -n openshift-storage
5.4.17.3. Recovering from node failure
A persistent volume claim (PVC) can be stuck in the Pending state due to a node failure in the cluster.
To identify the failed node, you can examine the restart count of the topolvm-node pod. An increased restart count indicates potential problems with the underlying node, which might require further investigation and troubleshooting.
Prerequisites
- You have installed the OpenShift CLI (oc).
- You have logged in to the OpenShift CLI (oc) as a user with cluster-admin permissions.
Procedure
Examine the restart count of the topolvm-node pod instances by running the following command:
$ oc get pods -n openshift-storage
Next steps
- If the PVC is stuck in the Pending state even after you have resolved any issues with the node, you must perform a forced clean-up. For more information, see "Performing a forced clean-up".
5.4.17.4. Recovering from disk failure
If you see a failure message while inspecting the events associated with the persistent volume claim (PVC), there can be a problem with the underlying volume or disk.
Disk and volume provisioning issues result in a generic error message, such as Failed to provision volume with storage class <storage_class_name>. The generic error message is followed by a specific volume failure error message.
The following table describes the volume failure error messages:
| Error message | Description |
|---|---|
| | Indicates a problem in verifying whether the volume already exists. Volume verification failure can be caused by network connectivity problems or other failures. |
| | Failure to bind a volume can happen if the persistent volume (PV) that is available does not match the requirements of the PVC. |
| | This error indicates problems when trying to mount the volume to a node. If the disk has failed, this error can appear when a pod tries to use the PVC. |
| | This error indicates problems when trying to unmount a volume from a node. If the disk has failed, this error can appear when a pod tries to use the PVC. |
| | This error can appear with storage solutions that do not support |
Prerequisites
- You have installed the OpenShift CLI (oc).
- You have logged in to the OpenShift CLI (oc) as a user with cluster-admin permissions.
Procedure
Inspect the events associated with a PVC by running the following command:
$ oc describe pvc <pvc_name>
Replace <pvc_name> with the name of the PVC.
- Establish a direct connection to the host where the problem is occurring.
- Resolve the disk issue.
Next steps
- If the volume failure messages persist or recur even after you have resolved the issue with the disk, you must perform a forced clean-up. For more information, see "Performing a forced clean-up".
5.4.17.5. Performing a forced clean-up
If the disk or node-related problems persist even after you have completed the troubleshooting procedures, you must perform a forced clean-up. A forced clean-up is used to address persistent issues and ensure the proper functioning of Logical Volume Manager (LVM) Storage.
Prerequisites
- You have installed the OpenShift CLI (oc).
- You have logged in to the OpenShift CLI (oc) as a user with cluster-admin permissions.
- You have deleted all the persistent volume claims (PVCs) that were created by using LVM Storage.
- You have stopped the pods that are using the PVCs that were created by using LVM Storage.
Procedure
- Switch to the openshift-storage namespace by running the following command:
  $ oc project openshift-storage
- Check if the LogicalVolume custom resources (CRs) are present by running the following command:
  $ oc get logicalvolume
- If the LogicalVolume CRs are present, delete them by running the following command:
  $ oc delete logicalvolume <name>
  Replace <name> with the name of the LogicalVolume CR.
- After deleting the LogicalVolume CRs, remove their finalizers by running the following command:
  $ oc patch logicalvolume <name> -p '{"metadata":{"finalizers":[]}}' --type=merge
  Replace <name> with the name of the LogicalVolume CR.
- Check if the LVMVolumeGroup CRs are present by running the following command:
  $ oc get lvmvolumegroup
- If the LVMVolumeGroup CRs are present, delete them by running the following command:
  $ oc delete lvmvolumegroup <name>
  Replace <name> with the name of the LVMVolumeGroup CR.
- After deleting the LVMVolumeGroup CRs, remove their finalizers by running the following command:
  $ oc patch lvmvolumegroup <name> -p '{"metadata":{"finalizers":[]}}' --type=merge
  Replace <name> with the name of the LVMVolumeGroup CR.
- Delete any LVMVolumeGroupNodeStatus CRs by running the following command:
  $ oc delete lvmvolumegroupnodestatus --all
- Delete the LVMCluster CR by running the following command:
  $ oc delete lvmcluster --all
- After deleting the LVMCluster CR, remove its finalizer by running the following command:
  $ oc patch lvmcluster <name> -p '{"metadata":{"finalizers":[]}}' --type=merge
  Replace <name> with the name of the LVMCluster CR.