Chapter 5. Scaling storage using multiple device classes in the same cluster for local storage deployments
OpenShift Data Foundation supports creating multiple device classes for OSDs in the same cluster. The additional device classes that you create enable you to:
- Use different types of disks on the same nodes
- Use different sizes of disks of the same type on the same node or on different nodes
- Isolate disks of the same type to a different set of nodes
- Use different resources, such as local disks and logical unit numbers (LUNs) from a Storage Area Network (SAN)
To create multiple device classes in the same cluster, you need to perform the following steps:
- Add disks
Attach new disks, on the same nodes or on new nodes, that the new local volume set can uniquely identify.
Note: Before adding disks, make sure to modify the maxSize or DisksFilter parameter of the existing local volume set, localblock, so that it does not consume the newly created PVs. You can inspect the current settings of the existing local volume set as shown after this list.
- Create a new local volume set
- Attach new storage
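Before you modify the existing local volume set, you can review its current device inclusion settings. The following is a minimal sketch that assumes the default local volume set is named localblock and the Local Storage Operator is installed in the openshift-local-storage namespace:

$ oc -n openshift-local-storage get localvolumeset localblock -o yaml

The spec.deviceInclusionSpec section of the output shows which disks the existing local volume set is allowed to claim.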
5.1. Creating a new local volume set
You can use this procedure when you want to use the same type of devices with different sizes.
Prerequisites
- Ensure that you modify the maxSize parameter of the existing local volume set, localblock, so that it does not consume the newly created PVs. For example:
$ oc -n openshift-local-storage patch localvolumesets.local.storage.openshift.io localblock -p '{"spec": {"deviceInclusionSpec": {"maxSize": "120Gi"}}}' --type merge
In this example, the existing local volume set, localblock, which is created during deployment, might not have maxSize set. To make sure that the new local volume set consumes the new disks, which are added with a higher size (130Gi), and does not intersect with the limits of the older local volume set, the maxSize limit is set to 120Gi for the existing localblock. You can confirm the change as shown in the sketch after this list.
- While creating the new local volume set, set a unique filter to identify the disks, such as different nodes, different disk sizes, or different disk types.
- Add new disks. For example, add 3 new SSD/NVMe disks with a size of 130Gi.
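After applying the patch, you can confirm that the size limit is in place. This is a minimal check using the same names as in the example above; the command should return 120Gi:

$ oc -n openshift-local-storage get localvolumeset localblock -o jsonpath='{.spec.deviceInclusionSpec.maxSize}{"\n"}'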
Procedure
- Click Operators → Installed Operators from the OpenShift Web Console.
- From the Project drop-down list, make sure to select the project where the Local Storage Operator is installed.
- Click Local Storage.
- Click the Local Volume Sets tab.
- On the Local Volume Sets page, click Create Local Volume Set.
- Enter a name for the Local Volume Set and the Storage Class.
By default, the local volume set name appears for the storage class name. You can change the name.
- Choose one of the following for Filter Disks By:
Disks on all nodes
Uses the available disks that match the selected filters on all the nodes.
Disks on selected nodes
Uses the available disks that match the selected filters only on the selected nodes.
- From the available list of Disk Type, select SSD/NVMe.
- Expand the Advanced section and set the following options:
- Volume Mode
- Ensure that Block is selected for Volume Mode.
- Device Type
- Select one or more device types from the dropdown list.
- Disk Size
- Set the minimum size and the maximum size of the devices that need to be included.
- Maximum Disks Limit
- This indicates the maximum number of PVs that can be created on a node. If this field is left empty, then PVs are created for all the available disks on the matching nodes.
- Click Create.
- Wait for the newly created PVs in the new local volume set to be available.
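If you prefer to define the local volume set from the command line instead of the web console, the same configuration can be expressed as a LocalVolumeSet resource. The following is a minimal sketch, not an exact equivalent of the console form; the name localvolume2 and the 125Gi-135Gi size window are assumptions chosen to match only the new 130Gi disks, and the field names should be verified against the LocalVolumeSet CRD shipped with your Local Storage Operator version:

$ oc apply -f - <<EOF
apiVersion: local.storage.openshift.io/v1alpha1
kind: LocalVolumeSet
metadata:
  name: localvolume2
  namespace: openshift-local-storage
spec:
  storageClassName: localvolume2      # storage class created for the new PVs
  volumeMode: Block
  deviceInclusionSpec:
    deviceTypes:
      - disk
    deviceMechanicalProperties:
      - NonRotational                 # match SSD/NVMe devices
    minSize: 125Gi                    # assumed window that matches only the new 130Gi disks
    maxSize: 135Gi
EOF

You can also add a spec.nodeSelector to restrict the local volume set to specific nodes, which corresponds to the Disks on selected nodes option in the console.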
Verification steps
Verify that the local volume set is created:
$ oc get localvolumeset -n openshift-local-storage
NAME           AGE
localblock     16h
localvolume2   43m
Verify the local storage class:
$ oc get sc
NAME                          PROVISIONER                          RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
localblock                    kubernetes.io/no-provisioner         Delete          WaitForFirstConsumer   false                  15h
localvolume2                  kubernetes.io/no-provisioner         Delete          WaitForFirstConsumer   false                  27m
ocs-storagecluster-ceph-rbd   openshift-storage.rbd.csi.ceph.com   Delete          Immediate              true                   15h
[...]
Verify that the PVs are available and that they use the new storage class, localvolume2.
For example:
$ oc get pv | grep localvolume2
local-pv-14c0b1d    130Gi   RWO   Delete   Available   localvolume2   <unset>   8m55s
local-pv-41d0d077   130Gi   RWO   Delete   Available   localvolume2   <unset>   7m24s
local-pv-6c57a345   130Gi   RWO   Delete   Available   localvolume2   <unset>   5m4s
5.2. Attaching storage for a new device set
Procedure
- In the OpenShift Web Console, navigate to Storage → Data Foundation → Storage Systems tab.
- Click the Action menu next to the required Storage System and select Attach Storage.
- Select the newly created local storage class from the LSO StorageClass.
- Select Enable encryption on device set to enable encryption.
Note: Enabling or disabling OSD encryption using this option overrides the cluster-wide encryption setting for these new OSD storage devices.
- Select the Volume type.
- Enter a name for the pool.
- Select the Data protection policy.
It is recommended to choose 3-way Replication.
- Select the Compression option to optimize storage efficiency by enabling data compression within replicas.
- Select the Reclaim Policy.
- Select the Volume binding Mode.
- Enter a name in the New User Storage Class name field.
- Select the Enable Encryption option to generate an encryption key for each persistent volume that is created using the storage class.
- Click Attach Storage.
Verification steps
Verify that the PVs and PVCs are in the Bound state.
For example:
$ oc get pv | grep localvolume2
local-pv-14c0b1d    130Gi   RWO   Delete   Bound   openshift-storage/localvolume2-0-data-0kp29f   localvolume2   <unset>   31m
local-pv-41d0d077   130Gi   RWO   Delete   Bound   openshift-storage/localvolume2-0-data-2vwk54   localvolume2   <unset>   30m
local-pv-6c57a345   130Gi   RWO   Delete   Bound   openshift-storage/localvolume2-0-data-1255ts   localvolume2   <unset>   28m

$ oc -n openshift-storage get pvc | grep localvolume2
localvolume2-0-data-0kp29f   Bound   local-pv-14c0b1d    130Gi   RWO   localvolume2   <unset>   19m
localvolume2-0-data-1255ts   Bound   local-pv-6c57a345   130Gi   RWO   localvolume2   <unset>   19m
localvolume2-0-data-2vwk54   Bound   local-pv-41d0d077   130Gi   RWO   localvolume2   <unset>   19m
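Optionally, confirm that a new device set referencing the new local storage class was added to the StorageCluster resource. This is a sketch; the jsonpath below follows the StorageCluster API (spec.storageDeviceSets), and the device set names generated by the console may differ in your cluster:

$ oc -n openshift-storage get storagecluster ocs-storagecluster -o jsonpath='{range .spec.storageDeviceSets[*]}{.name}{" -> "}{.dataPVCTemplate.spec.storageClassName}{"\n"}{end}'

The output should include an entry whose data PVC template points at the new local storage class, localvolume2.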
Verify that the new OSDs are created successfully and all the OSDs are running.
For example:
$ oc -n openshift-storage get pods -l app=rook-ceph-osd
NAME                               READY   STATUS    RESTARTS   AGE
rook-ceph-osd-0-7899b89478-bmg2l   2/2     Running   2          16h
rook-ceph-osd-1-6d7df8dbfc-z29jx   2/2     Running   2          16h
rook-ceph-osd-2-66b9dc8cd7-vd77d   2/2     Running   2          16h
rook-ceph-osd-3-79b59f44d7-cb9m7   2/2     Running   0          15m
rook-ceph-osd-4-766bcd8646-z4zvz   2/2     Running   0          15m
rook-ceph-osd-5-845554d78-qqcm4    2/2     Running   0          15m
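To see which nodes the new OSD pods are scheduled on, you can run the same command with the -o wide option; the three new OSDs should be spread across the nodes that provide the new disks:

$ oc -n openshift-storage get pods -l app=rook-ceph-osd -o wide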
Verify that the storagecluster is in the Ready state and the Ceph cluster health is OK.
For example:
$ oc -n openshift-storage get storagecluster
NAME                 AGE    PHASE   EXTERNAL   CREATED AT             VERSION
ocs-storagecluster   2d2h   Ready              2025-02-11T12:03:37Z   4.18.0

$ oc -n openshift-storage get cephcluster
NAME                             DATADIRHOSTPATH   MONCOUNT   AGE    PHASE   MESSAGE                        HEALTH      EXTERNAL   FSID
ocs-storagecluster-cephcluster   /var/lib/rook     3          2d2h   Ready   Cluster created successfully   HEALTH_OK              8ec4e898-5386-42c7-b01b-169fe8f08ba4
In the Ceph cluster, check the Ceph OSD tree to see if the new device classes are spread correctly and the OSDs are up.
For example:
$ oc rsh -n openshift-storage $(oc get pods -o wide -n openshift-storage | grep tool | awk '{print $1}') ceph osd tree
ID  CLASS          WEIGHT   TYPE NAME           STATUS  REWEIGHT  PRI-AFF
-1                 0.67406  root default
-7                 0.22469      host compute-0
 3  localvolume2   0.12700          osd.3           up   1.00000  1.00000
 0  ssd            0.09769          osd.0           up   1.00000  1.00000
-3                 0.22469      host compute-1
 4  localvolume2   0.12700          osd.4           up   1.00000  1.00000
 2  ssd            0.09769          osd.2           up   1.00000  1.00000
-5                 0.22469      host compute-2
 5  localvolume2   0.12700          osd.5           up   1.00000  1.00000
 1  ssd            0.09769          osd.1           up   1.00000  1.00000
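You can also list the device classes known to the cluster from the same toolbox pod. Based on the example above, the output should include both ssd and the new localvolume2 class:

$ oc rsh -n openshift-storage $(oc get pods -o wide -n openshift-storage | grep tool | awk '{print $1}') ceph osd crush class ls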
Verify that the user storageclass is created.
For example:
$ oc get sc
NAME                          PROVISIONER                             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
localblock                    kubernetes.io/no-provisioner            Delete          WaitForFirstConsumer   false                  16h
localvolume2                  kubernetes.io/no-provisioner            Delete          WaitForFirstConsumer   false                  63m
ocs-storagecluster-ceph-rbd   openshift-storage.rbd.csi.ceph.com      Delete          Immediate              true                   16h
ocs-storagecluster-ceph-rgw   openshift-storage.ceph.rook.io/bucket   Delete          Immediate              false                  16h
ocs-storagecluster-cephfs     openshift-storage.cephfs.csi.ceph.com   Delete          Immediate              true                   16h
openshift-storage.noobaa.io   openshift-storage.noobaa.io/obc         Delete          Immediate              false                  16h
ssd2                          openshift-storage.rbd.csi.ceph.com      Delete          WaitForFirstConsumer   true                   16m
thin-csi                      csi.vsphere.vmware.com                  Delete          WaitForFirstConsumer   true                   17h
thin-csi-odf                  csi.vsphere.vmware.com                  Delete          WaitForFirstConsumer   true                   16h
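To consume the new device class, create PVCs that request the new user storage class. The following is a minimal sketch; the storage class name ssd2 is taken from the example output above, while the claim name, namespace, and size are placeholders:

$ oc apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc-ssd2        # placeholder name
  namespace: default         # placeholder namespace
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi          # placeholder size
  storageClassName: ssd2     # new user storage class backed by the new device class
EOF

Because ssd2 uses the WaitForFirstConsumer volume binding mode, the PVC remains Pending until a pod that uses it is scheduled.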