Chapter 2. Storage classes
The OpenShift Data Foundation operator installs a default storage class depending on the platform in use. This default storage class is owned and controlled by the operator and it cannot be deleted or modified. However, you can create custom storage classes to use other storage resources or to offer a different behavior to applications.
Custom storage classes are not supported for external mode OpenShift Data Foundation clusters.
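To see which storage classes the operator has already installed on your cluster before creating a custom one, you can list them and filter on the OpenShift Data Foundation provisioners (the grep pattern is only a convenience and assumes the default internal-mode provisioner names):
$ oc get storageclass | grep openshift-storage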
2.1. Creating storage classes and pools
You can create a storage class using an existing pool or you can create a new pool for the storage class while creating it.
Prerequisites
- Ensure that you are logged in to the OpenShift Container Platform web console and that the OpenShift Data Foundation cluster is in Ready state.
Procedure
- Click Storage → StorageClasses.
- Click Create Storage Class.
- Enter the storage class Name and Description.
Reclaim Policy is set to Delete as the default option. Use this setting. If you change the reclaim policy to Retain in the storage class, the persistent volume (PV) remains in Released state even after the persistent volume claim (PVC) is deleted.
Volume binding mode is set to WaitForFirstConsumer as the default option. If you choose the Immediate option, the PV is created immediately when the PVC is created.
- Select RBD or CephFS Provisioner as the plugin for provisioning the persistent volumes.
- Choose a Storage system for your workloads.
- Select an existing Storage Pool from the list or create a new pool.
Note: The 2-way replication data protection policy is supported only for the non-default RBD pool; 2-way replication can be used by creating an additional pool. To learn about data availability and integrity considerations for replica 2 pools, see the Knowledgebase Customer Solution Article.
- Create new pool
- Click Create New Pool.
- Enter Pool name.
- Choose 2-way-Replication or 3-way-Replication as the Data Protection Policy.
- Select Enable compression if you need to compress the data.
Note: Enabling compression can impact application performance and might prove ineffective when the data to be written is already compressed or encrypted. Data written before enabling compression is not compressed.
- Click Create to create the new storage pool.
- Click Finish after the pool is created.
- Optional: Select Enable Encryption checkbox.
- Click Create to create the storage class.
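The console steps above produce a storage class that references the pool you selected. For reference, the following is a minimal sketch of an equivalent RBD storage class created from the command line; the class name and pool name are illustrative, and the secret names are the usual internal-mode defaults, so verify them against the operator-created storage classes on your cluster:
$ cat <<EOF | oc apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: custom-ceph-rbd                  # illustrative name
provisioner: openshift-storage.rbd.csi.ceph.com
parameters:
  clusterID: openshift-storage
  pool: my-custom-pool                   # assumed pool created in the procedure above
  imageFormat: "2"
  imageFeatures: layering
  csi.storage.k8s.io/fstype: ext4
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: openshift-storage
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node
  csi.storage.k8s.io/node-stage-secret-namespace: openshift-storage
  csi.storage.k8s.io/controller-expand-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/controller-expand-secret-namespace: openshift-storage
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
EOF
You can confirm the exact parameter values for your cluster with oc get storageclass ocs-storagecluster-ceph-rbd -o yaml and adapt only the class name and pool name.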
2.2. Storage class with single replica
You can create a storage class with a single replica to be used by your applications. This avoids redundant data copies and allows resiliency management on the application level.
Enabling this feature creates a single replica pool without data replication, which increases the risk of data loss, data corruption, and potential system instability if your application does not provide its own replication. If any OSD is lost, recovery requires very disruptive steps; all applications using the pool can lose their data and must be recreated after the failed OSD is replaced.
Procedure
Enable the single replica feature using the following command:
$ oc patch storagecluster ocs-storagecluster -n openshift-storage --type json --patch '[{ "op": "replace", "path": "/spec/managedResources/cephNonResilientPools/enable", "value": true }]'
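To confirm that the flag was applied before moving on, you can read the same field back (a quick check using the field path from the patch command above); the command should print true:
$ oc get storagecluster ocs-storagecluster -n openshift-storage -o jsonpath='{.spec.managedResources.cephNonResilientPools.enable}'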
Verify that the storagecluster is in Ready state:
$ oc get storagecluster
Example output:
NAME                 AGE   PHASE   EXTERNAL   CREATED AT             VERSION
ocs-storagecluster   10m   Ready              2024-02-05T13:56:15Z   4.17.0
New cephblockpools are created for each failure domain. Verify that the cephblockpools are in Ready state:
$ oc get cephblockpools
Example output:
NAME                                          PHASE
ocs-storagecluster-cephblockpool              Ready
ocs-storagecluster-cephblockpool-us-east-1a   Ready
ocs-storagecluster-cephblockpool-us-east-1b   Ready
ocs-storagecluster-cephblockpool-us-east-1c   Ready
Verify new storage classes have been created:
$ oc get storageclass
Example output:
NAME                                        PROVISIONER                             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
gp2 (default)                               kubernetes.io/aws-ebs                   Delete          WaitForFirstConsumer   true                   104m
gp2-csi                                     ebs.csi.aws.com                         Delete          WaitForFirstConsumer   true                   104m
gp3-csi                                     ebs.csi.aws.com                         Delete          WaitForFirstConsumer   true                   104m
ocs-storagecluster-ceph-non-resilient-rbd   openshift-storage.rbd.csi.ceph.com      Delete          WaitForFirstConsumer   true                   46m
ocs-storagecluster-ceph-rbd                 openshift-storage.rbd.csi.ceph.com      Delete          Immediate              true                   52m
ocs-storagecluster-cephfs                   openshift-storage.cephfs.csi.ceph.com   Delete          Immediate              true                   52m
openshift-storage.noobaa.io                 openshift-storage.noobaa.io/obc         Delete          Immediate              false                  50m
New OSD pods are created: 3 osd-prepare pods and 3 additional OSD pods. Verify that the new OSD pods are in Running state:
$ oc get pods | grep osd
Example output:
rook-ceph-osd-0-6dc76777bc-snhnm                              2/2   Running     0   9m50s
rook-ceph-osd-1-768bdfdc4-h5n7k                               2/2   Running     0   9m48s
rook-ceph-osd-2-69878645c4-bkdlq                              2/2   Running     0   9m37s
rook-ceph-osd-3-64c44d7d76-zfxq9                              2/2   Running     0   5m23s
rook-ceph-osd-4-654445b78f-nsgjb                              2/2   Running     0   5m23s
rook-ceph-osd-5-5775949f57-vz6jp                              2/2   Running     0   5m22s
rook-ceph-osd-prepare-ocs-deviceset-gp2-0-data-0x6t87-59swf   0/1   Completed   0   10m
rook-ceph-osd-prepare-ocs-deviceset-gp2-1-data-0klwr7-bk45t   0/1   Completed   0   10m
rook-ceph-osd-prepare-ocs-deviceset-gp2-2-data-0mk2cz-jx7zv   0/1   Completed   0   10m
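Applications consume the single replica storage class through an ordinary PVC. A minimal sketch, assuming an application namespace of my-app and an illustrative claim name and size:
$ cat <<EOF | oc apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: single-replica-claim            # illustrative name
  namespace: my-app                     # assumed application namespace
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ocs-storagecluster-ceph-non-resilient-rbd
  resources:
    requests:
      storage: 10Gi                     # illustrative size
EOF
Because the storage class uses the WaitForFirstConsumer volume binding mode, the PV is not provisioned until a pod that mounts the claim is scheduled, which pins the volume to that pod's failure domain.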
2.2.1. Recovering after OSD lost from single replica
When using replica 1 (a storage class with a single replica), data loss is guaranteed when an OSD is lost. Lost OSDs go into a failing state. Use the following steps to recover after an OSD is lost.
Procedure
Follow these recovery steps to get your applications running again after data loss from replica 1. You first need to identify the failure domain where the failing OSD is located.
If you know which failure domain the failing OSD is in, run the following command to get the exact replica1-pool-name required for the next steps. If you do not know where the failing OSD is, skip to step 2.
$ oc get cephblockpools
Example output:
NAME                                          PHASE
ocs-storagecluster-cephblockpool              Ready
ocs-storagecluster-cephblockpool-us-south-1   Ready
ocs-storagecluster-cephblockpool-us-south-2   Ready
ocs-storagecluster-cephblockpool-us-south-3   Ready
Copy the corresponding failure domain name for use in the next steps, then skip to step 4.
Find the OSD pod that is in Error or CrashLoopBackOff state to identify the failing OSD:
$ oc get pods -n openshift-storage -l app=rook-ceph-osd | grep 'CrashLoopBackOff\|Error'
Identify the replica-1 pool that had the failed OSD.
Identify the node where the failed OSD was running:
$ failed_osd_id=0  # replace with the ID of the failed OSD
Identify the failureDomainLabel for the node where the failed OSD was running:
$ failure_domain_label=$(oc get storageclass ocs-storagecluster-ceph-non-resilient-rbd -o yaml | grep domainLabel | head -1 | awk -F':' '{print $2}')
$ failure_domain_value=$(oc get pods -n openshift-storage -l ceph-osd-id=$failed_osd_id -o yaml | grep topology-location-zone | awk '{print $2}')
The replica-1 pool whose OSD is failing is then, for example:
replica1-pool-name="ocs-storagecluster-cephblockpool-$failure_domain_value"
where $failure_domain_value is the failure domain name.
Delete the replica-1 pool.
Connect to the toolbox pod:
$ toolbox=$(oc get pod -l app=rook-ceph-tools -n openshift-storage -o jsonpath='{.items[*].metadata.name}')
$ oc rsh -n openshift-storage $toolbox
Delete the replica-1 pool. Note that you have to enter the replica-1 pool name twice in the command, for example:
ceph osd pool rm <replica1-pool-name> <replica1-pool-name> --yes-i-really-really-mean-it
Replace <replica1-pool-name> with the replica-1 pool name identified earlier, for example ocs-storagecluster-cephblockpool-us-south-1.
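If you want to double-check the pool name before running the rm command, listing the pools from the same toolbox session is a safe, read-only check:
ceph osd pool ls | grep cephblockpool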
- Purge the failing OSD by following the steps in the section "Replacing operational or failed storage devices" for your platform in the Replacing devices guide.
Restart the rook-ceph operator:
$ oc delete pod -l app=rook-ceph-operator -n openshift-storage
- Recreate any affected applications in that availability zone to start using the new pool with the same name.
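After the operator restarts, the replica-1 pool is recreated with the same name. As an optional check before redeploying the applications (the pool name placeholder follows the earlier examples), confirm that the pool reports Ready in the PHASE column:
$ oc get cephblockpools | grep <replica1-pool-name>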