Chapter 2. Storage classes


The OpenShift Data Foundation operator installs a default storage class depending on the platform in use. This default storage class is owned and controlled by the operator and it cannot be deleted or modified. However, you can create custom storage classes to use other storage resources or to offer different behavior to applications.

Note

Custom storage classes are not supported for external mode OpenShift Data Foundation clusters.

2.1. Creating storage classes and pools

You can create a storage class that uses an existing pool, or you can create a new pool for the storage class as part of the same procedure.

Prerequisites

  • Ensure that you are logged in to the OpenShift Container Platform web console and that the OpenShift Data Foundation cluster is in the Ready state.
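
    You can also confirm the cluster state from the CLI, for example:

    $ oc get storagecluster -n openshift-storage

    The PHASE column of the output must report Ready.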

Procedure

  1. Click Storage → StorageClasses.
  2. Click Create Storage Class.
  3. Enter the storage class Name and Description.
  4. Reclaim Policy is set to Delete as the default option. Use this setting.

    If you change the reclaim policy to Retain in the storage class, the persistent volume (PV) remains in Released state even after deleting the persistent volume claim (PVC).

  5. Volume binding mode is set to WaitForFirstConsumer as the default option.

    If you choose the Immediate option, the PV is created immediately when the PVC is created.

  6. Select RBD or CephFS Provisioner as the plugin for provisioning the persistent volumes.
  7. Choose a Storage system for your workloads.
  8. Select an existing Storage Pool from the list or create a new pool.

    Note

    The 2-way replication data protection policy is only supported for the non-default RBD pool. To use 2-way replication, create an additional pool. For information about data availability and integrity considerations for replica 2 pools, see the Knowledgebase Customer Solution Article.

    Create new pool
    1. Click Create New Pool.
    2. Enter Pool name.
    3. Choose 2-way-Replication or 3-way-Replication as the Data Protection Policy.
    4. Select Enable compression if you need to compress the data.

      Enabling compression can impact application performance and might prove ineffective when data to be written is already compressed or encrypted. Data written before enabling compression will not be compressed.

    5. Click Create to create the new storage pool.
    6. Click Finish after the pool is created.
  9. Optional: Select the Enable Encryption checkbox.
  10. Click Create to create the storage class.
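
For reference, the web console creates a standard Kubernetes StorageClass that uses the Ceph CSI provisioner. The following is a minimal, illustrative sketch of an equivalent RBD storage class created from the CLI. The storage class name and pool name are placeholders, and the secret names are the operator defaults in the openshift-storage namespace; compare the parameters against an existing storage class, for example with oc get storageclass ocs-storagecluster-ceph-rbd -o yaml, before using this sketch.

    $ cat <<EOF | oc apply -f -
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: custom-ceph-rbd                  # placeholder name
    provisioner: openshift-storage.rbd.csi.ceph.com
    reclaimPolicy: Delete
    volumeBindingMode: WaitForFirstConsumer
    allowVolumeExpansion: true
    parameters:
      clusterID: openshift-storage
      pool: my-custom-pool                   # placeholder; use the name of your storage pool
      imageFormat: "2"
      imageFeatures: layering
      csi.storage.k8s.io/fstype: ext4
      csi.storage.k8s.io/provisioner-secret-name: rook-csi-rbd-provisioner
      csi.storage.k8s.io/provisioner-secret-namespace: openshift-storage
      csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node
      csi.storage.k8s.io/node-stage-secret-namespace: openshift-storage
      csi.storage.k8s.io/controller-expand-secret-name: rook-csi-rbd-provisioner
      csi.storage.k8s.io/controller-expand-secret-namespace: openshift-storage
    EOF

The reclaimPolicy and volumeBindingMode fields correspond to the Reclaim Policy and Volume binding mode options in the web console.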

2.2. Storage class with single replica

You can create a storage class with a single replica to be used by your applications. This avoids redundant data copies and allows resiliency management at the application level.

Warning

Enabling this feature creates a single replica pool without data replication, increasing the risk of data loss, data corruption, and potential system instability if your application does not have its own replication. If any OSDs are lost, recovery requires very disruptive steps, and all applications can lose their data and must be recreated.

Procedure

  1. Enable the single replica feature using the following command:

    $ oc patch storagecluster ocs-storagecluster -n openshift-storage --type json --patch '[{ "op": "replace", "path": "/spec/managedResources/cephNonResilientPools/enable", "value": true }]'
  2. Verify that the storagecluster is in the Ready state:

    $ oc get storagecluster

    Example output:

    NAME                 AGE   PHASE   EXTERNAL   CREATED AT             VERSION
    ocs-storagecluster   10m   Ready              2024-02-05T13:56:15Z   4.17.0
  3. A new cephblockpool is created for each failure domain. Verify that the cephblockpools are in the Ready state:

    $ oc get cephblockpools

    Example output:

    NAME                                          PHASE
    ocs-storagecluster-cephblockpool              Ready
    ocs-storagecluster-cephblockpool-us-east-1a   Ready
    ocs-storagecluster-cephblockpool-us-east-1b   Ready
    ocs-storagecluster-cephblockpool-us-east-1c   Ready
  4. Verify that the new ocs-storagecluster-ceph-non-resilient-rbd storage class has been created:

    $ oc get storageclass

    Example output:

    NAME                                        PROVISIONER                             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
    gp2 (default)                               kubernetes.io/aws-ebs                   Delete          WaitForFirstConsumer   true                   104m
    gp2-csi                                     ebs.csi.aws.com                         Delete          WaitForFirstConsumer   true                   104m
    gp3-csi                                     ebs.csi.aws.com                         Delete          WaitForFirstConsumer   true                   104m
    ocs-storagecluster-ceph-non-resilient-rbd   openshift-storage.rbd.csi.ceph.com      Delete          WaitForFirstConsumer   true                   46m
    ocs-storagecluster-ceph-rbd                 openshift-storage.rbd.csi.ceph.com      Delete          Immediate              true                   52m
    ocs-storagecluster-cephfs                   openshift-storage.cephfs.csi.ceph.com   Delete          Immediate              true                   52m
    openshift-storage.noobaa.io                 openshift-storage.noobaa.io/obc         Delete          Immediate              false                  50m
  5. New OSD pods are created: three osd-prepare pods and three additional OSD pods. Verify that the new OSD pods are in the Running state:

    $ oc get pods | grep osd

    Example output:

    rook-ceph-osd-0-6dc76777bc-snhnm                                  2/2     Running     0               9m50s
    rook-ceph-osd-1-768bdfdc4-h5n7k                                   2/2     Running     0               9m48s
    rook-ceph-osd-2-69878645c4-bkdlq                                  2/2     Running     0               9m37s
    rook-ceph-osd-3-64c44d7d76-zfxq9                                  2/2     Running     0               5m23s
    rook-ceph-osd-4-654445b78f-nsgjb                                  2/2     Running     0               5m23s
    rook-ceph-osd-5-5775949f57-vz6jp                                  2/2     Running     0               5m22s
    rook-ceph-osd-prepare-ocs-deviceset-gp2-0-data-0x6t87-59swf       0/1     Completed   0               10m
    rook-ceph-osd-prepare-ocs-deviceset-gp2-1-data-0klwr7-bk45t       0/1     Completed   0               10m
    rook-ceph-osd-prepare-ocs-deviceset-gp2-2-data-0mk2cz-jx7zv       0/1     Completed   0               10m
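
Applications consume the single replica pool through the new ocs-storagecluster-ceph-non-resilient-rbd storage class in an ordinary persistent volume claim (PVC). The following is a minimal, illustrative sketch; the PVC name, namespace, and requested size are placeholders:

    $ cat <<EOF | oc apply -f -
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: single-replica-pvc        # placeholder name
      namespace: my-app               # placeholder namespace
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi               # placeholder size
      storageClassName: ocs-storagecluster-ceph-non-resilient-rbd
    EOF

Because the storage class uses the WaitForFirstConsumer volume binding mode, the PV is not provisioned until a pod that uses the PVC is scheduled.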

2.2.1. Recovering after OSD lost from single replica

When you use replica 1, that is, a storage class with a single replica, data loss is guaranteed when an OSD is lost. A lost OSD goes into a failing state. Use the following steps to recover after an OSD is lost.

Procedure

Follow these recovery steps to get your applications running again after data loss from a replica-1 pool. First, identify the failure domain where the failing OSD is located.

  1. If you know which failure domain the failing OSD is in, run the following command to get the exact replica1-pool-name required for the next steps. If you do not know where the failing OSD is, skip to step 2.

    $ oc get cephblockpools

    Example output:

    NAME                                          PHASE
    ocs-storagecluster-cephblockpool              Ready
    ocs-storagecluster-cephblockpool-us-south-1   Ready
    ocs-storagecluster-cephblockpool-us-south-2   Ready
    ocs-storagecluster-cephblockpool-us-south-3   Ready

    Copy the corresponding failure domain name for use in the next steps, then skip to step 4.

  2. To find the failing OSD, find the OSD pod that is in the Error or CrashLoopBackOff state:

    $ oc get pods -n openshift-storage -l app=rook-ceph-osd  | grep 'CrashLoopBackOff\|Error'
  3. Identify the replica-1 pool that had the failed OSD.

    1. Set the ID of the failed OSD in a variable:

      failed_osd_id=0 #replace 0 with the ID of the failed OSD
    2. Identify the failureDomainLabel for the node where the failed OSD was running:

      failure_domain_label=$(oc get storageclass ocs-storagecluster-ceph-non-resilient-rbd -o yaml | grep domainLabel | head -1 | awk -F':' '{print $2}')
      failure_domain_value="$(oc get pods -n openshift-storage -l ceph-osd-id=$failed_osd_id -o yaml | grep topology-location-zone | head -1 | awk '{print $2}')"

      The replica-1 pool whose OSD is failing is named after the failure domain, for example:

      replica1-pool-name="ocs-storagecluster-cephblockpool-$failure_domain_value"

      where $failure_domain_value is the failure domain name.

  4. Delete the replica-1 pool.

    1. Connect to the toolbox pod:

      toolbox=$(oc get pod -l app=rook-ceph-tools -n openshift-storage -o jsonpath='{.items[*].metadata.name}')
      
      oc rsh $toolbox -n openshift-storage
    2. Delete the replica-1 pool. Note that you have to enter the replica-1 pool name twice in the command, for example:

      ceph osd pool rm <replica1-pool-name> <replica1-pool-name> --yes-i-really-really-mean-it

      Replace <replica1-pool-name> with the name of the replica-1 pool identified earlier.

  5. Purge the failing OSD by following the steps for your platform in the "Replacing operational or failed storage devices" section of the Replacing devices guide.
  6. Restart the rook-ceph operator:

    $ oc delete pod -l app=rook-ceph-operator -n openshift-storage
  7. Recreate any affected applications in that availability zone so that they start using the new pool, which has the same name. You can then verify the recovery as shown below.
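
    After the rook-ceph operator restarts, you can confirm that the recreated replica-1 pool and the replacement OSD are healthy by rerunning the earlier verification commands, for example:

    $ oc get cephblockpools

    $ oc get pods -n openshift-storage -l app=rook-ceph-osd

    The recreated pool must report the Ready phase and all OSD pods must be in the Running state.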