Chapter 3. Scaling storage capacity of an AWS OpenShift Data Foundation cluster


To increase the storage capacity in a dynamically created storage cluster on a user-provisioned infrastructure, you can add storage capacity and performance to your configured Red Hat OpenShift Data Foundation worker nodes.

3.1. Scaling up storage capacity on a cluster

You can scale up the storage capacity of an AWS Red Hat OpenShift Data Foundation cluster in two ways:

  • By adding new OSDs, as described in Scaling up storage capacity by adding new OSDs.
  • By resizing existing OSDs, as described in Scaling up storage capacity by resizing existing OSDs.

3.1.1. Scaling up storage capacity by adding new OSDs

To scale the storage capacity of your configured Red Hat OpenShift Data Foundation worker nodes, you can increase the capacity by adding three disks at a time. Three disks are needed because OpenShift Data Foundation uses a replica count of 3 to maintain high availability, so the amount of raw storage consumed is three times the usable space. For example, adding three 2 TiB disks adds 6 TiB of raw capacity but only 2 TiB of usable capacity.

Note

Usable space might vary when encryption is enabled or replica 2 pools are being used.

Prerequisites

  • You have administrative privilege to the OpenShift Container Platform Console.
  • You have a running OpenShift Data Foundation Storage Cluster.
  • The disks must be of the same size and type as those used during the initial deployment.

Procedure

  1. Log in to the OpenShift Web Console.
  2. Click Operators → Installed Operators.
  3. Click OpenShift Data Foundation Operator.
  4. Click the Storage Systems tab.

    1. Click the Action Menu (⋮) on the far right of the storage system name to extend the options menu.
    2. Select Add Capacity from the options menu.
    3. Select the Storage Class. Choose the storage class that you want to use to provision the new storage devices.
    4. Click Add.
  5. To check the status, navigate to Storage → Data Foundation and verify that the Storage System in the Status card has a green tick.
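
    You can also confirm the status from the command line. This is a minimal sketch; it assumes the default storage cluster name ocs-storagecluster, which also appears in the resizing procedure later in this chapter, and checks that the cluster reports the Ready phase:

      $ oc get storagecluster ocs-storagecluster -n openshift-storage -o jsonpath='{.status.phase}'

      Example output:

      Ready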

Verification steps

  • Verify the Raw Capacity card.

    1. In the OpenShift Web Console, click Storage → Data Foundation.
    2. In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears.
    3. In the Block and File tab, check the Raw Capacity card.

      Note that the capacity increases based on your selections.

      Note

      The raw capacity does not take replication into account and shows the full capacity.
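
    Alternatively, you can query the raw capacity from the command line with the ceph df command. This is a sketch; it assumes the rook-ceph-tools (toolbox) pod has been enabled in the openshift-storage namespace, which is not the case by default:

      $ oc -n openshift-storage rsh $(oc -n openshift-storage get pod -l app=rook-ceph-tools -o name) ceph df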

  • Verify that the new object storage devices (OSDs) and their corresponding new Persistent Volume Claims (PVCs) are created.

    • To view the state of the newly created OSDs:

      1. Click Workloads → Pods from the OpenShift Web Console.
      2. Select openshift-storage from the Project drop-down list.

        Note

        If the Show default projects option is disabled, use the toggle button to list all the default projects.
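
      Alternatively, you can list the OSD pods directly from the command line. The app=rook-ceph-osd label is the standard label that Rook applies to OSD pods (a convenience sketch, not part of the official procedure):

        $ oc get pods -n openshift-storage -l app=rook-ceph-osd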

    • To view the state of the PVCs:

      1. Click Storage → Persistent Volume Claims from the OpenShift Web Console.
      2. Select openshift-storage from the Project drop-down list.

        Note

        If the Show default projects option is disabled, use the toggle button to list all the default projects.
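
      From the command line, you can list the same PVCs using the device set label, which is also used in the resizing procedure later in this chapter:

        $ oc get pvc -n openshift-storage -l ceph.rook.io/DeviceSet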

  • Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted.

    1. Identify the nodes where the new OSD pods are running.

      $ oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/<OSD-pod-name>
      <OSD-pod-name>

      Is the name of the OSD pod.

      For example:

      $ oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/rook-ceph-osd-0-544db49d7f-qrgqm

      Example output:

      NODE
      compute-1
    2. For each of the nodes identified in the previous step, do the following:

      1. Create a debug pod and open a chroot environment for the selected host.

        $ oc debug node/<node-name>
        <node-name>

        Is the name of the node.

        $ chroot /host
      2. Check for the crypt keyword beside the ocs-deviceset names.

        $ lsblk
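
        An encrypted OSD device is listed with crypt in the TYPE column. Illustrative output (device names and sizes vary):

        nvme1n1                                        259:0    0  512G  0 disk
        └─ocs-deviceset-0-data-0-xxxxx-block-dmcrypt   253:0    0  512G  0 crypt
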
Important

Cluster reduction is supported only with the Red Hat Support Team’s assistance.

3.1.2. Scaling up storage capacity by resizing existing OSDs

To increase the storage capacity on a cluster, you can resize the existing OSDs instead of adding new ones.

Important

Due to AWS limitations, existing OSDs can be resized only once every 6 hours. If you attempt another modification within the 6 hour time frame, you receive a warning that includes the note You’ve reached the maximum modification rate per volume limit. Wait at least 6 hours between modifications per EBS volume.
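
If you are unsure whether an EBS volume is still inside the 6 hour window, you can inspect its most recent modification with the AWS CLI. This is a sketch; <volume-id> is the EBS volume that backs the OSD PVC:

  $ aws ec2 describe-volumes-modifications --volume-ids <volume-id>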

Prerequisites

  • You have administrative privilege to the OpenShift Container Platform Console.
  • You have a running OpenShift Data Foundation Storage Cluster.

Procedure

  1. Update the dataPVCTemplate size for the storageDeviceSets to the desired new size using the oc patch command.

      storageDeviceSets:
      - name: example-deviceset
        count: 3
        resources: {}
        placement: {}
        dataPVCTemplate:
          spec:
            storageClassName:
            accessModes:
            - ReadWriteOnce
            volumeMode: Block
            resources:
              requests:
                storage: 512Gi

    In this example YAML, the storage parameter under storageDeviceSets reflects the current size of 512Gi.

    1. Get the current OSD storage size for the storageDeviceSets that you are increasing:

      $ oc get storagecluster ocs-storagecluster -n openshift-storage -o jsonpath='{.spec.storageDeviceSets[0].dataPVCTemplate.spec.resources.requests.storage}'

      Example output:

      512Gi
    2. Increase the storage to the desired value. The following example resizes to 2Ti:

      $ oc patch storagecluster ocs-storagecluster -n openshift-storage --type merge --patch "$(oc get storagecluster ocs-storagecluster -n openshift-storage -o jsonpath='{.spec.storageDeviceSets[0]}' | jq '.dataPVCTemplate.spec.resources.requests.storage="2Ti"' | jq -c '{spec: {storageDeviceSets: [.]}}')"

      Example output:

      storagecluster.ocs.openshift.io/ocs-storagecluster patched
  2. Wait for the OSDs to restart.
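
    Optionally, you can watch the OSD pods as they restart. The app=rook-ceph-osd label is the standard Rook label for OSD pods (a convenience sketch):

      $ oc get pods -n openshift-storage -l app=rook-ceph-osd -w
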
  3. Confirm that the resize took effect:

    $ oc get pvc -l ceph.rook.io/DeviceSet -n openshift-storage

    Verify that the resize is completed and reflected correctly in the CAPACITY column of the command output for all the resized OSDs.

  4. If the resize did not take effect, restart the OSD pods again. It may take multiple restarts for the resize to complete.
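
    One way to restart an OSD pod is to delete it and let its deployment recreate it. This is a sketch; <osd-pod-name> is the name of an OSD pod, as identified earlier:

      $ oc delete pod <osd-pod-name> -n openshift-storage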

3.2. Scaling out storage capacity on an AWS cluster

OpenShift Data Foundation is highly scalable. It can be scaled out by adding new nodes with the required storage and enough hardware resources in terms of CPU and RAM. There is practically no limit on the number of nodes that can be added, but from a support perspective, 2000 nodes is the limit for OpenShift Data Foundation.

Scaling out storage capacity can be broken down into two steps:

  • Adding a new node
  • Scaling up the storage capacity
Note

OpenShift Data Foundation does not support heterogeneous OSD/Disk sizes.

3.2.1. Adding a node

You can add nodes to increase the storage capacity when existing worker nodes are already running at their maximum supported OSDs or when there are not enough resources to add new OSDs on the existing nodes. It is always recommended to add nodes in multiples of three, each of them in different failure domains.

While it is recommended to add nodes in multiples of three, you still have the flexibility to add one node at a time in a flexible scaling deployment. Refer to the Knowledgebase article Verify if flexible scaling is enabled.

Note

OpenShift Data Foundation does not support heterogeneous disk sizes and types. The new nodes must have disks of the same size and type as those used during the initial OpenShift Data Foundation deployment.

3.2.1.1. Adding a node on an AWS installer-provisioned infrastructure

Prerequisites

  • You have administrative privilege to the OpenShift Container Platform Console.
  • You have a running OpenShift Data Foundation Storage Cluster.

Procedure

  1. Navigate to Compute → Machine Sets.
  2. On the machine set where you want to add nodes, select Edit Machine Count.

    1. Add the number of nodes, and click Save.
    2. Click Compute → Nodes and confirm that the new node is in the Ready state.
  3. Apply the OpenShift Data Foundation label to the new node.

    1. For the new node, click Action menu (⋮) → Edit Labels.
    2. Add cluster.ocs.openshift.io/openshift-storage, and click Save.
Note

It is recommended to add 3 nodes, one each in different zones. You must add 3 nodes and perform this procedure for all of them. In case of bare metal installer-provisioned infrastructure deployment, you must expand the cluster first. For instructions, see Expanding the cluster.

Verification steps

  1. Execute the following command in the terminal and verify that the new node is present in the output:

    $ oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= | cut -d' ' -f1
  2. On the OpenShift web console, click Workloads → Pods and confirm that at least the following pods on the new node are in Running state:

    • csi-cephfsplugin-*
    • csi-rbdplugin-*
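
    You can also check from the command line which openshift-storage pods are running on the new node (a sketch; substitute the actual node name):

      $ oc get pods -n openshift-storage -o wide | grep <new_node_name>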

3.2.1.2. Adding a node on an AWS user-provisioned infrastructure

Prerequisites

  • You have administrative privilege to the OpenShift Container Platform Console.
  • You have a running OpenShift Data Foundation Storage Cluster.

Procedure

  1. Depending on the type of infrastructure, perform the following steps:

    1. Get a new machine with the required infrastructure. See Platform requirements.
    2. Create a new OpenShift Container Platform worker node using the new machine.
  2. Check for certificate signing requests (CSRs) that are in Pending state.

    $ oc get csr
  3. Approve all the required CSRs for the new node.

    $ oc adm certificate approve <Certificate_Name>
    <Certificate_Name>
    Is the name of the CSR.
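
    If there are many pending CSRs, you can approve them all with a one-liner such as the following (a convenience sketch, not part of the official procedure):

      $ oc get csr --no-headers | awk '/Pending/ {print $1}' | xargs oc adm certificate approve
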
  4. Click Compute → Nodes and confirm that the new node is in the Ready state.
  5. Apply the OpenShift Data Foundation label to the new node using any one of the following:

    From User interface
    1. For the new node, click Action Menu (⋮) → Edit Labels.
    2. Add cluster.ocs.openshift.io/openshift-storage, and click Save.
    From Command line interface
    • Apply the OpenShift Data Foundation label to the new node.

      $ oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=""
      <new_node_name>
      Is the name of the new node.

Verification steps

  1. Execute the following command in the terminal and verify that the new node is present in the output:

    $ oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= | cut -d' ' -f1
  2. On the OpenShift web console, click Workloads → Pods and confirm that at least the following pods on the new node are in Running state:

    • csi-cephfsplugin-*
    • csi-rbdplugin-*


3.2.2. Scaling up storage capacity

To scale up storage capacity, see Scaling up storage capacity on a cluster.
