Scaling storage
Abstract
Instructions for scaling operations in OpenShift Data Foundation
Providing feedback on Red Hat documentation
We appreciate your input on our documentation. Do let us know how we can make it better.
To give feedback, create a Jira ticket:
- Log in to Jira.
- Click Create in the top navigation bar.
- Enter a descriptive title in the Summary field.
- Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation.
- Select Documentation in the Components field.
- Click Create at the bottom of the dialog.
Chapter 1. Introduction to scaling storage
Red Hat OpenShift Data Foundation is a highly scalable storage system. OpenShift Data Foundation allows you to scale by adding disks in multiples of three, or by adding any number of disks, depending on the deployment type.
- For internal (dynamic provisioning) deployment mode, you can increase the capacity by adding 3 disks at a time.
- For internal-attached (Local Storage Operator based) mode, you can deploy with fewer than three failure domains.
With flexible scale deployment enabled, you can scale up by adding any number of disks. For deployments with three failure domains, you can scale up by adding disks in multiples of three.
For scaling your storage in external mode, see Red Hat Ceph Storage documentation.
You can use a maximum of twelve storage devices per node. A higher number of storage devices leads to a longer recovery time after the loss of a node. This recommendation ensures that nodes stay below the cloud provider dynamic storage device attachment limits, and limits the recovery time after node failure with local storage devices.
While scaling, you must ensure that there are sufficient CPU and memory resources to meet the scaling requirements.
Supported storage classes by default
- gp2-csi on AWS
- thin on VMware
- managed-csi on Microsoft Azure, including performance plus enabled
1.1. Supported Deployments for Red Hat OpenShift Data Foundation
User-provisioned infrastructure:
- Amazon Web Services (AWS)
- VMware
- Bare metal
- IBM Power
- IBM Z or IBM® LinuxONE
Installer-provisioned infrastructure:
- Amazon Web Services (AWS)
- Microsoft Azure
- VMware
- Bare metal
Chapter 2. Requirements for scaling storage
Before you proceed to scale the storage nodes, refer to the following sections to understand the node requirements for your specific Red Hat OpenShift Data Foundation instance:
- Platform requirements
- Resource requirements
- Storage device requirements
Always ensure that you have plenty of storage capacity.
If storage ever fills completely, it is not possible to add capacity, or to delete or migrate content away from the storage to free up space. Completely full storage is very difficult to recover.
Capacity alerts are issued when cluster storage capacity reaches 75% (near-full) and 85% (full) of total capacity. Always address capacity warnings promptly, and review your storage regularly to ensure that you do not run out of storage space.
If storage capacity reaches the 85% full state, Ceph might report HEALTH_ERR and prevent I/O operations. In this case, you can increase the full ratio temporarily so that cluster rebalancing can take place. For steps to increase the full ratio, see Setting Ceph OSD full thresholds using the ODF CLI tool.
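As a supplementary check, you can inspect fullness from the Ceph side. A minimal sketch, assuming the rook-ceph-tools pod is deployed in the openshift-storage namespace (the pod label follows Rook conventions; this requires a running cluster):

```
# Illustrative only: requires a running cluster with the rook-ceph tools pod.
TOOLS_POD=$(oc -n openshift-storage get pod -l app=rook-ceph-tools -o name | head -n 1)
# Overall raw usage per pool and cluster-wide:
oc -n openshift-storage rsh "$TOOLS_POD" ceph df
# The near-full and full thresholds Ceph currently enforces:
oc -n openshift-storage rsh "$TOOLS_POD" ceph osd dump | grep -E 'full_ratio|nearfull_ratio'
```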
If you do run out of storage space completely, contact Red Hat Customer Support.
Chapter 3. Scaling storage capacity of AWS OpenShift Data Foundation cluster
To scale the storage capacity of your configured Red Hat OpenShift Data Foundation worker nodes on an AWS cluster, you can increase the capacity by adding three disks at a time. Three disks are needed because OpenShift Data Foundation uses a replica count of 3 to maintain high availability. As a result, the amount of storage consumed is three times the usable space.
Usable space might vary when encryption is enabled or replica 2 pools are being used.
3.1. Scaling up storage capacity on a cluster
To increase the storage capacity in a dynamically created storage cluster on a user-provisioned infrastructure, you can add storage capacity and performance to your configured Red Hat OpenShift Data Foundation worker nodes.
Prerequisites
- You have administrative privilege to the OpenShift Container Platform Console.
- You have a running OpenShift Data Foundation Storage Cluster.
- The disks must be of the same size and type as those used during the initial deployment.
Procedure
- Log in to the OpenShift Web Console.
- Click Storage → Data Foundation.
- Click the Storage Systems tab.
- Click the Action Menu (⋮) on the far right of the storage system name to extend the options menu.
- Select Add Capacity from the options menu.
- Select the Storage Class. Choose the storage class which you wish to use to provision new storage devices.
- Click Add.
- To check the status, navigate to Storage → Data Foundation and verify that the Storage System in the Status card has a green tick.
Verification steps
Verify the Raw Capacity card.
- In the OpenShift Web Console, click Storage → Data Foundation.
- In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears.
In the Block and File tab, check the Raw Capacity card.
Note that the capacity increases based on your selections.
Note: The raw capacity does not take replication into account and shows the full capacity.
Verify that the new object storage devices (OSDs) and their corresponding new Persistent Volume Claims (PVCs) are created.
To view the state of the newly created OSDs:
- Click Workloads → Pods from the OpenShift Web Console.
- Select openshift-storage from the Project drop-down list.
Note: If the Show default projects option is disabled, use the toggle button to list all the default projects.
To view the state of the PVCs:
- Click Storage → Persistent Volume Claims from the OpenShift Web Console.
- Select openshift-storage from the Project drop-down list.
Note: If the Show default projects option is disabled, use the toggle button to list all the default projects.
Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted.
Identify the nodes where the new OSD pods are running.
$ oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/<OSD-pod-name>
<OSD-pod-name> is the name of the OSD pod.
For example:
$ oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/rook-ceph-osd-0-544db49d7f-qrgqm
Example output:
NODE
compute-1
For each of the nodes identified in the previous step, do the following:
- Create a debug pod and open a chroot environment for the selected host:
$ oc debug node/<node-name>
<node-name> is the name of the node.
$ chroot /host
- Check for the crypt keyword beside the ocs-deviceset names:
$ lsblk
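For an encrypted OSD device, the lsblk output shows a crypt entry under the disk. An illustrative excerpt (device names and sizes are examples only; the dmcrypt naming is assumed from typical deployments):

```
sdb                                          8:16   0  512G  0 disk
└─ocs-deviceset-gp2-csi-0-data-0-dmcrypt   253:1    0  512G  0 crypt
```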
Cluster reduction is supported only with the Red Hat Support Team’s assistance.
3.2. Scaling out storage capacity on an AWS cluster
OpenShift Data Foundation is highly scalable. It can be scaled out by adding new nodes with the required storage and enough hardware resources in terms of CPU and RAM. In practice, there is no limit on the number of nodes that can be added, but from a support perspective, 2000 nodes is the limit for OpenShift Data Foundation.
Scaling out storage capacity can be broken down into two steps:
- Adding a new node
- Scaling up the storage capacity
OpenShift Data Foundation does not support heterogeneous OSD/Disk sizes.
3.2.1. Adding a node
You can add nodes to increase the storage capacity when existing worker nodes are already running at their maximum supported OSDs or when there are not enough resources to add new OSDs on the existing nodes. It is always recommended to add nodes in multiples of three, each of them in a different failure domain.
While it is recommended to add nodes in multiples of three, you still have the flexibility to add one node at a time in a flexible scaling deployment. See the Knowledgebase article Verify if flexible scaling is enabled.
OpenShift Data Foundation does not support heterogeneous disk sizes and types. The new nodes should have disks of the same type and size as those used during the OpenShift Data Foundation deployment.
3.2.1.1. Adding a node to an installer-provisioned infrastructure
Prerequisites
- You have administrative privilege to the OpenShift Container Platform Console.
- You have a running OpenShift Data Foundation Storage Cluster.
Procedure
- Navigate to Compute → Machine Sets.
- On the machine set where you want to add nodes, select Edit Machine Count.
- Enter the number of nodes to add, and click Save.
- Click Compute → Nodes and confirm that the new node is in the Ready state.
- Apply the OpenShift Data Foundation label to the new node:
- For the new node, click Action menu (⋮) → Edit Labels.
- Add cluster.ocs.openshift.io/openshift-storage, and click Save.
It is recommended to add three nodes, one in each of the different zones. You must add three nodes and perform this procedure for all of them. In the case of a bare metal installer-provisioned infrastructure deployment, you must expand the cluster first. For instructions, see Expanding the cluster.
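The machine count increase above can also be performed from the command line. A minimal sketch, assuming an installer-provisioned cluster (machine set names vary per cluster; the placeholders are not from this document):

```
# List the machine sets to find the one backing your storage workers:
oc get machinesets -n openshift-machine-api
# Increase the replica count; <machineset-name> and <count> are placeholders:
oc scale machineset <machineset-name> -n openshift-machine-api --replicas=<count>
```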
Verification steps
Execute the following command in the terminal and verify that the new node is present in the output:
$ oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= | cut -d' ' -f1
On the OpenShift web console, click Workloads → Pods and confirm that at least the following pods on the new node are in the Running state:
- openshift-storage.cephfs.csi.ceph.com-*
- openshift-storage.rbd.csi.ceph.com-*
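The verification command above simply keeps the lines that carry the storage label and prints the first column (the node name). This can be seen by running the same filter on simulated `oc get nodes --show-labels` output (node names here are examples):

```shell
# Simulated `oc get nodes --show-labels` output; only compute-0 carries the label.
sample='compute-0   Ready   worker   5d   v1.29.6   cluster.ocs.openshift.io/openshift-storage=,kubernetes.io/os=linux
compute-1   Ready   worker   5d   v1.29.6   kubernetes.io/os=linux'
# Same filter as the verification step: keep labeled lines, print the node name.
printf '%s\n' "$sample" | grep 'cluster.ocs.openshift.io/openshift-storage=' | cut -d' ' -f1
```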
3.2.1.2. Adding a node to a user-provisioned infrastructure
Prerequisites
- You have administrative privilege to the OpenShift Container Platform Console.
- You have a running OpenShift Data Foundation Storage Cluster.
Procedure
Depending on the type of infrastructure, perform the following steps:
- Get a new machine with the required infrastructure. See Platform requirements.
- Create a new OpenShift Container Platform worker node using the new machine.
- Check for certificate signing requests (CSRs) that are in the Pending state:
$ oc get csr
- Approve all the required CSRs for the new node:
$ oc adm certificate approve <Certificate_Name>
<Certificate_Name> is the name of the CSR.
- Click Compute → Nodes and confirm that the new node is in the Ready state.
- Apply the OpenShift Data Foundation label to the new node using any one of the following:
From the user interface:
- For the new node, click Action Menu (⋮) → Edit Labels.
- Add cluster.ocs.openshift.io/openshift-storage, and click Save.
From the command line interface:
- Apply the OpenShift Data Foundation label to the new node:
$ oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=""
<new_node_name> is the name of the new node.
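When several certificates are pending at once, the CSR approval step earlier in this procedure can be batched. A sketch, assuming jq is installed; review the pending CSRs before approving them wholesale on a production cluster:

```
# A CSR with an empty status has not yet been approved or denied.
oc get csr -o json \
  | jq -r '.items[] | select(.status == {}) | .metadata.name' \
  | xargs --no-run-if-empty oc adm certificate approve
```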
Verification steps
Execute the following command in the terminal and verify that the new node is present in the output:
$ oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= | cut -d' ' -f1
On the OpenShift web console, click Workloads → Pods and confirm that at least the following pods on the new node are in the Running state:
- openshift-storage.cephfs.csi.ceph.com-*
- openshift-storage.rbd.csi.ceph.com-*
3.2.2. Scaling up storage capacity
To scale up storage capacity, see Scaling up storage capacity on a cluster.
3.3. Enabling automatic capacity scaling
You can enable automatic capacity scaling on clusters deployed using dynamic storage devices. When automatic capacity scaling is enabled, additional raw capacity equivalent to the configured deployment size is automatically added to the cluster when used capacity reaches 70%. This ensures your deployment scales seamlessly to meet demand.
This option is disabled in lean profile mode, LSO deployment, and external mode deployment.
This may incur additional costs for the underlying storage.
Prerequisites
- Administrative privilege to the OpenShift Container Platform Console.
- A running OpenShift Data Foundation Storage Cluster.
IBM Cloud 10iops-tier workloads are not supported.
Procedure
- Log in to the OpenShift Web Console.
- Click Storage → Data Foundation.
- Click the Storage Systems tab.
- Click the Action Menu (⋮) on the far right of the storage system name to extend the options menu.
- Select Automatic capacity scaling from the options menu.
- In the Automatic capacity scaling page, select the Enable automatic capacity scaling for your cluster checkbox.
- Set the cluster expansion limit from the dropdown. This is the maximum the cluster can expand in the cloud. Automatic scaling is suspended if this limit is exceeded.
- Click Save changes.
Chapter 4. Scaling storage of bare metal OpenShift Data Foundation cluster
To scale the storage capacity of your configured Red Hat OpenShift Data Foundation worker nodes on your bare metal cluster, you can increase the capacity by adding three disks at a time. Three disks are needed because OpenShift Data Foundation uses a replica count of 3 to maintain high availability. As a result, the amount of storage consumed is three times the usable space.
Usable space might vary when encryption is enabled or replica 2 pools are being used.
4.1. Scaling up a cluster created using local storage devices
To scale up an OpenShift Data Foundation cluster that was created using local storage devices, you need to add a new disk to the storage node. The new disks must be the same size as the disks used during deployment because OpenShift Data Foundation does not support heterogeneous disks or OSDs.
For deployments having three failure domains, you can scale up the storage by adding disks in multiples of three, with the same number of disks coming from nodes in each of the failure domains. For example, if you scale by adding six disks, two disks are taken from nodes in each of the three failure domains. If the number of disks is not a multiple of three, only the largest multiple of three is consumed, and the remaining disks remain unused.
For deployments having fewer than three failure domains, you have the flexibility to add any number of disks. Make sure to verify that flexible scaling is enabled. For information, refer to the Knowledgebase article Verify if flexible scaling is enabled.
Flexible scaling is enabled at the time of deployment and cannot be enabled or disabled later.
Prerequisites
- Administrative privilege to the OpenShift Container Platform Console.
- A running OpenShift Data Foundation Storage Cluster.
- Make sure that the disks to be used for scaling are attached to the storage node.
- Make sure that LocalVolumeDiscovery and LocalVolumeSet objects are created.
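A quick way to confirm that the prerequisite objects exist; an illustrative check, assuming the Local Storage Operator runs in the openshift-local-storage namespace:

```
# Lists the LocalVolumeDiscovery and LocalVolumeSet objects, if any:
oc get localvolumediscovery,localvolumeset -n openshift-local-storage
```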
Procedure
To add capacity, you can either use a storage class that you provisioned during the deployment or any other storage class that matches the filter.
- In the OpenShift Web Console, click Storage → Data Foundation.
- Click the Storage Systems tab.
- Click the Action menu (⋮) next to the storage system to extend the options menu.
- Select Add Capacity from the options menu.
- Select the Storage Class for which you added disks or the new storage class depending on your requirement. Available Capacity displayed is based on the local disks available in storage class.
- Click Add.
- To check the status, navigate to Storage → Data Foundation and verify that the Storage System in the Status card has a green tick.
Verification steps
Verify the Raw Capacity card.
- In the OpenShift Web Console, click Storage → Data Foundation.
- In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears.
In the Block and File tab, check the Raw Capacity card.
Note that the capacity increases based on your selections.
Note: The raw capacity does not take replication into account and shows the full capacity.
Verify that the new OSDs and their corresponding new Persistent Volume Claims (PVCs) are created.
To view the state of the newly created OSDs:
- Click Workloads → Pods from the OpenShift Web Console.
- Select openshift-storage from the Project drop-down list.
Note: If the Show default projects option is disabled, use the toggle button to list all the default projects.
To view the state of the PVCs:
- Click Storage → Persistent Volume Claims from the OpenShift Web Console.
- Select openshift-storage from the Project drop-down list.
Note: If the Show default projects option is disabled, use the toggle button to list all the default projects.
Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted.
Identify the nodes where the new OSD pods are running.
$ oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/<OSD-pod-name>
<OSD-pod-name> is the name of the OSD pod.
For example:
$ oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/rook-ceph-osd-0-544db49d7f-qrgqm
Example output:
NODE
compute-1
For each of the nodes identified in the previous step, do the following:
- Create a debug pod and open a chroot environment for the selected hosts:
$ oc debug node/<node-name>
<node-name> is the name of the node.
$ chroot /host
- Check for the crypt keyword beside the ocs-deviceset names:
$ lsblk
Cluster reduction is supported only with the Red Hat Support Team’s assistance.
4.2. Scaling out storage capacity on a bare metal cluster
OpenShift Data Foundation is highly scalable. It can be scaled out by adding new nodes with the required storage and enough hardware resources in terms of CPU and RAM. There is no limit on the number of nodes that can be added. However, from a technical support perspective, 2000 nodes is the limit for OpenShift Data Foundation.
Scaling out storage capacity can be broken down into two steps:
- Adding a new node
- Scaling up the storage capacity
OpenShift Data Foundation does not support heterogeneous OSD/Disk sizes.
4.2.1. Adding a node
You can add nodes to increase the storage capacity when existing worker nodes are already running at their maximum supported OSDs or when there are not enough resources to add new OSDs on the existing nodes. It is always recommended to add nodes in multiples of three, each of them in a different failure domain.
While it is recommended to add nodes in multiples of three, you still have the flexibility to add one node at a time in a flexible scaling deployment. See the Knowledgebase article Verify if flexible scaling is enabled.
OpenShift Data Foundation does not support heterogeneous disk sizes and types. The new nodes should have disks of the same type and size as those used during the OpenShift Data Foundation deployment.
4.2.1.1. Adding a node to an installer-provisioned infrastructure
Prerequisites
- You have administrative privilege to the OpenShift Container Platform Console.
- You have a running OpenShift Data Foundation Storage Cluster.
Procedure
- Navigate to Compute → Machine Sets.
- On the machine set where you want to add nodes, select Edit Machine Count.
- Enter the number of nodes to add, and click Save.
- Click Compute → Nodes and confirm that the new node is in the Ready state.
- Apply the OpenShift Data Foundation label to the new node:
- For the new node, click Action menu (⋮) → Edit Labels.
- Add cluster.ocs.openshift.io/openshift-storage, and click Save.
It is recommended to add three nodes, one in each of the different zones. You must add three nodes and perform this procedure for all of them. In the case of a bare metal installer-provisioned infrastructure deployment, you must expand the cluster first. For instructions, see Expanding the cluster.
Verification steps
Execute the following command in the terminal and verify that the new node is present in the output:
$ oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= | cut -d' ' -f1
On the OpenShift web console, click Workloads → Pods and confirm that at least the following pods on the new node are in the Running state:
- openshift-storage.cephfs.csi.ceph.com-*
- openshift-storage.rbd.csi.ceph.com-*
4.2.1.2. Adding a node using a local storage device
You can add nodes to increase the storage capacity when existing worker nodes are already running at their maximum supported OSDs or when there are not enough resources to add new OSDs on the existing nodes.
Add nodes in multiples of three, each of them in a different failure domain. Though it is recommended to add nodes in multiples of three, you have the flexibility to add one node at a time in a flexible scaling deployment. See the Knowledgebase article Verify if flexible scaling is enabled.
OpenShift Data Foundation does not support heterogeneous disk sizes and types. The new nodes should have disks of the same type and size as those used during the initial OpenShift Data Foundation deployment.
Prerequisites
- You have administrative privilege to the OpenShift Container Platform Console.
- You have a running OpenShift Data Foundation Storage Cluster.
Procedure
Depending on the type of infrastructure, perform the following steps:
- Get a new machine with the required infrastructure. See Platform requirements.
- Create a new OpenShift Container Platform worker node using the new machine.
- Check for certificate signing requests (CSRs) that are in the Pending state:
$ oc get csr
- Approve all the required CSRs for the new node:
$ oc adm certificate approve <Certificate_Name>
<Certificate_Name> is the name of the CSR.
- Click Compute → Nodes and confirm that the new node is in the Ready state.
- Apply the OpenShift Data Foundation label to the new node using any one of the following:
From the user interface:
- For the new node, click Action Menu (⋮) → Edit Labels.
- Add cluster.ocs.openshift.io/openshift-storage, and click Save.
From the command line interface:
- Apply the OpenShift Data Foundation label to the new node:
$ oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=""
<new_node_name> is the name of the new node.
- Click Ecosystem → Installed Operators from the OpenShift Web Console.
- From the Project drop-down list, select the project where the Local Storage Operator is installed.
- Click Local Storage.
- Click the Local Volume Discovery tab.
- Beside the LocalVolumeDiscovery, click Action menu (⋮) → Edit Local Volume Discovery.
- In the YAML, add the hostname of the new node in the values field under the node selector.
- Click Save.
- Click the Local Volume Sets tab.
- Beside the LocalVolumeSet, click Action menu (⋮) → Edit Local Volume Set.
- In the YAML, add the hostname of the new node in the values field under the node selector.
- Click Save.
It is recommended to add three nodes, one in each of the different zones. You must add three nodes and perform this procedure for all of them.
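In both YAML edits above, the new node's hostname is appended to the values list of the node selector. An illustrative fragment, with example hostnames; the selector shape follows the standard Kubernetes NodeSelector type used by the Local Storage Operator:

```yaml
spec:
  nodeSelector:
    nodeSelectorTerms:
      - matchExpressions:
          - key: kubernetes.io/hostname
            operator: In
            values:
              - compute-0
              - compute-1
              - compute-2   # hostname of the newly added node
```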
Verification steps
Execute the following command in the terminal and verify that the new node is present in the output:
$ oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= | cut -d' ' -f1
On the OpenShift web console, click Workloads → Pods and confirm that at least the following pods on the new node are in the Running state:
- openshift-storage.cephfs.csi.ceph.com-*
- openshift-storage.rbd.csi.ceph.com-*
4.2.2. Scaling up storage capacity
To scale up storage capacity, see Scaling up storage by adding capacity.
Chapter 5. Scaling storage using multiple device classes in the same cluster for local storage deployments
OpenShift Data Foundation supports creating multiple device classes for OSDs within the same cluster. Defining additional device classes provides flexibility in how storage devices are organized and allows you to:
- Use different types of disks on the same nodes
- Use disks of the same type on the same nodes
- Use different-sized disks of the same type on the same or different nodes
- Isolate disks of the same type across different sets of nodes
- Combine different resource types, such as local disks and SAN-based LUNs
Overview of the required tasks
To scale storage by using multiple device classes, complete the following tasks:
- Prepare the disks.
Attach new disks on the same or additional nodes and ensure they can be uniquely identified by a LocalVolumeSet.
Note: Update the maxSize or disksFilter parameter of the existing LocalVolumeSet (such as localblock) to prevent it from claiming the new PVs.
- Create a new LocalVolumeSet.
Define a LocalVolumeSet resource that represents the new device class.
- Attach new storage.
Attach the storage created by the new LocalVolumeSet so that it can be consumed by the cluster.
5.1. Creating a new LocalVolumeSet
Use this procedure when you want to use devices of the same type but with different sizes.
Prerequisites
- Update the maxSize parameter of the existing LocalVolumeSet (for example, localblock) to ensure that it does not claim the newly created persistent volumes (PVs):
$ oc patch localvolumesets.local.storage.openshift.io localblock \
    -n openshift-local-storage \
    -p '{"spec": {"deviceInclusionSpec": {"maxSize": "120Gi"}}}' \
    --type merge
In this example, the existing LocalVolumeSet created during deployment might not have a maxSize value. Setting the limit to 120Gi ensures that new disks with a higher size (for example, 130Gi) are claimed only by the new LocalVolumeSet and do not overlap with the existing one.
- When creating the new LocalVolumeSet, plan a unique filter to identify the intended disks. You can differentiate disks by node, disk size, or disk type.
- Add the new disks. For example, add three new SSD or NVMe disks sized 130Gi.
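To confirm that the maxSize patch from the prerequisites took effect, an illustrative check (namespace and object name as in the example above):

```
# Prints the configured maxSize of the existing LocalVolumeSet:
oc -n openshift-local-storage get localvolumeset localblock \
  -o jsonpath='{.spec.deviceInclusionSpec.maxSize}'
```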
Procedure
- In the OpenShift Web Console, click Ecosystem → Installed Operators.
- From the Project drop‑down list, select the project where the Local Storage Operator is installed.
- Click Local Storage.
- Click the Local Volume Sets tab.
- On the Local Volume Sets page, click Create Local Volume Set.
- Enter a name for the LocalVolumeSet and the associated StorageClass.
Note: The storage class name defaults to the LocalVolumeSet name, but it can be modified.
- Under Filter Disks By, select one of the following options:
- Disks on all nodes: Uses all matching disks across all nodes.
- Disks on selected nodes: Uses matching disks only on the nodes you choose.
- From Disk Type, select SSD/NVMe from the list.
- Expand the Advanced section and configure the following:
- Volume Mode: Ensure that Block is selected.
- Device Type: Select one or more device types.
- Disk Size: Set the minimum and maximum disk size to include.
- Maximum Disks Limit: Specify the maximum number of PVs that can be created per node.
Note: If this field is empty, PVs are created for all matching disks.
- Click Create.
- Wait for the newly created PVs associated with the new LocalVolumeSet to become available.
Verification steps
- Verify that the local volume set is created:
$ oc get localvolumeset -n openshift-local-storage
NAME           AGE
localblock     16h
localvolume2   43m
- Verify that the local storage class is created:
$ oc get storageclass | grep localvolume2
- Verify the PVs by waiting for them to become available; they must use the new storage class localvolume2. For example:
$ oc get pv | grep localvolume2
local-pv-14c0b1d    130Gi   RWO   Delete   Available   localvolume2   <unset>   8m55s
local-pv-41d0d077   130Gi   RWO   Delete   Available   localvolume2   <unset>   7m24s
local-pv-6c57a345   130Gi   RWO   Delete   Available   localvolume2   <unset>   5m4s
5.2. Attaching storage for a new device set
Procedure
- In the OpenShift Web Console, navigate to Storage → Data Foundation → Storage Systems.
- Click the Actions menu next to the required Storage System and select Attach Storage.
- From LSO StorageClass, select the newly created local storage class.
- To enable encryption for the device set, select Enable encryption on device set.
Note: Enabling or disabling OSD encryption using this option overrides the cluster-wide encryption setting for these new OSD storage devices.
- Enter a name for the Device class.
This is a label applied to OSDs to distinguish this set of devices from others (for example, by size, type, or node group). This option is available only when creating new devices of the same size on the same set of storage devices.
- Select the Volume type.
- Enter a name for the Pool.
- Select the Data protection policy.
It is recommended to choose 3-way Replication.
- Select the Data compression option to enable compression within replicas for improved storage efficiency.
- Select the Reclaim Policy.
- Select the Volume Binding Mode.
- Enter a name for the New User Storage Class.
- To generate an encryption key for each persistent volume created through this storage class, select Enable Encryption on StorageClass.
- Click Attach.
Verification steps
Verify that the PVs and PVCs are in Bound state. For example:
$ oc get pv | grep localvolume2
local-pv-14c0b1d    130Gi   RWO   Delete   Bound   openshift-storage/localvolume2-0-data-0kp29f   localvolume2   <unset>   31m
local-pv-41d0d077   130Gi   RWO   Delete   Bound   openshift-storage/localvolume2-0-data-2vwk54   localvolume2   <unset>   30m
local-pv-6c57a345   130Gi   RWO   Delete   Bound   openshift-storage/localvolume2-0-data-1255ts   localvolume2   <unset>   28m
$ oc get pvc | grep localvolume2
localvolume2-0-data-0kp29f   Bound   local-pv-14c0b1d    130Gi   RWO   localvolume2   <unset>   19m
localvolume2-0-data-1255ts   Bound   local-pv-6c57a345   130Gi   RWO   localvolume2   <unset>   19m
localvolume2-0-data-2vwk54   Bound   local-pv-41d0d077   130Gi   RWO   localvolume2   <unset>   19m
Verify that the new OSDs are created successfully and all the OSDs are running.
Verify that the storagecluster is in Ready state and the Ceph cluster health is OK.
In the Ceph cluster, check the Ceph OSD tree to see if the new device classes are spread correctly and the OSDs are up.
Verify that the user storageclass is created.
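The OSD tree check can be scripted as well. This sketch assumes the Rook toolbox pod is available (in which case the tree can be fetched with something like `oc -n openshift-storage exec deploy/rook-ceph-tools -- ceph osd tree`); here a hypothetical excerpt of `ceph osd tree` output is parsed to confirm that every OSD reports up.

```shell
# Hypothetical excerpt of `ceph osd tree` output (ID, CLASS, WEIGHT, NAME,
# STATUS, ...); verify that no OSD reports a status other than "up".
sample_tree=' 0   ssd  0.12689   osd.0   up   1.00000  1.00000
 1   ssd  0.12689   osd.1   up   1.00000  1.00000
 2   nvme 0.12689   osd.2   up   1.00000  1.00000'
down=$(printf '%s\n' "$sample_tree" | awk '$4 ~ /^osd\./ && $5 != "up" {n++} END {print n+0}')
echo "OSDs not up: $down"
```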
5.3. Attaching storage for a new device set by using disks of the same type on the same nodes
Use this section when creating new devices of the same size on the same set of storage devices where you need to define a custom device class.
Pools write data only to OSDs that match the selected device class. By default, all OSDs use the ssd device class, so all pools can use all OSDs. If a custom device class is set, only new pools that explicitly use that device class will write to those OSDs. Existing pools will continue using only the OSDs that match their original device class.
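The device-class targeting described above shows up in the pool definition itself. The following is a sketch of a Rook CephBlockPool that writes only to OSDs of a custom device class; the pool name and the device class name (nvme1) are illustrative assumptions, not values produced by this procedure.

```yaml
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: example-pool           # hypothetical pool name
  namespace: openshift-storage
spec:
  failureDomain: host
  deviceClass: nvme1           # hypothetical custom device class; only OSDs
                               # carrying this class receive this pool's data
  replicated:
    size: 3                    # 3-way replication, as recommended
```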
Prerequisites
- Ensure that the worker nodes hosting the OSDs have a sufficient number of available disks.
Ensure that an adequate number of Persistent Volumes (PVs) are available to match the number of disks that can be used for OSD provisioning.
To verify the available local block PVs, run the following command:
$ oc get pv | grep localblock
In this example, there are three available disks, which correspond to three PVs in the Available state.
Procedure
- In the OpenShift Web Console, navigate to Storage → Data Foundation → Storage Systems.
- Click the Actions menu next to the required Storage System and select Attach Storage.
- From LSO StorageClass, select the newly created local storage class.
To enable encryption for the device set, select Enable encryption on device set.
Note: Enabling or disabling OSD encryption using this option overrides the cluster-wide encryption setting for these new OSD storage devices.
Enter a name for the Device class.
This is a label applied to OSDs to distinguish this set of devices from others (for example, by size, type, or node group). This option is available only when creating new devices of the same size on the same set of storage devices.
- Select the Volume type.
- Enter a name for the Pool.
Select the Data protection policy.
It is recommended to choose 3‑way Replication.
- Select the Data compression option to enable compression within replicas for improved storage efficiency.
- Select the Reclaim Policy.
- Select the Volume Binding Mode.
- Enter a name for the New User Storage Class.
- To generate an encryption key for each persistent volume created through this storage class, select Enable encryption on StorageClass.
- Click Attach.
Verification steps
Verify that the PVs and PVCs are in Bound state. For example:
$ oc get pv | grep localblock-1
local-pv-1217fe9b   100Gi   RWO   Delete   Bound   openshift-storage/localblock-1-0-data-28ptwg   localblock   <unset>   84m
local-pv-7357d060   100Gi   RWO   Delete   Bound   openshift-storage/localblock-1-0-data-15td2h   localblock   <unset>   89m
local-pv-c31f65ba   100Gi   RWO   Delete   Bound   openshift-storage/localblock-1-0-data-08bjsj   localblock   <unset>   82m
$ oc get pvc | grep localblock-1
localblock-1-0-data-08bjsj   Bound   local-pv-c31f65ba   100Gi   RWO   localblock   <unset>   79m
localblock-1-0-data-15td2h   Bound   local-pv-7357d060   100Gi   RWO   localblock   <unset>   79m
localblock-1-0-data-28ptwg   Bound   local-pv-1217fe9b   100Gi   RWO   localblock   <unset>   79m
Verify that the new OSDs are created successfully and all the OSDs are running.
Verify that the storagecluster is in Ready state and the Ceph cluster health is OK. For example:
$ oc -n openshift-storage get storagecluster
NAME                 AGE     PHASE   EXTERNAL   CREATED AT             VERSION
ocs-storagecluster   4h34m   Ready              2026-03-04T12:17:21Z   4.21.0
$ oc -n openshift-storage get cephcluster
NAME                             DATADIRHOSTPATH   MONCOUNT   AGE     PHASE   MESSAGE                        HEALTH      EXTERNAL   FSID
ocs-storagecluster-cephcluster   /var/lib/rook     3          4h34m   Ready   Cluster created successfully   HEALTH_OK              0daaeaae-d8cd-45e6-9df8-f23fe3a70263
In the Ceph cluster, check the Ceph OSD tree to see if the new device classes are spread correctly and the OSDs are up.
Verify that the user storageclass is created.
Chapter 6. Scaling storage of VMware OpenShift Data Foundation cluster
6.1. Scaling up storage on a dynamically provisioned VMware cluster
To increase the storage capacity in a dynamically created storage cluster on a VMware user-provisioned infrastructure, you can add storage capacity and performance to your configured Red Hat OpenShift Data Foundation worker nodes.
Prerequisites
- Administrative privilege to the OpenShift Container Platform Console.
- A running OpenShift Data Foundation Storage Cluster.
- Make sure that the disk is of the same size and type as the disk used during initial deployment.
Procedure
- Log in to the OpenShift Web Console.
- Click Storage → Data Foundation.
- Click the Storage Systems tab.
- Click the Action Menu (⋮) on the far right of the storage system name to extend the options menu.
- Select Add Capacity from the options menu.
- Select the Storage Class. Choose the storage class which you wish to use to provision new storage devices.
- Click Add.
- To check the status, navigate to Storage → Data Foundation and verify that Storage System in the Status card has a green tick.
Verification steps
Verify the Raw Capacity card.
- In the OpenShift Web Console, click Storage → Data Foundation.
- In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears.
In the Block and File tab, check the Raw Capacity card.
Note that the capacity increases based on your selections.
Note: The raw capacity does not take replication into account and shows the full capacity.
Verify that the new OSDs and their corresponding new Persistent Volume Claims (PVCs) are created.
To view the state of the newly created OSDs:
- Click Workloads → Pods from the OpenShift Web Console.
Select openshift-storage from the Project drop-down list.
Note: If the Show default projects option is disabled, use the toggle button to list all the default projects.
To view the state of the PVCs:
- Click Storage → Persistent Volume Claims from the OpenShift Web Console.
Select openshift-storage from the Project drop-down list.
Note: If the Show default projects option is disabled, use the toggle button to list all the default projects.
Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted.
Identify the nodes where the new OSD pods are running.
$ oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/<OSD-pod-name>
<OSD-pod-name> is the name of the OSD pod.
For example:
$ oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/rook-ceph-osd-0-544db49d7f-qrgqm
Example output:
NODE
compute-1
For each of the nodes identified in the previous step, do the following:
Create a debug pod and open a chroot environment for the selected host.
$ oc debug node/<node-name>
<node-name> is the name of the node.
$ chroot /host
Check for the crypt keyword beside the ocs-deviceset names.
$ lsblk
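The lsblk check can be reduced to a one-line filter. The sketch below runs against a hypothetical lsblk excerpt from an encrypted node; the device names are illustrative.

```shell
# Hypothetical lsblk excerpt: an encrypted OSD shows a dm-crypt child whose
# name carries the ocs-deviceset prefix and whose TYPE column is "crypt".
sample_lsblk='sdb 8:16 0 512G 0 disk
ocs-deviceset-0-data-0abcde-block-dmcrypt 253:0 0 512G 0 crypt'
encrypted=$(printf '%s\n' "$sample_lsblk" | grep -c 'ocs-deviceset.*crypt')
echo "encrypted OSD devices: $encrypted"
```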
Cluster reduction is supported only with the Red Hat Support Team’s assistance.
6.2. Scaling up a cluster created using local storage devices
To scale up an OpenShift Data Foundation cluster that was created using local storage devices, you need to add a new disk to the storage node. The new disks must be of the same size as the disks used during the deployment because OpenShift Data Foundation does not support heterogeneous disks/OSDs.
For deployments having three failure domains, you can scale up the storage by adding disks in multiples of three, with the same number of disks coming from nodes in each of the failure domains. For example, if you scale by adding six disks, two disks are taken from nodes in each of the three failure domains. If the number of disks is not a multiple of three, only the largest multiple of three is consumed, and the remaining disks stay unused.
For deployments having less than three failure domains, there is a flexibility to add any number of disks. Make sure to verify that flexible scaling is enabled. For information, refer to the Knowledgebase article Verify if flexible scaling is enabled.
The flexible scaling feature is enabled at the time of deployment and cannot be enabled or disabled later.
Prerequisites
- Administrative privilege to the OpenShift Container Platform Console.
- A running OpenShift Data Foundation Storage Cluster.
- Make sure that the disks to be used for scaling are attached to the storage node.
- Make sure that LocalVolumeDiscovery and LocalVolumeSet objects are created.
Procedure
To add capacity, you can either use a storage class that you provisioned during the deployment or any other storage class that matches the filter.
- In the OpenShift Web Console, click Storage → Data Foundation.
- Click the Storage Systems tab.
- Click the Action menu (⋮) next to the storage system to extend the options menu.
- Select Add Capacity from the options menu.
- Select the Storage Class for which you added disks, or the new storage class, depending on your requirement. The Available Capacity displayed is based on the local disks available in the storage class.
- Click Add.
- To check the status, navigate to Storage → Data Foundation and verify that the Storage System in the Status card has a green tick.
Verification steps
Verify the Raw Capacity card.
- In the OpenShift Web Console, click Storage → Data Foundation.
- In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears.
In the Block and File tab, check the Raw Capacity card.
Note that the capacity increases based on your selections.
Note: The raw capacity does not take replication into account and shows the full capacity.
Verify that the new OSDs and their corresponding new Persistent Volume Claims (PVCs) are created.
To view the state of the newly created OSDs:
- Click Workloads → Pods from the OpenShift Web Console.
Select openshift-storage from the Project drop-down list.
Note: If the Show default projects option is disabled, use the toggle button to list all the default projects.
To view the state of the PVCs:
- Click Storage → Persistent Volume Claims from the OpenShift Web Console.
Select openshift-storage from the Project drop-down list.
Note: If the Show default projects option is disabled, use the toggle button to list all the default projects.
Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted.
Identify the nodes where the new OSD pods are running.
$ oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/<OSD-pod-name>
<OSD-pod-name> is the name of the OSD pod.
For example:
$ oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/rook-ceph-osd-0-544db49d7f-qrgqm
Example output:
NODE
compute-1
For each of the nodes identified in the previous step, do the following:
Create a debug pod and open a chroot environment for the selected host.
$ oc debug node/<node-name>
<node-name> is the name of the node.
$ chroot /host
Check for the crypt keyword beside the ocs-deviceset names.
$ lsblk
Cluster reduction is supported only with the Red Hat Support Team’s assistance.
6.3. Enabling automatic capacity scaling
You can enable automatic capacity scaling on clusters deployed using dynamic storage devices. When automatic capacity scaling is enabled, additional raw capacity equivalent to the configured deployment size is automatically added to the cluster when used capacity reaches 70%. This ensures your deployment scales seamlessly to meet demand.
This option is disabled in lean profile mode, LSO deployment, and external mode deployment.
This may incur additional costs for the underlying storage.
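As a rough illustration of the 70% trigger described above (the 12 TiB figure below is a made-up example, not a recommended size):

```shell
# With 12288 GiB (12 TiB) of usable capacity and a 70% threshold, automatic
# expansion starts once used capacity crosses roughly 8601 GiB.
usable_gib=$((12 * 1024))
threshold_gib=$((usable_gib * 70 / 100))
echo "expansion triggers at ${threshold_gib} GiB used"
```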
Prerequisites
- Administrative privilege to the OpenShift Container Platform Console.
- A running OpenShift Data Foundation Storage Cluster.
IBM Cloud 10iops-tier workloads are not supported.
Procedure
- Log in to the OpenShift Web Console.
- Click Storage → Data Foundation.
- Click the Storage Systems tab.
- Click the Action Menu (⋮) on the far right of the storage system name to extend the options menu.
- Select Automatic capacity scaling from the options menu.
- In the Automatic capacity scaling page, select the Enable automatic capacity scaling for your cluster checkbox.
- Set the cluster expansion limit from the dropdown. This is the maximum the cluster can expand in the cloud. Automatic scaling is suspended if this limit is exceeded.
- Click Save changes.
6.4. Scaling out storage capacity on a VMware cluster
6.4.1. Adding a node to an installer-provisioned infrastructure
Prerequisites
- You have administrative privilege to the OpenShift Container Platform Console.
- You have a running OpenShift Data Foundation Storage Cluster.
Procedure
- Navigate to Compute → Machine Sets.
- On the machine set where you want to add nodes, select Edit Machine Count.
- Add the number of nodes, and click Save.
- Click Compute → Nodes and confirm that the new node is in Ready state.
Apply the OpenShift Data Foundation label to the new node.
- For the new node, click Action menu (⋮) → Edit Labels.
- Add cluster.ocs.openshift.io/openshift-storage, and click Save.
It is recommended to add 3 nodes, one each in different zones. You must add 3 nodes and perform this procedure for all of them. In case of bare metal installer-provisioned infrastructure deployment, you must expand the cluster first. For instructions, see Expanding the cluster.
Verification steps
Execute the following command in the terminal and verify that the new node is present in the output:
$ oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= | cut -d' ' -f1
On the OpenShift web console, click Workloads → Pods, and confirm that at least the following pods on the new node are in Running state:
- openshift-storage.cephfs.csi.ceph.com-*
- openshift-storage.rbd.csi.ceph.com-*
6.4.2. Adding a node to a user-provisioned infrastructure
Prerequisites
- You have administrative privilege to the OpenShift Container Platform Console.
- You have a running OpenShift Data Foundation Storage Cluster.
Procedure
Depending on the type of infrastructure, perform the following steps:
- Get a new machine with the required infrastructure. See Platform requirements.
- Create a new OpenShift Container Platform worker node using the new machine.
Check for certificate signing requests (CSRs) that are in Pending state.
$ oc get csr
Approve all the required CSRs for the new node.
$ oc adm certificate approve <Certificate_Name>
<Certificate_Name> is the name of the CSR.
- Click Compute → Nodes and confirm that the new node is in Ready state.
Apply the OpenShift Data Foundation label to the new node using any one of the following:
- From User interface
- For the new node, click Action Menu (⋮) → Edit Labels.
- Add cluster.ocs.openshift.io/openshift-storage, and click Save.
- From Command line interface
Apply the OpenShift Data Foundation label to the new node.
$ oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=""
<new_node_name> is the name of the new node.
Verification steps
Execute the following command in the terminal and verify that the new node is present in the output:
$ oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= | cut -d' ' -f1
On the OpenShift web console, click Workloads → Pods, and confirm that at least the following pods on the new node are in Running state:
- openshift-storage.cephfs.csi.ceph.com-*
- openshift-storage.rbd.csi.ceph.com-*
6.4.3. Adding a node using a local storage device
You can add nodes to increase the storage capacity when existing worker nodes are already running at their maximum supported OSDs or when there are not enough resources to add new OSDs on the existing nodes.
Add nodes in multiples of 3, each of them in different failure domains. Although it is recommended to add nodes in multiples of 3, you have the flexibility to add one node at a time in a flexible scaling deployment. See the Knowledgebase article Verify if flexible scaling is enabled.
OpenShift Data Foundation does not support heterogeneous disk sizes and types. The new nodes must have disks of the same type and size as those used during the initial OpenShift Data Foundation deployment.
Prerequisites
- You have administrative privilege to the OpenShift Container Platform Console.
- You have a running OpenShift Data Foundation Storage Cluster.
Procedure
Depending on the type of infrastructure, perform the following steps:
- Get a new machine with the required infrastructure. See Platform requirements.
- Create a new OpenShift Container Platform worker node using the new machine.
Check for certificate signing requests (CSRs) that are in Pending state.
$ oc get csr
Approve all the required CSRs for the new node.
$ oc adm certificate approve <Certificate_Name>
<Certificate_Name> is the name of the CSR.
- Click Compute → Nodes and confirm that the new node is in Ready state.
Apply the OpenShift Data Foundation label to the new node using any one of the following:
- From User interface
- For the new node, click Action Menu (⋮) → Edit Labels.
- Add cluster.ocs.openshift.io/openshift-storage, and click Save.
- From Command line interface
Apply the OpenShift Data Foundation label to the new node.
$ oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=""
<new_node_name> is the name of the new node.
Click Ecosystem → Installed Operators from the OpenShift Web Console.
From the Project drop-down list, make sure to select the project where the Local Storage Operator is installed.
- Click Local Storage.
- Click the Local Volume Discovery tab.
- Beside the LocalVolumeDiscovery, click Action menu (⋮) → Edit Local Volume Discovery.
- In the YAML, add the hostname of the new node in the values field under the node selector.
- Click Save.
- Click the Local Volume Sets tab.
- Beside the LocalVolumeSet, click Action menu (⋮) → Edit Local Volume Set.
- In the YAML, add the hostname of the new node in the values field under the node selector.
- Click Save.
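The YAML edit in the steps above looks roughly like the following LocalVolumeSet fragment. The hostnames are illustrative assumptions, with compute-3 standing in for the newly added node.

```yaml
# Fragment of a LocalVolumeSet (or LocalVolumeDiscovery) spec: the new
# node's hostname is appended to the existing values list.
spec:
  nodeSelector:
    nodeSelectorTerms:
      - matchExpressions:
          - key: kubernetes.io/hostname
            operator: In
            values:
              - compute-0
              - compute-1
              - compute-2
              - compute-3   # hostname of the newly added node
```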
It is recommended to add 3 nodes, one each in different zones. You must add 3 nodes and perform this procedure for all of them.
Verification steps
Execute the following command in the terminal and verify that the new node is present in the output:
$ oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= | cut -d' ' -f1
On the OpenShift web console, click Workloads → Pods, and confirm that at least the following pods on the new node are in Running state:
- openshift-storage.cephfs.csi.ceph.com-*
- openshift-storage.rbd.csi.ceph.com-*
6.4.4. Scaling up storage capacity
To scale up storage capacity:
- For dynamic storage devices, see Scaling up storage capacity on a cluster.
- For local storage devices, see Scaling up a cluster created using local storage devices.
Chapter 7. Scaling storage of Microsoft Azure OpenShift Data Foundation cluster
To scale the storage capacity of your configured Red Hat OpenShift Data Foundation worker nodes on Microsoft Azure cluster, you can increase the capacity by adding three disks at a time. Three disks are needed since OpenShift Data Foundation uses a replica count of 3 to maintain the high availability. So the amount of storage consumed is three times the usable space.
Usable space might vary when encryption is enabled or replica 2 pools are being used.
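The replica-3 arithmetic above works out as follows; the 2 TiB disk size is a hypothetical example.

```shell
# Adding three 2 TiB disks adds 6 TiB of raw capacity, but with a replica
# count of 3 the usable space added is only 2 TiB.
disk_tib=2
raw_tib=$((disk_tib * 3))
usable_tib=$((raw_tib / 3))
echo "raw added: ${raw_tib} TiB, usable added: ${usable_tib} TiB"
```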
7.1. Scaling up storage capacity on a cluster
To increase the storage capacity in a dynamically created storage cluster on a user-provisioned infrastructure, you can add storage capacity and performance to your configured Red Hat OpenShift Data Foundation worker nodes.
Prerequisites
- You have administrative privilege to the OpenShift Container Platform Console.
- You have a running OpenShift Data Foundation Storage Cluster.
- The disk should be of the same size and type as used during initial deployment.
Procedure
- Log in to the OpenShift Web Console.
- Click Storage → Data Foundation.
- Click the Storage Systems tab.
- Click the Action Menu (⋮) on the far right of the storage system name to extend the options menu.
- Select Add Capacity from the options menu.
- Select the Storage Class. Choose the storage class which you wish to use to provision new storage devices.
- Click Add.
- To check the status, navigate to Storage → Data Foundation and verify that the Storage System in the Status card has a green tick.
Verification steps
Verify the Raw Capacity card.
- In the OpenShift Web Console, click Storage → Data Foundation.
- In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears.
In the Block and File tab, check the Raw Capacity card.
Note that the capacity increases based on your selections.
Note: The raw capacity does not take replication into account and shows the full capacity.
Verify that the new object storage devices (OSDs) and their corresponding new Persistent Volume Claims (PVCs) are created.
To view the state of the newly created OSDs:
- Click Workloads → Pods from the OpenShift Web Console.
Select openshift-storage from the Project drop-down list.
Note: If the Show default projects option is disabled, use the toggle button to list all the default projects.
To view the state of the PVCs:
- Click Storage → Persistent Volume Claims from the OpenShift Web Console.
Select openshift-storage from the Project drop-down list.
Note: If the Show default projects option is disabled, use the toggle button to list all the default projects.
Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted.
Identify the nodes where the new OSD pods are running.
$ oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/<OSD-pod-name>
<OSD-pod-name> is the name of the OSD pod.
For example:
$ oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/rook-ceph-osd-0-544db49d7f-qrgqm
Example output:
NODE
compute-1
For each of the nodes identified in the previous step, do the following:
Create a debug pod and open a chroot environment for the selected host.
$ oc debug node/<node-name>
<node-name> is the name of the node.
$ chroot /host
Check for the crypt keyword beside the ocs-deviceset names.
$ lsblk
Cluster reduction is supported only with the Red Hat Support Team’s assistance.
7.2. Enabling automatic capacity scaling
You can enable automatic capacity scaling on clusters deployed using dynamic storage devices. When automatic capacity scaling is enabled, additional raw capacity equivalent to the configured deployment size is automatically added to the cluster when used capacity reaches 70%. This ensures your deployment scales seamlessly to meet demand.
This option is disabled in lean profile mode, LSO deployment, and external mode deployment.
This may incur additional costs for the underlying storage.
Prerequisites
- Administrative privilege to the OpenShift Container Platform Console.
- A running OpenShift Data Foundation Storage Cluster.
IBM Cloud 10iops-tier workloads are not supported.
Procedure
- Log in to the OpenShift Web Console.
- Click Storage → Data Foundation.
- Click the Storage Systems tab.
- Click the Action Menu (⋮) on the far right of the storage system name to extend the options menu.
- Select Automatic capacity scaling from the options menu.
- In the Automatic capacity scaling page, select the Enable automatic capacity scaling for your cluster checkbox.
- Set the cluster expansion limit from the dropdown. This is the maximum the cluster can expand in the cloud. Automatic scaling is suspended if this limit is exceeded.
- Click Save changes.
7.3. Scaling out storage capacity on a Microsoft Azure cluster
OpenShift Data Foundation is highly scalable. It can be scaled out by adding new nodes with the required storage and enough hardware resources in terms of CPU and RAM. In practice, there is no limit on the number of nodes that can be added, but from a support perspective, 2000 nodes is the limit for OpenShift Data Foundation.
Scaling out storage capacity can be broken down into two steps:
- Adding a new node
- Scaling up the storage capacity
OpenShift Data Foundation does not support heterogeneous OSD/Disk sizes.
7.3.1. Adding a node to an installer-provisioned infrastructure
Prerequisites
- You have administrative privilege to the OpenShift Container Platform Console.
- You have a running OpenShift Data Foundation Storage Cluster.
Procedure
- Navigate to Compute → Machine Sets.
- On the machine set where you want to add nodes, select Edit Machine Count.
- Add the number of nodes, and click Save.
- Click Compute → Nodes and confirm that the new node is in Ready state.
Apply the OpenShift Data Foundation label to the new node.
- For the new node, click Action menu (⋮) → Edit Labels.
- Add cluster.ocs.openshift.io/openshift-storage, and click Save.
It is recommended to add 3 nodes, one each in different zones. You must add 3 nodes and perform this procedure for all of them. In the case of a bare metal installer-provisioned infrastructure deployment, you must expand the cluster first. For instructions, see Expanding the cluster.
Verification steps
Execute the following command in the terminal and verify that the new node is present in the output:
$ oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= | cut -d' ' -f1

On the OpenShift web console, click Workloads → Pods, and confirm that at least the following pods on the new node are in Running state:
- openshift-storage.cephfs.csi.ceph.com-*
- openshift-storage.rbd.csi.ceph.com-*
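The verification pipeline above can be tried offline against canned output; the node names and labels below are fabricated examples, not output from a real cluster:

```shell
# Simulated `oc get nodes --show-labels` output; in a real cluster you
# would pipe the actual command instead of this sample text.
sample='compute-0 Ready worker 42d v1.28.3 cluster.ocs.openshift.io/openshift-storage=,kubernetes.io/os=linux
compute-1 Ready worker 42d v1.28.3 cluster.ocs.openshift.io/openshift-storage=,kubernetes.io/os=linux
infra-0 Ready infra 42d v1.28.3 kubernetes.io/os=linux'

# grep keeps only the nodes carrying the storage label; cut prints the
# first space-separated field, which is the node name.
echo "$sample" | grep cluster.ocs.openshift.io/openshift-storage= | cut -d' ' -f1
# Prints:
# compute-0
# compute-1
```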
7.3.2. Scaling up storage capacity
To scale up storage capacity, see Scaling up storage capacity on a cluster.
Chapter 8. Scaling storage capacity of GCP OpenShift Data Foundation cluster
To scale the storage capacity of your configured Red Hat OpenShift Data Foundation worker nodes on a GCP cluster, you can increase the capacity by adding three disks at a time. Three disks are needed because OpenShift Data Foundation uses a replica count of 3 to maintain high availability, so the amount of storage consumed is three times the usable space.
Usable space might vary when encryption is enabled or replica 2 pools are being used.
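As a quick illustration of the replica-3 arithmetic above, the following shell sketch computes raw consumption from a hypothetical usable capacity (the 2 TiB figure is an example, not a recommendation):

```shell
# With a replica count of 3, every byte of usable data is stored three
# times, so raw (consumed) capacity is three times the usable capacity.
# The figures below are illustrative only.
REPLICAS=3
USABLE_TIB=2   # example usable capacity in TiB

RAW_TIB=$(( USABLE_TIB * REPLICAS ))
echo "Usable: ${USABLE_TIB} TiB -> Raw consumed: ${RAW_TIB} TiB"
# Prints: Usable: 2 TiB -> Raw consumed: 6 TiB
```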
8.1. Scaling up storage capacity on a cluster
To increase the storage capacity in a dynamically created storage cluster on a user-provisioned infrastructure, you can add storage capacity and performance to your configured Red Hat OpenShift Data Foundation worker nodes.
Prerequisites
- You have administrative privilege to the OpenShift Container Platform Console.
- You have a running OpenShift Data Foundation Storage Cluster.
- The disk should be of the same size and type as used during initial deployment.
Procedure
- Log in to the OpenShift Web Console.
- Click Storage → Data Foundation.
- Click the Storage Systems tab.
- Click the Action Menu (⋮) on the far right of the storage system name to extend the options menu.
- Select Add Capacity from the options menu.
- Select the Storage Class. Choose the storage class which you wish to use to provision new storage devices.
- Click Add.
- To check the status, navigate to Storage → Data Foundation and verify that the Storage System in the Status card has a green tick.
Verification steps
Verify the Raw Capacity card.
- In the OpenShift Web Console, click Storage → Data Foundation.
- In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears.
- In the Block and File tab, check the Raw Capacity card.
Note that the capacity increases based on your selections.
Note: The raw capacity does not take replication into account and shows the full capacity.
Verify that the new object storage devices (OSDs) and their corresponding new Persistent Volume Claims (PVCs) are created.
To view the state of the newly created OSDs:
- Click Workloads → Pods from the OpenShift Web Console.
- Select openshift-storage from the Project drop-down list.
Note: If the Show default projects option is disabled, use the toggle button to list all the default projects.
To view the state of the PVCs:
- Click Storage → Persistent Volume Claims from the OpenShift Web Console.
- Select openshift-storage from the Project drop-down list.
Note: If the Show default projects option is disabled, use the toggle button to list all the default projects.
Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted.
Identify the nodes where the new OSD pods are running.
$ oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/<OSD-pod-name>

<OSD-pod-name> is the name of the OSD pod.
For example:
$ oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/rook-ceph-osd-0-544db49d7f-qrgqm

Example output:

NODE
compute-1
For each of the nodes identified in the previous step, do the following:
Create a debug pod and open a chroot environment for the selected hosts.
$ oc debug node/<node-name>

<node-name> is the name of the node.

$ chroot /host
Check for the crypt keyword beside the ocs-deviceset names.

$ lsblk
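The crypt check can be scripted with grep once you are inside the chroot; the lsblk output below is a fabricated example of what an encrypted OSD device set may look like:

```shell
# Fabricated lsblk-style output; on a real node you would run `lsblk`
# inside the debug pod's chroot instead of using this sample.
sample='sdb                              8:16  0  512G  0 disk
ocs-deviceset-0-data-0-example   253:2  0  512G  0 crypt'

# An encrypted device shows the TYPE "crypt" beside the ocs-deviceset name.
if echo "$sample" | grep ocs-deviceset | grep -q crypt; then
    echo "OSD device is encrypted"
else
    echo "OSD device is NOT encrypted"
fi
# Prints: OSD device is encrypted
```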
Cluster reduction is supported only with the Red Hat Support Team’s assistance.
8.2. Enabling automatic capacity scaling
You can enable automatic capacity scaling on clusters deployed using dynamic storage devices. When automatic capacity scaling is enabled, additional raw capacity equivalent to the configured deployment size is automatically added to the cluster when used capacity reaches 70%. This ensures your deployment scales seamlessly to meet demand.
This option is disabled in lean profile mode, LSO deployment, and external mode deployment.
This may incur additional costs for the underlying storage.
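The trigger logic described above can be sketched with shell arithmetic. This is only an illustration of the documented behavior (expand when used capacity reaches 70%, suspend past the expansion limit); the capacity figures and variable names are hypothetical, not the operator's implementation:

```shell
# Hypothetical raw-capacity figures in GiB; real values would come from
# the storage system's metrics.
TOTAL=3072            # current raw capacity
USED=2210             # used raw capacity
DEPLOYMENT_SIZE=1536  # raw capacity added per automatic expansion
EXPANSION_LIMIT=8192  # cluster expansion limit set in the UI

PCT=$(( USED * 100 / TOTAL ))   # used capacity as an integer percentage

if [ "$PCT" -ge 70 ]; then
    NEW_TOTAL=$(( TOTAL + DEPLOYMENT_SIZE ))
    if [ "$NEW_TOTAL" -le "$EXPANSION_LIMIT" ]; then
        echo "scale: used ${PCT}% >= 70%, expanding to ${NEW_TOTAL} GiB"
    else
        echo "suspended: expansion would exceed ${EXPANSION_LIMIT} GiB limit"
    fi
else
    echo "no action: used ${PCT}% < 70%"
fi
```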
Prerequisites
- Administrative privilege to the OpenShift Container Platform Console.
- A running OpenShift Data Foundation Storage Cluster.
IBM Cloud 10iops-tier workloads are not supported.
Procedure
- Log in to the OpenShift Web Console.
- Click Storage → Data Foundation.
- Click the Storage Systems tab.
- Click the Action Menu (⋮) on the far right of the storage system name to extend the options menu.
- Select Automatic capacity scaling from the options menu.
- In the Automatic capacity scaling page, select the Enable automatic capacity scaling for your cluster checkbox.
- Set the cluster expansion limit from the dropdown. This is the maximum amount by which the cluster can expand in the cloud. Automatic scaling is suspended if this limit is exceeded.
- Click Save changes.
8.3. Scaling out storage capacity on a GCP cluster
OpenShift Data Foundation is highly scalable. It can be scaled out by adding new nodes with the required storage and enough hardware resources in terms of CPU and RAM. There is practically no limit on the number of nodes that can be added, but from a support perspective, 2000 nodes is the limit for OpenShift Data Foundation.
Scaling out storage capacity can be broken down into two steps:
- Adding new node
- Scaling up the storage capacity
OpenShift Data Foundation does not support heterogeneous OSD/Disk sizes.
8.3.1. Adding a node
You can add nodes to increase the storage capacity when existing worker nodes are already running at their maximum supported OSDs or when there are not enough resources to add new OSDs on the existing nodes. It is always recommended to add nodes in multiples of three, each of them in different failure domains.
While it is recommended to add nodes in multiples of three, you still have the flexibility to add one node at a time in a flexible scaling deployment. Refer to the Knowledgebase article Verify if flexible scaling is enabled.
OpenShift Data Foundation does not support heterogeneous disk sizes and types. The new nodes to be added should have disks of the same type and size as those used during the OpenShift Data Foundation deployment.
8.3.1.1. Adding a node to an installer-provisioned infrastructure
Prerequisites
- You have administrative privilege to the OpenShift Container Platform Console.
- You have a running OpenShift Data Foundation Storage Cluster.
Procedure
- Navigate to Compute → Machine Sets.
- On the machine set where you want to add nodes, select Edit Machine Count.
- Add the number of nodes, and click Save.
- Click Compute → Nodes and confirm that the new node is in Ready state.
- Apply the OpenShift Data Foundation label to the new node:
- For the new node, click Action menu (⋮) → Edit Labels.
- Add cluster.ocs.openshift.io/openshift-storage, and click Save.
It is recommended to add 3 nodes, one each in different zones. You must add 3 nodes and perform this procedure for all of them. In the case of a bare metal installer-provisioned infrastructure deployment, you must expand the cluster first. For instructions, see Expanding the cluster.
Verification steps
Execute the following command in the terminal and verify that the new node is present in the output:
$ oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= | cut -d' ' -f1

On the OpenShift web console, click Workloads → Pods, and confirm that at least the following pods on the new node are in Running state:
- openshift-storage.cephfs.csi.ceph.com-*
- openshift-storage.rbd.csi.ceph.com-*
8.3.2. Scaling up storage capacity
To scale up storage capacity, see Scaling up storage capacity on a cluster.
Chapter 9. Scaling storage of IBM Z or IBM LinuxONE OpenShift Data Foundation cluster
9.1. Scaling up storage by adding capacity to your OpenShift Data Foundation nodes on IBM Z or IBM LinuxONE infrastructure
You can add storage capacity and performance to your configured Red Hat OpenShift Data Foundation worker nodes.
Flexible scaling features are enabled at the time of deployment and cannot be enabled or disabled later.
Prerequisites
- A running OpenShift Data Foundation Platform.
- Administrative privileges on the OpenShift Web Console.
- To scale using a storage class other than the one provisioned during deployment, first define an additional storage class. See Creating storage classes and pools for details.
Procedure
Add additional hardware resources with zFCP disks.
List all the disks.
$ lszdev

A SCSI disk is represented as a zfcp-lun with the structure <device-id>:<wwpn>:<lun-id> in the ID section. The first disk is used for the operating system. The device ID for the new disk can be the same.

Append a new SCSI disk.
$ chzdev -e 0.0.8204:0x400506630b1b50a4:0x3001301a00000000

Note: The device ID for the new disk must be the same as the disk to be replaced. The new disk is identified with its WWPN and LUN ID.
List all the FCP devices to verify the new disk is configured.
$ lszdev zfcp-lun
TYPE      ID                                              ON   PERS  NAMES
zfcp-lun  0.0.8204:0x102107630b1b5060:0x4001402900000000  yes  no    sda sg0
zfcp-lun  0.0.8204:0x500507630b1b50a4:0x4001302a00000000  yes  yes   sdb sg1
zfcp-lun  0.0.8204:0x400506630b1b50a4:0x3001301a00000000  yes  yes   sdc sg2
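Because a zfcp-lun ID is structured as <device-id>:<wwpn>:<lun-id>, its parts can be pulled out with cut; the ID below is taken from the listing above:

```shell
# Split a zfcp-lun ID of the form <device-id>:<wwpn>:<lun-id>.
id='0.0.8204:0x400506630b1b50a4:0x3001301a00000000'

device_id=$(echo "$id" | cut -d: -f1)
wwpn=$(echo "$id" | cut -d: -f2)
lun_id=$(echo "$id" | cut -d: -f3)

echo "device-id: $device_id"   # 0.0.8204
echo "wwpn:      $wwpn"        # 0x400506630b1b50a4
echo "lun-id:    $lun_id"      # 0x3001301a00000000
```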
- Navigate to the OpenShift Web Console.
- Click Storage → Data Foundation.
- In the top navigation bar, click the Storage Systems tab.
- Click the Action menu (⋮) next to the visible list to extend the options menu.
- Select Add Capacity from the options menu.
The Raw Capacity field shows the size set during storage class creation. The total amount of storage consumed is three times this amount, because OpenShift Data Foundation uses a replica count of 3.
- Click Add.
- To check the status, navigate to Storage → Data Foundation and verify that Storage System in the Status card has a green tick.
Verification steps
Verify the Raw Capacity card.
- In the OpenShift Web Console, click Storage → Data Foundation.
- In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears.
- In the Block and File tab, check the Raw Capacity card.
Note that the capacity increases based on your selections.
Note: The raw capacity does not take replication into account and shows the full capacity.
Verify that the new OSDs and their corresponding new Persistent Volume Claims (PVCs) are created.
To view the state of the newly created OSDs:
- Click Workloads → Pods from the OpenShift Web Console.
- Select openshift-storage from the Project drop-down list.
Note: If the Show default projects option is disabled, use the toggle button to list all the default projects.
To view the state of the PVCs:
- Click Storage → Persistent Volume Claims from the OpenShift Web Console.
- Select openshift-storage from the Project drop-down list.
Note: If the Show default projects option is disabled, use the toggle button to list all the default projects.
Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted.
Identify the nodes where the new OSD pods are running.
$ oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/<OSD-pod-name>

<OSD-pod-name> is the name of the OSD pod.
For example:
$ oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/rook-ceph-osd-0-544db49d7f-qrgqm

Example output:

NODE
compute-1
For each of the nodes identified in the previous step, do the following:
Create a debug pod and open a chroot environment for the selected hosts.
$ oc debug node/<node-name>

<node-name> is the name of the node.

$ chroot /host
Check for the crypt keyword beside the ocs-deviceset names.

$ lsblk
Cluster reduction is supported only with the Red Hat Support Team’s assistance.
9.2. Scaling out storage capacity on an IBM Z or IBM LinuxONE cluster
9.2.1. Adding a node using a local storage device
You can add nodes to increase the storage capacity when existing worker nodes are already running at their maximum supported OSDs or when there are not enough resources to add new OSDs on the existing nodes.
Add nodes in multiples of 3, each of them in different failure domains. Though it is recommended to add nodes in multiples of 3, you have the flexibility to add one node at a time in a flexible scaling deployment. See the Knowledgebase article Verify if flexible scaling is enabled.
OpenShift Data Foundation does not support heterogeneous disk sizes and types. The new nodes to be added should have disks of the same type and size as those used during the initial OpenShift Data Foundation deployment.
Prerequisites
- You have administrative privilege to the OpenShift Container Platform Console.
- You have a running OpenShift Data Foundation Storage Cluster.
Procedure
Depending on the type of infrastructure, perform the following steps:
- Get a new machine with the required infrastructure. See Platform requirements.
- Create a new OpenShift Container Platform worker node using the new machine.
Check for certificate signing requests (CSRs) that are in Pending state.

$ oc get csr

Approve all the required CSRs for the new node.

$ oc adm certificate approve <Certificate_Name>

<Certificate_Name> is the name of the CSR.
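Listing only the Pending CSRs can be done with a grep filter; the CSR names below are fabricated samples, not real cluster output:

```shell
# Simulated `oc get csr` output; in practice you would pipe the real
# command: oc get csr | grep Pending
sample='csr-8b2mt 5m kubernetes.io/kubelet-serving system:node:compute-3 Pending
csr-9xw4z 1h kubernetes.io/kubelet-serving system:node:compute-1 Approved,Issued'

# Keep only the CSRs awaiting approval and print their names.
echo "$sample" | grep Pending | cut -d' ' -f1
# Prints: csr-8b2mt
```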
- Click Compute → Nodes, and confirm that the new node is in Ready state.
Apply the OpenShift Data Foundation label to the new node using any one of the following:
- From User interface
- For the new node, click Action Menu (⋮) → Edit Labels.
- Add cluster.ocs.openshift.io/openshift-storage, and click Save.
- From Command line interface
Apply the OpenShift Data Foundation label to the new node.
$ oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=""

<new_node_name> is the name of the new node.
- Click Ecosystem → Installed Operators from the OpenShift Web Console.
- From the Project drop-down list, make sure to select the project where the Local Storage Operator is installed.
- Click Local Storage.
- Click the Local Volume Discovery tab.
- Beside the LocalVolumeDiscovery, click Action menu (⋮) → Edit Local Volume Discovery.
- In the YAML, add the hostname of the new node in the values field under the node selector.
- Click Save.
- Click the Local Volume Sets tab.
- Beside the LocalVolumeSet, click Action menu (⋮) → Edit Local Volume Set.
- In the YAML, add the hostname of the new node in the values field under the node selector.
- Click Save.
It is recommended to add 3 nodes, one each in different zones. You must add 3 nodes and perform this procedure for all of them.
Verification steps
Execute the following command in the terminal and verify that the new node is present in the output:
$ oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= | cut -d' ' -f1

On the OpenShift web console, click Workloads → Pods, and confirm that at least the following pods on the new node are in Running state:
- openshift-storage.cephfs.csi.ceph.com-*
- openshift-storage.rbd.csi.ceph.com-*
9.2.2. Scaling up storage capacity
To scale up storage capacity, see Scaling up storage capacity on a cluster.
Chapter 10. Scaling storage of IBM Power OpenShift Data Foundation cluster
To scale the storage capacity of your configured Red Hat OpenShift Data Foundation worker nodes on an IBM Power cluster, you can increase the capacity by adding three disks at a time. Three disks are needed because OpenShift Data Foundation uses a replica count of 3 to maintain high availability, so the amount of storage consumed is three times the usable space.
Usable space might vary when encryption is enabled or replica 2 pools are being used.
To scale up an OpenShift Data Foundation cluster that was created using local storage devices, a new disk needs to be added to the storage node. It is recommended that the new disks be of the same size as those used during the deployment, because OpenShift Data Foundation does not support heterogeneous disks or OSDs.
You can add storage capacity (additional storage devices) to your configured local storage based OpenShift Data Foundation worker nodes on IBM Power infrastructures.
Flexible scaling features are enabled at the time of deployment and cannot be enabled or disabled later.
Prerequisites
- You must be logged into the OpenShift Container Platform cluster.
- You must have installed the Local Storage Operator.
- You must have three OpenShift Container Platform worker nodes with the same storage type and size attached to each node (for example, 0.5TB SSD) as the original OpenShift Data Foundation StorageCluster was created with.
Procedure
To add storage capacity to OpenShift Container Platform nodes with OpenShift Data Foundation installed, you need to:
Find the available devices that you want to add, that is, a minimum of one device per worker node. You can follow the procedure for finding available storage devices in the respective deployment guide.
Note: Make sure you perform this process for all the existing nodes (minimum of 3) for which you want to add storage.
Add the additional disks to the LocalVolume custom resource (CR).

$ oc edit -n openshift-local-storage localvolume localblock

Make sure to save the changes after editing the CR.
Example output:
localvolume.local.storage.openshift.io/localblock edited

You can see in this CR that the new devices are added.
- sdx
Display the newly created Persistent Volumes (PVs) with the storageclass name used in the localVolume CR.

$ oc get pv | grep localblock | grep Available

Example output:

local-pv-a04ffd8    500Gi  RWO  Delete  Available  localblock  24s
local-pv-a0ca996b   500Gi  RWO  Delete  Available  localblock  23s
local-pv-c171754a   500Gi  RWO  Delete  Available  localblock  23s

- Navigate to the OpenShift Web Console.
- Click Storage → Data Foundation.
- In the top navigation bar, click the Storage System tab.
- Click the Action menu (⋮) next to the visible list to extend the options menu.
- Select Add Capacity from the options menu.
From this dialog box, set the Storage Class name to the name used in the localVolume CR. The Available Capacity displayed is based on the local disks available in the storage class.
- Click Add.
- To check the status, navigate to Storage → Data Foundation and verify that the Storage System in the Status card has a green tick.
Verification steps
Verify the available Capacity.
- In the OpenShift Web Console, click Storage → Data Foundation.
- Click the Storage Systems tab and then click ocs-storagecluster.
- Navigate to the Overview → Block and File tab, then check the Raw Capacity card.
Note that the capacity increases based on your selections.
Note: The raw capacity does not take replication into account and shows the full capacity.
Verify that the new OSDs and their corresponding new Persistent Volume Claims (PVCs) are created.
To view the state of the newly created OSDs:
- Click Workloads → Pods from the OpenShift Web Console.
- Select openshift-storage from the Project drop-down list.
Note: If the Show default projects option is disabled, use the toggle button to list all the default projects.
To view the state of the PVCs:
- Click Storage → Persistent Volume Claims from the OpenShift Web Console.
- Select openshift-storage from the Project drop-down list.
Note: If the Show default projects option is disabled, use the toggle button to list all the default projects.
Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted.
Identify the nodes where the new OSD pods are running.
$ oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/<OSD-pod-name>

<OSD-pod-name> is the name of the OSD pod.
For example:
$ oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/rook-ceph-osd-0-544db49d7f-qrgqm

Example output:

NODE
compute-1
For each of the nodes identified in the previous step, do the following:
Create a debug pod and open a chroot environment for the selected hosts.
$ oc debug node/<node-name>

<node-name> is the name of the node.

$ chroot /host
Check for the crypt keyword beside the ocs-deviceset names.

$ lsblk
Cluster reduction is supported only with the Red Hat Support Team’s assistance.
10.2. Scaling out storage capacity on an IBM Power cluster
OpenShift Data Foundation is highly scalable. It can be scaled out by adding new nodes with the required storage and enough hardware resources in terms of CPU and RAM. There is practically no limit on the number of nodes that can be added, but from a support perspective, 2000 nodes is the limit for OpenShift Data Foundation.
Scaling out storage capacity can be broken down into two steps:
- Adding new node
- Scaling up the storage capacity
OpenShift Data Foundation does not support heterogeneous OSD/Disk sizes.
10.2.1. Adding a node using a local storage device on IBM Power
You can add nodes to increase the storage capacity when existing worker nodes are already running at their maximum supported OSDs or when there are not enough resources to add new OSDs on the existing nodes.
Add nodes in multiples of 3, each of them in different failure domains. Though it is recommended to add nodes in multiples of 3, you have the flexibility to add one node at a time in a flexible scaling deployment. See the Knowledgebase article Verify if flexible scaling is enabled.
OpenShift Data Foundation does not support heterogeneous disk sizes and types. The new nodes to be added should have disks of the same type and size as those used during the initial OpenShift Data Foundation deployment.
Prerequisites
- You must be logged into the OpenShift Container Platform cluster.
- You must have three OpenShift Container Platform worker nodes with the same storage type and size attached to each node (for example, 2TB SSD drive) as the original OpenShift Data Foundation StorageCluster was created with.
Procedure
- Get a new IBM Power machine with the required infrastructure. See Platform requirements.
- Create a new OpenShift Container Platform node using the new IBM Power machine.
Check for certificate signing requests (CSRs) that are in Pending state.

$ oc get csr

Approve all the required CSRs for the new node.

$ oc adm certificate approve <Certificate_Name>

<Certificate_Name> is the name of the CSR.
- Click Compute → Nodes, and confirm that the new node is in Ready state.
Apply the OpenShift Data Foundation label to the new node using any one of the following:
- From User interface
- For the new node, click Action Menu (⋮) → Edit Labels.
- Add cluster.ocs.openshift.io/openshift-storage and click Save.
- From Command line interface
Apply the OpenShift Data Foundation label to the new node.
$ oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=""

<new_node_name> is the name of the new node.
- Click Ecosystem → Installed Operators from the OpenShift Web Console.
- From the Project drop-down list, make sure to select the project where the Local Storage Operator is installed.
- Click Local Storage.
- Click the Local Volume tab.
- Beside the LocalVolume, click Action menu (⋮) → Edit Local Volume.
- In the YAML, add the hostname of the new node in the values field under the node selector.
Figure 10.1. YAML showing the addition of new hostnames
- Click Save.
It is recommended to add 3 nodes, one each in different zones. You must add 3 nodes and perform this procedure for all of them.
Verification steps
Execute the following command in the terminal and verify that the new node is present in the output:
$ oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= | cut -d' ' -f1

On the OpenShift web console, click Workloads → Pods, and confirm that at least the following pods on the new node are in Running state:
- openshift-storage.cephfs.csi.ceph.com-*
- openshift-storage.rbd.csi.ceph.com-*
10.2.2. Scaling up storage capacity
To scale up storage capacity, see Scaling up storage capacity on a cluster.
Chapter 11. Scaling storage capacity of IBM FlashSystem cluster
To scale the storage capacity of your configured Red Hat OpenShift Data Foundation worker nodes on a cluster using external IBM FlashSystem, you can increase the capacity by adding three disks at a time. Three disks are needed because OpenShift Data Foundation uses a replica count of 3 to maintain high availability, so the amount of storage consumed is three times the usable space.
Usable space might vary when encryption is enabled or replica 2 pools are being used.
11.1. Scaling up storage capacity on a cluster
To increase the storage capacity in a dynamically created storage cluster on a user-provisioned infrastructure, you can add storage capacity and performance to your configured Red Hat OpenShift Data Foundation worker nodes.
Prerequisites
- You have administrative privilege to the OpenShift Container Platform Console.
- You have a running OpenShift Data Foundation Storage Cluster.
- The disk should be of the same size and type as used during initial deployment.
Procedure
- Log in to the OpenShift Web Console.
- Click Storage → Data Foundation.
- Click the Storage Systems tab.
- Click the Action Menu (⋮) on the far right of the storage system name to extend the options menu.
- Select Add Capacity from the options menu.
- Select the Storage Class. Choose the storage class which you wish to use to provision new storage devices.
- Click Add.
- To check the status, navigate to Storage → Data Foundation and verify that the Storage System in the Status card has a green tick.
Verification steps
Verify the Raw Capacity card.
- In the OpenShift Web Console, click Storage → Data Foundation.
- In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears.
- In the Block and File tab, check the Raw Capacity card.
Note that the capacity increases based on your selections.
Note: The raw capacity does not take replication into account and shows the full capacity.
Verify that the new object storage devices (OSDs) and their corresponding new Persistent Volume Claims (PVCs) are created.
To view the state of the newly created OSDs:
- Click Workloads → Pods from the OpenShift Web Console.
- Select openshift-storage from the Project drop-down list.
Note: If the Show default projects option is disabled, use the toggle button to list all the default projects.
To view the state of the PVCs:
- Click Storage → Persistent Volume Claims from the OpenShift Web Console.
- Select openshift-storage from the Project drop-down list.

  Note: If the Show default projects option is disabled, use the toggle button to list all the default projects.
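Both verifications can also be done from a terminal. A short sketch using the standard Rook label on OSD pods and the device-set naming of the backing PVCs (both as created by OpenShift Data Foundation; adjust if your deployment differs):

```shell
# List the OSD pods and confirm that the new ones are in Running state.
oc get pods -n openshift-storage -l app=rook-ceph-osd

# List the device-set PVCs backing the OSDs and confirm the new ones are Bound.
oc get pvc -n openshift-storage | grep ocs-deviceset
```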
Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted.
- Identify the nodes where the new OSD pods are running:

  $ oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/<OSD-pod-name>

  <OSD-pod-name> is the name of the OSD pod.

  For example:

  $ oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/rook-ceph-osd-0-544db49d7f-qrgqm

  Example output:

  NODE
  compute-1
For each of the nodes identified in the previous step, do the following:
- Create a debug pod and open a chroot environment for the selected host:

  $ oc debug node/<node-name>

  <node-name> is the name of the node.

  $ chroot /host
- Check for the crypt keyword beside the ocs-deviceset names:

  $ lsblk
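If several nodes received new OSDs, the per-node check can be scripted instead of opening an interactive debug shell on each one. A sketch, assuming the standard app=rook-ceph-osd pod label and the ocs-deviceset device naming used elsewhere in this procedure:

```shell
# Collect the unique set of nodes that run OSD pods, then run lsblk on
# each node through a debug pod and show the ocs-deviceset devices so
# the "crypt" keyword can be checked beside them.
for node in $(oc get pods -n openshift-storage -l app=rook-ceph-osd \
    -o jsonpath='{.items[*].spec.nodeName}' | tr ' ' '\n' | sort -u); do
  echo "== ${node} =="
  oc debug node/"${node}" -- chroot /host lsblk | grep -E 'ocs-deviceset|crypt'
done
```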
Cluster reduction is supported only with the Red Hat Support Team’s assistance.
11.2. Scaling out storage capacity on an IBM FlashSystem cluster
OpenShift Data Foundation is highly scalable. You can scale it out by adding new nodes that have the required storage and sufficient hardware resources in terms of CPU and RAM. There is practically no limit on the number of nodes that can be added, but from a support perspective, 2000 nodes is the limit for OpenShift Data Foundation.
Scaling out storage capacity can be broken down into two steps:
- Adding a new node
- Scaling up the storage capacity
OpenShift Data Foundation does not support heterogeneous OSD or disk sizes.
11.2.1. Adding a node
You can add nodes to increase the storage capacity when existing worker nodes are already running at their maximum supported OSDs or when there are not enough resources to add new OSDs on the existing nodes. It is recommended to add nodes in multiples of three, each of them in a different failure domain.
While adding nodes in multiples of three is recommended, you still have the flexibility of adding one node at a time in a flexible scaling deployment. See the Knowledgebase article Verify if flexible scaling is enabled.
OpenShift Data Foundation does not support heterogeneous disk sizes and types. The new nodes must have disks of the same size and type as those used during the OpenShift Data Foundation deployment.
11.2.1.1. Adding a node to an installer-provisioned infrastructure
Prerequisites
- You have administrative privilege to the OpenShift Container Platform Console.
- You have a running OpenShift Data Foundation Storage Cluster.
Procedure
- Navigate to Compute → Machine Sets.
- On the machine set where you want to add nodes, select Edit Machine Count.
- Add the number of nodes, and click Save.
- Click Compute → Nodes and confirm if the new node is in Ready state.
- Apply the OpenShift Data Foundation label to the new node:
- For the new node, click Action menu (⋮) → Edit Labels.
- Add cluster.ocs.openshift.io/openshift-storage, and click Save.
It is recommended to add 3 nodes, one each in different zones. You must add 3 nodes and perform this procedure for all of them. In case of bare metal installer-provisioned infrastructure deployment, you must expand the cluster first. For instructions, see Expanding the cluster.
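The machine-set steps can also be performed from the command line. A hedged sketch, assuming a machine set named <machineset-name> (pick the real name from the first command) and an example replica count of 4:

```shell
# List the machine sets and pick the one to grow.
oc get machinesets -n openshift-machine-api

# Increase the replica count (to 4 here, as an example) to add nodes.
oc scale machineset <machineset-name> -n openshift-machine-api --replicas=4

# Watch the nodes until the new one reports Ready, then label it
# for OpenShift Data Foundation.
oc get nodes -w
oc label node <new-node-name> cluster.ocs.openshift.io/openshift-storage=""
```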
Verification steps
- Execute the following command in the terminal and verify that the new node is present in the output:

  $ oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= | cut -d' ' -f1

- On the OpenShift web console, click Workloads → Pods, and confirm that at least the following pods on the new node are in Running state:
  - openshift-storage.cephfs.csi.ceph.com-*
  - openshift-storage.rbd.csi.ceph.com-*
11.2.1.2. Adding a node to a user-provisioned infrastructure
Prerequisites
- You have administrative privilege to the OpenShift Container Platform Console.
- You have a running OpenShift Data Foundation Storage Cluster.
Procedure
Depending on the type of infrastructure, perform the following steps:
- Get a new machine with the required infrastructure. See Platform requirements.
- Create a new OpenShift Container Platform worker node using the new machine.
- Check for certificate signing requests (CSRs) that are in Pending state:

  $ oc get csr

- Approve all the required CSRs for the new node:

  $ oc adm certificate approve <Certificate_Name>

  <Certificate_Name> is the name of the CSR.
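A new node typically raises more than one CSR (a client certificate and then a serving certificate), so approving them one by one can be tedious. A sketch that approves every CSR that has not yet been acted on, using a go-template to select requests with an empty status:

```shell
# Approve all CSRs that are still pending (no status recorded yet).
oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' \
  | xargs --no-run-if-empty oc adm certificate approve
```

Re-run `oc get csr` afterwards: approving the client CSR usually triggers a second, serving-certificate CSR that also needs approval.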
- Click Compute → Nodes, confirm if the new node is in Ready state.
- Apply the OpenShift Data Foundation label to the new node using any one of the following:

  From the user interface:
  - For the new node, click Action Menu (⋮) → Edit Labels.
  - Add cluster.ocs.openshift.io/openshift-storage, and click Save.

  From the command-line interface:
  - Apply the OpenShift Data Foundation label to the new node:

    $ oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=""

    <new_node_name> is the name of the new node.
Verification steps
- Execute the following command in the terminal and verify that the new node is present in the output:

  $ oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= | cut -d' ' -f1

- On the OpenShift web console, click Workloads → Pods, and confirm that at least the following pods on the new node are in Running state:
  - openshift-storage.cephfs.csi.ceph.com-*
  - openshift-storage.rbd.csi.ceph.com-*
11.2.2. Scaling up storage capacity
To scale up storage capacity, see Scaling up a cluster.