Chapter 9. Scaling storage nodes
To scale the storage capacity of OpenShift Data Foundation, you can do either of the following:
- Scale up storage nodes - Add storage capacity to the existing OpenShift Data Foundation worker nodes
- Scale out storage nodes - Add new worker nodes containing storage capacity
9.1. Requirements for scaling storage nodes
Before you proceed to scale the storage nodes, refer to the following sections to understand the node requirements for your specific Red Hat OpenShift Data Foundation instance:
- Platform requirements
- Storage device requirements
Always ensure that you have sufficient spare storage capacity.
If storage fills completely, you cannot add capacity, delete content, or migrate content away from the storage to free up space. Completely full storage is very difficult to recover.
Capacity alerts are issued when cluster storage capacity reaches 75% (near-full) and 85% (full) of total capacity. Always address capacity warnings promptly, and review your storage regularly to ensure that you do not run out of storage space.
If you do run out of storage space completely, contact Red Hat Customer Support.
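If the rook-ceph-tools pod is deployed in your cluster, you can also check current utilization from the command line. This is a sketch that assumes the default toolbox label and namespace; adjust them if your environment differs.
$ TOOLS_POD=$(oc get pods -n openshift-storage -l app=rook-ceph-tools -o name)
$ oc rsh -n openshift-storage $TOOLS_POD ceph df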
9.2. Scaling up storage by adding capacity to your OpenShift Data Foundation nodes on Google Cloud infrastructure
You can add storage capacity and performance to your configured Red Hat OpenShift Data Foundation worker nodes.
Prerequisites
- A running OpenShift Data Foundation Platform.
- Administrative privileges on the OpenShift Web Console.
- To scale using a storage class other than the one provisioned during deployment, first define an additional storage class. See Creating a storage class for details.
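To review the storage classes that are already available before deciding whether you need an additional one, you can list them from the command line:
$ oc get storageclass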
Procedure
- Log in to the OpenShift Web Console.
- Click Operators → Installed Operators.
- Click OpenShift Data Foundation Operator.
- Click the Storage Systems tab.
- Click the Action Menu (⋮) on the far right of the storage system name to extend the options menu.
- Select Add Capacity from the options menu.
- Select the Storage Class.
Set the storage class to standard if you are using the default storage class that uses HDD. However, if you created a storage class that uses SSD-based disks for better performance, select that storage class.
The Raw Capacity field shows the size set during storage class creation. The total amount of storage consumed is three times this amount, because OpenShift Data Foundation uses a replica count of 3. For example, selecting 2 TiB here consumes 6 TiB of raw storage across the cluster.
- Click Add.
- To check the status, navigate to Storage → OpenShift Data Foundation, and verify that Storage System in the Status card has a green tick.
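Alternatively, you can check the status from the command line. The resource namespace below assumes a default internal-mode deployment; the PHASE column should report Ready:
$ oc get storagecluster -n openshift-storage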
Verification steps
- Verify the Raw Capacity card.
  - In the OpenShift Web Console, click Storage → OpenShift Data Foundation.
  - In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears.
  - In the Block and File tab, check the Raw Capacity card.
Note that the capacity increases based on your selections.
Note: The raw capacity does not take replication into account and shows the full capacity.
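The same raw capacity figures can also be read from the CephCluster resource status. The jsonpath below is a sketch based on the Rook CephCluster schema and assumes an internal-mode deployment:
$ oc get cephcluster -n openshift-storage -o jsonpath='{.items[0].status.ceph.capacity}'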
- Verify that the new OSDs and their corresponding new Persistent Volume Claims (PVCs) are created.
To view the state of the newly created OSDs:
  - Click Workloads → Pods from the OpenShift Web Console.
  - Select openshift-storage from the Project drop-down list.
Note: If the Show default projects option is disabled, use the toggle button to list all the default projects.
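From the command line, you can list the OSD pods directly; app=rook-ceph-osd is the label that Rook applies to OSD pods:
$ oc get pods -n openshift-storage -l app=rook-ceph-osd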
To view the state of the PVCs:
  - Click Storage → Persistent Volume Claims from the OpenShift Web Console.
  - Select openshift-storage from the Project drop-down list.
Note: If the Show default projects option is disabled, use the toggle button to list all the default projects.
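A command-line equivalent, assuming the usual ocs-deviceset naming convention for the PVCs that back the OSDs:
$ oc get pvc -n openshift-storage | grep ocs-deviceset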
- Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted.
  - Identify the nodes where the new OSD pods are running.
$ oc get -o=custom-columns=NODE:.spec.nodeName pod/<OSD-pod-name>
where <OSD-pod-name> is the name of the OSD pod.
For example:
$ oc get -o=custom-columns=NODE:.spec.nodeName pod/rook-ceph-osd-0-544db49d7f-qrgqm
  - For each of the nodes identified in the previous step, create a debug pod and open a chroot environment for the selected host:
$ oc debug node/<node-name>
where <node-name> is the name of the node.
$ chroot /host
  - Check for the crypt keyword beside the ocs-deviceset names.
$ lsblk
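If you prefer not to open an interactive session, the same check can be run as a one-liner; this sketch uses the ability of oc debug to run a single command on the node:
$ oc debug node/<node-name> -- chroot /host lsblk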
Cluster reduction is supported only with the assistance of the Red Hat Support Team.
9.3. Scaling out storage capacity by adding new nodes
To scale out storage capacity, you need to perform the following:
- Add a new node to increase the storage capacity when the existing worker nodes are already running at their maximum supported OSDs. Capacity is added in increments of 3 OSDs of the size selected during initial configuration.
- Verify that the new node is added successfully.
- Scale up the storage capacity after the node is added.
9.3.1. Adding a node on Google Cloud installer-provisioned infrastructure
Prerequisites
- You must be logged in to the OpenShift Container Platform cluster.
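You can confirm that you are logged in, and as which user, with the following command:
$ oc whoami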
Procedure
- Navigate to Compute → Machine Sets.
- On the machine set where you want to add nodes, select Edit Machine Count.
- Add the number of nodes, and click Save.
- Click Compute → Nodes and confirm that the new node is in Ready state.
- Apply the OpenShift Data Foundation label to the new node.
  - For the new node, click Action menu (⋮) → Edit Labels.
  - Add cluster.ocs.openshift.io/openshift-storage, and click Save.
Note: It is recommended to add 3 nodes, one in each of three different zones. You must add 3 nodes and perform this procedure for all of them.
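If you prefer the command line, the same procedure can be sketched with oc commands. The machine set name, replica count, and node name below are placeholders; the empty label value matches what the web console applies:
$ oc scale machineset <machine-set-name> -n openshift-machine-api --replicas=<machine-count>
$ oc label node <new-node-name> cluster.ocs.openshift.io/openshift-storage=""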
Verification steps
- To verify that the new node is added, see Verifying the addition of a new node.
9.3.2. Verifying the addition of a new node
- Execute the following command and verify that the new node is present in the output:
$ oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= | cut -d' ' -f1
- Click Workloads → Pods, and confirm that at least the following pods on the new node are in Running state:
  - csi-cephfsplugin-*
  - csi-rbdplugin-*
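To list the pods scheduled on the new node from the command line, you can use a field selector; <new-node-name> is a placeholder for the node name from the previous step:
$ oc get pods -n openshift-storage -o wide --field-selector spec.nodeName=<new-node-name>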
9.3.3. Scaling up storage capacity
After you add a new node to OpenShift Data Foundation, you must scale up the storage capacity as described in Scaling up storage by adding capacity.