Chapter 4. Scaling storage nodes
To scale the storage capacity of OpenShift Container Storage, you can do either of the following:
- Scale up storage nodes - Add storage capacity to the existing Red Hat OpenShift Container Storage worker nodes
- Scale out storage nodes - Add new worker nodes containing storage capacity
4.1. Requirements for scaling storage nodes
Before you proceed to scale the storage nodes, refer to the following sections to understand the node requirements for your specific Red Hat OpenShift Container Storage instance:
- Supported Infrastructure and Platforms
- Supported configurations
Always ensure that you have plenty of storage capacity.
If storage ever fills completely, it is not possible to add capacity or delete or migrate content away from the storage to free up space. Completely full storage is very difficult to recover.
Capacity alerts are issued when cluster storage capacity reaches 75% (near-full) and 85% (full) of total capacity. Always address capacity warnings promptly, and review your storage regularly to ensure that you do not run out of storage space.
If you do run out of storage space completely, contact Red Hat Customer Support.
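To check current utilization from the command line, you can query Ceph through the rook-ceph toolbox. This is only a sketch; it assumes the toolbox pod has been deployed in the openshift-storage namespace (it is not deployed by default):
$ TOOLS_POD=$(oc get pods -n openshift-storage -l app=rook-ceph-tools -o name)
$ oc rsh -n openshift-storage $TOOLS_POD ceph df
The raw used percentage in the output should stay well below the 75% near-full threshold.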
4.1.1. Supported Deployments for Red Hat OpenShift Container Storage
User-provisioned infrastructure:
- Amazon Web Services (AWS)
- VMware
- Bare metal
Installer-provisioned infrastructure:
- Amazon Web Services (AWS)
4.2. Scaling up storage capacity
Depending on the type of your deployment, you can choose one of the following procedures to scale up storage capacity.
- For AWS or VMware infrastructures using dynamic or automated provisioning of storage devices, see Section 4.2.1, “Scaling up storage by adding capacity to your OpenShift Container Storage nodes on AWS or VMware infrastructure”
- For bare metal, Amazon EC2 I3, or VMware infrastructures using local storage devices, see Section 4.2.2, “Scaling up storage by adding capacity to your OpenShift Container Storage nodes using local storage devices”
4.2.1. Scaling up storage by adding capacity to your OpenShift Container Storage nodes on AWS or VMware infrastructure
Use this procedure to add storage capacity and performance to your configured Red Hat OpenShift Container Storage worker nodes.
Prerequisites
- A running OpenShift Container Storage cluster
- Administrative privileges on the OpenShift Web Console
Procedure
- Navigate to the OpenShift Web Console.
- Click Operators on the left navigation bar.
- Select Installed Operators.
- In the window, click OpenShift Container Storage Operator.
- In the top navigation bar, scroll right and click the Storage Cluster tab.
- The visible list should have only one item. Click (⋮) on the far right to open the options menu.
- Select Add Capacity from the options menu.
From this dialog box, you can set the requested additional capacity and the storage class. Add Capacity shows the capacity selected at the time of installation and allows you to add capacity only in that increment. On AWS, the storage class should be set to gp2. On VMware, the storage class should be set to thin.
Note: The effectively provisioned capacity will be three times the value that you see in the Raw Capacity field because OpenShift Container Storage uses a replica count of 3. For example, if the Raw Capacity field shows 2 TiB, adding that capacity provisions three 2 TiB devices (one per replica), that is, approximately 6 TiB of raw storage for 2 TiB of usable capacity.
- Click Add. Wait for the storage cluster to reach the Ready state; this might take a couple of minutes.
Verification steps
- Navigate to Overview → Persistent Storage tab, then check the Capacity breakdown card.
- Note that the capacity increases based on your selections.
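You can also confirm the expansion from the command line; a minimal check, assuming the default openshift-storage namespace (the cluster phase should report Ready once the expansion completes):
$ oc get storagecluster -n openshift-storage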
OpenShift Container Storage does not support cluster reduction either by reducing OSDs or reducing nodes.
4.2.2. Scaling up storage by adding capacity to your OpenShift Container Storage nodes using local storage devices
Use this procedure to add storage capacity (additional storage devices) to your configured local storage based OpenShift Container Storage worker nodes on bare metal, Amazon EC2 I3, and VMware infrastructures.
Scaling up storage on Amazon EC2 I3 is a Technology Preview feature. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For Amazon EC2 I3 infrastructure, adding nodes is the only option for adding capacity, because the deployment already uses both of the available NVMe devices.
Prerequisites
- You must be logged into the OpenShift Container Platform (OCP) cluster.
- You must have installed the Local Storage Operator. For information, see Installing Local Storage Operator.
- You must have three OpenShift Container Platform worker nodes with storage of the same type and size attached to each node (for example, a 2 TB NVMe drive) as was used when the original OCS StorageCluster was created.
Procedure
To add storage capacity to OpenShift Container Platform nodes with OpenShift Container Storage installed, you need to:
- Find the unique by-id identifier for each available device that you want to add, that is, a minimum of one device per worker node. Follow the procedure for finding available storage devices; one possible approach is sketched below.
Note: Make sure you perform this process for all the existing nodes (minimum of 3) for which you want to add storage.
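One way to list candidate devices is to open a debug shell on each worker node. This is only a sketch, with <node_name> as a placeholder:
$ oc debug node/<node_name> -- chroot /host ls -l /dev/disk/by-id/
Repeat this for every worker node and record the by-id entries of the unused devices.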
- Add the unique device by-id to the LocalVolume CR:
$ oc edit -n local-storage localvolume local-block
Example output:
spec:
  logLevel: Normal
  managementState: Managed
  nodeSelector:
    nodeSelectorTerms:
    - matchExpressions:
      - key: cluster.ocs.openshift.io/openshift-storage
        operator: In
        values:
        - ""
  storageClassDevices:
  - devicePaths:
    - /dev/disk/by-id/nvme-INTEL_SSDPE2KX010T7_PHLF733402P51P0GGN
    - /dev/disk/by-id/nvme-INTEL_SSDPE2KX010T7_PHLF733402LM1P0GGN
    - /dev/disk/by-id/nvme-INTEL_SSDPE2KX010T7_PHLF733402M21P0GGN
    - /dev/disk/by-id/nvme-INTEL_SSDPE2KX010T7_PHLF733402B71P0GGN  # newly added device by-id
    - /dev/disk/by-id/nvme-INTEL_SSDPE2KX010T7_PHLF733402A31P0GGN  # newly added device by-id
    - /dev/disk/by-id/nvme-INTEL_SSDPE2KX010T7_PHLF733402Q71P0GGN  # newly added device by-id
    storageClassName: localblock
    volumeMode: Block
Make sure to save the changes after editing the CR.
localvolume.local.storage.openshift.io/local-block edited
In this CR, you can see that the new devices have been added using their by-id. Each device maps to nvme1n1 on one of the three worker nodes:
- nvme-INTEL_SSDPE2KX010T7_PHLF733402B71P0GGN
- nvme-INTEL_SSDPE2KX010T7_PHLF733402A31P0GGN
- nvme-INTEL_SSDPE2KX010T7_PHLF733402Q71P0GGN
- Display the PVs with the storageclass name used in the localVolume CR:
$ oc get pv | grep localblock | grep Available
Example output:
local-pv-5ee61dcc   894Gi   RWO   Delete   Available   localblock   2m35s
local-pv-b1fa607a   894Gi   RWO   Delete   Available   localblock   2m27s
local-pv-e971c51d   894Gi   RWO   Delete   Available   localblock   2m22s
...
There are three more available PVs of the same size, which will be used for the new OSDs.
- To expand the storage capacity, increase the count by 1 for storageDeviceSets in the StorageCluster CR:
$ oc edit storageclusters.ocs.openshift.io -n openshift-storage
Example output:
spec:
  monDataDirHostPath: /var/lib/rook
  storageDeviceSets:
  - config: {}
    count: 2    # <-- increase this count by 1
    dataPVCTemplate:
      metadata:
        creationTimestamp: null
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 894Gi
        storageClassName: localblock
        volumeMode: Block
      status: {}
    name: ocs-deviceset
    placement: {}
    replica: 3
    resources: {}
  version: 4.4.0
Make sure to save the changes after editing the CR.
storagecluster.ocs.openshift.io/ocs-storagecluster edited
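As an alternative to editing the CR interactively, the count can be changed with a JSON patch. The following is only a sketch; it assumes the default StorageCluster name ocs-storagecluster shown above and that the device set you are expanding is the first entry (index 0):
$ oc patch storagecluster ocs-storagecluster -n openshift-storage --type json -p '[{"op": "replace", "path": "/spec/storageDeviceSets/0/count", "value": 2}]'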
Important: To ensure that the OSDs have a guaranteed size across the nodes, the storage size for storageDeviceSets must be specified as less than or equal to the size of the desired PVs created on the nodes.
- Verify that there are three new OSDs running and that their corresponding new PVCs are created:
$ oc get -n openshift-storage pods -l app=rook-ceph-osd
Example output:
NAME                               READY   STATUS    RESTARTS   AGE
rook-ceph-osd-0-77c4fdb758-qshw4   1/1     Running   0          1h
rook-ceph-osd-1-8645c5fbb6-656ks   1/1     Running   0          1h
rook-ceph-osd-2-86895b854f-r4gt6   1/1     Running   0          1h
rook-ceph-osd-3-dc7f787dd-gdnsz    1/1     Running   0          10m
rook-ceph-osd-4-554b5c46dd-hbf9t   1/1     Running   0          10m
rook-ceph-osd-5-5cf94c4448-k94j6   1/1     Running   0          10m
In the above example, osd-3, osd-4, and osd-5 are the newly added pods to the OpenShift Container Storage cluster.
$ oc get pvc -n openshift-storage |grep localblock
Example output:
ocs-deviceset-0-0-qc29m   Bound   local-pv-fc5562d3   894Gi   RWO   localblock   1h
ocs-deviceset-0-1-qdmrl   Bound   local-pv-b1fa607a   894Gi   RWO   localblock   10m
ocs-deviceset-1-0-mpwmk   Bound   local-pv-58cdd0bc   894Gi   RWO   localblock   1h
ocs-deviceset-1-1-85892   Bound   local-pv-e971c51d   894Gi   RWO   localblock   10m
ocs-deviceset-2-0-rll47   Bound   local-pv-29d8ad8d   894Gi   RWO   localblock   1h
ocs-deviceset-2-1-cgth2   Bound   local-pv-5ee61dcc   894Gi   RWO   localblock   10m
In the above example, you can see that three new PVCs are created.
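Optionally, you can confirm that the new OSDs have joined the Ceph cluster. This sketch assumes the rook-ceph toolbox pod has been deployed in the openshift-storage namespace (it is not deployed by default):
$ oc rsh -n openshift-storage $(oc get pods -n openshift-storage -l app=rook-ceph-tools -o name) ceph osd tree
All six OSDs from the example above should be listed as up.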
Verification steps
Navigate to Overview → Persistent Storage tab, then check the Capacity breakdown card and confirm that the capacity has increased based on your selections.
4.3. Scaling out storage capacity
To scale out storage capacity, you need to perform the following:
- Add a new node to increase the storage capacity when existing worker nodes are already running at their maximum supported OSDs, that is, in increments of 3 OSDs of the capacity selected during initial configuration.
- Verify that the new node is added successfully
- Scale up the storage capacity after the node is added
Depending on the type of your deployment, you can choose one of the following procedures to add a storage node:
- For AWS installer-provisioned infrastructure, see Section 4.3.1, “Adding a node on an AWS installer-provisioned infrastructure”
- For AWS or VMware user-provisioned infrastructure, see Section 4.3.2, “Adding a node on an AWS or a VMware user-provisioned infrastructure”
- For bare metal, Amazon EC2 I3, or VMware infrastructures, see Section 4.3.3, “Adding a node using a local storage device”
4.3.1. Adding a node on an AWS installer-provisioned infrastructure
Prerequisites
- You must be logged into the OpenShift Container Platform (OCP) cluster.
Procedure
- Navigate to Compute → Machine Sets.
- On the machine set where you want to add nodes, select Edit Count.
- Add the number of nodes, and click Save.
- Click Compute → Nodes and confirm that the new node is in Ready state.
- Apply the OpenShift Container Storage label to the new node:
- For the new node, click Action menu (⋮) → Edit Labels.
- Add cluster.ocs.openshift.io/openshift-storage and click Save.
Note: It is recommended to add 3 nodes, each in a different zone. You must add 3 nodes and perform this procedure for all of them.
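If you prefer the command line, the machine set can be scaled in the same way; this is a sketch with placeholder names and counts:
$ oc get machinesets -n openshift-machine-api
$ oc scale machineset <machineset_name> --replicas=<new_count> -n openshift-machine-api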
Verification steps
To verify that the new node is added, see Section 4.3.4, “Verifying the addition of a new node”.
4.3.2. Adding a node on an AWS or a VMware user-provisioned infrastructure
Prerequisites
- You must be logged into the OpenShift Container Platform (OCP) cluster.
Procedure
Depending on whether you are adding a node on an AWS user-provisioned infrastructure or a VMware user-provisioned infrastructure, perform the following steps:
For AWS:
- Create a new AWS machine instance with the required infrastructure. See Supported Infrastructure and Platforms.
- Create a new OpenShift Container Platform node using the new AWS machine instance.
For VMware:
- Create a new VM on vSphere with the required infrastructure. See Supported Infrastructure and Platforms.
- Create a new OpenShift Container Platform worker node using the new VM.
Check for certificate signing requests (CSRs) related to OpenShift Container Storage that are in Pending state:
$ oc get csr
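To narrow the listing to requests that still need approval, a simple filter such as the following can be used (a convenience only, not a required step):
$ oc get csr | grep -i pending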
Approve all required OpenShift Container Storage CSRs for the new node:
$ oc adm certificate approve <Certificate_Name>
- Click Compute → Nodes and confirm that the new node is in Ready state.
- Apply the OpenShift Container Storage label to the new node using any one of the following:
- From the user interface:
- For the new node, click Action Menu (⋮) → Edit Labels.
- Add cluster.ocs.openshift.io/openshift-storage and click Save.
- From the command line interface:
Execute the following command to apply the OpenShift Container Storage label to the new node:
$ oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=""
Note: It is recommended to add 3 nodes, each in a different zone. You must add 3 nodes and perform this procedure for all of them.
Verification steps
To verify that the new node is added, see Section 4.3.4, “Verifying the addition of a new node”.
4.3.3. Adding a node using a local storage device
Use this procedure to add a node on bare metal, Amazon EC2, and VMware infrastructures.
Scaling storage nodes for Amazon EC2 infrastructure is a Technology Preview feature. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
Prerequisites
- You must be logged into the OpenShift Container Platform (OCP) cluster.
- You must have three OpenShift Container Platform worker nodes with storage of the same type and size attached to each node (for example, a 2 TB NVMe drive) as was used when the original OCS StorageCluster was created.
Procedure
Depending on whether you are adding a node on bare metal, Amazon EC2, or VMware infrastructure, perform the following steps:
For Amazon EC2:
- Create a new Amazon EC2 I3 machine instance with the required infrastructure. See Creating a MachineSet in AWS and Supported Infrastructure and Platforms.
- Create a new OpenShift Container Platform node using the new Amazon EC2 I3 machine instance.
For VMware:
- Create a new VM on vSphere with the required infrastructure. See Supported Infrastructure and Platforms.
- Create a new OpenShift Container Platform worker node using the new VM.
For bare metal:
- Get a new bare metal machine with the required infrastructure. See Supported Infrastructure and Platforms.
- Create a new OpenShift Container Platform node using the new bare metal machine.
Check for certificate signing requests (CSRs) related to OpenShift Container Storage that are in Pending state:
$ oc get csr
Approve all required OpenShift Container Storage CSRs for the new node:
$ oc adm certificate approve <Certificate_Name>
- Click Compute → Nodes and confirm that the new node is in Ready state.
- Apply the OpenShift Container Storage label to the new node using any one of the following:
- From the user interface:
- For the new node, click Action Menu (⋮) → Edit Labels.
- Add cluster.ocs.openshift.io/openshift-storage and click Save.
- From the command line interface:
Execute the following command to apply the OpenShift Container Storage label to the new node:
$ oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=""
Note: It is recommended to add 3 nodes, each in a different zone. You must add 3 nodes and perform this procedure for all of them.
Verification steps
To verify that the new node is added, see Section 4.3.4, “Verifying the addition of a new node”.
4.3.4. Verifying the addition of a new node
Execute the following command and verify that the new node is present in the output:
$ oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= | cut -d' ' -f1
- Click Workloads → Pods and confirm that at least the following pods on the new node are in Running state:
- csi-cephfsplugin-*
- csi-rbdplugin-*
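You can also check the same pods from the command line; a sketch, with <new_node_name> as a placeholder:
$ oc get pods -n openshift-storage -o wide | grep <new_node_name>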
4.3.5. Scaling up storage capacity
To scale up storage capacity, see Scaling up storage by adding capacity.