Chapter 8. Scaling storage of an IBM Z or LinuxONE OpenShift Data Foundation cluster
8.1. Scaling up storage by adding capacity to your OpenShift Data Foundation nodes on IBM Z or LinuxONE infrastructure
You can add storage capacity and performance to your configured Red Hat OpenShift Data Foundation worker nodes.
Flexible scaling features are enabled at the time of deployment and cannot be enabled or disabled later.
Prerequisites
- A running OpenShift Data Foundation Platform.
- Administrative privileges on the OpenShift Web Console.
- To scale using a storage class other than the one provisioned during deployment, first define an additional storage class. See Creating a storage class for details.
Procedure
Add additional hardware resources with zFCP disks.
List all the disks.
$ lszdev
Example output:
TYPE         ID                                              ON   PERS  NAMES
zfcp-host    0.0.8204                                        yes  yes
zfcp-lun     0.0.8204:0x102107630b1b5060:0x4001402900000000  yes  no    sda sg0
zfcp-lun     0.0.8204:0x500407630c0b50a4:0x3002b03000000000  yes  yes   sdb sg1
qeth         0.0.bdd0:0.0.bdd1:0.0.bdd2                      yes  no    encbdd0
generic-ccw  0.0.0009                                        yes  no
A SCSI disk is represented as a zfcp-lun with the structure <device-id>:<wwpn>:<lun-id> in the ID section. The first disk is used for the operating system. The device ID for the new disk can be the same.
Append a new SCSI disk.
$ chzdev -e 0.0.8204:0x400506630b1b50a4:0x3001301a00000000
Note: The device ID for the new disk must be the same as the disk to be replaced. The new disk is identified with its WWPN and LUN ID.
List all the FCP devices to verify the new disk is configured.
$ lszdev zfcp-lun
TYPE      ID                                              ON   PERS  NAMES
zfcp-lun  0.0.8204:0x102107630b1b5060:0x4001402900000000  yes  no    sda sg0
zfcp-lun  0.0.8204:0x500507630b1b50a4:0x4001302a00000000  yes  yes   sdb sg1
zfcp-lun  0.0.8204:0x400506630b1b50a4:0x3001301a00000000  yes  yes   sdc sg2
- Navigate to the OpenShift Web Console.
- Click Operators on the left navigation bar.
- Select Installed Operators.
- In the window, click OpenShift Data Foundation Operator.
- In the top navigation bar, scroll right and click the Storage Systems tab.
- Click the Action menu (⋮) next to the visible list to extend the options menu.
- Select Add Capacity from the options menu.
The Raw Capacity field shows the size set during storage class creation. The total amount of storage consumed is three times this amount, because OpenShift Data Foundation uses a replica count of 3.
- Click Add.
- To check the status, navigate to Storage → Data Foundation and verify that Storage System in the Status card has a green tick.
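If you also want to follow the expansion from the command line, one option (not part of the original procedure; it assumes the default openshift-storage namespace) is to check the storage cluster status:
$ oc get storagecluster -n openshift-storage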
Verification steps
Verify the Raw Capacity card.
- In the OpenShift Web Console, click Storage → Data Foundation.
- In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears.
- In the Block and File tab, check the Raw Capacity card.
Note that the capacity increases based on your selections.
Note: The raw capacity does not take replication into account and shows the full capacity.
Verify that the new OSDs and their corresponding new Persistent Volume Claims (PVCs) are created.
To view the state of the newly created OSDs:
- Click Workloads → Pods from the OpenShift Web Console.
- Select openshift-storage from the Project drop-down list.
  Note: If the Show default projects option is disabled, use the toggle button to list all the default projects.
To view the state of the PVCs:
- Click Storage → Persistent Volume Claims from the OpenShift Web Console.
- Select openshift-storage from the Project drop-down list.
  Note: If the Show default projects option is disabled, use the toggle button to list all the default projects.
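A command-line equivalent of these console checks (a sketch; app=rook-ceph-osd is the standard Rook label for OSD pods, and the default openshift-storage namespace is assumed):
$ oc get pods -n openshift-storage -l app=rook-ceph-osd
$ oc get pvc -n openshift-storage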
Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted.
Identify the nodes where the new OSD pods are running.
$ oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/<OSD-pod-name>
<OSD-pod-name> is the name of the OSD pod.
For example:
$ oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/rook-ceph-osd-0-544db49d7f-qrgqm
Example output:
NODE
compute-1
For each of the nodes identified in the previous step, do the following:
Create a debug pod and open a chroot environment for the selected host(s).
$ oc debug node/<node-name>
<node-name> is the name of the node.
$ chroot /host
Check for the crypt keyword beside the ocs-deviceset names.
$ lsblk
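Illustrative lsblk output for an encrypted OSD device (a sketch only; the device name, size, and deviceset suffix shown here are hypothetical and will differ on your cluster):
NAME                                          MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
sdc                                             8:32   0  512G  0 disk
└─ocs-deviceset-0-data-0-abc12-block-dmcrypt  253:0    0  512G  0 crypt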
Important: Cluster reduction is supported only with the assistance of the Red Hat Support Team.
8.2. Scaling out storage capacity on an IBM Z or LinuxONE cluster
8.2.1. Adding a node using a local storage device
You can add nodes to increase the storage capacity when existing worker nodes are already running at their maximum supported OSDs or when there are not enough resources to add new OSDs on the existing nodes.
Add nodes in multiples of 3, each of them in a different failure domain. Though it is recommended to add nodes in multiples of 3, you have the flexibility to add one node at a time in a flexible scaling deployment. See the Knowledgebase article Verify if flexible scaling is enabled.
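One way to check from the command line (a sketch that assumes the default storage cluster name ocs-storagecluster; the Knowledgebase article above is the authoritative procedure):
$ oc get storagecluster ocs-storagecluster -n openshift-storage -o jsonpath='{.spec.flexibleScaling}'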
OpenShift Data Foundation does not support heterogeneous disk sizes and types. The new nodes should have disks of the same type and size as those used during the initial OpenShift Data Foundation deployment.
Prerequisites
- You have administrative privilege to the OpenShift Container Platform Console.
- You have a running OpenShift Data Foundation Storage Cluster.
Procedure
Depending on the type of infrastructure, perform the following steps:
- Get a new machine with the required infrastructure. See Platform requirements.
- Create a new OpenShift Container Platform worker node using the new machine.
Check for certificate signing requests (CSRs) that are in Pending state.
$ oc get csr
Approve all the required CSRs for the new node.
$ oc adm certificate approve <Certificate_Name>
<Certificate_Name> is the name of the CSR.
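If several CSRs are pending, a convenience one-liner can approve them in bulk (an optional shortcut, not part of the original procedure; review the pending list before approving in bulk):
$ oc get csr | grep Pending | awk '{print $1}' | xargs oc adm certificate approve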
- Click Compute → Nodes, and confirm that the new node is in Ready state.
- Apply the OpenShift Data Foundation label to the new node using any one of the following:
- From User interface
  - For the new node, click Action Menu (⋮) → Edit Labels.
  - Add cluster.ocs.openshift.io/openshift-storage, and click Save.
- From Command line interface
Apply the OpenShift Data Foundation label to the new node.
$ oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=""
<new_node_name> is the name of the new node.
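To confirm that the label was applied (an optional extra check, not part of the original procedure):
$ oc get nodes -l cluster.ocs.openshift.io/openshift-storage=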
- Click Operators → Installed Operators from the OpenShift Web Console. From the Project drop-down list, select the project where the Local Storage Operator is installed.
- Click Local Storage.
- Click the Local Volume Discovery tab.
- Beside the LocalVolumeDiscovery, click Action menu (⋮) → Edit Local Volume Discovery.
- In the YAML, add the hostname of the new node in the values field under the node selector.
- Click Save.
- Click the Local Volume Sets tab.
- Beside the LocalVolumeSet, click Action menu (⋮) → Edit Local Volume Set.
- In the YAML, add the hostname of the new node in the values field under the node selector, as shown in the sketch after these steps.
  Figure 8.1. YAML showing the addition of new hostnames
- Click Save.
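A minimal sketch of the nodeSelector stanza that Figure 8.1 illustrates, assuming three existing nodes worker-0 through worker-2 and a new node worker-3 (all hostnames here are hypothetical); the same edit applies to the LocalVolumeDiscovery YAML:
spec:
  nodeSelector:
    nodeSelectorTerms:
      - matchExpressions:
          - key: kubernetes.io/hostname
            operator: In
            values:
              - worker-0
              - worker-1
              - worker-2
              - worker-3   # hostname of the newly added node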
It is recommended to add 3 nodes, one each in different zones. You must add 3 nodes and perform this procedure for all of them.
Verification steps
Execute the following command in the terminal and verify that the new node is present in the output:
$ oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= | cut -d' ' -f1
On the OpenShift web console, click Workloads → Pods, and confirm that at least the following pods on the new node are in Running state:
- csi-cephfsplugin-*
- csi-rbdplugin-*
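An equivalent command-line check (a sketch; substitute <new_node_name> with the node you added):
$ oc get pods -n openshift-storage -o wide --field-selector spec.nodeName=<new_node_name>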
8.2.2. Scaling up storage capacity
To scale up storage capacity, see Scaling up storage by adding capacity.