Chapter 2. Deploy OpenShift Data Foundation using local storage devices
Deploying OpenShift Data Foundation on OpenShift Container Platform using local storage devices provides you with the option to create internal cluster resources. Follow this deployment method to use local storage to back persistent volumes for your OpenShift Container Platform applications.
Use this section to deploy OpenShift Data Foundation on IBM Z infrastructure where OpenShift Container Platform is already installed.
2.1. Installing Red Hat OpenShift Data Foundation Operator
You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub.
Prerequisites
- Access to an OpenShift Container Platform cluster using an account with cluster-admin and operator installation permissions.
- You must have at least three worker or infrastructure nodes in the Red Hat OpenShift Container Platform cluster.
- For additional resource requirements, see the Planning your deployment guide.
When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command to specify a blank node selector for the openshift-storage namespace (create the openshift-storage namespace in this case):
$ oc annotate namespace openshift-storage openshift.io/node-selector=
- Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see the How to use dedicated worker nodes for Red Hat OpenShift Data Foundation section in the Managing and Allocating Storage Resources guide. A sketch of the typical commands is shown after this list.
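For reference, the following is a minimal sketch of how such a node is commonly labeled and tainted from the CLI. The label and taint keys shown here are the ones typically used for this purpose; verify them against the guide referenced above before applying them:
$ oc label node <node name> node-role.kubernetes.io/infra=""
$ oc adm taint node <node name> node.ocs.openshift.io/storage="true":NoSchedule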
Procedure
- Log in to the OpenShift Web Console.
- Click Operators → OperatorHub.
- Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator.
- Click Install.
Set the following options on the Install Operator page:
- Update Channel as stable-4.15.
- Installation Mode as A specific namespace on the cluster.
- Installed Namespace as Operator recommended namespace openshift-storage. If Namespace openshift-storage does not exist, it is created during the operator installation.
- Select Approval Strategy as Automatic or Manual.
  If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention.
  If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version.
- Ensure that the Enable option is selected for the Console plugin.
- Click Install.
Verification steps
- After the operator is successfully installed, a pop-up with a message, Web console update is available, appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect.
- In the Web Console:
- Navigate to Installed Operators and verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation.
- Navigate to Storage and verify if the Data Foundation dashboard is available.
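You can also spot-check the installation from the CLI. Assuming the operator was installed into the default openshift-storage namespace, the ClusterServiceVersion should report a Succeeded phase and the operator pods should be Running:
$ oc get csv -n openshift-storage
$ oc get pods -n openshift-storage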
2.2. Installing Local Storage Operator
Install the Local Storage Operator from the Operator Hub before creating Red Hat OpenShift Data Foundation clusters on local storage devices.
Procedure
- Log in to the OpenShift Web Console.
- Click Operators → OperatorHub.
- Type local storage in the Filter by keyword box to find the Local Storage Operator from the list of operators, and click on it.
- Set the following options on the Install Operator page:
  - Update channel as stable.
  - Installation mode as A specific namespace on the cluster.
  - Installed Namespace as Operator recommended namespace openshift-local-storage.
  - Update approval as Automatic.
- Click Install.
Verification steps
- Verify that the Local Storage Operator shows a green tick indicating successful installation.
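Optionally, a quick CLI check (assuming the default openshift-local-storage namespace) should show the operator's ClusterServiceVersion in the Succeeded phase:
$ oc get csv -n openshift-local-storage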
2.3. Finding available storage devices (optional)
This step is optional and can be skipped because the disks are automatically discovered during storage cluster creation. Use this procedure to identify the device names for each of the three or more worker nodes that you have labeled with the OpenShift Data Foundation label cluster.ocs.openshift.io/openshift-storage='' before creating Persistent Volumes (PV) for IBM Z.
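If you have not yet applied that label, the following is a sketch of the standard command for labeling each selected worker node:
$ oc label node <node name> cluster.ocs.openshift.io/openshift-storage=''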
Procedure
- List and verify the name of the worker nodes with the OpenShift Data Foundation label.
$ oc get nodes -l=cluster.ocs.openshift.io/openshift-storage=
Example output:
NAME         STATUS   ROLES    AGE     VERSION
bmworker01   Ready    worker   6h45m   v1.16.2
bmworker02   Ready    worker   6h45m   v1.16.2
bmworker03   Ready    worker   6h45m   v1.16.2
- Log in to each worker node that is used for OpenShift Data Foundation resources and find the unique by-id device name for each available raw block device.
$ oc debug node/<node name>
Example output:
$ oc debug node/bmworker01
Starting pod/bmworker01-debug ...
To use host binaries, run `chroot /host`
Pod IP: 10.0.135.71
If you don't see a command prompt, try pressing enter.
sh-4.2# chroot /host
sh-4.4# lsblk
NAME                         MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
loop0                          7:0    0   500G  0 loop
sda                            8:0    0   120G  0 disk
|-sda1                         8:1    0   384M  0 part /boot
`-sda4                         8:4    0 119.6G  0 part
  `-coreos-luks-root-nocrypt 253:0    0 119.6G  0 dm   /sysroot
sdb                            8:16   0   500G  0 disk
In this example, for bmworker01, the available local device is sdb.
- Identify the unique ID for each of the devices selected in Step 2.
sh-4.4# ls -l /dev/disk/by-id/ | grep sdb
lrwxrwxrwx. 1 root root 9 Feb 3 16:49 scsi-360050763808104bc2800000000000259 -> ../../sdb
lrwxrwxrwx. 1 root root 9 Feb 3 16:49 scsi-SIBM_2145_00e020412f0aXX00 -> ../../sdb
lrwxrwxrwx. 1 root root 9 Feb 3 16:49 scsi-0x60050763808104bc2800000000000259 -> ../../sdb
In the above example, the ID for the local device sdb is scsi-0x60050763808104bc2800000000000259.
- Repeat the above step to identify the device ID for all the other nodes that have the storage devices to be used by OpenShift Data Foundation. See this Knowledge Base article for more details.
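To collect the by-id names from several nodes in one pass, a helper loop such as the following can be used. This is a hedged sketch that assumes the example node names shown above and the same oc debug access used earlier:
$ for node in bmworker01 bmworker02 bmworker03; do
    echo "=== ${node} ==="
    oc debug node/${node} -- chroot /host ls -l /dev/disk/by-id/
  done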
2.4. Creating OpenShift Data Foundation cluster on IBM Z
Use this procedure to create an OpenShift Data Foundation cluster on IBM Z.
Prerequisites
- Ensure that all the requirements in the Requirements for installing OpenShift Data Foundation using local storage devices section are met.
- You must have at least three worker nodes with the same storage type and size attached to each node (for example, 200 GB) to use local storage devices on IBM Z or IBM® LinuxONE.
Procedure
- In the OpenShift Web Console, click Operators → Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage.
- Click on the OpenShift Data Foundation operator and then click Create StorageSystem.
In the Backing storage page, perform the following:
- Select the Create a new StorageClass using the local storage devices for Backing storage type option.
- Select Full Deployment for the Deployment type option.
- Click Next.
Important: You are prompted to install the Local Storage Operator if it is not already installed. Click Install, and follow the procedure as described in Installing Local Storage Operator.
In the Create local volume set page, provide the following information:
- Enter a name for the LocalVolumeSet and the StorageClass.
  By default, the local volume set name appears for the storage class name. You can change the name.
- Choose one of the following:
  Disks on all nodes
  Uses the available disks that match the selected filters on all the nodes.
  Disks on selected nodes
  Uses the available disks that match the selected filters only on the selected nodes.
Important: The flexible scaling feature is enabled only when the storage cluster that you create with three or more nodes is spread across fewer than the minimum requirement of three availability zones. For information about flexible scaling, see the knowledgebase article on Scaling OpenShift Data Foundation cluster using YAML when flexible scaling is enabled. The flexible scaling feature is enabled at the time of deployment and cannot be enabled or disabled later.
If the nodes selected do not match the OpenShift Data Foundation cluster requirement of an aggregated 30 CPUs and 72 GiB of RAM, a minimal cluster is deployed. For minimum starting node requirements, see the Resource requirements section in the Planning guide. You can spot-check the aggregate resources of the labeled nodes from the CLI, as shown below.
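The following is a minimal sketch of such a check; it assumes the worker nodes carry the cluster.ocs.openshift.io/openshift-storage label and lists their allocatable CPU and memory so you can total them:
$ oc get nodes -l cluster.ocs.openshift.io/openshift-storage= \
  -o custom-columns=NAME:.metadata.name,CPU:.status.allocatable.cpu,MEMORY:.status.allocatable.memory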
- From the available list of Disk Type, select SSD/NVME.
- Expand the Advanced section and set the following options:
  Volume Mode
  Block is selected by default.
  Device Type
  Select one or more device types from the dropdown list. Note: For a multi-path device, select only the Mpath option from the dropdown.
  Disk Size
  Set a minimum size of 100 GB for the device and the maximum available size of the device that needs to be included.
  Maximum Disks Limit
  This indicates the maximum number of PVs that can be created on a node. If this field is left empty, then PVs are created for all the available disks on the matching nodes.
- Click Next.
  A pop-up to confirm the creation of the LocalVolumeSet is displayed.
- Click Yes to continue.
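Optionally, you can confirm from the CLI that the LocalVolumeSet was created and that local persistent volumes were provisioned for the discovered disks. This is a hedged spot-check that assumes the default openshift-local-storage namespace:
$ oc get localvolumeset -n openshift-local-storage
$ oc get pv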
In the Capacity and nodes page, configure the following:
- Available raw capacity is populated with the capacity value based on all the attached disks associated with the storage class. This takes some time to show up. The Selected nodes list shows the nodes based on the storage class.
- You can check the box to select Taint nodes.
- Click Next.
Optional: In the Security and network page, configure the following based on your requirement:
- To enable encryption, select Enable data encryption for block and file storage.
Choose one or both of the following Encryption levels:
  Cluster-wide encryption
  Encrypts the entire cluster (block and file).
  StorageClass encryption
  Creates encrypted persistent volumes (block only) using an encryption-enabled storage class.
- Select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption.
- Key Management Service Provider is set to Vault by default.
- Enter Vault Service Name, host Address of the Vault server ('https://<hostname or ip>'), Port number and Token.
Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration:
- Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation.
- Optional: Enter TLS Server Name and Vault Enterprise Namespace.
- Upload the respective PEM encoded certificate file to provide CA Certificate, Client Certificate and Client Private Key.
- Click Save.
- Select Default (SDN) as Multus is not yet supported on OpenShift Data Foundation on IBM Z.
- Click Next.
- In the Data Protection page, if you are configuring a Regional-DR solution for OpenShift Data Foundation, select the Prepare cluster for disaster recovery (Regional-DR only) checkbox; otherwise, click Next.
In the Review and create page:
- Review the configuration details. To modify any configuration settings, click Back to go back to the previous configuration page.
- Click Create StorageSystem.
Verification steps
To verify the final Status of the installed storage cluster:
- In the OpenShift Web Console, navigate to Installed Operators → OpenShift Data Foundation → Storage System → ocs-storagecluster-storagesystem → Resources.
- Verify that the Status of StorageCluster is Ready and has a green tick mark next to it.
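An equivalent check can be run from the CLI. Assuming the default openshift-storage namespace, the StorageCluster should report a phase of Ready:
$ oc get storagecluster -n openshift-storage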
To verify if flexible scaling is enabled on your storage cluster, perform the following steps:
- In the OpenShift Web Console, navigate to Installed Operators → OpenShift Data Foundation → Storage System → ocs-storagecluster-storagesystem → Resources → ocs-storagecluster.
- In the YAML tab, search for the keys flexibleScaling in the spec section and failureDomain in the status section. If flexibleScaling is true and failureDomain is set to host, the flexible scaling feature is enabled.
spec:
  flexibleScaling: true
[…]
status:
  failureDomain: host
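The same fields can be inspected from the CLI. This is a minimal sketch, assuming the default StorageCluster name ocs-storagecluster in the openshift-storage namespace:
$ oc get storagecluster ocs-storagecluster -n openshift-storage -o yaml | grep -E 'flexibleScaling|failureDomain'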
- To verify that all components for OpenShift Data Foundation are successfully installed, see Verifying your OpenShift Data Foundation deployment.
Additional resources
- To expand the capacity of the initial cluster, see the Scaling Storage guide.