Deploying OpenShift Data Foundation in external mode
Abstract
Instructions for deploying OpenShift Data Foundation to use an external Red Hat Ceph Storage cluster or IBM FlashSystem.
Providing feedback on Red Hat documentation
We appreciate your input on our documentation. Let us know how we can improve it.
To give feedback, create a Jira ticket:
- Log in to Jira.
- Click Create in the top navigation bar.
- Enter a descriptive title in the Summary field.
- Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation.
- Select Documentation in the Components field.
- Click Create at the bottom of the dialog.
Chapter 1. Overview of deploying in external mode
Red Hat OpenShift Data Foundation can make services from an external Red Hat Ceph Storage cluster available for consumption through OpenShift Container Platform clusters running on any platform.
See Planning your deployment for more information.
For instructions regarding how to install a RHCS cluster, see the installation guide.
Follow the steps in this guide to deploy OpenShift Data Foundation in external mode.
1.1. Disaster recovery requirements
Disaster Recovery features supported by Red Hat OpenShift Data Foundation require all of the following prerequisites to successfully implement a disaster recovery solution:
- A valid Red Hat OpenShift Data Foundation Advanced subscription
- A valid Red Hat Advanced Cluster Management for Kubernetes subscription
For more information, see the knowledgebase article on OpenShift Data Foundation subscriptions.
For detailed disaster recovery solution requirements, see the Configuring OpenShift Data Foundation Disaster Recovery for OpenShift Workloads guide, and the Requirements and recommendations section of the Install guide in the Red Hat Advanced Cluster Management for Kubernetes documentation.
The following features related to external mode are not supported when using Metro-DR:
- StorageClasses using non-default RADOS namespace
- User created StorageClasses, even when using default RADOS namespace
- Multiple StorageClasses
1.2. Network ports required between OpenShift Container Platform and Ceph when using external mode deployment
The following TCP ports must be reachable from the OpenShift Container Platform cluster (source) to the RHCS cluster (destination):
| TCP ports | To be used for | 
|---|---|
| 6789, 3300 | Ceph Monitor | 
| 6800 - 7300 | Ceph OSD, MGR, MDS | 
| 9283 | Ceph MGR Prometheus Exporter | 
For more information about why these ports are required, see Chapter 2. Ceph network configuration of RHCS Configuration Guide.
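Before deploying, you can spot-check that these ports are reachable from a machine on the OpenShift network. The following is a minimal sketch using bash's `/dev/tcp` redirection; `ceph-mon.example.com` is a placeholder for an actual RHCS node address.

```shell
#!/usr/bin/env bash
# Sketch: probe the RHCS ports listed in the table above.
# Replace ceph-mon.example.com with the address of a real RHCS node.
probe() {
  local host=$1 port=$2
  if timeout 2 bash -c "cat < /dev/null > /dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "open   ${host}:${port}"
  else
    echo "closed ${host}:${port}"
  fi
}

for port in 6789 3300 9283; do
  probe ceph-mon.example.com "$port"
done
```

A "closed" result for the OSD/MGR/MDS range (6800-7300) or the monitor ports usually points at a firewall rule between the two clusters.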
Chapter 2. Deploy OpenShift Data Foundation using Red Hat Ceph Storage
Red Hat OpenShift Data Foundation can make services from an external Red Hat Ceph Storage cluster available for consumption through OpenShift Container Platform clusters. You need to install the OpenShift Data Foundation operator and then create an OpenShift Data Foundation cluster for the external Ceph storage system.
2.1. Installing Red Hat OpenShift Data Foundation Operator
You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub.
Prerequisites
- Access to an OpenShift Container Platform cluster using an account with cluster-admin and operator installation permissions.
- For additional resource requirements, see the Planning your deployment guide.
- When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command to specify a blank node selector for the openshift-storage namespace (create the openshift-storage namespace first if it does not exist):

  $ oc annotate namespace openshift-storage openshift.io/node-selector=
Procedure
- Log in to the OpenShift Web Console.
- Click Operators → OperatorHub.
- Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator.
- Click Install.
- Set the following options on the Install Operator page:
  - Update Channel as stable-4.19.
  - Installation Mode as A specific namespace on the cluster.
  - Installed Namespace as Operator recommended namespace openshift-storage. If the openshift-storage namespace does not exist, it is created during the operator installation.
  - Select Approval Strategy as Automatic or Manual.
    - If you select Automatic updates, the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention.
    - If you select Manual updates, the OLM creates an update request. As a cluster administrator, you must manually approve that update request to update the Operator to a newer version.
  - Ensure that the Enable option is selected for the Console plugin.
- Click Install.
 
Verification steps
- After the operator is successfully installed, a pop-up with the message Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to take effect.
- In the Web Console:
  - Navigate to Installed Operators and verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation.
  - Navigate to Storage and verify that the Data Foundation dashboard is available.
 
2.2. Creating an OpenShift Data Foundation Cluster for external Ceph storage system
You need to create a new OpenShift Data Foundation cluster after you install OpenShift Data Foundation operator on OpenShift Container Platform deployed on VMware vSphere or user-provisioned bare metal infrastructures.
Prerequisites
- A valid Red Hat OpenShift Data Foundation Advanced subscription. To know how subscriptions for OpenShift Data Foundation work, see the knowledgebase article on OpenShift Data Foundation subscriptions.
- Ensure the OpenShift Container Platform version is 4.19 or above before deploying OpenShift Data Foundation 4.19.
- OpenShift Data Foundation operator must be installed. For more information, see Installing OpenShift Data Foundation Operator using the Operator Hub.
- To check the supportability and interoperability of Red Hat Ceph Storage (RHCS) with Red Hat OpenShift Data Foundation in external mode, go to the Red Hat OpenShift Data Foundation Supportability and Interoperability Checker lab:
  - Select Service Type as ODF as Self-Managed Service.
  - Select the appropriate Version from the drop-down.
  - On the Versions tab, click the Supported RHCS versions in the External Mode tab.
- Red Hat Ceph Storage must have Ceph Dashboard installed and configured. For more information, see Ceph Dashboard installation and access.
- It is recommended that the external Red Hat Ceph Storage cluster has the PG Autoscaler enabled. For more information, see The placement group autoscaler section in the Red Hat Ceph Storage documentation.
Procedure
- In the OpenShift Web Console, click Storage → Data Foundation → Storage Systems → Create StorageSystem.
- In the Backing storage page, select the following options:
  - Select Full deployment for the Deployment type option.
  - Select Connect an external storage platform from the available options.
  - Select Red Hat Ceph Storage for Storage platform.
  - Click Next.
 
- In the Connection details page, provide the necessary information:
  - Click the Download Script link to download the Python script for extracting Ceph cluster details.
  - To extract the Red Hat Ceph Storage (RHCS) cluster details, contact the RHCS administrator to run the downloaded Python script on a Red Hat Ceph Storage node that has the admin key. Run the following command on the RHCS node to view the list of available arguments:

    # python3 ceph-external-cluster-details-exporter.py --help

    Important: Use python instead of python3 if the Red Hat Ceph Storage 4.x cluster is deployed on a Red Hat Enterprise Linux 7.x (RHEL 7.x) cluster.
    You can also run the script from inside a MON container (containerized deployment) or from a MON node (RPM deployment).
    Note: Use the yum install cephadm command and then the cephadm command to deploy your RHCS cluster using containers. You must pull the RHCS container images using the cephadm command, rather than using yum to install the Ceph packages onto nodes. For more information, see the RHCS product documentation.
- To retrieve the external cluster details from the RHCS cluster, choose one of the following two options: a configuration file or command-line flags.
  - Configuration file: Use the --config-file flag. This stores the parameters used during deployment.
    In new deployments, you can save the parameters used during deployment in a configuration file. This file can then be used during an upgrade to preserve those parameters, as well as to add any additional parameters. Use --config-file to set the path to the configuration file.

    Example configuration file saved in /config.ini:

    [Configurations]
    format = bash
    cephfs-filesystem-name = <filesystem-name>
    rbd-data-pool-name = <pool_name>
    ...

    Set the path to the config.ini file using --config-file:

    # python3 ceph-external-cluster-details-exporter.py --config-file /config.ini
  - Command-line flags: Retrieve the external cluster details from the RHCS cluster, and pass the parameters required for your deployment.

    # python3 ceph-external-cluster-details-exporter.py \
      --rbd-data-pool-name <rbd block pool name> [optional arguments]

    For example:

    # python3 ceph-external-cluster-details-exporter.py --rbd-data-pool-name ceph-rbd --monitoring-endpoint xxx.xxx.xxx.xxx --monitoring-endpoint-port xxxx --rgw-endpoint xxx.xxx.xxx.xxx:xxxx --run-as-user client.ocs

RBD parameters
- rbd-data-pool-name
- A mandatory parameter that is used for providing block storage in OpenShift Data Foundation.
- rados-namespace
- Divides an RBD data pool into separate logical namespaces, used for creating an RBD PVC in a radosNamespace. Flags required with rados-namespace are restricted-auth-permission and k8s-cluster-name.
- rbd-metadata-ec-pool-name
- (Optional) The name of the erasure coded RBD metadata pool.
RGW parameters
- rgw-endpoint
- (Optional) This parameter is required only if the object storage is to be provisioned through Ceph RADOS Gateway for OpenShift Data Foundation. Provide the endpoint in the following format: <ip_address>:<port>
  Note: A fully qualified domain name (FQDN) is also supported, in the format <FQDN>:<PORT>.
- rgw-pool-prefix
- (Optional) The prefix of the RGW pools. If not specified, the default prefix is default.
- rgw-tls-cert-path
- (Optional) The file path of the RADOS Gateway endpoint TLS certificate.
  To provide the TLS certificate and RGW endpoint details to the helper script, ceph-external-cluster-details-exporter.py, run the following command:

  # python3 ceph-external-cluster-details-exporter.py --rbd-data-pool-name <rbd block pool name> --rgw-endpoint <ip_address>:<port> --rgw-tls-cert-path <file path containing cert>

  This creates a resource, such as a Kubernetes secret containing the TLS certificate, that is used to create a Ceph Object Store CR. All the intermediate certificates, including private keys, need to be stored in the certificate file.
 
- rgw-skip-tls
- (Optional) This parameter skips TLS certificate validation when a self-signed certificate is provided (not recommended).
Monitoring parameters
- monitoring-endpoint
- (Optional) This parameter accepts a comma-separated list of IP addresses of active and standby mgrs reachable from the OpenShift Container Platform cluster. If not provided, the value is automatically populated.
- monitoring-endpoint-port
- (Optional) The port associated with the ceph-mgr Prometheus exporter specified by --monitoring-endpoint. If not provided, the value is automatically populated.
Ceph parameters
- ceph-conf
- (Optional) The name of the Ceph configuration file.
- run-as-user
- (Optional) This parameter is used for providing a name for the Ceph user that is created by the script. If this parameter is not specified, a default user name, client.healthchecker, is created. The permissions for the new user are set as:
  - caps: [mgr] allow command config
  - caps: [mon] allow r, allow command quorum_status, allow command version
  - caps: [osd] allow rwx pool=RGW_POOL_PREFIX.rgw.meta, allow r pool=.rgw.root, allow rw pool=RGW_POOL_PREFIX.rgw.control, allow rx pool=RGW_POOL_PREFIX.rgw.log, allow x pool=RGW_POOL_PREFIX.rgw.buckets.index
 
CephFS parameters
- cephfs-metadata-pool-name
- (Optional) The name of the CephFS metadata pool.
- cephfs-data-pool-name
- (Optional) The name of the CephFS data pool.
- cephfs-filesystem-name
- (Optional) The name of the CephFS filesystem.
Output parameters
- dry-run
- (Optional) This parameter prints the commands that would be executed, without running them.
- output
- (Optional) The file where the output is to be stored.
Multicluster parameters
- k8s-cluster-name
- (Optional) Kubernetes cluster name.
- cluster-name
- (Optional) The Ceph cluster name.
- restricted-auth-permission
- (Optional) This parameter restricts cephCSIKeyrings auth permissions to specific pools and clusters. Mandatory flags that need to be set with this are rbd-data-pool-name and cluster-name. You can also pass the cephfs-filesystem-name flag if there is a CephFS user restriction, so that permission is restricted to a particular CephFS filesystem.
Note: This parameter must be applied only for new deployments. To restrict csi-users per pool and per cluster, you need to create new csi-users and new secrets for those csi-users.

Example with restricted auth permission:

# python3 /etc/ceph/create-external-cluster-resources.py --cephfs-filesystem-name myfs --rbd-data-pool-name replicapool --cluster-name rookStorage --restricted-auth-permission true

Example of JSON output generated using the Python script:

[{"name": "rook-ceph-mon-endpoints", "kind": "ConfigMap", "data": {"data": "xxx.xxx.xxx.xxx:xxxx", "maxMonId": "0", "mapping": "{}"}}, {"name": "rook-ceph-mon", "kind": "Secret", "data": {"admin-secret": "admin-secret", "fsid": "<fs-id>", "mon-secret": "mon-secret"}}, {"name": "rook-ceph-operator-creds", "kind": "Secret", "data": {"userID": "<user-id>", "userKey": "<user-key>"}}, {"name": "rook-csi-rbd-node", "kind": "Secret", "data": {"userID": "csi-rbd-node", "userKey": "<user-key>"}}, {"name": "ceph-rbd", "kind": "StorageClass", "data": {"pool": "<pool>"}}, {"name": "monitoring-endpoint", "kind": "CephCluster", "data": {"MonitoringEndpoint": "xxx.xxx.xxx.xxx", "MonitoringPort": "xxxx"}}, {"name": "rook-ceph-dashboard-link", "kind": "Secret", "data": {"userID": "ceph-dashboard-link", "userKey": "<user-key>"}}, {"name": "rook-csi-rbd-provisioner", "kind": "Secret", "data": {"userID": "csi-rbd-provisioner", "userKey": "<user-key>"}}, {"name": "rook-csi-cephfs-provisioner", "kind": "Secret", "data": {"adminID": "csi-cephfs-provisioner", "adminKey": "<admin-key>"}}, {"name": "rook-csi-cephfs-node", "kind": "Secret", "data": {"adminID": "csi-cephfs-node", "adminKey": "<admin-key>"}}, {"name": "cephfs", "kind": "StorageClass", "data": {"fsName": "cephfs", "pool": "cephfs_data"}}, {"name": "ceph-rgw", "kind": "StorageClass", "data": {"endpoint": "xxx.xxx.xxx.xxx:xxxx", "poolPrefix": "default"}}, {"name": "rgw-admin-ops-user", "kind": "Secret", "data": {"accessKey": "<access-key>", "secretKey": "<secret-key>"}}]
 
- Save the JSON output to a file with a .json extension.
  Note: For OpenShift Data Foundation to work seamlessly, ensure that the parameters (RGW endpoint, CephFS details, RBD pool, and so on) uploaded using the JSON file remain unchanged on the RHCS external cluster after the storage cluster creation.
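Before uploading the file in the wizard, you can sanity-check that it is what the wizard expects: a JSON array of objects that each carry a name and a kind, as in the example output above. This is an optional sketch; the file name ceph-external-cluster-details.json is only an example, not a required name.

```shell
# Sketch: validate the exporter output before uploading it.
check_exporter_json() {
  python3 - "$1" <<'EOF'
import json, sys
with open(sys.argv[1]) as f:
    data = json.load(f)
assert isinstance(data, list), "expected a JSON array"
for item in data:
    assert "name" in item and "kind" in item, "malformed entry: %r" % item
kinds = ", ".join(sorted({i["kind"] for i in data}))
print("OK: %d resources (%s)" % (len(data), kinds))
EOF
}

f=ceph-external-cluster-details.json   # example file name
if [ -f "$f" ]; then
  check_exporter_json "$f"
else
  echo "no such file yet: $f"
fi
```

A failure here usually means the script output was truncated when it was copied, or a non-JSON warning line ended up in the file.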
- Run the following command when there is a multi-tenant deployment in which the RHCS cluster is already connected to an OpenShift Data Foundation deployment with a lower version:

  # python3 ceph-external-cluster-details-exporter.py --upgrade
 
- Click Browse to select and upload the JSON file.
  The content of the JSON file is populated and displayed in the text box.
- Click Next.
  The Next button is enabled only after you upload the .json file.
 
- In the Review and create page, review whether all the details are correct.
  To modify any configuration settings, click Back to go back to the previous configuration page.
 
- Click Create StorageSystem.
Verification steps
To verify the final Status of the installed storage cluster:
- In the OpenShift Web Console, navigate to Storage → Data Foundation → Storage System → ocs-external-storagecluster.
- Verify that the Status of the StorageCluster is Ready and has a green tick.
- To verify that OpenShift Data Foundation, the pods, and the StorageClass are successfully installed, see Verifying your OpenShift Data Foundation installation for external Ceph storage system.
2.2.1. Applying encryption in-transit on Red Hat Ceph Storage cluster
Procedure
- Apply the encryption in-transit settings:

  [root@ceph-client ~]# ceph config set global ms_client_mode secure
  [root@ceph-client ~]# ceph config set global ms_cluster_mode secure
  [root@ceph-client ~]# ceph config set global ms_service_mode secure
  [root@ceph-client ~]# ceph config set global rbd_default_map_options ms_mode=secure
- Check the settings:

  [root@ceph-client ~]# ceph config dump | grep ms_
  global basic ms_client_mode secure *
  global basic ms_cluster_mode secure *
  global basic ms_service_mode secure *
  global advanced rbd_default_map_options ms_mode=secure *
- Restart all Ceph daemons and wait for all of the daemons to restart.
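The step above does not prescribe a single restart method. On a cephadm-managed cluster, one way to restart every service is to iterate over the orchestrator's service list; this is a sketch that assumes the `ceph orch` orchestrator interface is available on your cluster.

```shell
# Sketch: restart every Ceph service on a cephadm-managed cluster.
# Assumes the `ceph orch` orchestrator interface is enabled.
restart_all_ceph_services() {
  ceph orch ls --format json \
    | python3 -c 'import json,sys; [print(s["service_name"]) for s in json.load(sys.stdin)]' \
    | while read -r svc; do
        ceph orch restart "$svc"
      done
}

# Run only where the ceph CLI is present:
command -v ceph >/dev/null 2>&1 && restart_all_ceph_services || true
```

For non-cephadm deployments, restart the daemons with whatever mechanism manages them (for example, systemd units on each node).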
2.3. Verifying your OpenShift Data Foundation installation for external Ceph storage system
Use this section to verify that OpenShift Data Foundation is deployed correctly.
2.3.1. Verifying the state of the pods
- Click Workloads → Pods from the left pane of the OpenShift Web Console.
- Select openshift-storage from the Project drop-down list.
  Note: If the Show default projects option is disabled, use the toggle button to list all the default projects.
  For more information on the expected number of pods for each component and how it varies depending on the number of nodes, see Table 2.1, “Pods corresponding to OpenShift Data Foundation components”.
- Verify that the following pods are in running state:

  Table 2.1. Pods corresponding to OpenShift Data Foundation components

  - OpenShift Data Foundation Operator:
    - ocs-operator-* (1 pod on any worker node)
    - ocs-metrics-exporter-* (1 pod on any worker node)
    - odf-operator-controller-manager-* (1 pod on any worker node)
    - odf-console-* (1 pod on any worker node)
    - csi-addons-controller-manager-* (1 pod on any worker node)
  - Rook-ceph Operator:
    - rook-ceph-operator-* (1 pod on any worker node)
  - Multicloud Object Gateway:
    - noobaa-operator-* (1 pod on any worker node)
    - noobaa-core-* (1 pod on any worker node)
    - noobaa-db-pg-cluster-1 and noobaa-db-pg-cluster-2 (2 instances of MCG DB pod on any storage node)
    - noobaa-endpoint-* (1 pod on any storage node)
    - cnpg-controller-manager-* (1 pod on any storage node)
  - CSI (cephfs):
    - openshift-storage.cephfs.csi.ceph.com-ctrlplugin-* (2 pods distributed across storage nodes)
    - openshift-storage.cephfs.csi.ceph.com-nodeplugin-* (1 pod on each storage node)
    Note: If an MDS is not deployed in the external cluster, the openshift-storage.cephfs.csi.ceph.com-ctrlplugin-* and openshift-storage.cephfs.csi.ceph.com-nodeplugin-* pods are not created.
  - CSI (nfs):
    - openshift-storage.nfs.csi.ceph.com-ctrlplugin-* (2 pods distributed across storage nodes)
    - openshift-storage.nfs.csi.ceph.com-nodeplugin-* (1 pod on each storage node)
  - CSI (rbd):
    - openshift-storage.rbd.csi.ceph.com-ctrlplugin-* (2 pods distributed across storage nodes)
    - openshift-storage.rbd.csi.ceph.com-nodeplugin-* (1 pod on each storage node)
2.3.2. Verifying that the OpenShift Data Foundation cluster is healthy
- In the OpenShift Web Console, click Storage → Data Foundation.
- In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears.
- In the Status card of the Block and File tab, verify that the Storage Cluster has a green tick.
- In the Details card, verify that the cluster information is displayed.
For more information on the health of OpenShift Data Foundation cluster using the Block and File dashboard, see Monitoring OpenShift Data Foundation.
2.3.3. Verifying that the Multicloud Object Gateway is healthy
- In the OpenShift Web Console, click Storage → Data Foundation.
- In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop-up that appears.
- In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick.
- In the Details card, verify that the Multicloud Object Gateway (MCG) information is displayed.
 
The RADOS Object Gateway is only listed if the RADOS Object Gateway endpoint details were included while deploying OpenShift Data Foundation in external mode.
For more information on the health of OpenShift Data Foundation cluster using the object dashboard, see Monitoring OpenShift Data Foundation.
2.3.4. Verifying that the storage classes are created and listed
- Click Storage → Storage Classes from the left pane of the OpenShift Web Console.
- Verify that the following storage classes are created with the OpenShift Data Foundation cluster creation:
  - ocs-external-storagecluster-ceph-rbd
  - ocs-external-storagecluster-ceph-rgw
  - ocs-external-storagecluster-cephfs
  - openshift-storage.noobaa.io
Note:
- If an MDS is not deployed in the external cluster, the ocs-external-storagecluster-cephfs storage class is not created.
- If RGW is not deployed in the external cluster, the ocs-external-storagecluster-ceph-rgw storage class is not created.
For more information regarding MDS and RGW, see the Red Hat Ceph Storage documentation.
2.3.5. Verifying that Ceph cluster is connected
Run the following command to verify that the OpenShift Data Foundation cluster is connected to the external Red Hat Ceph Storage cluster:

$ oc get cephcluster -n openshift-storage
NAME                                      DATADIRHOSTPATH   MONCOUNT   AGE   PHASE       MESSAGE                          HEALTH      EXTERNAL
ocs-external-storagecluster-cephcluster                                30m   Connected   Cluster connected successfully   HEALTH_OK   true

2.3.6. Verifying that the storage cluster is ready
Run the following command to verify that the storage cluster is ready and the External option is set to true:

$ oc get storagecluster -n openshift-storage
NAME                          AGE   PHASE   EXTERNAL   CREATED AT             VERSION
ocs-external-storagecluster   30m   Ready   true       2021-11-17T09:09:52Z   4.19.0

2.3.7. Verifying the creation of the Ceph Object Store CRD
Run the following command to verify that the Ceph Object Store CRD is created in the external Red Hat Ceph Storage cluster:

$ oc get cephobjectstore -n openshift-storage
NAME               PHASE     ENDPOINT               SECUREENDPOINT      AGE
object-store1      Ready     <http://IP/FQDN:port>                      15m

Chapter 3. Deploy OpenShift Data Foundation using IBM FlashSystem
OpenShift Data Foundation can use IBM FlashSystem storage available for consumption through OpenShift Container Platform clusters. You need to install the OpenShift Data Foundation operator and then create an OpenShift Data Foundation cluster for IBM FlashSystem storage.
3.1. Installing Red Hat OpenShift Data Foundation Operator
You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub.
Prerequisites
- Access to an OpenShift Container Platform cluster using an account with cluster-admin and operator installation permissions.
- For additional resource requirements, see the Planning your deployment guide.
- When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command to specify a blank node selector for the openshift-storage namespace (create the openshift-storage namespace first if it does not exist):

  $ oc annotate namespace openshift-storage openshift.io/node-selector=
Procedure
- Log in to the OpenShift Web Console.
- Click Operators → OperatorHub.
- Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator.
- Click Install.
- Set the following options on the Install Operator page:
  - Update Channel as stable-4.19.
  - Installation Mode as A specific namespace on the cluster.
  - Installed Namespace as Operator recommended namespace openshift-storage. If the openshift-storage namespace does not exist, it is created during the operator installation.
  - Select Approval Strategy as Automatic or Manual.
    - If you select Automatic updates, the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention.
    - If you select Manual updates, the OLM creates an update request. As a cluster administrator, you must manually approve that update request to update the Operator to a newer version.
  - Ensure that the Enable option is selected for the Console plugin.
- Click Install.
 
Verification steps
- After the operator is successfully installed, a pop-up with the message Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to take effect.
- In the Web Console:
  - Navigate to Installed Operators and verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation.
  - Navigate to Storage and verify that the Data Foundation dashboard is available.
 
3.2. Creating an OpenShift Data Foundation Cluster for external IBM FlashSystem storage
You need to create a new OpenShift Data Foundation cluster after you install the OpenShift Data Foundation operator on the OpenShift Container Platform.
Prerequisites
- A valid Red Hat OpenShift Data Foundation Advanced subscription. For more information, see the knowledgebase article on OpenShift Data Foundation subscriptions.
- For Red Hat Enterprise Linux® operating system, ensure that there is iSCSI connectivity and then configure Linux multipath devices on the host.
- For Red Hat Enterprise Linux CoreOS or when the packages are already installed, configure Linux multipath devices on the host.
- Ensure that each worker is configured with storage connectivity according to your storage system instructions. For the latest supported FlashSystem storage systems and versions, see the IBM ODF FlashSystem driver documentation.
- Make sure that the ibm-storage-odf-operator operator is installed with the appropriate subscription.
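The exact subscription content is not reproduced here. As a point of reference only, an OLM Subscription for an operator generally looks like the following; the channel and source values below are assumptions, so confirm them against the IBM ODF FlashSystem driver documentation before use.

```yaml
# Hypothetical example only: the channel and source values are assumptions.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: ibm-storage-odf-operator
  namespace: openshift-storage
spec:
  name: ibm-storage-odf-operator
  channel: stable-v1              # assumed channel name
  source: ibm-operator-catalog    # assumed CatalogSource
  sourceNamespace: openshift-marketplace
  installPlanApproval: Automatic
```

You can check the subscription that is actually in place with `oc get subscription -n openshift-storage`.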
Procedure
- In the OpenShift Web Console, click Storage → Data Foundation → Storage Systems → Create StorageSystem.
- In the Backing storage page, select the following options:
  - Select Full deployment for the Deployment type option.
  - Select Connect an external storage platform from the available options.
  - Select IBM FlashSystem Storage from the Storage platform list.
  - Click Next.
 
- In the Create storage class page, provide the following information:
  - Enter a name for the storage class.
    When creating block storage persistent volumes, select the storage class <storage_class_name> for best performance. The storage class allows a direct I/O path to the FlashSystem.
  - Enter the following details of the IBM FlashSystem connection:
    - IP address
    - User name
    - Password
    - Pool name
 
- Select thick or thin for the Volume mode.
- Click Next.
 
- In the Capacity and nodes page, provide the necessary details: - Select a value for Requested capacity. - The available options are - 0.5 TiB,- 2 TiB, and- 4 TiB. The requested capacity is dynamically allocated on the infrastructure storage class.
- Select at least three nodes in three different zones. - It is recommended to start with at least 14 CPUs and 34 GiB of RAM per node. If the nodes selected do not match the OpenShift Data Foundation cluster requirement of an aggregated 30 CPUs and 72 GiB of RAM, a minimal cluster will be deployed. For minimum starting node requirements, see the Resource requirements section in the Planning guide. 
- Click Next.
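The node sizing in the step above reduces to simple aggregate arithmetic; a minimal sketch of the check, with the thresholds taken from the text (three nodes in three zones, 30 CPUs and 72 GiB of RAM in aggregate):

```python
def meets_odf_minimum(nodes):
    """Check a proposed node selection against the aggregate OpenShift Data
    Foundation requirement: at least three nodes in three different zones,
    with an aggregate of 30 CPUs and 72 GiB of RAM."""
    total_cpu = sum(n["cpu"] for n in nodes)
    total_ram_gib = sum(n["ram_gib"] for n in nodes)
    zones = {n["zone"] for n in nodes}
    return (len(nodes) >= 3 and len(zones) >= 3
            and total_cpu >= 30 and total_ram_gib >= 72)

# Three recommended nodes (14 CPUs / 34 GiB each, one per zone) comfortably
# exceed the aggregate minimum of 30 CPUs / 72 GiB.
recommended = [{"cpu": 14, "ram_gib": 34, "zone": z} for z in ("a", "b", "c")]
print(meets_odf_minimum(recommended))  # True
```

If the selection falls short of the aggregate, OpenShift Data Foundation deploys a minimal cluster instead, as noted above.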
 
- Optional: In the Security and network page, provide the necessary details: - To enable encryption, select Enable data encryption for block and file storage. - Choose one or both Encryption level options: - Cluster-wide encryption to encrypt the entire cluster (block and file).
- StorageClass encryption to create encrypted persistent volume (block only) using encryption enabled storage class.
 
- Select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption. - Key Management Service Provider is set to Vault by default.
- Enter Vault Service Name, host Address of Vault server ('https://<hostname or ip>'), Port number, and Token.
 
- Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: - Enter the Key Value secret path in the Backend Path that is dedicated and unique to OpenShift Data Foundation.
- Optional: Enter TLS Server Name and Vault Enterprise Namespace.
- Provide CA Certificate, Client Certificate, and Client Private Key by uploading the respective PEM encoded certificate file.
 
- Click Save.
- Select Default (SDN) if you are using a single network or Custom (Multus) if you are using multiple network interfaces. - Select a Public Network Interface from the dropdown.
- Select a Cluster Network Interface from the dropdown. NOTE: If you are using only one additional network interface, select the single NetworkAttachmentDefinition, that is, ocs-public-cluster, for the Public Network Interface, and leave the Cluster Network Interface blank.
 
- Click Next.
 
- To enable in-transit encryption, select In-transit encryption. - Select a Network.
- Click Next.
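If you select Custom (Multus), the interfaces offered in the dropdowns come from NetworkAttachmentDefinitions created beforehand in the openshift-storage namespace. A minimal sketch of one follows; the CNI type, master interface, and IPAM range are assumptions for illustration:

```yaml
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: ocs-public-cluster
  namespace: openshift-storage
spec:
  config: '{
      "cniVersion": "0.3.1",
      "type": "macvlan",
      "master": "eth1",
      "mode": "bridge",
      "ipam": {
          "type": "whereabouts",
          "range": "192.168.20.0/24"
      }
  }'
```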
 
 
- In the Review and create page, review if all the details are correct: - To modify any configuration settings, click Back to go back to the previous configuration page.
 
- Click Create StorageSystem.
Verification Steps
- Verifying the state of the pods
- Click Workloads → Pods from the left pane of the OpenShift Web Console.
- Select openshift-storage from the Project drop-down list.

Note: If the Show default projects option is disabled, use the toggle button to list all the default projects.

Table 3.1. Pods corresponding to OpenShift Data Foundation components

- OpenShift Data Foundation Operator:
  - ocs-operator-* (1 pod on any worker node)
  - ocs-metrics-exporter-* (1 pod on any worker node)
  - odf-operator-controller-manager-* (1 pod on any worker node)
  - odf-console-* (1 pod on any worker node)
  - csi-addons-controller-manager-* (1 pod on any worker node)
- ibm-storage-odf-operator:
  - ibm-storage-odf-operator-* (2 pods on any worker nodes)
  - ibm-odf-console-*
- ibm-flashsystem-storage:
  - ibm-flashsystem-storage-* (1 pod on any worker node)
- rook-ceph Operator:
  - rook-ceph-operator-* (1 pod on any worker node)
- Multicloud Object Gateway:
  - noobaa-operator-* (1 pod on any worker node)
  - noobaa-core-* (1 pod on any worker node)
  - noobaa-db-pg-cluster-1 and noobaa-db-pg-cluster-2 (2 instances of the MCG DB pod on any storage node)
  - noobaa-endpoint-* (1 pod on any storage node)
  - cnpg-controller-manager-* (1 pod on any storage node)
- CSI:
  - ibm-block-csi-* (1 pod on any worker node)
- Verifying that the OpenShift Data Foundation cluster is healthy
- In the Web Console, click Storage → Data Foundation.
- In the Status card of the Overview tab, verify that the Storage System has a green tick mark.
- In the Details card, verify that the cluster information is displayed.
 
For more information on the health of the OpenShift Data Foundation cluster using the Block and File dashboard, see Monitoring OpenShift Data Foundation.
- Verifying that the Multicloud Object Gateway is healthy
- In the Web Console, click Storage → Data Foundation.
- In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop-up that appears.
- In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick.
- In the Details card, verify that the MCG information is displayed.
 
For more information on the health of the OpenShift Data Foundation cluster using the object dashboard, see Monitoring OpenShift Data Foundation.
- Verifying that IBM FlashSystem is connected and the storage cluster is ready
- Run the following command to verify that the OpenShift Data Foundation cluster is connected to the external IBM FlashSystem:

$ oc get flashsystemclusters.odf.ibm.com
NAME                               AGE   PHASE   CREATED AT
ibm-flashsystem-storage-fab3p-60   46h   Ready   2025-06-30T12:17:06Z
- Verifying the StorageSystem of the storage
- Run the following command to verify the StorageSystem of the IBM FlashSystem storage cluster:

$ oc get storagesystems.odf.openshift.io
NAME                                   STORAGE-SYSTEM-KIND                       STORAGE-SYSTEM-NAME
ibm-flashsystemcluster-storagesystem   flashsystemcluster.odf.ibm.com/v1alpha1   ibm-flashsystemcluster
ocs-storagecluster                     storagecluster.ocs.openshift.io/v1        ocs-storagecluster
- Verifying the subscription of the IBM operator
- Run the following command to verify the subscription:

$ oc get subscription -n openshift-storage
- Verifying the CSVs
- Run the following command to verify that the CSVs are in the Succeeded state:

$ oc get csv -n openshift-storage
- Verifying the IBM operator and CSI pods
- Run the following command to verify the IBM operator and CSI pods:

$ oc get pods -n openshift-storage
Chapter 4. Uninstalling OpenShift Data Foundation from external storage system
Use the steps in this section to uninstall OpenShift Data Foundation. Uninstalling OpenShift Data Foundation does not remove the RBD pool from the external cluster, or uninstall the external Red Hat Ceph Storage cluster.
Uninstall Annotations
Annotations on the Storage Cluster are used to change the behavior of the uninstall process. To define the uninstall behavior, the following two annotations have been introduced in the storage cluster:
- uninstall.ocs.openshift.io/cleanup-policy: delete
- uninstall.ocs.openshift.io/mode: graceful

Note: The uninstall.ocs.openshift.io/cleanup-policy annotation is not applicable for external mode.
The following table provides information on the different values that can be used with these annotations:
| Annotation | Value | Default | Behavior |
|---|---|---|---|
| cleanup-policy | delete | Yes | Rook cleans up the physical drives and the DataDirHostPath |
| cleanup-policy | retain | No | Rook does not clean up the physical drives and the DataDirHostPath |
| mode | graceful | Yes | Rook and NooBaa pause the uninstall process until the PVCs and the OBCs are removed by the administrator or user |
| mode | forced | No | Rook and NooBaa proceed with the uninstall even if PVCs or OBCs provisioned using Rook and NooBaa exist, respectively |
You can change the uninstall mode by editing the value of the annotation by using the following command:

$ oc annotate storagecluster ocs-external-storagecluster -n openshift-storage uninstall.ocs.openshift.io/mode="forced" --overwrite
storagecluster.ocs.openshift.io/ocs-external-storagecluster annotated

Prerequisites
- Ensure that the OpenShift Data Foundation cluster is in a healthy state. The uninstall process can fail when some of the pods are not terminated successfully due to insufficient resources or nodes. In case the cluster is in an unhealthy state, contact Red Hat Customer Support before uninstalling OpenShift Data Foundation.
- Ensure that applications are not consuming persistent volume claims (PVCs) or object bucket claims (OBCs) using the storage classes provided by OpenShift Data Foundation.
Procedure
- Delete the volume snapshots that are using OpenShift Data Foundation. - List the volume snapshots from all the namespaces:

$ oc get volumesnapshot --all-namespaces

- From the output of the previous command, identify and delete the volume snapshots that are using OpenShift Data Foundation:

$ oc delete volumesnapshot <VOLUME-SNAPSHOT-NAME> -n <NAMESPACE>
 
- Delete PVCs and OBCs that are using OpenShift Data Foundation. - In the default uninstall mode (graceful), the uninstaller waits until all the PVCs and OBCs that use OpenShift Data Foundation are deleted. - If you want to delete the Storage Cluster without deleting the PVCs beforehand, you can set the uninstall mode annotation to "forced" and skip this step. Doing so results in orphan PVCs and OBCs in the system. - Delete OpenShift Container Platform monitoring stack PVCs using OpenShift Data Foundation. See Removing monitoring stack from OpenShift Data Foundation.
- Delete OpenShift Container Platform Registry PVCs using OpenShift Data Foundation. See Removing OpenShift Container Platform registry from OpenShift Data Foundation.
- Delete OpenShift Container Platform logging PVCs using OpenShift Data Foundation. See Removing the cluster logging operator from OpenShift Data Foundation.
- Delete other PVCs and OBCs provisioned using OpenShift Data Foundation. To find them, identify the PVCs and OBCs whose storage classes are provided by OpenShift Data Foundation, ignoring the PVCs and OBCs that are used internally by OpenShift Data Foundation.
- Delete the OBCs:

$ oc delete obc <obc name> -n <project name>
- Delete the PVCs:

$ oc delete pvc <pvc name> -n <project-name>

Ensure that you have removed any custom backing stores, bucket classes, and so on that are created in the cluster.
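One way to identify PVCs provisioned using OpenShift Data Foundation is to match each PVC's storage class against the ODF CSI provisioners. A minimal sketch, assuming listings shaped like `oc get storageclass -o json` / `oc get pvc -A -o json` item lists; the provisioner names are assumptions based on typical external-mode deployments and may differ in your cluster:

```python
# Assumed provisioner names for an ODF external-mode deployment.
ODF_PROVISIONERS = {
    "openshift-storage.rbd.csi.ceph.com",
    "openshift-storage.cephfs.csi.ceph.com",
    "openshift-storage.noobaa.io/obc",
}

def odf_pvcs(storage_classes, pvcs):
    """Return (namespace, name) pairs of PVCs bound to ODF storage classes."""
    odf_class_names = {
        sc["metadata"]["name"]
        for sc in storage_classes
        if sc.get("provisioner") in ODF_PROVISIONERS
    }
    return [
        (p["metadata"]["namespace"], p["metadata"]["name"])
        for p in pvcs
        if p["spec"].get("storageClassName") in odf_class_names
    ]
```

PVCs used internally by OpenShift Data Foundation (those in the openshift-storage namespace) would still need to be excluded from the result before deleting anything.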
 
 
- Delete the Storage Cluster object and wait for the removal of the associated resources:

$ oc delete -n openshift-storage storagesystem --all --wait=true
- Delete the namespace and wait until the deletion is complete. You need to switch to another project if openshift-storage is the active project. For example:

$ oc project default
$ oc delete project openshift-storage --wait=true --timeout=5m

The project is deleted if the following command returns a NotFound error:

$ oc get project openshift-storage

Note: While uninstalling OpenShift Data Foundation, if the namespace is not deleted completely and remains in a Terminating state, perform the steps in Troubleshooting and deleting remaining resources during Uninstall to identify objects that are blocking the namespace from being terminated.
- Confirm that all PVs provisioned using OpenShift Data Foundation are deleted. If any PV is left in the Released state, delete it:

$ oc get pv
$ oc delete pv <pv name>
- Remove the CustomResourceDefinitions:

$ oc delete crd backingstores.noobaa.io bucketclasses.noobaa.io cephblockpools.ceph.rook.io cephclusters.ceph.rook.io cephfilesystems.ceph.rook.io cephnfses.ceph.rook.io cephobjectstores.ceph.rook.io cephobjectstoreusers.ceph.rook.io noobaas.noobaa.io ocsinitializations.ocs.openshift.io storageclusters.ocs.openshift.io cephclients.ceph.rook.io cephobjectrealms.ceph.rook.io cephobjectzonegroups.ceph.rook.io cephobjectzones.ceph.rook.io cephrbdmirrors.ceph.rook.io storagesystems.odf.openshift.io --wait=true --timeout=5m
- To ensure that OpenShift Data Foundation is uninstalled completely: - In the OpenShift Container Platform Web Console, click Storage.
- Verify that OpenShift Data Foundation no longer appears under Storage.
 
4.1. Removing monitoring stack from OpenShift Data Foundation
Use this section to clean up the monitoring stack from OpenShift Data Foundation.
				The PVCs that are created as a part of configuring the monitoring stack are in the openshift-monitoring namespace.
			
Prerequisites
- PVCs are configured to use the OpenShift Container Platform monitoring stack. - For information, see configuring monitoring stack. 
Procedure
- List the pods and PVCs that are currently running in the openshift-monitoring namespace:

$ oc get pod,pvc -n openshift-monitoring
- Edit the monitoring configmap:

$ oc -n openshift-monitoring edit configmap cluster-monitoring-config

Remove any config sections that reference the OpenShift Data Foundation storage classes and save the file. For example, the alertmanagerMain and prometheusK8s monitoring components may be using the OpenShift Data Foundation PVCs.
- List the pods consuming the PVCs. In this example, the alertmanagerMain and prometheusK8s pods that were consuming the PVCs are in the Terminating state. You can delete the PVCs once these pods are no longer using the OpenShift Data Foundation PVCs.
- Delete the relevant PVCs. Make sure you delete all the PVCs that are consuming the storage classes:

$ oc delete -n openshift-monitoring pvc <pvc-name> --wait=true --timeout=5m
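For reference, the config sections removed in the edit step above typically take the shape of volumeClaimTemplate stanzas like the following; the storage class name and sizes here are assumptions, and deleting these stanzas reverts the components to ephemeral storage:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    alertmanagerMain:
      volumeClaimTemplate:
        spec:
          storageClassName: ocs-external-storagecluster-ceph-rbd  # assumption
          resources:
            requests:
              storage: 40Gi
    prometheusK8s:
      volumeClaimTemplate:
        spec:
          storageClassName: ocs-external-storagecluster-ceph-rbd  # assumption
          resources:
            requests:
              storage: 40Gi
```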
4.2. Removing OpenShift Container Platform registry from OpenShift Data Foundation
Use this section to clean up the OpenShift Container Platform registry from OpenShift Data Foundation. If you want to configure alternative storage, see image registry.
				The PVCs that are created as a part of configuring OpenShift Container Platform registry are in the openshift-image-registry namespace.
			
Prerequisites
- The image registry should have been configured to use an OpenShift Data Foundation PVC.
Procedure
- Edit the configs.imageregistry.operator.openshift.io object and remove the content in the storage section:

$ oc edit configs.imageregistry.operator.openshift.io

For example, if the PVC referenced in the storage section is called registry-cephfs-rwx-pvc, it is safe to delete after the edit.
- Delete the PVC:

$ oc delete pvc <pvc-name> -n openshift-image-registry --wait=true --timeout=5m
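As an illustration of the edit in the procedure above (the PVC name and the replacement stanza are assumptions), the storage section changes roughly as follows, switching the registry from a PVC to non-persistent storage:

Before editing:

```yaml
storage:
  pvc:
    claim: registry-cephfs-rwx-pvc
```

After editing:

```yaml
storage:
  emptyDir: {}
```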
4.3. Removing the cluster logging operator from OpenShift Data Foundation
Use this section to clean up the cluster logging operator from OpenShift Data Foundation.
				The Persistent Volume Claims (PVCs) that are created as a part of configuring the cluster logging operator are in the openshift-logging namespace.
			
Prerequisites
- The cluster logging instance should have been configured to use the OpenShift Data Foundation PVCs.
Procedure
- Remove the ClusterLogging instance in the namespace:

$ oc delete clusterlogging instance -n openshift-logging --wait=true --timeout=5m

The PVCs in the openshift-logging namespace are now safe to delete.
- Delete the PVCs:

$ oc delete pvc <pvc-name> -n openshift-logging --wait=true --timeout=5m

where <pvc-name> is the name of the PVC.
 
4.4. Removing external IBM FlashSystem secret
You need to clean up the FlashSystem secret from OpenShift Data Foundation while uninstalling. This secret is created when you configure the external IBM FlashSystem Storage. For more information, see Creating an OpenShift Data Foundation Cluster for external IBM FlashSystem storage.
Procedure
- Remove the IBM FlashSystem secret by using the following command:

$ oc delete secret -n openshift-storage ibm-flashsystem-storage