Deploying OpenShift Data Foundation in external mode
Instructions for deploying OpenShift Data Foundation to use an external Red Hat Ceph Storage cluster and IBM FlashSystem.
Abstract
Making open source more inclusive
Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright’s message.
Providing feedback on Red Hat documentation
We appreciate your input on our documentation. Do let us know how we can make it better.
To give feedback, create a Bugzilla ticket:
- Go to the Bugzilla website.
- In the Component section, choose documentation.
- Fill in the Description field with your suggestion for improvement. Include a link to the relevant part(s) of documentation.
- Click Submit Bug.
Chapter 1. Overview of deploying in external mode
Red Hat OpenShift Data Foundation can make services from an external Red Hat Ceph Storage cluster or IBM FlashSystem available for consumption through OpenShift Container Platform clusters running on the following platforms:
- VMware vSphere
- Bare metal
- Red Hat OpenStack Platform (Technology Preview)
- IBM Power
- IBM Z infrastructure
See Planning your deployment for more information.
For instructions on how to install an RHCS cluster, see the installation guide.
Follow these steps to deploy OpenShift Data Foundation in external mode:
- Deploy OpenShift Data Foundation using Red Hat Ceph Storage (Chapter 2)
- Deploy OpenShift Data Foundation using IBM FlashSystem (Chapter 3)
Disaster recovery requirements [Technology Preview]
Disaster Recovery features supported by Red Hat OpenShift Data Foundation require all of the following prerequisites to successfully implement a disaster recovery solution:
- A valid Red Hat OpenShift Data Foundation Advanced subscription
- A valid Red Hat Advanced Cluster Management for Kubernetes subscription
For more information, see the knowledgebase article on OpenShift Data Foundation subscriptions.
For detailed disaster recovery solution requirements, see the Configuring OpenShift Data Foundation Disaster Recovery for OpenShift Workloads guide, and the Requirements and recommendations section of the Install guide in the Red Hat Advanced Cluster Management for Kubernetes documentation.
Chapter 2. Deploy OpenShift Data Foundation using Red Hat Ceph Storage
Red Hat OpenShift Data Foundation can make services from an external Red Hat Ceph Storage cluster available for consumption through OpenShift Container Platform clusters. You need to install the OpenShift Data Foundation operator and then create an OpenShift Data Foundation cluster for the external Ceph storage system.
2.1. Installing Red Hat OpenShift Data Foundation Operator
You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub.
Prerequisites
- Access to an OpenShift Container Platform cluster using an account with cluster-admin and operator installation permissions.
- For additional resource requirements, see the Planning your deployment guide.

When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command to specify a blank node selector for the openshift-storage namespace (create the openshift-storage namespace in this case):

$ oc annotate namespace openshift-storage openshift.io/node-selector=
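For example, if the openshift-storage namespace does not exist yet, you can create it before annotating it:

$ oc create namespace openshift-storage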
Procedure
- Log in to the OpenShift Web Console.
- Click Operators → OperatorHub.
- Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator.
- Click Install.
Set the following options on the Install Operator page:
- Update Channel as stable-4.12.
- Installation Mode as A specific namespace on the cluster.
- Installed Namespace as Operator recommended namespace openshift-storage. If the openshift-storage namespace does not exist, it is created during the operator installation.
- Select Approval Strategy as Automatic or Manual.
If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention.
If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version.
- Ensure that the Enable option is selected for the Console plugin.
- Click Install.
Verification steps
- After the operator is successfully installed, a pop-up with the message Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to take effect.

In the Web Console:
- Navigate to Installed Operators and verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation.
- Navigate to Storage and verify that the Data Foundation dashboard is available.
2.2. Creating an OpenShift Data Foundation Cluster for external Ceph storage system
You need to create a new OpenShift Data Foundation cluster after you install OpenShift Data Foundation operator on OpenShift Container Platform deployed on VMware vSphere or user-provisioned bare metal infrastructures.
Prerequisites
- A valid Red Hat OpenShift Data Foundation Advanced subscription. To know how subscriptions for OpenShift Data Foundation work, see the knowledgebase article on OpenShift Data Foundation subscriptions.
- Ensure the OpenShift Container Platform version is 4.12 or above before deploying OpenShift Data Foundation 4.12.
- OpenShift Data Foundation operator must be installed. For more information, see Installing OpenShift Data Foundation Operator using the Operator Hub.
To check the supportability and interoperability of Red Hat Ceph Storage (RHCS) with Red Hat OpenShift Data Foundation in external mode, go to the lab Red Hat OpenShift Data Foundation Supportability and Interoperability Checker.
- Select Service Type as ODF as Self-Managed Service.
- Select the appropriate Version from the drop-down.
- On the Versions tab, click Supported RHCS versions in the External Mode tab.

- If you have updated the Red Hat Ceph Storage cluster from a version lower than 4.1.1 and it is not a freshly deployed cluster, you must manually set the application type for the CephFS pool on the Red Hat Ceph Storage cluster to enable CephFS PVC creation in external mode.
For more details, see Troubleshooting CephFS PVC creation in external mode.
- Red Hat Ceph Storage must have Ceph Dashboard installed and configured. For more information, see Ceph Dashboard installation and access.
- It is recommended that the external Red Hat Ceph Storage cluster has the PG Autoscaler enabled. For more information, see The placement group autoscaler section in the Red Hat Ceph Storage documentation.
- The external Ceph cluster should have an existing RBD pool pre-configured for use. If it does not exist, contact your Red Hat Ceph Storage administrator to create one before you proceed with the OpenShift Data Foundation deployment. Red Hat recommends using a separate pool for each OpenShift Data Foundation cluster.
- Optional: If a zonegroup other than the default zonegroup has been created, you need to add the hostname rook-ceph-rgw-ocs-external-storagecluster-cephobjectstore.openshift-storage.svc to the zonegroup, because OpenShift Data Foundation sends S3 requests to the RADOS Object Gateways (RGWs) with this hostname. For more information, see the Red Hat Knowledgebase solution Ceph - How to add hostnames in RGW zonegroup?.
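The exact commands depend on your multisite layout; as a rough sketch run on an RHCS node, you export the zonegroup, add the hostname to its hostnames list in the exported JSON, re-import it, and commit the period:

# radosgw-admin zonegroup get --rgw-zonegroup=<zonegroup name> > zonegroup.json
# radosgw-admin zonegroup set --rgw-zonegroup=<zonegroup name> --infile=zonegroup.json
# radosgw-admin period update --commit

Edit zonegroup.json between the get and set steps; the linked Knowledgebase solution remains the authoritative procedure.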
Procedure
- Click Operators → Installed Operators to view all the installed operators.
Ensure that the Project selected is openshift-storage.
- Click OpenShift Data Foundation and then click Create StorageSystem.
In the Backing storage page, select the following options:
- Select Full deployment for the Deployment type option.
- Select Connect an external storage platform from the available options.
- Select Red Hat Ceph Storage for Storage platform.
- Click Next.
In the Connection details page, provide the necessary information:
- Click on the Download Script link to download the python script for extracting Ceph cluster details.
For extracting the Red Hat Ceph Storage (RHCS) cluster details, contact the RHCS administrator to run the downloaded python script on a Red Hat Ceph Storage node with the admin key.
Run the following command on the RHCS node to view the list of available arguments:

# python3 ceph-external-cluster-details-exporter.py --help

Important: Use python instead of python3 if the Red Hat Ceph Storage 4.x cluster is deployed on a Red Hat Enterprise Linux 7.x (RHEL 7.x) cluster.
You can also run the script from inside a MON container (containerized deployment) or from a MON node (RPM deployment).

Note: Use the yum install cephadm command and then the cephadm command to deploy your RHCS cluster using containers. You must pull the RHCS container images using the cephadm command, rather than using yum to install the Ceph packages onto nodes. For more information, see the RHCS product documentation.

To retrieve the external cluster details from the RHCS cluster, run the following command:

# python3 ceph-external-cluster-details-exporter.py \
  --rbd-data-pool-name <rbd block pool name> [optional arguments]

For example:

# python3 ceph-external-cluster-details-exporter.py --rbd-data-pool-name ceph-rbd --monitoring-endpoint xxx.xxx.xxx.xxx --monitoring-endpoint-port xxxx --rgw-endpoint xxx.xxx.xxx.xxx:xxxx --run-as-user client.ocs

In this example:
rbd-data-pool-name
A mandatory parameter that is used for providing block storage in OpenShift Data Foundation.
rgw-endpoint
(Optional) This parameter is required only if the object storage is to be provisioned through Ceph RADOS Gateway for OpenShift Data Foundation. Provide the endpoint in the following format: <ip_address>:<port>.
Note: A fully-qualified domain name (FQDN) is also supported in the format <FQDN>:<PORT>.
monitoring-endpoint
(Optional) This parameter accepts a comma-separated list of IP addresses of active and standby mgrs reachable from the OpenShift Container Platform cluster. If not provided, the value is automatically populated.
monitoring-endpoint-port
(Optional) The port associated with the ceph-mgr Prometheus exporter specified by --monitoring-endpoint. If not provided, the value is automatically populated.
run-as-user
(Optional) This parameter is used for providing a name for the Ceph user that is created by the script. If this parameter is not specified, a default user name client.healthchecker is created. The permissions for the new user are set as:
- caps: [mgr] allow command config
- caps: [mon] allow r, allow command quorum_status, allow command version
- caps: [osd] allow rwx pool=RGW_POOL_PREFIX.rgw.meta, allow r pool=.rgw.root, allow rw pool=RGW_POOL_PREFIX.rgw.control, allow rx pool=RGW_POOL_PREFIX.rgw.log, allow x pool=RGW_POOL_PREFIX.rgw.buckets.index
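After the script runs, you can inspect the created user and its capabilities on the RHCS node; for example, assuming the default user name:

# ceph auth get client.healthchecker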
Additional flags:
rgw-pool-prefix
(Optional) The prefix of the RGW pools. If not specified, the default prefix is default.
rgw-tls-cert-path
(Optional) The file path of the RADOS Gateway endpoint TLS certificate.
rgw-skip-tls
(Optional) This parameter ignores the TLS certification validation when a self-signed certificate is provided (NOT RECOMMENDED).
ceph-conf
(Optional) The name of the Ceph configuration file.
cluster-name
(Optional) The Ceph cluster name.
output
(Optional) The file where the output is to be stored.
cephfs-metadata-pool-name
(Optional) The name of the CephFS meta data pool.
cephfs-data-pool-name
(Optional) The name of the CephFS data pool.
cephfs-filesystem-name
(Optional) The name of the CephFS filesystem.
rbd-metadata-ec-pool-name
(Optional) The name of the erasure coded RBD metadata pool.
dry-run
(Optional) Prints the commands that the script would execute, without running them.
restricted-auth-permission
(Optional) This parameter restricts cephCSIKeyrings auth permissions to specific pools and clusters. Mandatory flags that need to be set with this parameter are rbd-data-pool-name and cluster-name. You can also pass the cephfs-filesystem-name flag if there is a CephFS user restriction, so that the permission is restricted to a particular CephFS filesystem.
Note: This parameter must be applied only for new deployments. To restrict csi-users per pool and per cluster, you need to create new csi-users and new secrets for those csi-users.
Example with restricted auth permission:

# python3 /etc/ceph/create-external-cluster-resources.py --cephfs-filesystem-name myfs --rbd-data-pool-name replicapool --cluster-name rookStorage --restricted-auth-permission true

Example of JSON output generated using the python script:

[{"name": "rook-ceph-mon-endpoints", "kind": "ConfigMap", "data": {"data": "xxx.xxx.xxx.xxx:xxxx", "maxMonId": "0", "mapping": "{}"}}, {"name": "rook-ceph-mon", "kind": "Secret", "data": {"admin-secret": "admin-secret", "fsid": "<fs-id>", "mon-secret": "mon-secret"}}, {"name": "rook-ceph-operator-creds", "kind": "Secret", "data": {"userID": "<user-id>", "userKey": "<user-key>"}}, {"name": "rook-csi-rbd-node", "kind": "Secret", "data": {"userID": "csi-rbd-node", "userKey": "<user-key>"}}, {"name": "ceph-rbd", "kind": "StorageClass", "data": {"pool": "<pool>"}}, {"name": "monitoring-endpoint", "kind": "CephCluster", "data": {"MonitoringEndpoint": "xxx.xxx.xxx.xxx", "MonitoringPort": "xxxx"}}, {"name": "rook-ceph-dashboard-link", "kind": "Secret", "data": {"userID": "ceph-dashboard-link", "userKey": "<user-key>"}}, {"name": "rook-csi-rbd-provisioner", "kind": "Secret", "data": {"userID": "csi-rbd-provisioner", "userKey": "<user-key>"}}, {"name": "rook-csi-cephfs-provisioner", "kind": "Secret", "data": {"adminID": "csi-cephfs-provisioner", "adminKey": "<admin-key>"}}, {"name": "rook-csi-cephfs-node", "kind": "Secret", "data": {"adminID": "csi-cephfs-node", "adminKey": "<admin-key>"}}, {"name": "cephfs", "kind": "StorageClass", "data": {"fsName": "cephfs", "pool": "cephfs_data"}}, {"name": "ceph-rgw", "kind": "StorageClass", "data": {"endpoint": "xxx.xxx.xxx.xxx:xxxx", "poolPrefix": "default"}}, {"name": "rgw-admin-ops-user", "kind": "Secret", "data": {"accessKey": "<access-key>", "secretKey": "<secret-key>"}}]

Save the JSON output to a file with a .json extension.
Note: For OpenShift Data Foundation to work seamlessly, ensure that the parameters (RGW endpoint, CephFS details, RBD pool, and so on) that are uploaded using the JSON file remain unchanged on the RHCS external cluster after the storage cluster creation.

Run the following command only when there is a multi-tenant deployment in which the RHCS cluster is already connected to an OpenShift Data Foundation deployment with a lower version:

# python3 ceph-external-cluster-details-exporter.py --upgrade
Click Browse to select and upload the JSON file.
The content of the JSON file is populated and displayed in the text box.
Click Next.
The Next button is enabled only after you upload the .json file.
In the Review and create page, review if all the details are correct:
- To modify any configuration settings, click Back to go back to the previous configuration page.
- Click Create StorageSystem.
Verification steps
To verify the final Status of the installed storage cluster:
- In the OpenShift Web Console, navigate to Installed Operators → OpenShift Data Foundation → Storage System → ocs-external-storagecluster-storagesystem → Resources.
- Verify that the Status of StorageCluster is Ready and has a green tick.
- To verify that OpenShift Data Foundation, pods, and StorageClass are successfully installed, see Verifying your OpenShift Data Foundation installation for external Ceph storage system.
2.3. Verifying your OpenShift Data Foundation installation for external Ceph storage system
Use this section to verify that OpenShift Data Foundation is deployed correctly.
2.3.1. Verifying the state of the pods
- Click Workloads → Pods from the left pane of the OpenShift Web Console.
Select openshift-storage from the Project drop-down list.
Note: If the Show default projects option is disabled, use the toggle button to list all the default projects.
For more information on the expected number of pods for each component and how it varies depending on the number of nodes, see Table 2.1, “Pods corresponding to OpenShift Data Foundation components”
Verify that the following pods are in running state:
Table 2.1. Pods corresponding to OpenShift Data Foundation components

OpenShift Data Foundation Operator
- ocs-operator-* (1 pod on any worker node)
- ocs-metrics-exporter-* (1 pod on any worker node)
- odf-operator-controller-manager-* (1 pod on any worker node)
- odf-console-* (1 pod on any worker node)
- csi-addons-controller-manager-* (1 pod on any worker node)

Rook-ceph Operator
- rook-ceph-operator-* (1 pod on any worker node)

Multicloud Object Gateway
- noobaa-operator-* (1 pod on any worker node)
- noobaa-core-* (1 pod on any worker node)
- noobaa-db-pg-* (1 pod on any worker node)
- noobaa-endpoint-* (1 pod on any worker node)

CSI
cephfs
- csi-cephfsplugin-* (1 pod on each worker node)
- csi-cephfsplugin-provisioner-* (2 pods distributed across worker nodes)
Note: If an MDS is not deployed in the external cluster, the csi-cephfsplugin pods will not be created.
rbd
- csi-rbdplugin-* (1 pod on each worker node)
- csi-rbdplugin-provisioner-* (2 pods distributed across worker nodes)
2.3.2. Verifying that the OpenShift Data Foundation cluster is healthy
- In the OpenShift Web Console, click Storage → Data Foundation.
- In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears.
- In the Status card of the Block and File tab, verify that Storage Cluster has a green tick.
- In the Details card, verify that the cluster information is displayed.
For more information on the health of OpenShift Data Foundation cluster using the Block and File dashboard, see Monitoring OpenShift Data Foundation.
2.3.3. Verifying that the Multicloud Object Gateway is healthy
- In the OpenShift Web Console, click Storage → Data Foundation.
- In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears.
- In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick.
- In the Details card, verify that the Multicloud Object Gateway (MCG) information is displayed.
The RADOS Object Gateway is only listed in case RADOS Object Gateway endpoint details are included while deploying OpenShift Data Foundation in external mode.
For more information on the health of OpenShift Data Foundation cluster using the object dashboard, see Monitoring OpenShift Data Foundation.
2.3.4. Verifying that the storage classes are created and listed
- Click Storage → Storage Classes from the left pane of the OpenShift Web Console.
Verify that the following storage classes are created with the OpenShift Data Foundation cluster creation:
- ocs-external-storagecluster-ceph-rbd
- ocs-external-storagecluster-ceph-rgw
- ocs-external-storagecluster-cephfs
- openshift-storage.noobaa.io

Note:
- If an MDS is not deployed in the external cluster, the ocs-external-storagecluster-cephfs storage class will not be created.
- If RGW is not deployed in the external cluster, the ocs-external-storagecluster-ceph-rgw storage class will not be created.

For more information regarding MDS and RGW, see the Red Hat Ceph Storage documentation.
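You can also confirm from the command line that these storage classes exist:

$ oc get storageclass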
2.3.5. Verifying that the Ceph cluster is connected
Run the following command to verify that the OpenShift Data Foundation cluster is connected to the external Red Hat Ceph Storage cluster.
$ oc get cephcluster -n openshift-storage
NAME DATADIRHOSTPATH MONCOUNT AGE PHASE MESSAGE HEALTH EXTERNAL
ocs-external-storagecluster-cephcluster 30m Connected Cluster connected successfully HEALTH_OK true
2.3.6. Verifying that the storage cluster is ready
Run the following command to verify that the storage cluster is ready and the External option is set to true.
$ oc get storagecluster -n openshift-storage
NAME AGE PHASE EXTERNAL CREATED AT VERSION
ocs-external-storagecluster 30m Ready true 2021-11-17T09:09:52Z 4.12.0
Chapter 3. Deploy OpenShift Data Foundation using IBM FlashSystem
OpenShift Data Foundation can make IBM FlashSystem storage available for consumption through OpenShift Container Platform clusters. You need to install the OpenShift Data Foundation operator and then create an OpenShift Data Foundation cluster for the IBM FlashSystem storage.
3.1. Installing Red Hat OpenShift Data Foundation Operator
You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub.
Prerequisites
- Access to an OpenShift Container Platform cluster using an account with cluster-admin and operator installation permissions.
- For additional resource requirements, see the Planning your deployment guide.

When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command to specify a blank node selector for the openshift-storage namespace (create the openshift-storage namespace in this case):

$ oc annotate namespace openshift-storage openshift.io/node-selector=
Procedure
- Log in to the OpenShift Web Console.
- Click Operators → OperatorHub.
- Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator.
- Click Install.
Set the following options on the Install Operator page:
- Update Channel as stable-4.12.
- Installation Mode as A specific namespace on the cluster.
- Installed Namespace as Operator recommended namespace openshift-storage. If the openshift-storage namespace does not exist, it is created during the operator installation.
- Select Approval Strategy as Automatic or Manual.
If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention.
If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version.
- Ensure that the Enable option is selected for the Console plugin.
- Click Install.
Verification steps
- After the operator is successfully installed, a pop-up with the message Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to take effect.

In the Web Console:
- Navigate to Installed Operators and verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation.
- Navigate to Storage and verify that the Data Foundation dashboard is available.
3.2. Creating an OpenShift Data Foundation Cluster for external IBM FlashSystem storage
You need to create a new OpenShift Data Foundation cluster after you install the OpenShift Data Foundation operator on the OpenShift Container Platform.
Prerequisites
- A valid Red Hat OpenShift Data Foundation Advanced subscription. For more information, see the knowledgebase article on OpenShift Data Foundation subscriptions.
- For Red Hat Enterprise Linux® operating system, ensure that there is iSCSI connectivity and then configure Linux multipath devices on the host.
- For Red Hat Enterprise Linux CoreOS or when the packages are already installed, configure Linux multipath devices on the host.
- Ensure that each worker node is configured with storage connectivity according to your storage system instructions. For the latest supported FlashSystem storage systems and versions, see the ODF FlashSystem driver documentation.
Procedure
- In the OpenShift Web Console, click Operators → Installed Operators to view all the installed operators.
Ensure that the Project selected is openshift-storage.
- Click OpenShift Data Foundation and then click Create StorageSystem.
In the Backing storage page, select the following options:
- Select Full deployment for the Deployment type option.
- Select Connect an external storage platform from the available options.
- Select IBM FlashSystem Storage from the Storage platform list.
- Click Next.
In the Create storage class page, provide the following information:
- Enter a name for the storage class.
When creating block storage persistent volumes, select the storage class <storage_class_name> for best performance. The storage class allows a direct I/O path to the FlashSystem.
- Enter the following details of the IBM FlashSystem connection:
  - IP address
  - User name
  - Password
  - Pool name
- Select thick or thin for the Volume mode.
- Click Next.
In the Capacity and nodes page, provide the necessary details:
- Select a value for Requested capacity.
The available options are 0.5 TiB, 2 TiB, and 4 TiB. The requested capacity is dynamically allocated on the infrastructure storage class.
- Select at least three nodes in three different zones.
It is recommended to start with at least 14 CPUs and 34 GiB of RAM per node. If the nodes selected do not match the OpenShift Data Foundation cluster requirement of an aggregated 30 CPUs and 72 GiB of RAM, a minimal cluster will be deployed. For minimum starting node requirements, see the Resource requirements section in the Planning guide.
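One way to check whether the nodes you plan to select meet this recommendation is to list their CPU and memory capacity from the CLI; for example:

$ oc get nodes -o custom-columns=NAME:.metadata.name,CPU:.status.capacity.cpu,MEMORY:.status.capacity.memory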
- Click Next.
Optional: In the Security and network page, provide the necessary details:
- To enable encryption, select Enable data encryption for block and file storage.
Choose one or both of the following Encryption level options:
- Cluster-wide encryption to encrypt the entire cluster (block and file).
- StorageClass encryption to create encrypted persistent volume (block only) using encryption enabled storage class.
Select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption.
- Key Management Service Provider is set to Vault by default.
- Enter Vault Service Name, host Address of Vault server ('https://<hostname or ip>'), Port number, and Token.
Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration:
- Enter the Key Value secret path in the Backend Path that is dedicated and unique to OpenShift Data Foundation.
- Optional: Enter TLS Server Name and Vault Enterprise Namespace.
- Provide CA Certificate, Client Certificate, and Client Private Key by uploading the respective PEM encoded certificate file.
- Click Save.
Select Default (SDN) if you are using a single network or Custom (Multus) if you are using multiple network interfaces.
- Select a Public Network Interface from the dropdown.
- Select a Cluster Network Interface from the dropdown.
Note: If you are using only one additional network interface, select the single NetworkAttachmentDefinition, that is, ocs-public-cluster, for the Public Network Interface, and leave the Cluster Network Interface blank.
- Click Next.
In the Review and create page, review if all the details are correct:
- To modify any configuration settings, click Back to go back to the previous configuration page.
- Click Create StorageSystem.
Verification Steps
- Verifying the state of the pods
- Click Workloads → Pods from the left pane of the OpenShift Web Console.
Select openshift-storage from the Project drop-down list.
Note: If the Show default projects option is disabled, use the toggle button to list all the default projects.
Table 3.1. Pods corresponding to OpenShift Data Foundation components

OpenShift Data Foundation Operator
- ocs-operator-* (1 pod on any worker node)
- ocs-metrics-exporter-* (1 pod on any worker node)
- odf-operator-controller-manager-* (1 pod on any worker node)
- odf-console-* (1 pod on any worker node)
- csi-addons-controller-manager-* (1 pod on any worker node)

ibm-storage-odf-operator
- ibm-storage-odf-operator-* (2 pods on any worker nodes)
- ibm-odf-console-*

ibm-flashsystem-storage
- ibm-flashsystem-storage-* (1 pod on any worker node)

rook-ceph Operator
- rook-ceph-operator-* (1 pod on any worker node)

Multicloud Object Gateway
- noobaa-operator-* (1 pod on any worker node)
- noobaa-core-* (1 pod on any worker node)
- noobaa-db-pg-* (1 pod on any worker node)
- noobaa-endpoint-* (1 pod on any worker node)

CSI
- ibm-block-csi-* (1 pod on any worker node)
- Verifying that the OpenShift Data Foundation cluster is healthy
- In the Web Console, click Storage → Data Foundation.
- In the Status card of the Overview tab, verify that Storage System has a green tick mark.
- In the Details card, verify that the cluster information is displayed.
For more information on the health of OpenShift Data Foundation cluster using the Block and File dashboard, see Monitoring OpenShift Data Foundation.
- Verifying that the Multicloud Object Gateway is healthy
- In the Web Console, click Storage → Data Foundation.
- In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up.
- In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick.
- In the Details card, verify that the MCG information is displayed.
For more information on the health of OpenShift Data Foundation cluster using the object dashboard, see Monitoring OpenShift Data Foundation.
- Verifying that IBM FlashSystem is connected and the storage cluster is ready
- Run the following command to verify that the OpenShift Data Foundation cluster is connected to the external IBM FlashSystem:
$ oc get flashsystemclusters.odf.ibm.com

NAME                     AGE   PHASE   CREATED AT
ibm-flashsystemcluster   35s           2021-09-23T07:44:52Z

- Verifying the StorageSystem of the storage
- Run the following command to verify the StorageSystem of the IBM FlashSystem storage cluster:
$ oc get storagesystems.odf.openshift.io

NAME                                   STORAGE-SYSTEM-KIND                       STORAGE-SYSTEM-NAME
ibm-flashsystemcluster-storagesystem   flashsystemcluster.odf.ibm.com/v1alpha1   ibm-flashsystemcluster
ocs-storagecluster-storagesystem       storagecluster.ocs.openshift.io/v1        ocs-storagecluster

- Verifying the subscription of the IBM operator
- Run the following command to verify the subscription:
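One way to check, assuming the subscription was created in the openshift-storage namespace:

$ oc get subscriptions.operators.coreos.com -n openshift-storage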
- Verifying the CSVs
- Run the following command to verify that the CSVs are in the succeeded state.
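For example, list the ClusterServiceVersions in the openshift-storage namespace and confirm that the PHASE column shows Succeeded:

$ oc get csv -n openshift-storage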
- Verifying the IBM operator and CSI pods
- Run the following command to verify the IBM operator and CSI pods:
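For example, list the pods in the openshift-storage namespace and confirm that the ibm-storage-odf-operator, ibm-flashsystem-storage, and ibm-block-csi-* pods are running:

$ oc get pods -n openshift-storage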
Chapter 4. Uninstalling OpenShift Data Foundation from external storage system
Use the steps in this section to uninstall OpenShift Data Foundation. Uninstalling OpenShift Data Foundation does not remove the RBD pool from the external cluster, or uninstall the external Red Hat Ceph Storage cluster.
Uninstall Annotations
Annotations on the Storage Cluster are used to change the behavior of the uninstall process. To define the uninstall behavior, the following two annotations have been introduced in the storage cluster:
- uninstall.ocs.openshift.io/cleanup-policy: delete
- uninstall.ocs.openshift.io/mode: graceful

The uninstall.ocs.openshift.io/cleanup-policy annotation is not applicable for external mode.
The following table provides information on the different values that can be used with these annotations:
| Annotation | Value | Default | Behavior |
|---|---|---|---|
| cleanup-policy | delete | Yes | Rook cleans up the physical drives and the DataDirHostPath |
| cleanup-policy | retain | No | Rook does not clean up the physical drives and the DataDirHostPath |
| mode | graceful | Yes | Rook and NooBaa pause the uninstall process until the PVCs and the OBCs are removed by the administrator/user |
| mode | forced | No | Rook and NooBaa proceed with the uninstall even if PVCs/OBCs provisioned using Rook and NooBaa exist respectively |
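One way to check which uninstall annotations are currently set on the storage cluster is:

$ oc get storagecluster ocs-external-storagecluster -n openshift-storage -o jsonpath='{.metadata.annotations}'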
You can change the uninstall mode by editing the value of the annotation by using the following commands:
$ oc annotate storagecluster ocs-external-storagecluster -n openshift-storage uninstall.ocs.openshift.io/mode="forced" --overwrite
storagecluster.ocs.openshift.io/ocs-external-storagecluster annotated
Prerequisites
- Ensure that the OpenShift Data Foundation cluster is in a healthy state. The uninstall process can fail when some of the pods are not terminated successfully due to insufficient resources or nodes. In case the cluster is in an unhealthy state, contact Red Hat Customer Support before uninstalling OpenShift Data Foundation.
- Ensure that applications are not consuming persistent volume claims (PVCs) or object bucket claims (OBCs) using the storage classes provided by OpenShift Data Foundation.
Procedure
Delete the volume snapshots that are using OpenShift Data Foundation.
List the volume snapshots from all the namespaces:

$ oc get volumesnapshot --all-namespaces

From the output of the previous command, identify and delete the volume snapshots that are using OpenShift Data Foundation:

$ oc delete volumesnapshot <VOLUME-SNAPSHOT-NAME> -n <NAMESPACE>
Delete PVCs and OBCs that are using OpenShift Data Foundation.
In the default uninstall mode (graceful), the uninstaller waits until all the PVCs and OBCs that use OpenShift Data Foundation are deleted.
If you wish to delete the Storage Cluster without deleting the PVCs beforehand, you can set the uninstall mode annotation to "forced" and skip this step. Doing so results in orphan PVCs and OBCs in the system.
- Delete OpenShift Container Platform monitoring stack PVCs using OpenShift Data Foundation.
See Removing monitoring stack from OpenShift Data Foundation.
- Delete OpenShift Container Platform registry PVCs using OpenShift Data Foundation.
See Removing OpenShift Container Platform registry from OpenShift Data Foundation.
- Delete OpenShift Container Platform logging PVCs using OpenShift Data Foundation.
See Removing the cluster logging operator from OpenShift Data Foundation.
- Delete other PVCs and OBCs provisioned using OpenShift Data Foundation.
Given below is a sample script to identify the PVCs and OBCs provisioned using OpenShift Data Foundation. The script ignores the PVCs and OBCs that are used internally by OpenShift Data Foundation.
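A minimal sketch of such a script, assuming the default OpenShift Data Foundation provisioner names, is shown below; adapt it to your environment as needed:

#!/usr/bin/env bash
# Sketch: list PVCs and OBCs bound to OpenShift Data Foundation storage classes,
# skipping the claims that OpenShift Data Foundation uses internally.
RBD_PROVISIONER="openshift-storage.rbd.csi.ceph.com"
CEPHFS_PROVISIONER="openshift-storage.cephfs.csi.ceph.com"
NOOBAA_PROVISIONER="openshift-storage.noobaa.io/obc"
RGW_PROVISIONER="openshift-storage.ceph.rook.io/bucket"
# Internal NooBaa claims to ignore (names may differ slightly between versions).
NOOBAA_DB_PVC="noobaa-db"
NOOBAA_BACKINGSTORE_PVC="noobaa-default-backing-store-noobaa-pvc"

# Storage classes backed by the OpenShift Data Foundation provisioners.
SCS=$(oc get storageclass --no-headers | grep -e "$RBD_PROVISIONER" -e "$CEPHFS_PROVISIONER" -e "$NOOBAA_PROVISIONER" -e "$RGW_PROVISIONER" | awk '{print $1}')

# Print the PVCs and OBCs bound to each of those storage classes.
for SC in $SCS; do
    echo "=== PVCs and OBCs using storage class $SC ==="
    oc get pvc --all-namespaces --no-headers 2>/dev/null | grep " $SC " | grep -v -e "$NOOBAA_DB_PVC" -e "$NOOBAA_BACKINGSTORE_PVC"
    oc get obc --all-namespaces --no-headers 2>/dev/null | grep " $SC "
done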
Delete the OBCs:
$ oc delete obc <obc name> -n <project name>

Delete the PVCs:

$ oc delete pvc <pvc name> -n <project-name>

Ensure that you have removed any custom backing stores, bucket classes, and so on, that are created in the cluster.
Delete the Storage Cluster object and wait for the removal of the associated resources.
$ oc delete -n openshift-storage storagesystem --all --wait=true

Delete the namespace and wait until the deletion is complete. You will need to switch to another project if openshift-storage is the active project.
For example:

$ oc project default
$ oc delete project openshift-storage --wait=true --timeout=5m

The project is deleted if the following command returns a NotFound error:

$ oc get project openshift-storage

Note: While uninstalling OpenShift Data Foundation, if the namespace is not deleted completely and remains in the Terminating state, perform the steps in Troubleshooting and deleting remaining resources during Uninstall to identify objects that are blocking the namespace from being terminated.

Confirm that all PVs provisioned using OpenShift Data Foundation are deleted. If there is any PV left in the Released state, delete it:

$ oc get pv
$ oc delete pv <pv name>

Remove the CustomResourceDefinitions:

$ oc delete crd backingstores.noobaa.io bucketclasses.noobaa.io cephblockpools.ceph.rook.io cephclusters.ceph.rook.io cephfilesystems.ceph.rook.io cephnfses.ceph.rook.io cephobjectstores.ceph.rook.io cephobjectstoreusers.ceph.rook.io noobaas.noobaa.io ocsinitializations.ocs.openshift.io storageclusters.ocs.openshift.io cephclients.ceph.rook.io cephobjectrealms.ceph.rook.io cephobjectzonegroups.ceph.rook.io cephobjectzones.ceph.rook.io cephrbdmirrors.ceph.rook.io storagesystems.odf.openshift.io --wait=true --timeout=5m

To ensure that OpenShift Data Foundation is uninstalled completely:
- In the OpenShift Container Platform Web Console, click Storage.
- Verify that OpenShift Data Foundation no longer appears under Storage.
4.1. Removing monitoring stack from OpenShift Data Foundation
Use this section to clean up the monitoring stack from OpenShift Data Foundation.
The PVCs that are created as a part of configuring the monitoring stack are in the openshift-monitoring namespace.
Prerequisites
PVCs are configured to use OpenShift Container Platform monitoring stack.
For information, see configuring monitoring stack.
Procedure
List the pods and PVCs that are currently running in the openshift-monitoring namespace.
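For example:

$ oc get pod,pvc -n openshift-monitoring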
Edit the monitoring configmap:

$ oc -n openshift-monitoring edit configmap cluster-monitoring-config
Remove any config sections that reference the OpenShift Data Foundation storage classes and save the configmap. In this example, the alertmanagerMain and prometheusK8s monitoring components are using the OpenShift Data Foundation PVCs.
List the pods consuming the PVCs. In this example, the alertmanagerMain and prometheusK8s pods that were consuming the PVCs are in the Terminating state. You can delete the PVCs once these pods are no longer using the OpenShift Data Foundation PVCs.
Delete the relevant PVCs. Make sure you delete all the PVCs that are consuming the storage classes.
$ oc delete -n openshift-monitoring pvc <pvc-name> --wait=true --timeout=5m
4.2. Removing OpenShift Container Platform registry from OpenShift Data Foundation
Use this section to clean up the OpenShift Container Platform registry from OpenShift Data Foundation. If you want to configure an alternative storage, see image registry.
The PVCs that are created as a part of configuring OpenShift Container Platform registry are in the openshift-image-registry namespace.
Prerequisites
- The image registry should have been configured to use an OpenShift Data Foundation PVC.
Procedure
Edit the configs.imageregistry.operator.openshift.io object and remove the content in the storage section:

$ oc edit configs.imageregistry.operator.openshift.io

In this example, the PVC is called registry-cephfs-rwx-pvc, which is now safe to delete.

Delete the PVC:

$ oc delete pvc <pvc-name> -n openshift-image-registry --wait=true --timeout=5m
4.3. Removing the cluster logging operator from OpenShift Data Foundation
Use this section to clean up the cluster logging operator from OpenShift Data Foundation.
The Persistent Volume Claims (PVCs) that are created as a part of configuring the cluster logging operator are in the openshift-logging namespace.
Prerequisites
- The cluster logging instance should have been configured to use the OpenShift Data Foundation PVCs.
Procedure
Remove the ClusterLogging instance in the namespace:

$ oc delete clusterlogging instance -n openshift-logging --wait=true --timeout=5m

The PVCs in the openshift-logging namespace are now safe to delete.
Delete the PVCs:

$ oc delete pvc <pvc-name> -n openshift-logging --wait=true --timeout=5m

<pvc-name> is the name of the PVC.
4.4. Removing external IBM FlashSystem secret
You need to clean up the FlashSystem secret from OpenShift Data Foundation while uninstalling. This secret is created when you configure the external IBM FlashSystem Storage. See Creating an OpenShift Data Foundation Cluster for external IBM FlashSystem storage.
Procedure
Remove the IBM FlashSystem secret by using the following command:
$ oc delete secret -n openshift-storage ibm-flashsystem-storage