Deploying OpenShift Data Foundation in external mode
Instructions for deploying OpenShift Data Foundation to use an external Red Hat Ceph Storage cluster and IBM FlashSystem.
Abstract
Making open source more inclusive
Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright’s message.
Providing feedback on Red Hat documentation
We appreciate your input on our documentation. Do let us know how we can make it better.
To give feedback, create a Jira ticket:
- Log in to Jira.
- Click Create in the top navigation bar.
- Enter a descriptive title in the Summary field.
- Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation.
- Select Documentation in the Components field.
- Click Create at the bottom of the dialogue.
Chapter 1. Overview of deploying in external mode
Red Hat OpenShift Data Foundation can make services from an external Red Hat Ceph Storage cluster available for consumption through OpenShift Container Platform clusters running on any platform.
See Planning your deployment for more information.
For instructions regarding how to install a RHCS cluster, see the installation guide.
To deploy OpenShift Data Foundation in external mode, follow the steps in the chapters that follow: install the OpenShift Data Foundation Operator, create an OpenShift Data Foundation cluster for the external Ceph storage system, and verify the installation.
1.1. Disaster recovery requirements
Disaster Recovery features supported by Red Hat OpenShift Data Foundation require all of the following prerequisites to successfully implement a disaster recovery solution:
- A valid Red Hat OpenShift Data Foundation Advanced subscription
- A valid Red Hat Advanced Cluster Management for Kubernetes subscription
For more information, see the knowledgebase article on OpenShift Data Foundation subscriptions.
For detailed disaster recovery solution requirements, see the Configuring OpenShift Data Foundation Disaster Recovery for OpenShift Workloads guide, and the Requirements and recommendations section of the Install guide in the Red Hat Advanced Cluster Management for Kubernetes documentation.
1.2. Network ports required between OpenShift Container Platform and Ceph when using external mode deployment
The following table lists the TCP ports where the source is the OpenShift Container Platform cluster and the destination is the RHCS cluster.
TCP ports | To be used for |
---|---|
6789, 3300 | Ceph Monitor |
6800 - 7300 | Ceph OSD, MGR, MDS |
9283 | Ceph MGR Prometheus Exporter |
For more information about why these ports are required, see the Ceph network configuration chapter of the RHCS Configuration Guide.
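As an optional sanity check before deploying, you can probe these ports from an OpenShift Container Platform node. This is a minimal sketch only; the worker node name and the Ceph host addresses are placeholders:
$ oc debug node/<worker-node> -- chroot /host bash -c 'timeout 5 bash -c "</dev/tcp/<ceph-mon-host>/6789" && echo mon-v1-reachable'
$ oc debug node/<worker-node> -- chroot /host bash -c 'timeout 5 bash -c "</dev/tcp/<ceph-mon-host>/3300" && echo mon-v2-reachable'
$ oc debug node/<worker-node> -- chroot /host bash -c 'timeout 5 bash -c "</dev/tcp/<ceph-mgr-host>/9283" && echo mgr-exporter-reachable'
If a probe times out, review the firewall rules between the OpenShift Container Platform nodes and the RHCS cluster.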
Chapter 2. Deploy OpenShift Data Foundation using Red Hat Ceph storage
Red Hat OpenShift Data Foundation can make services from an external Red Hat Ceph Storage cluster available for consumption through OpenShift Container Platform clusters. You need to install the OpenShift Data Foundation operator and then create an OpenShift Data Foundation cluster for the external Ceph storage system.
2.1. Installing Red Hat OpenShift Data Foundation Operator
You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub.
Prerequisites
- Access to an OpenShift Container Platform cluster using an account with cluster-admin and operator installation permissions.
- For additional resource requirements, see the Planning your deployment guide.
When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command to specify a blank node selector for the openshift-storage namespace (create the openshift-storage namespace in this case):
$ oc annotate namespace openshift-storage openshift.io/node-selector=
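A minimal sketch of this preparation, assuming you create the openshift-storage namespace yourself before installing the operator; the final command only prints the namespace annotations so you can confirm the blank node selector:
$ oc create namespace openshift-storage
$ oc annotate namespace openshift-storage openshift.io/node-selector=
$ oc get namespace openshift-storage -o jsonpath='{.metadata.annotations}{"\n"}'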
Procedure
- Log in to the OpenShift Web Console.
- Click Operators → OperatorHub.
- Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator.
- Click Install.
Set the following options on the Install Operator page:
- Update Channel as stable-4.18.
- Installation Mode as A specific namespace on the cluster.
- Installed Namespace as Operator recommended namespace openshift-storage. If Namespace openshift-storage does not exist, it is created during the operator installation.
- Select Approval Strategy as Automatic or Manual.
If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention.
If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version.
- Ensure that the Enable option is selected for the Console plugin.
- Click Install.
Verification steps
- After the operator is successfully installed, a pop-up with the message Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect.
In the Web Console:
- Navigate to Installed Operators and verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation.
- Navigate to Storage and verify if the Data Foundation dashboard is available.
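You can also confirm the installation from the command line. A minimal sketch; the exact ClusterServiceVersion name varies by release:
$ oc get csv -n openshift-storage
$ oc get pods -n openshift-storage
The ClusterServiceVersion for the OpenShift Data Foundation operator should report Succeeded, and the operator pods should be Running.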
2.2. Creating an OpenShift Data Foundation Cluster for external Ceph storage system
You need to create a new OpenShift Data Foundation cluster after you install the OpenShift Data Foundation operator on OpenShift Container Platform deployed on VMware vSphere or user-provisioned bare metal infrastructures.
Prerequisites
- A valid Red Hat OpenShift Data Foundation Advanced subscription. To know how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions.
- Ensure the OpenShift Container Platform version is 4.18 or above before deploying OpenShift Data Foundation 4.18.
- OpenShift Data Foundation operator must be installed. For more information, see Installing OpenShift Data Foundation Operator using the Operator Hub.
To check the supportability and interoperability of Red Hat Ceph Storage (RHCS) with Red Hat OpenShift Data Foundation in external mode, go to the lab Red Hat OpenShift Data Foundation Supportability and Interoperability Checker.
- Select Service Type as ODF as Self-Managed Service.
- Select appropriate Version from the drop down.
- On the Versions tab, click the Supported RHCS versions in the External Mode tab.
- Red Hat Ceph Storage must have Ceph Dashboard installed and configured. For more information, see Ceph Dashboard installation and access.
- It is recommended that the external Red Hat Ceph Storage cluster has the PG Autoscaler enabled. For more information, see The placement group autoscaler section in the Red Hat Ceph Storage documentation.
Procedure
- Click Operators → Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage.
- Click OpenShift Data Foundation and then click Create StorageSystem.
In the Backing storage page, select the following options:
- Select Full deployment for the Deployment type option.
- Select Connect an external storage platform from the available options.
- Select Red Hat Ceph Storage for Storage platform.
- Click Next.
In the Connection details page, provide the necessary information:
- Click on the Download Script link to download the python script for extracting Ceph cluster details.
For extracting the Red Hat Ceph Storage (RHCS) cluster details, contact the RHCS administrator to run the downloaded python script on a Red Hat Ceph Storage node with the admin key.
Run the following command on the RHCS node to view the list of available arguments:
# python3 ceph-external-cluster-details-exporter.py --help
Important: Use python instead of python3 if the Red Hat Ceph Storage 4.x cluster is deployed on a Red Hat Enterprise Linux 7.x (RHEL 7.x) cluster.
You can also run the script from inside a MON container (containerized deployment) or from a MON node (RPM deployment).
Note: Use the yum install cephadm command and then the cephadm command to deploy your RHCS cluster using containers. You must pull the RHCS container images using the cephadm command, rather than using yum for installing the Ceph packages onto nodes. For more information, see the RHCS product documentation.
To retrieve the external cluster details from the RHCS cluster, choose one of the following two options, either a configuration file or command-line flags.
Configuration file
Use the config-file flag. This stores the parameters used during deployment.
In new deployments, you can save the parameters used during deployment in a configuration file. This file can then be used during upgrade to preserve the parameters as well as adding any additional parameters. Use config-file to set the path to the configuration file.
Example configuration file saved in /config.ini:
[Configurations]
format = bash
cephfs-filesystem-name = <filesystem-name>
rbd-data-pool-name = <pool_name>
...
Set the path to the config.ini file using config-file:
# python3 ceph-external-cluster-details-exporter.py --config-file /config.ini
Command-line flags
Retrieve the external cluster details from the RHCS cluster, and pass the parameters for your deployment.
# python3 ceph-external-cluster-details-exporter.py \
  --rbd-data-pool-name <rbd block pool name> [optional arguments]
For example:
# python3 ceph-external-cluster-details-exporter.py --rbd-data-pool-name ceph-rbd --monitoring-endpoint xxx.xxx.xxx.xxx --monitoring-endpoint-port xxxx --rgw-endpoint xxx.xxx.xxx.xxx:xxxx --run-as-user client.ocs
RBD parameters
- rbd-data-pool-name
- A mandatory parameter that is used for providing block storage in OpenShift Data Foundation.
- rados-namespace
- Divides an RBD data pool into separate logical namespaces, used for creating RBD PVC in a radosNamespace. Flags required with rados-namespace are restricted-auth-permission and k8s-cluster-name.
- rbd-metadata-ec-pool-name
- (Optional) The name of the erasure coded RBD metadata pool.
RGW parameters
- rgw-endpoint
- (Optional) This parameter is required only if the object storage is to be provisioned through Ceph Rados Gateway for OpenShift Data Foundation. Provide the endpoint in the following format: <ip_address>:<port>.
Note: A fully-qualified domain name (FQDN) is also supported in the format <FQDN>:<PORT>.
- rgw-pool-prefix
- (Optional) The prefix of the RGW pools. If not specified, the default prefix is default.
- rgw-tls-cert-path
- (Optional) The file path of the RADOS Gateway endpoint TLS certificate.
To provide the TLS certificate and RGW endpoint details to the helper script, ceph-external-cluster-details-exporter.py, run the following command:
# python3 ceph-external-cluster-details-exporter.py --rbd-data-pool-name <rbd block pool name> --rgw-endpoint <ip_address>:<port> --rgw-tls-cert-path <file path containing cert>
This creates the resources needed for the Ceph Object Store CR, such as a Kubernetes secret containing the TLS certificate. All the intermediate certificates, including private keys, need to be stored in the certificate file.
- rgw-skip-tls
- (Optional) This parameter ignores the TLS certification validation when a self-signed certificate is provided (not recommended).
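As a quick sanity check before passing the rgw-endpoint value to the script, you can confirm that the endpoint answers HTTP requests. This is only an illustrative probe with placeholder values:
$ curl -k http://<ip_address>:<port>
A reachable RADOS Gateway typically returns an XML ListAllMyBucketsResult response for anonymous requests.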
Monitoring parameters
- monitoring-endpoint
- (Optional) This parameter accepts a comma-separated list of IP addresses of active and standby mgrs reachable from the OpenShift Container Platform cluster. If not provided, the value is automatically populated.
- monitoring-endpoint-port
- (Optional) It is the port associated with the ceph-mgr Prometheus exporter specified by --monitoring-endpoint. If not provided, the value is automatically populated.
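If you want to confirm the monitoring endpoint before running the script, you can query the ceph-mgr Prometheus exporter directly. This is an illustrative check with a placeholder address; the exporter listens on the port listed earlier in the required ports table (9283 by default):
$ curl http://<monitoring-endpoint>:9283/metrics | head
If the exporter is not enabled on the RHCS cluster, it can usually be enabled with ceph mgr module enable prometheus.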
Ceph parameters
- ceph-conf
- (Optional) The name of the Ceph configuration file.
- run-as-user
- (Optional) This parameter is used for providing a name for the Ceph user that is created by the script. If this parameter is not specified, a default user name client.healthchecker is created. The permissions for the new user are set as:
- caps: [mgr] allow command config
- caps: [mon] allow r, allow command quorum_status, allow command version
- caps: [osd] allow rwx pool=RGW_POOL_PREFIX.rgw.meta, allow r pool=.rgw.root, allow rw pool=RGW_POOL_PREFIX.rgw.control, allow rx pool=RGW_POOL_PREFIX.rgw.log, allow x pool=RGW_POOL_PREFIX.rgw.buckets.index
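After the script has run, the RHCS administrator can confirm that the user and its capabilities were created as expected. A minimal sketch, assuming the default user name:
# ceph auth get client.healthchecker
The output lists the caps shown above, or those of the name passed with --run-as-user.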
CephFS parameters
- cephfs-metadata-pool-name
- (Optional) The name of the CephFS metadata pool.
- cephfs-data-pool-name
- (Optional) The name of the CephFS data pool.
- cephfs-filesystem-name
- (Optional) The name of the CephFS filesystem.
Output parameters
- dry-run
- (Optional) This parameter prints the commands that the script would execute, without running them.
- output
- (Optional) The file where the output is stored.
Multicluster parameters
- k8s-cluster-name
- (Optional) Kubernetes cluster name.
- cluster-name
- (Optional) The Ceph cluster name.
- restricted-auth-permission
- (Optional) This parameter restricts cephCSIKeyrings auth permissions to specific pools and clusters. Mandatory flags that need to be set with this are rbd-data-pool-name and cluster-name. You can also pass the cephfs-filesystem-name flag if there is CephFS user restriction so that permission is restricted to a particular CephFS filesystem.
Note: This parameter must be applied only for new deployments. To restrict csi-users per pool and per cluster, you need to create new csi-users and new secrets for those csi-users.
Example with restricted auth permission:
# python3 /etc/ceph/create-external-cluster-resources.py --cephfs-filesystem-name myfs --rbd-data-pool-name replicapool --cluster-name rookStorage --restricted-auth-permission true
Example of JSON output generated using the python script:
[{"name": "rook-ceph-mon-endpoints", "kind": "ConfigMap", "data": {"data": "xxx.xxx.xxx.xxx:xxxx", "maxMonId": "0", "mapping": "{}"}}, {"name": "rook-ceph-mon", "kind": "Secret", "data": {"admin-secret": "admin-secret", "fsid": "<fs-id>", "mon-secret": "mon-secret"}}, {"name": "rook-ceph-operator-creds", "kind": "Secret", "data": {"userID": "<user-id>", "userKey": "<user-key>"}}, {"name": "rook-csi-rbd-node", "kind": "Secret", "data": {"userID": "csi-rbd-node", "userKey": "<user-key>"}}, {"name": "ceph-rbd", "kind": "StorageClass", "data": {"pool": "<pool>"}}, {"name": "monitoring-endpoint", "kind": "CephCluster", "data": {"MonitoringEndpoint": "xxx.xxx.xxx.xxx", "MonitoringPort": "xxxx"}}, {"name": "rook-ceph-dashboard-link", "kind": "Secret", "data": {"userID": "ceph-dashboard-link", "userKey": "<user-key>"}}, {"name": "rook-csi-rbd-provisioner", "kind": "Secret", "data": {"userID": "csi-rbd-provisioner", "userKey": "<user-key>"}}, {"name": "rook-csi-cephfs-provisioner", "kind": "Secret", "data": {"adminID": "csi-cephfs-provisioner", "adminKey": "<admin-key>"}}, {"name": "rook-csi-cephfs-node", "kind": "Secret", "data": {"adminID": "csi-cephfs-node", "adminKey": "<admin-key>"}}, {"name": "cephfs", "kind": "StorageClass", "data": {"fsName": "cephfs", "pool": "cephfs_data"}}, {"name": "ceph-rgw", "kind": "StorageClass", "data": {"endpoint": "xxx.xxx.xxx.xxx:xxxx", "poolPrefix": "default"}}, {"name": "rgw-admin-ops-user", "kind": "Secret", "data": {"accessKey": "<access-key>", "secretKey": "<secret-key>"}}]
Save the JSON output to a file with the .json extension.
Note: For OpenShift Data Foundation to work seamlessly, ensure that the parameters (RGW endpoint, CephFS details, RBD pool, and so on) to be uploaded using the JSON file remain unchanged on the RHCS external cluster after the storage cluster creation.
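As an illustrative example of capturing the output, you can either redirect it or use the output parameter described above; the pool name and file path here are placeholders:
# python3 ceph-external-cluster-details-exporter.py --rbd-data-pool-name <rbd block pool name> --output /tmp/external-cluster-details.json
Keep this file; you upload it in a later step and it also serves as a record of the deployment parameters.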
Run the following command when there is a multi-tenant deployment in which the RHCS cluster is already connected to an OpenShift Data Foundation deployment with a lower version.
# python3 ceph-external-cluster-details-exporter.py --upgrade
Click Browse to select and upload the JSON file.
The content of the JSON file is populated and displayed in the text box.
Click Next.
The Next button is enabled only after you upload the .json file.
In the Review and create page, review if all the details are correct:
- To modify any configuration settings, click Back to go back to the previous configuration page.
- Click Create StorageSystem.
Verification steps
To verify the final Status of the installed storage cluster:
- In the OpenShift Web Console, navigate to Installed Operators → OpenShift Data Foundation → Storage System → ocs-external-storagecluster-storagesystem → Resources.
- Verify that the Status of the StorageCluster is Ready and has a green tick.
- To verify that OpenShift Data Foundation, pods, and StorageClass are successfully installed, see Verifying your OpenShift Data Foundation installation for external Ceph storage system.
2.2.1. Applying encryption in-transit on Red Hat Ceph Storage cluster
Procedure
Apply the encryption in-transit settings.
[root@ceph-client ~]# ceph config set global ms_client_mode secure
[root@ceph-client ~]# ceph config set global ms_cluster_mode secure
[root@ceph-client ~]# ceph config set global ms_service_mode secure
[root@ceph-client ~]# ceph config set global rbd_default_map_options ms_mode=secure
Check the settings.
[root@ceph-client ~]# ceph config dump | grep ms_
global  basic     ms_client_mode            secure  *
global  basic     ms_cluster_mode           secure  *
global  basic     ms_service_mode           secure  *
global  advanced  rbd_default_map_options   ms_mode=secure  *
Restart all Ceph daemons.
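One way to restart the daemons, assuming a cephadm-managed cluster, is to restart each service reported by ceph orch ls; the service name is a placeholder:
[root@ceph-client ~]# ceph orch ls
[root@ceph-client ~]# ceph orch restart <service_name>
Repeat the restart for each service, for example mon, mgr, osd, mds.fsvol001, and rgw.rgw.ssl in the example cluster shown below. You can then confirm the state of the daemons with ceph orch ps, as in the following listing.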
[root@ceph-client ~]# ceph orch ps
NAME                        HOST   PORTS             STATUS        REFRESHED  AGE  MEM USE  MEM LIM  VERSION           IMAGE ID      CONTAINER ID
alertmanager.osd-0          osd-0  *:9093,9094       running (7h)  5m ago     7h   24.6M    -        0.24.0            3d2ad4f34549  6ef813aed5ef
ceph-exporter.osd-0         osd-0                    running (7h)  5m ago     7h   17.7M    -        18.2.0-192.el9cp  6e4e34f038b9  179301cc7840
ceph-exporter.osd-1         osd-1                    running (7h)  5m ago     7h   17.8M    -        18.2.0-192.el9cp  6e4e34f038b9  1084517c5d27
ceph-exporter.osd-2         osd-2                    running (7h)  5m ago     7h   17.9M    -        18.2.0-192.el9cp  6e4e34f038b9  c933e31dc7b7
ceph-exporter.osd-3         osd-3                    running (7h)  5m ago     7h   17.7M    -        18.2.0-192.el9cp  6e4e34f038b9  9981004a7169
crash.osd-0                 osd-0                    running (7h)  5m ago     7h   6895k    -        18.2.0-192.el9cp  6e4e34f038b9  9276199810a6
crash.osd-1                 osd-1                    running (7h)  5m ago     7h   6895k    -        18.2.0-192.el9cp  6e4e34f038b9  43aee09f1f00
crash.osd-2                 osd-2                    running (7h)  5m ago     7h   6903k    -        18.2.0-192.el9cp  6e4e34f038b9  adba2172546d
crash.osd-3                 osd-3                    running (7h)  5m ago     7h   6899k    -        18.2.0-192.el9cp  6e4e34f038b9  3a788ea496f3
grafana.osd-0               osd-0  *:3000            running (7h)  5m ago     7h   65.5M    -        <unknown>         f142b583a1b1  c299328455cc
mds.fsvol001.osd-0.lpciqk   osd-0                    running (7h)  5m ago     7h   24.8M    -        18.2.0-192.el9cp  6e4e34f038b9  8790381f177c
mds.fsvol001.osd-2.wocnxz   osd-2                    running (7h)  5m ago     7h   32.1M    -        18.2.0-192.el9cp  6e4e34f038b9  2c66e36e19fc
mgr.osd-0.dtkyni            osd-0  *:9283,8765,8443  running (7h)  5m ago     7h   535M     -        18.2.0-192.el9cp  6e4e34f038b9  41f5bed2d18a
mgr.osd-2.kqcxwu            osd-2  *:8443,9283,8765  running (7h)  5m ago     7h   440M     -        18.2.0-192.el9cp  6e4e34f038b9  d8413a809b1f
mon.osd-1                   osd-1                    running (7h)  5m ago     7h   350M     2048M    18.2.0-192.el9cp  6e4e34f038b9  fb3b5c186e38
mon.osd-2                   osd-2                    running (7h)  5m ago     7h   363M     2048M    18.2.0-192.el9cp  6e4e34f038b9  f5314c164e89
mon.osd-3                   osd-3                    running (7h)  5m ago     7h   361M     2048M    18.2.0-192.el9cp  6e4e34f038b9  3522f972ed7d
node-exporter.osd-0         osd-0  *:9100            running (7h)  5m ago     7h   25.1M    -        1.4.0             508050f8c316  43845647bc06
node-exporter.osd-1         osd-1  *:9100            running (7h)  5m ago     7h   21.4M    -        1.4.0             508050f8c316  e84c3e2206c9
node-exporter.osd-2         osd-2  *:9100            running (7h)  5m ago     7h   25.4M    -        1.4.0             508050f8c316  071580052c80
node-exporter.osd-3         osd-3  *:9100            running (7h)  5m ago     7h   21.8M    -        1.4.0             508050f8c316  317205f34647
osd.0                       osd-2                    running (7h)  5m ago     7h   525M     4096M    18.2.0-192.el9cp  6e4e34f038b9  5247dd9d7ac3
osd.1                       osd-0                    running (7h)  5m ago     7h   652M     4096M    18.2.0-192.el9cp  6e4e34f038b9  17c66fee9f13
osd.2                       osd-3                    running (7h)  5m ago     7h   801M     1435M    18.2.0-192.el9cp  6e4e34f038b9  39b272b56fbe
osd.3                       osd-1                    running (7h)  5m ago     7h   538M     923M     18.2.0-192.el9cp  6e4e34f038b9  f595858a1ca3
osd.4                       osd-0                    running (7h)  5m ago     7h   532M     4096M    18.2.0-192.el9cp  6e4e34f038b9  c4f57cc9eda6
osd.5                       osd-2                    running (7h)  5m ago     7h   761M     4096M    18.2.0-192.el9cp  6e4e34f038b9  d80ba180c940
osd.6                       osd-3                    running (7h)  5m ago     7h   415M     1435M    18.2.0-192.el9cp  6e4e34f038b9  9ec319187e25
osd.7                       osd-1                    running (7h)  5m ago     7h   427M     923M     18.2.0-192.el9cp  6e4e34f038b9  816731470d87
prometheus.osd-0            osd-0  *:9095            running (7h)  5m ago     7h   84.0M    -        2.39.1            716dd9df3cf3  29db12cb1a5a
rgw.rgw.ssl.osd-1.smzpfj    osd-1  *:80              running (7h)  5m ago     7h   110M     -        18.2.0-192.el9cp  6e4e34f038b9  57faaff4e425
Wait for all the daemons to restart.
2.3. Verifying your OpenShift Data Foundation installation for external Ceph storage system
Use this section to verify that OpenShift Data Foundation is deployed correctly.
2.3.1. Verifying the state of the pods
- Click Workloads → Pods from the left pane of the OpenShift Web Console.
- Select openshift-storage from the Project drop-down list.
Note: If the Show default projects option is disabled, use the toggle button to list all the default projects.
For more information on the expected number of pods for each component and how it varies depending on the number of nodes, see Table 2.1, “Pods corresponding to OpenShift Data Foundation components”
Verify that the following pods are in running state:
Table 2.1. Pods corresponding to OpenShift Data Foundation components

OpenShift Data Foundation Operator
- ocs-operator-* (1 pod on any worker node)
- ocs-metrics-exporter-* (1 pod on any worker node)
- odf-operator-controller-manager-* (1 pod on any worker node)
- odf-console-* (1 pod on any worker node)
- csi-addons-controller-manager-* (1 pod on any worker node)

Rook-ceph Operator
- rook-ceph-operator-* (1 pod on any worker node)

Multicloud Object Gateway
- noobaa-operator-* (1 pod on any worker node)
- noobaa-core-* (1 pod on any worker node)
- noobaa-db-pg-* (1 pod on any worker node)
- noobaa-endpoint-* (1 pod on any worker node)

CSI
cephfs
- csi-cephfsplugin-* (1 pod on each worker node)
- csi-cephfsplugin-provisioner-* (2 pods distributed across worker nodes)
Note: If an MDS is not deployed in the external cluster, the csi-cephfsplugin pods will not be created.
rbd
- csi-rbdplugin-* (1 pod on each worker node)
- csi-rbdplugin-provisioner-* (2 pods distributed across worker nodes)
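You can also list the pods from the command line instead of the web console. A minimal sketch; the second command simply filters out pods that are already Running:
$ oc get pods -n openshift-storage -o wide
$ oc get pods -n openshift-storage --field-selector=status.phase!=Running
Once the deployment has settled, the second command should return no pods other than any completed job pods.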
2.3.2. Verifying that the OpenShift Data Foundation cluster is healthy
- In the OpenShift Web Console, click Storage → Data Foundation.
- In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears.
- In the Status card of the Block and File tab, verify that the Storage Cluster has a green tick.
- In the Details card, verify that the cluster information is displayed.
For more information on the health of OpenShift Data Foundation cluster using the Block and File dashboard, see Monitoring OpenShift Data Foundation.
2.3.3. Verifying that the Multicloud Object Gateway is healthy
- In the OpenShift Web Console, click Storage → Data Foundation.
- In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears.
- In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick.
- In the Details card, verify that the Multicloud Object Gateway (MCG) information is displayed.
The RADOS Object Gateway is only listed if RADOS Object Gateway endpoint details were included while deploying OpenShift Data Foundation in external mode.
For more information on the health of OpenShift Data Foundation cluster using the object dashboard, see Monitoring OpenShift Data Foundation.
2.3.4. Verifying that the storage classes are created and listed
- Click Storage → Storage Classes from the left pane of the OpenShift Web Console.
Verify that the following storage classes are created with the OpenShift Data Foundation cluster creation:
- ocs-external-storagecluster-ceph-rbd
- ocs-external-storagecluster-ceph-rgw
- ocs-external-storagecluster-cephfs
- openshift-storage.noobaa.io
- If an MDS is not deployed in the external cluster, the ocs-external-storagecluster-cephfs storage class will not be created.
- If RGW is not deployed in the external cluster, the ocs-external-storagecluster-ceph-rgw storage class will not be created.
For more information regarding MDS and RGW, see the Red Hat Ceph Storage documentation.
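The same check can also be done from the command line. A minimal sketch:
$ oc get storageclass
Look for the ocs-external-storagecluster-* entries and openshift-storage.noobaa.io in the output; the cephfs and ceph-rgw classes are present only when MDS and RGW, respectively, exist in the external cluster.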
2.3.5. Verifying that Ceph cluster is connected
Run the following command to verify that the OpenShift Data Foundation cluster is connected to the external Red Hat Ceph Storage cluster.
$ oc get cephcluster -n openshift-storage
NAME                                      DATADIRHOSTPATH   MONCOUNT   AGE   PHASE       MESSAGE                          HEALTH      EXTERNAL
ocs-external-storagecluster-cephcluster                                30m   Connected   Cluster connected successfully   HEALTH_OK   true
2.3.6. Verifying that storage cluster is ready
Run the following command to verify that the storage cluster is ready and the External option is set to true.
$ oc get storagecluster -n openshift-storage
NAME                          AGE   PHASE   EXTERNAL   CREATED AT             VERSION
ocs-external-storagecluster   30m   Ready   true       2021-11-17T09:09:52Z   4.18.0
2.3.7. Verifying the creation of the Ceph Object Store CRD
Run the following command to verify that the Ceph Object Store CRD is created for the external Red Hat Ceph Storage cluster.
$ oc get cephobjectstore -n openshift-storage
NAME            PHASE   ENDPOINT                SECUREENDPOINT   AGE
object-store1   Ready   <http://IP/FQDN:port>                    15m