Chapter 7. Adding file and object storage to an existing external OpenShift Data Foundation cluster
When OpenShift Data Foundation is configured in external mode, there are several ways to provide storage for persistent volume claims and object bucket claims.
- Persistent volume claims for block storage are provided directly from the external Red Hat Ceph Storage cluster.
- Persistent volume claims for file storage can be provided by adding a Metadata Server (MDS) to the external Red Hat Ceph Storage cluster.
- Object bucket claims for object storage can be provided either by using the Multicloud Object Gateway or by adding the Ceph Object Gateway to the external Red Hat Ceph Storage cluster.
Use the following process to add file storage (using Metadata Servers), object storage (using the Ceph Object Gateway), or both to an external OpenShift Data Foundation cluster that was initially deployed to provide only block storage.
Prerequisites
- OpenShift Data Foundation 4.17 is installed and running on OpenShift Container Platform version 4.17 or above, and the OpenShift Data Foundation cluster in external mode is in the Ready state.
- Your external Red Hat Ceph Storage cluster is configured with one or both of the following (example checks follow this list):
  - a Ceph Object Gateway (RGW) endpoint that can be accessed by the OpenShift Container Platform cluster for object storage
  - a Metadata Server (MDS) pool for file storage
- Ensure that you know the parameters that were used with the ceph-external-cluster-details-exporter.py script during the external OpenShift Data Foundation cluster deployment.
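As an optional sanity check, commands such as the following, run with admin access on the external Red Hat Ceph Storage cluster, can confirm that a CephFS file system (served by MDS daemons) and a Ceph Object Gateway service exist. This is a minimal sketch; the ceph orch commands apply only to cephadm-managed clusters.

# ceph fs ls
# ceph orch ls mds
# ceph orch ls rgw

ceph fs ls lists the CephFS file systems with their metadata and data pools, and ceph orch ls shows whether MDS and RGW services are deployed.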
Procedure
Download the OpenShift Data Foundation version of the ceph-external-cluster-details-exporter.py python script using the following command:

oc get csv $(oc get csv -n openshift-storage | grep ocs-operator | awk '{print $1}') -n openshift-storage -o jsonpath='{.metadata.annotations.external\.features\.ocs\.openshift\.io/export-script}' | base64 --decode > ceph-external-cluster-details-exporter.py
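To confirm that the script downloaded and decoded correctly, you can, for example, list its supported options. This assumes python3 is available on the machine where you ran the oc command:

python3 ceph-external-cluster-details-exporter.py --help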
Update permission caps on the external Red Hat Ceph Storage cluster by running ceph-external-cluster-details-exporter.py on any client node in the external Red Hat Ceph Storage cluster. You may need to ask your Red Hat Ceph Storage administrator to do this. A check that the caps were applied is sketched after the parameter descriptions below.

# python3 ceph-external-cluster-details-exporter.py --upgrade \
    --run-as-user=ocs-client-name \
    --rgw-pool-prefix rgw-pool-prefix
--run-as-user
    The client name used during OpenShift Data Foundation cluster deployment. Use the default client name client.healthchecker if a different client name was not set.
--rgw-pool-prefix
    The prefix used for the Ceph Object Gateway pool. This can be omitted if the default prefix is used.
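To confirm that the capabilities were applied, you can inspect the client on the external cluster, for example as follows. This is a minimal check that assumes the default client name client.healthchecker; substitute the name you passed to --run-as-user if it differs:

# ceph auth get client.healthchecker

The output should list caps similar to those shown later in this procedure.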
Generate and save configuration details from the external Red Hat Ceph Storage cluster.

Generate configuration details by running ceph-external-cluster-details-exporter.py on any client node in the external Red Hat Ceph Storage cluster.

# python3 ceph-external-cluster-details-exporter.py --rbd-data-pool-name rbd-block-pool-name --monitoring-endpoint ceph-mgr-prometheus-exporter-endpoint --monitoring-endpoint-port ceph-mgr-prometheus-exporter-port --run-as-user ocs-client-name --rgw-endpoint rgw-endpoint --rgw-pool-prefix rgw-pool-prefix
--monitoring-endpoint
    Optional. Accepts a comma-separated list of IP addresses of active and standby MGRs reachable from the OpenShift Container Platform cluster. If not provided, the value is automatically populated.
--monitoring-endpoint-port
    Optional. The port associated with the ceph-mgr Prometheus exporter specified by --monitoring-endpoint. If not provided, the value is automatically populated.
--run-as-user
    The client name used during OpenShift Data Foundation cluster deployment. Use the default client name client.healthchecker if a different client name was not set.
--rgw-endpoint
    Optional. Provide this parameter to provision object storage through the Ceph Object Gateway for OpenShift Data Foundation.
--rgw-pool-prefix
    The prefix used for the Ceph Object Gateway pool. This can be omitted if the default prefix is used.
User permissions are updated as shown:

caps: [mgr] allow command config
caps: [mon] allow r, allow command quorum_status, allow command version
caps: [osd] allow rwx pool=default.rgw.meta, allow r pool=.rgw.root, allow rw pool=default.rgw.control, allow rx pool=default.rgw.log, allow x pool=default.rgw.buckets.index
Note: Ensure that all of the parameters (including the optional arguments), except the Ceph Object Gateway details (if provided), are the same as those used during the deployment of OpenShift Data Foundation in external mode.
Save the output of the script in an external-cluster-config.json file. The following example output shows the generated configuration details; a quick validation check is sketched after the example.
[{"name": "rook-ceph-mon-endpoints", "kind": "ConfigMap", "data": {"data": "xxx.xxx.xxx.xxx:xxxx", "maxMonId": "0", "mapping": "{}"}}, {"name": "rook-ceph-mon", "kind": "Secret", "data": {"admin-secret": "admin-secret", "fsid": "<fs-id>", "mon-secret": "mon-secret"}}, {"name": "rook-ceph-operator-creds", "kind": "Secret", "data": {"userID": "<user-id>", "userKey": "<user-key>"}}, {"name": "rook-csi-rbd-node", "kind": "Secret", "data": {"userID": "csi-rbd-node", "userKey": "<user-key>"}}, {"name": "ceph-rbd", "kind": "StorageClass", "data": {"pool": "<pool>"}}, {"name": "monitoring-endpoint", "kind": "CephCluster", "data": {"MonitoringEndpoint": "xxx.xxx.xxx.xxx", "MonitoringPort": "xxxx"}}, {"name": "rook-ceph-dashboard-link", "kind": "Secret", "data": {"userID": "ceph-dashboard-link", "userKey": "<user-key>"}}, {"name": "rook-csi-rbd-provisioner", "kind": "Secret", "data": {"userID": "csi-rbd-provisioner", "userKey": "<user-key>"}}, {"name": "rook-csi-cephfs-provisioner", "kind": "Secret", "data": {"adminID": "csi-cephfs-provisioner", "adminKey": "<admin-key>"}}, {"name": "rook-csi-cephfs-node", "kind": "Secret", "data": {"adminID": "csi-cephfs-node", "adminKey": "<admin-key>"}}, {"name": "cephfs", "kind": "StorageClass", "data": {"fsName": "cephfs", "pool": "cephfs_data"}}, {"name": "ceph-rgw", "kind": "StorageClass", "data": {"endpoint": "xxx.xxx.xxx.xxx:xxxx", "poolPrefix": "default"}}, {"name": "rgw-admin-ops-user", "kind": "Secret", "data": {"accessKey": "<access-key>", "secretKey": "<secret-key>"}}]
Upload the generated JSON file.
- Log in to the OpenShift web console.
- Click Workloads → Secrets.
- Set project to openshift-storage.
- Click rook-ceph-external-cluster-details.
- Click Actions (⋮) → Edit Secret.
- Click Browse and upload the external-cluster-config.json file.
- Click Save. (A command-line alternative to these console steps is sketched below.)
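If you prefer the command line over the web console, the same secret can be updated with oc. This is a hedged sketch: it assumes the data key in the rook-ceph-external-cluster-details secret is named external_cluster_details, so check the existing key name first and adjust the command if it differs:

oc describe secret rook-ceph-external-cluster-details -n openshift-storage
oc set data secret/rook-ceph-external-cluster-details -n openshift-storage --from-file=external_cluster_details=external-cluster-config.json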
Verification steps
- To verify that the OpenShift Data Foundation cluster is healthy and data is resilient, navigate to Storage → Data Foundation → Storage Systems tab and then click the storage system name.
- On the Overview → Block and File tab, check the Status card to confirm that the Storage Cluster has a green tick indicating that it is healthy.
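From the command line, you can also check the health reported by the storage resources. This is a minimal sketch that assumes the default openshift-storage namespace and default resource names for an external-mode deployment:

oc get storagecluster -n openshift-storage
oc get cephcluster -n openshift-storage

Both commands report a phase or health column that should show the cluster as ready and healthy.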
If you added a Metadata Server for file storage (equivalent oc checks follow these steps):
- Click Workloads → Pods and verify that the csi-cephfsplugin-* pods are newly created and are in the Running state.
- Click Storage → Storage Classes and verify that the ocs-external-storagecluster-cephfs storage class is created.
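For example, the same checks can be run with oc. This is a minimal sketch that assumes the default storage class name from this procedure:

oc get pods -n openshift-storage | grep csi-cephfsplugin
oc get storageclass ocs-external-storagecluster-cephfs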
If you added the Ceph Object Gateway for object storage (an equivalent oc check follows these steps):
- Click Storage → Storage Classes and verify that the ocs-external-storagecluster-ceph-rgw storage class is created.
- To verify that the OpenShift Data Foundation cluster is healthy and data is resilient, navigate to Storage → Data Foundation → Storage Systems tab and then click the storage system name. Click the Object tab and confirm that Object Service and Data resiliency have a green tick indicating that they are healthy.
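For example, the storage class can also be verified from the command line. This is a minimal sketch using the default name from this procedure:

oc get storageclass ocs-external-storagecluster-ceph-rgw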