Chapter 4. Creating an OpenShift Container Storage Cluster service for external mode
You need to create a new OpenShift Container Storage cluster service after you install the OpenShift Container Storage operator on OpenShift Container Platform deployed on VMware vSphere or user-provisioned bare metal infrastructure.
Prerequisites
- Starting with OpenShift Container Storage 4.7, the OpenShift Container Platform version must be 4.7 or above.
- OpenShift Container Storage operator must be installed. For more information, see Installing OpenShift Container Storage Operator using the Operator Hub.
- Red Hat Ceph Storage version 4.2z1 or later is required for the external cluster. For more information, see this knowledge base article on Red Hat Ceph Storage releases and corresponding Ceph package versions.
- If you have updated the Red Hat Ceph Storage cluster from a version lower than 4.1.1 to the latest release, and it is not a freshly deployed cluster, you must manually set the application type for the CephFS pool on the Red Hat Ceph Storage cluster to enable CephFS PVC creation in external mode. For more details, see Troubleshooting CephFS PVC creation in external mode; the relevant commands are also sketched after this list.
- Red Hat Ceph Storage must have Ceph Dashboard installed and configured. For more information, see Ceph Dashboard installation and access.
- Red Hat recommends that the external Red Hat Ceph Storage cluster has the PG Autoscaler enabled. For more information, see The placement group autoscaler section in the Red Hat Ceph Storage documentation.
- The external Ceph cluster should have an existing RBD pool pre-configured for use. If it does not exist, contact your Red Hat Ceph Storage administrator to create one before you proceed with the OpenShift Container Storage deployment. Red Hat recommends using a separate pool for each OpenShift Container Storage cluster. Example commands for this are sketched below.
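For illustration only, the following commands show how an RHCS administrator might address the last two prerequisites. The pool name ocs-external-pool and the placement group count of 32 are assumptions for this sketch, not values mandated by this guide:

# ceph osd pool create ocs-external-pool 32
# ceph osd pool application enable ocs-external-pool rbd
# rbd pool init ocs-external-pool

For a cluster updated from a release older than 4.1.1, the CephFS pool application type can be set manually with commands of the following form, where the pool names are placeholders for your CephFS metadata and data pools:

# ceph osd pool application set <cephfs metadata pool name> cephfs metadata cephfs
# ceph osd pool application set <cephfs data pool name> cephfs data cephfs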
Procedure
- Click Operators → Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage.
- Click OpenShift Container Storage → Create Instance link of Storage Cluster. Select Mode as External. By default, Internal is selected as the deployment mode. A sketch of the underlying resource follows the figure below.
Figure 4.1. Connect to external cluster section on Create Storage Cluster form
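For reference, selecting External mode results in a StorageCluster resource along the following lines. This is a sketch based on the ocs.openshift.io/v1 API, using the console's default name; the console creates this resource for you together with the uploaded cluster metadata, so you do not normally author this YAML by hand:

apiVersion: ocs.openshift.io/v1
kind: StorageCluster
metadata:
  name: ocs-external-storagecluster
  namespace: openshift-storage
spec:
  externalStorage:
    enable: true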
- In the Connect to external cluster section, click the Download Script link to download the Python script for extracting Ceph cluster details.
To extract the Red Hat Ceph Storage (RHCS) cluster details, contact the RHCS administrator to run the downloaded Python script on a Red Hat Ceph Storage node with the admin key. Run the following command on the RHCS node to view the list of available arguments:
# python3 ceph-external-cluster-details-exporter.py --help
Important: Use python instead of python3 if the Red Hat Ceph Storage 4.x cluster is deployed on Red Hat Enterprise Linux 7.x (RHEL 7.x).

Note: You can also run the script from inside a MON container (containerized deployment) or from a MON node (RPM deployment).
To retrieve the external cluster details from the RHCS cluster, run the following command:
# python3 ceph-external-cluster-details-exporter.py \
  --rbd-data-pool-name <rbd block pool name> [optional arguments]
For example:
# python3 ceph-external-cluster-details-exporter.py --rbd-data-pool-name ceph-rbd --monitoring-endpoint xxx.xxx.xxx.xxx --monitoring-endpoint-port xxxx --rgw-endpoint xxx.xxx.xxx.xxx:xxxx --run-as-user client.ocs
In the above example:

- --rbd-data-pool-name is a mandatory parameter used for providing block storage in OpenShift Container Storage.
- --rgw-endpoint is optional. Provide this parameter if object storage is to be provisioned through Ceph Rados Gateway for OpenShift Container Storage. Provide the endpoint in the following format: <ip_address>:<port>
- --monitoring-endpoint is optional. It is the IP address of the active ceph-mgr reachable from the OpenShift Container Platform cluster. If not provided, the value is automatically populated. One way to discover this value is sketched after this list.
- --monitoring-endpoint-port is optional. It is the port associated with the ceph-mgr Prometheus exporter specified by --monitoring-endpoint. If not provided, the value is automatically populated.
- --run-as-user is an optional parameter used for providing a name for the Ceph user which is created by the script. If this parameter is not specified, a default user name client.healthchecker is created. The permissions for the new user are set as:
  - caps: [mgr] allow command config
  - caps: [mon] allow r, allow command quorum_status, allow command version
  - caps: [osd] allow rwx pool=RGW_POOL_PREFIX.rgw.meta, allow r pool=.rgw.root, allow rw pool=RGW_POOL_PREFIX.rgw.control, allow rx pool=RGW_POOL_PREFIX.rgw.log, allow x pool=RGW_POOL_PREFIX.rgw.buckets.index
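If you need to discover the monitoring values manually, one option is to query the manager services on the RHCS cluster. This is a sketch, not part of the official procedure; the prometheus entry is present only when the Prometheus exporter module is enabled:

# ceph mgr services

The command prints a JSON map of enabled ceph-mgr modules to their URLs. The host and port of the prometheus entry correspond to --monitoring-endpoint and --monitoring-endpoint-port (9283 is the default exporter port).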
Example of JSON output generated using the Python script:
[{"name": "rook-ceph-mon-endpoints", "kind": "ConfigMap", "data": {"data": "xxx.xxx.xxx.xxx:xxxx", "maxMonId": "0", "mapping": "{}"}}, {"name": "rook-ceph-mon", "kind": "Secret", "data": {"admin-secret": "admin-secret", "fsid": "<fs-id>", "mon-secret": "mon-secret"}}, {"name": "rook-ceph-operator-creds", "kind": "Secret", "data": {"userID": "client.healthchecker", "userKey": "<user-key>"}}, {"name": "rook-csi-rbd-node", "kind": "Secret", "data": {"userID": "csi-rbd-node", "userKey": "<user-key>"}}, {"name": "ceph-rbd", "kind": "StorageClass", "data": {"pool": "ceph-rbd"}}, {"name": "monitoring-endpoint", "kind": "CephCluster", "data": {"MonitoringEndpoint": "xxx.xxx.xxx.xxx", "MonitoringPort": "xxxx"}}, {"name": "rook-csi-rbd-provisioner", "kind": "Secret", "data": {"userID": "csi-rbd-provisioner", "userKey": "<user-key>"}}, {"name": "rook-csi-cephfs-provisioner", "kind": "Secret", "data": {"adminID": "csi-cephfs-provisioner", "adminKey": "<admin-key>"}}, {"name": "rook-csi-cephfs-node", "kind": "Secret", "data": {"adminID": "csi-cephfs-node", "adminKey": "<admin-key>"}}, {"name": "cephfs", "kind": "StorageClass", "data": {"fsName": "cephfs", "pool": "cephfs_data"}}, {"name": "ceph-rgw", "kind": "StorageClass", "data": {"endpoint": "xxx.xxx.xxx.xxx:xxxx", "poolPrefix": "default"}}]
- Save the JSON output to a file with a .json extension, as sketched below.

Note: For OpenShift Container Storage to work seamlessly, ensure that the parameters (RGW endpoint, CephFS details, RBD pool, and so on) uploaded using the JSON file remain unchanged on the RHCS external cluster after the storage cluster creation.
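As a convenience, you can redirect the script output straight to a file and confirm that it parses as valid JSON using Python's built-in json.tool module. The file name ocs-external.json is an arbitrary choice for this sketch:

# python3 ceph-external-cluster-details-exporter.py --rbd-data-pool-name ceph-rbd > ocs-external.json
# python3 -m json.tool ocs-external.json

If json.tool reports an error, regenerate the file before uploading it in the next step.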
- Click External cluster metadata → Browse to select and upload the JSON file. The content of the JSON file is populated and displayed in the text box.
Figure 4.2. JSON file content
- Click Create. The Create button is enabled only after you upload the .json file.
Verification steps
- Verify that the final Status of the installed storage cluster shows as Phase: Ready with a green tick mark.
  - Click Operators → Installed Operators → Storage Cluster link to view the storage cluster installation status.
  - Alternatively, when you are on the Operator Details tab, you can click the Storage Cluster tab to view the status.
- To verify that OpenShift Container Storage, its pods, and the StorageClass are successfully installed, see Verifying your external mode OpenShift Container Storage installation. A CLI-based check is also sketched below.
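If you prefer the command line over the web console, the same status can be read from the StorageCluster resource directly. A minimal sketch, assuming the oc client is logged in to the OpenShift Container Platform cluster:

$ oc get storagecluster -n openshift-storage
$ oc get storagecluster -n openshift-storage -o jsonpath='{.items[0].status.phase}'

The second command prints Ready once the installation completes, matching the Phase: Ready status shown in the console.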