Chapter 4. Deploying OpenShift Data Foundation external storage cluster
Use this procedure to deploy an external storage cluster to add additional storage or expand your current internal storage cluster.
Prerequisites
- An OpenShift Data Foundation cluster deployed in internal mode.
- Ensure that both the OpenShift Container Platform and OpenShift Data Foundation are upgraded to version 4.15.
Procedure
- In the OpenShift Web Console, navigate to Storage → Data Foundation → Storage Systems tab.
- Click Create StorageSystem.
In the Backing storage page, Connect an external storage platform is selected by default.
- Choose Red Hat Ceph Storage as the Storage platform from the available options.
- Click Next.
- In the Security and Network page:
- Optional: To select encryption, select the Enable encryption checkbox.
- In the Connection section, click the Download Script link to download the Python script for extracting the Ceph cluster details.
To extract the Red Hat Ceph Storage (RHCS) cluster details, run the downloaded Python script on a Red Hat Ceph Storage node that has the admin key.
Run the following command on the RHCS node to view the list of available arguments:
# python3 ceph-external-cluster-details-exporter.py --help
You can also run the script from inside a MON container (containerized deployment) or from a MON node (RPM deployment).
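For example, if you downloaded the script to your workstation, you can copy it to an RHCS node and confirm that it runs there. The hostname and destination path below are illustrative placeholders:
# scp ceph-external-cluster-details-exporter.py root@rhcs-node-1:/tmp/
# ssh root@rhcs-node-1 "python3 /tmp/ceph-external-cluster-details-exporter.py --help"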
Note: Use the yum install cephadm command and then the cephadm command to deploy your RHCS cluster using containers. You must pull the RHCS container images using the cephadm command, rather than using yum to install the Ceph packages onto nodes. For more information, see the RHCS product documentation.
To retrieve the external cluster details from the RHCS cluster, run the following command:
# python3 ceph-external-cluster-details-exporter.py \
  --rbd-data-pool-name <rbd block pool name> [optional arguments]
For example:
# python3 ceph-external-cluster-details-exporter.py --rbd-data-pool-name ceph-rbd --monitoring-endpoint xxx.xxx.xxx.xxx --monitoring-endpoint-port xxxx --rgw-endpoint xxx.xxx.xxx.xxx:xxxx --run-as-user client.ocs
In this example:

rbd-data-pool-name
A mandatory parameter that is used for providing block storage in OpenShift Data Foundation.

rgw-endpoint
(Optional) This parameter is required only if the object storage is to be provisioned through Ceph RADOS Gateway for OpenShift Data Foundation. Provide the endpoint in the following format: <ip_address>:<port>
Note: A fully qualified domain name (FQDN) is also supported, in the format <FQDN>:<PORT>.

monitoring-endpoint
(Optional) This parameter accepts a comma-separated list of IP addresses of active and standby mgrs reachable from the OpenShift Container Platform cluster. If not provided, the value is automatically populated.

monitoring-endpoint-port
(Optional) The port associated with the ceph-mgr Prometheus exporter specified by --monitoring-endpoint. If not provided, the value is automatically populated.

run-as-user
(Optional) This parameter provides a name for the Ceph user that is created by the script. If this parameter is not specified, a default user name, client.healthchecker, is created. The permissions for the new user are set as:
- caps: [mgr] allow command config
- caps: [mon] allow r, allow command quorum_status, allow command version
- caps: [osd] allow rwx pool=RGW_POOL_PREFIX.rgw.meta, allow r pool=.rgw.root, allow rw pool=RGW_POOL_PREFIX.rgw.control, allow rx pool=RGW_POOL_PREFIX.rgw.log, allow x pool=RGW_POOL_PREFIX.rgw.buckets.index
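After the script runs, you can optionally confirm the user and capabilities it created by querying the RHCS cluster directly. For example, for the client.ocs user created with --run-as-user in the earlier example:
# ceph auth get client.ocs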
Additional flags:

rgw-pool-prefix
(Optional) The prefix of the RGW pools. If not specified, the default prefix is default.

rgw-tls-cert-path
(Optional) The file path of the RADOS Gateway endpoint TLS certificate.

rgw-skip-tls
(Optional) Ignores TLS certificate validation when a self-signed certificate is provided (NOT RECOMMENDED).

ceph-conf
(Optional) The name of the Ceph configuration file.

cluster-name
(Optional) The Ceph cluster name.

output
(Optional) The file where the output is to be stored.

cephfs-metadata-pool-name
(Optional) The name of the CephFS metadata pool.

cephfs-data-pool-name
(Optional) The name of the CephFS data pool.

cephfs-filesystem-name
(Optional) The name of the CephFS filesystem.

rbd-metadata-ec-pool-name
(Optional) The name of the erasure coded RBD metadata pool.

dry-run
(Optional) Prints the commands that the script would execute, without running them.

restricted-auth-permission
(Optional) Restricts cephCSIKeyrings auth permissions to specific pools and clusters. The mandatory flags that must be set with this flag are rbd-data-pool-name and cluster-name. You can also pass the cephfs-filesystem-name flag if there is a CephFS user restriction, so that the permission is restricted to a particular CephFS filesystem.
Note: This parameter must be applied only for new deployments. To restrict csi-users per pool and per cluster, you need to create new csi-users and new secrets for those csi-users.

Example with restricted auth permission:
# python3 /etc/ceph/create-external-cluster-resources.py --cephfs-filesystem-name myfs --rbd-data-pool-name replicapool --cluster-name rookStorage --restricted-auth-permission true
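If you want to inspect the restricted users that the script creates, you can list the authentication entries on the RHCS cluster and filter for the CSI users; the exact user names depend on the cluster and pool names that you passed:
# ceph auth ls | grep csi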
Example of JSON output generated using the Python script:
[{"name": "rook-ceph-mon-endpoints", "kind": "ConfigMap", "data": {"data": "xxx.xxx.xxx.xxx:xxxx", "maxMonId": "0", "mapping": "{}"}}, {"name": "rook-ceph-mon", "kind": "Secret", "data": {"admin-secret": "admin-secret", "fsid": "<fs-id>", "mon-secret": "mon-secret"}}, {"name": "rook-ceph-operator-creds", "kind": "Secret", "data": {"userID": "<user-id>", "userKey": "<user-key>"}}, {"name": "rook-csi-rbd-node", "kind": "Secret", "data": {"userID": "csi-rbd-node", "userKey": "<user-key>"}}, {"name": "ceph-rbd", "kind": "StorageClass", "data": {"pool": "<pool>"}}, {"name": "monitoring-endpoint", "kind": "CephCluster", "data": {"MonitoringEndpoint": "xxx.xxx.xxx.xxx", "MonitoringPort": "xxxx"}}, {"name": "rook-ceph-dashboard-link", "kind": "Secret", "data": {"userID": "ceph-dashboard-link", "userKey": "<user-key>"}}, {"name": "rook-csi-rbd-provisioner", "kind": "Secret", "data": {"userID": "csi-rbd-provisioner", "userKey": "<user-key>"}}, {"name": "rook-csi-cephfs-provisioner", "kind": "Secret", "data": {"adminID": "csi-cephfs-provisioner", "adminKey": "<admin-key>"}}, {"name": "rook-csi-cephfs-node", "kind": "Secret", "data": {"adminID": "csi-cephfs-node", "adminKey": "<admin-key>"}}, {"name": "cephfs", "kind": "StorageClass", "data": {"fsName": "cephfs", "pool": "cephfs_data"}}, {"name": "ceph-rgw", "kind": "StorageClass", "data": {"endpoint": "xxx.xxx.xxx.xxx:xxxx", "poolPrefix": "default"}}, {"name": "rgw-admin-ops-user", "kind": "Secret", "data": {"accessKey": "<access-key>", "secretKey": "<secret-key>"}}]
Save the JSON output to a file with a .json extension.
Note: For OpenShift Data Foundation to work seamlessly, ensure that the parameters (RGW endpoint, CephFS details, RBD pool, and so on) that are uploaded using the JSON file remain unchanged on the RHCS external cluster after the storage cluster creation.
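For example, assuming the script writes its JSON output to standard output by default, you can redirect it to a file and verify that the file parses as valid JSON. The file name here is illustrative:
# python3 ceph-external-cluster-details-exporter.py --rbd-data-pool-name ceph-rbd > ocs-external-cluster.json
# python3 -m json.tool ocs-external-cluster.json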
Run the following command only when there is a multi-tenant deployment in which the RHCS cluster is already connected to an OpenShift Data Foundation deployment with a lower version:
# python3 ceph-external-cluster-details-exporter.py --upgrade
Click Browse to select and upload the JSON file.
The content of the JSON file is populated and displayed in the text box.
- Click the Next button, which is enabled after you upload the .json file.
In the Review and create page, review the configuration details.
To modify any configuration settings, click Back to return to the previous configuration page.
- Click Create StorageSystem.
Verification steps
- Navigate to Storage → Data Foundation → Storage Systems tab and verify that you can view all the storage clusters.
- Verify that all components for the external OpenShift Data Foundation are successfully installed. See Verifying external OpenShift Data Foundation storage deployment for instructions.
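In addition to the web console checks, you can run a quick sanity check from the command line. This sketch assumes the external cluster resources are in the default openshift-storage namespace:
# oc get storagecluster -n openshift-storage
# oc get pods -n openshift-storage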