Chapter 6. Updating the OpenShift Data Foundation external secret
Update the OpenShift Data Foundation external secret after updating to the latest version of OpenShift Data Foundation.
Updating the external secret is not required for batch updates, for example, when updating from OpenShift Data Foundation 4.17.x to 4.17.y.
Prerequisites
- Update the OpenShift Container Platform cluster to the latest stable release of 4.17.z. For instructions, see Updating Clusters.
- Ensure that the OpenShift Data Foundation cluster is healthy and the data is resilient. Navigate to Storage → Data Foundation → Storage Systems tab and then click the storage system name.
- On the Overview → Block and File tab, check the Status card and confirm that the Storage Cluster has a green tick, indicating it is healthy.
- Click the Object tab and confirm that Object Service and Data resiliency have a green tick, indicating they are healthy. The RADOS Object Gateway is listed only if RADOS Object Gateway endpoint details were included while deploying OpenShift Data Foundation in external mode.
- Red Hat Ceph Storage must have a Ceph dashboard installed and configured.
Procedure
Download the OpenShift Data Foundation version of the ceph-external-cluster-details-exporter.py python script using one of the following methods, either CSV or ConfigMap.

Important: Downloading the ceph-external-cluster-details-exporter.py python script using CSV is no longer supported from OpenShift Data Foundation 4.19 onward. Using the ConfigMap is the only supported method.

CSV

# oc get csv $(oc get csv -n openshift-storage | grep rook-ceph-operator | awk '{print $1}') -n openshift-storage -o jsonpath='{.metadata.annotations.externalClusterScript}' | base64 --decode > ceph-external-cluster-details-exporter.py

ConfigMap

# oc get cm rook-ceph-external-cluster-script-config -n openshift-storage -o jsonpath='{.data.script}' | base64 --decode > ceph-external-cluster-details-exporter.py

Update permission caps on the external Red Hat Ceph Storage cluster by running ceph-external-cluster-details-exporter.py on any client node in the external Red Hat Ceph Storage cluster. You may need to ask your Red Hat Ceph Storage administrator to do this.

# python3 ceph-external-cluster-details-exporter.py --upgrade

The updated permissions for the user are displayed in the command output.

Run the previously downloaded python script using one of the following options, based on the method you used during deployment: a configuration file or command-line flags.
Configuration file

Create a config.ini file that includes all of the parameters used during initial deployment. Run the following command to get the configmap output that contains those parameters:

$ oc get configmap --namespace openshift-storage external-cluster-user-command --output jsonpath='{.data.args}'

Add the parameters from the previous output to the config.ini file. You can add parameters to the config.ini file in addition to those used during deployment. See Table 6.1, “Mandatory and optional parameters used during upgrade” for descriptions of the parameters.

Example config.ini file:

[Configurations]
format = bash
cephfs-filesystem-name = <filesystem-name>
rbd-data-pool-name = <pool_name>
...

Run the python script:

# python3 ceph-external-cluster-details-exporter.py --config-file <config-file>

Replace <config-file> with the path to the config.ini file.

Command-line flags
Run the previously downloaded python script and pass the parameters for your deployment. Make sure to use all the flags that you used in the original deployment, including any optional arguments. You can also add flags in addition to those used during deployment. See Table 6.1, “Mandatory and optional parameters used during upgrade” for descriptions of the parameters.

# python3 ceph-external-cluster-details-exporter.py --rbd-data-pool-name <rbd_block_pool_name> --monitoring-endpoint <ceph_mgr_prometheus_exporter_endpoint> --monitoring-endpoint-port <ceph_mgr_prometheus_exporter_port> --rgw-endpoint <rgw_endpoint> --run-as-user <ocs_client_name> [optional arguments]

Table 6.1. Mandatory and optional parameters used during upgrade

rbd-data-pool-name
(Mandatory) Used for providing block storage in OpenShift Data Foundation.

rgw-endpoint
(Optional) Provide this parameter if object storage is to be provisioned through Ceph RADOS Gateway for OpenShift Data Foundation. Provide the endpoint in the following format: <ip_address>:<port>.

monitoring-endpoint
(Optional) Accepts a comma-separated list of IP addresses of active and standby mgrs reachable from the OpenShift Container Platform cluster. If not provided, the value is automatically populated.

monitoring-endpoint-port
(Optional) The port associated with the ceph-mgr Prometheus exporter specified by --monitoring-endpoint. If not provided, the value is automatically populated.

run-as-user
(Mandatory) The client name used during OpenShift Data Foundation cluster deployment. Use the default client name client.healthchecker if a different client name was not set.

rgw-pool-prefix
(Optional) The prefix of the RGW pools. If not specified, the default prefix is default.

rgw-tls-cert-path
(Optional) The file path of the RADOS Gateway endpoint TLS certificate.

rgw-skip-tls
(Optional) Ignores the TLS certificate validation when a self-signed certificate is provided (NOT RECOMMENDED).

ceph-conf
(Optional) The name of the Ceph configuration file.

cluster-name
(Optional) The Ceph cluster name.

output
(Optional) The file where the output is to be stored.

cephfs-metadata-pool-name
(Optional) The name of the CephFS metadata pool.

cephfs-data-pool-name
(Optional) The name of the CephFS data pool.

cephfs-filesystem-name
(Optional) The name of the CephFS filesystem.

rbd-metadata-ec-pool-name
(Optional) The name of the erasure coded RBD metadata pool.

dry-run
(Optional) Prints the commands that would be executed without running them.
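The configuration-file and command-line methods carry the same parameters, just in different forms. As an illustration only (this helper is not part of the shipped script, and the sample values such as replicapool and myfs are placeholders), a minimal sketch of how keys in a config.ini map to the flags listed in Table 6.1:

```python
import configparser

# Sample config.ini content; keys follow Table 6.1, values are placeholders.
INI_TEXT = """
[Configurations]
format = bash
cephfs-filesystem-name = myfs
rbd-data-pool-name = replicapool
run-as-user = client.healthchecker
"""

def ini_to_flags(ini_text: str) -> list[str]:
    """Turn [Configurations] keys into the equivalent --key value pairs."""
    parser = configparser.ConfigParser()
    parser.read_string(ini_text)
    flags = []
    for key, value in parser["Configurations"].items():
        flags.extend([f"--{key}", value])
    return flags

if __name__ == "__main__":
    argv = ["python3", "ceph-external-cluster-details-exporter.py"] + ini_to_flags(INI_TEXT)
    print(" ".join(argv))
```

Whichever form you use, the same set of mandatory and optional parameters from Table 6.1 applies.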
Save the JSON output generated after running the script in the previous step.

Example output:

[{"name": "rook-ceph-mon-endpoints", "kind": "ConfigMap", "data": {"data": "xxx.xxx.xxx.xxx:xxxx", "maxMonId": "0", "mapping": "{}"}}, {"name": "rook-ceph-mon", "kind": "Secret", "data": {"admin-secret": "admin-secret", "fsid": "<fs-id>", "mon-secret": "mon-secret"}}, {"name": "rook-ceph-operator-creds", "kind": "Secret", "data": {"userID": "<user-id>", "userKey": "<user-key>"}}, {"name": "rook-csi-rbd-node", "kind": "Secret", "data": {"userID": "csi-rbd-node", "userKey": "<user-key>"}}, {"name": "ceph-rbd", "kind": "StorageClass", "data": {"pool": "<pool>"}}, {"name": "monitoring-endpoint", "kind": "CephCluster", "data": {"MonitoringEndpoint": "xxx.xxx.xxx.xxxx", "MonitoringPort": "xxxx"}}, {"name": "rook-ceph-dashboard-link", "kind": "Secret", "data": {"userID": "ceph-dashboard-link", "userKey": "<user-key>"}}, {"name": "rook-csi-rbd-provisioner", "kind": "Secret", "data": {"userID": "csi-rbd-provisioner", "userKey": "<user-key>"}}, {"name": "rook-csi-cephfs-provisioner", "kind": "Secret", "data": {"adminID": "csi-cephfs-provisioner", "adminKey": "<admin-key>"}}, {"name": "rook-csi-cephfs-node", "kind": "Secret", "data": {"adminID": "csi-cephfs-node", "adminKey": "<admin-key>"}}, {"name": "cephfs", "kind": "StorageClass", "data": {"fsName": "cephfs", "pool": "cephfs_data"}}, {"name": "ceph-rgw", "kind": "StorageClass", "data": {"endpoint": "xxx.xxx.xxx.xxxx", "poolPrefix": "default"}}, {"name": "rgw-admin-ops-user", "kind": "Secret", "data": {"accessKey": "<access-key>", "secretKey": "<secret-key>"}}]

Upload the generated JSON file.
- Log in to the OpenShift Web Console.
- Click Workloads → Secrets.
- Set project to openshift-storage.
- Click rook-ceph-external-cluster-details.
- Click Actions (⋮) → Edit Secret.
- Click Browse and upload the JSON file.
- Click Save.
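The JSON output saved earlier has a regular shape: a list of entries, each with name, kind, and data fields. Before uploading, you can sanity-check the saved file with a short script. This checker is a minimal sketch, not part of the product; the filename cluster-details.json and the sample entry are assumptions for illustration:

```python
import json

def check_external_cluster_json(text: str) -> list[str]:
    """Return the `kind` of each entry, raising ValueError if the shape is unexpected."""
    entries = json.loads(text)
    if not isinstance(entries, list):
        raise ValueError("expected a JSON list of entries")
    kinds = []
    for entry in entries:
        for field in ("name", "kind", "data"):
            if field not in entry:
                raise ValueError(f"entry missing required field: {field}")
        kinds.append(entry["kind"])
    return kinds

if __name__ == "__main__":
    # In practice, read the file the script wrote, for example:
    #   text = open("cluster-details.json").read()
    sample = '[{"name": "rook-ceph-mon-endpoints", "kind": "ConfigMap", "data": {"data": "10.0.0.1:6789"}}]'
    print(check_external_cluster_json(sample))
```

A file that fails this check was likely truncated when saving and should be regenerated before uploading it to the secret.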
Verification steps
- To verify that the OpenShift Data Foundation cluster is healthy and data is resilient, navigate to Storage → Data Foundation → Storage Systems tab and then click the storage system name.
- On the Overview → Block and File tab, check the Details card to verify that the RHCS dashboard link is available, and also check the Status card to confirm that the Storage Cluster has a green tick, indicating it is healthy.
- Click the Object tab and confirm that Object Service and Data resiliency have a green tick, indicating they are healthy. The RADOS Object Gateway is listed only if RADOS Object Gateway endpoint details were included while deploying OpenShift Data Foundation in external mode.
- If verification steps fail, contact Red Hat Support.