
Chapter 6. Updating the OpenShift Data Foundation external secret


Update the OpenShift Data Foundation external secret after updating to the latest version of OpenShift Data Foundation.

Note

Updating the external secret is not required for batch updates. For example, when updating from OpenShift Data Foundation 4.17.x to 4.17.y.

Prerequisites

  • Update the OpenShift Container Platform cluster to the latest stable release of 4.17.z. For instructions, see Updating Clusters.
  • Ensure that the OpenShift Data Foundation cluster is healthy and the data is resilient. Navigate to Storage → Data Foundation → Storage Systems tab and then click the storage system name.

    • On the Overview → Block and File tab, check the Status card and confirm that the Storage cluster has a green tick indicating it is healthy.
    • Click the Object tab and confirm that Object Service and Data resiliency have a green tick indicating they are healthy. The RADOS Object Gateway is listed only if RADOS Object Gateway endpoint details were included while deploying OpenShift Data Foundation in external mode.
  • Red Hat Ceph Storage must have a Ceph dashboard installed and configured.

Procedure

  1. Download the OpenShift Data Foundation version of the ceph-external-cluster-details-exporter.py Python script using one of the following methods: CSV or ConfigMap.

    Important

    Downloading the ceph-external-cluster-details-exporter.py Python script using the CSV will no longer be supported from OpenShift Data Foundation 4.19 onward. Using the ConfigMap will be the only supported method.

    CSV

    # oc get csv $(oc get csv -n openshift-storage | grep rook-ceph-operator | awk '{print $1}') -n openshift-storage -o jsonpath='{.metadata.annotations.externalClusterScript}' | base64 --decode > ceph-external-cluster-details-exporter.py

    ConfigMap

    # oc get cm rook-ceph-external-cluster-script-config -n openshift-storage -o jsonpath='{.data.script}' | base64 --decode > ceph-external-cluster-details-exporter.py
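    Whichever method you use, you can optionally confirm that the script decoded cleanly before running it. The following is a minimal sketch, assuming python3 is available on the workstation where you downloaded the file; it only checks that the file is non-empty and compiles as Python source:

```shell
# Optional sanity check: the decoded file should be non-empty, valid Python
# source. An oc or base64 error message captured into the file would fail this.
test -s ceph-external-cluster-details-exporter.py \
  && python3 -m py_compile ceph-external-cluster-details-exporter.py \
  && echo "script decoded OK"
```

    A failed check usually means the oc command wrote an error message into the file instead of the script.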
  2. Update permission caps on the external Red Hat Ceph Storage cluster by running ceph-external-cluster-details-exporter.py on any client node in the external Red Hat Ceph Storage cluster. You may need to ask your Red Hat Ceph Storage administrator to do this.

    # python3 ceph-external-cluster-details-exporter.py --upgrade

    The updated permissions for the users are set as follows:

    client.csi-cephfs-node
    key: AQCYz0piYgu/IRAAipji4C8+Lfymu9vOrox3zQ==
    caps: [mds] allow rw
    caps: [mgr] allow rw
    caps: [mon] allow r, allow command 'osd blocklist'
    caps: [osd] allow rw tag cephfs *=*
    client.csi-cephfs-provisioner
    key: AQCYz0piDUMSIxAARuGUyhLXFO9u4zQeRG65pQ==
    caps: [mgr] allow rw
    caps: [mon] allow r, allow command 'osd blocklist'
    caps: [osd] allow rw tag cephfs metadata=*
    client.csi-rbd-node
    key: AQCYz0pi88IKHhAAvzRN4fD90nkb082ldrTaHA==
    caps: [mon] profile rbd, allow command 'osd blocklist'
    caps: [osd] profile rbd
    client.csi-rbd-provisioner
    key: AQCYz0pi6W8IIBAAgRJfrAW7kZfucNdqJqS9dQ==
    caps: [mgr] allow rw
    caps: [mon] profile rbd, allow command 'osd blocklist'
    caps: [osd] profile rbd
  3. Run the previously downloaded Python script using one of the following options, based on the method used during deployment: either a configuration file or command-line flags.

    1. Configuration file

      Create a config.ini file that includes all of the parameters used during initial deployment. Run the following command to get the ConfigMap output that contains those parameters:

      $ oc get configmap --namespace openshift-storage external-cluster-user-command --output jsonpath='{.data.args}'

      Add the parameters from the previous output to the config.ini file. You can also add parameters to the config.ini file beyond those used during deployment. See Table 6.1, “Mandatory and optional parameters used during upgrade” for descriptions of the parameters.

      Example config.ini file:

      [Configurations]
      format = bash
      cephfs-filesystem-name = <filesystem-name>
      rbd-data-pool-name = <pool_name>
      ...

      Run the python script:

      # python3 ceph-external-cluster-details-exporter.py --config-file <config-file>

      Replace <config-file> with the path to the config.ini file.
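      As an illustration of building the config.ini, the following sketch converts a line of `--flag value` pairs (the shape of the `.data.args` output) into `key = value` lines. The sample ARGS value is hypothetical, and the sketch assumes flags and values contain no embedded spaces:

```shell
# Hypothetical sample of the ConfigMap args output; substitute your own.
ARGS='--format bash --rbd-data-pool-name replicapool --cephfs-filesystem-name myfs'

echo '[Configurations]'
echo "$ARGS" | tr ' ' '\n' | awk '
  /^--/ { sub(/^--/, ""); key = $0; next }         # a token starting with -- is a key
  key   { printf "%s = %s\n", key, $0; key = "" }  # the following token is its value
'
# Output:
# [Configurations]
# format = bash
# rbd-data-pool-name = replicapool
# cephfs-filesystem-name = myfs
```

      Review the generated lines before saving them as config.ini, since values containing spaces need manual handling.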

    2. Command-line flags

      Run the previously downloaded Python script and pass the parameters for your deployment. Make sure to use all the flags that you used in the original deployment, including any optional arguments. You can also add flags beyond those used during deployment. See Table 6.1, “Mandatory and optional parameters used during upgrade” for descriptions of the parameters.

      # python3 ceph-external-cluster-details-exporter.py --rbd-data-pool-name <rbd_block_pool_name> --monitoring-endpoint <ceph_mgr_prometheus_exporter_endpoint> --monitoring-endpoint-port <ceph_mgr_prometheus_exporter_port> --rgw-endpoint <rgw_endpoint> --run-as-user <ocs_client_name> [optional arguments]
      Table 6.1. Mandatory and optional parameters used during upgrade
      rbd-data-pool-name: (Mandatory) Used for providing block storage in OpenShift Data Foundation.
      rgw-endpoint: (Optional) Provide this parameter if object storage is to be provisioned through Ceph RADOS Gateway for OpenShift Data Foundation. Provide the endpoint in the following format: <ip_address>:<port>.
      monitoring-endpoint: (Optional) Accepts a comma-separated list of IP addresses of active and standby mgrs reachable from the OpenShift Container Platform cluster. If not provided, the value is automatically populated.
      monitoring-endpoint-port: (Optional) The port associated with the ceph-mgr Prometheus exporter specified by --monitoring-endpoint. If not provided, the value is automatically populated.
      run-as-user: (Mandatory) The client name used during OpenShift Data Foundation cluster deployment. Use the default client name client.healthchecker if a different client name was not set.
      rgw-pool-prefix: (Optional) The prefix of the RGW pools. If not specified, the default prefix is default.
      rgw-tls-cert-path: (Optional) The file path of the RADOS Gateway endpoint TLS certificate.
      rgw-skip-tls: (Optional) Ignores TLS certificate validation when a self-signed certificate is provided (NOT RECOMMENDED).
      ceph-conf: (Optional) The name of the Ceph configuration file.
      cluster-name: (Optional) The Ceph cluster name.
      output: (Optional) The file where the output is to be stored.
      cephfs-metadata-pool-name: (Optional) The name of the CephFS metadata pool.
      cephfs-data-pool-name: (Optional) The name of the CephFS data pool.
      cephfs-filesystem-name: (Optional) The name of the CephFS filesystem.
      rbd-metadata-ec-pool-name: (Optional) The name of the erasure coded RBD metadata pool.
      dry-run: (Optional) Prints the commands that would be executed without running them.

  4. Save the JSON output generated after running the script in the previous step.

    Example output:

    [{"name": "rook-ceph-mon-endpoints", "kind": "ConfigMap", "data": {"data": "xxx.xxx.xxx.xxx:xxxx", "maxMonId": "0", "mapping": "{}"}}, {"name": "rook-ceph-mon", "kind": "Secret", "data": {"admin-secret": "admin-secret", "fsid": "<fs-id>", "mon-secret": "mon-secret"}}, {"name": "rook-ceph-operator-creds", "kind": "Secret", "data": {"userID": "<user-id>", "userKey": "<user-key>"}}, {"name": "rook-csi-rbd-node", "kind": "Secret", "data": {"userID": "csi-rbd-node", "userKey": "<user-key>"}}, {"name": "ceph-rbd", "kind": "StorageClass", "data": {"pool": "<pool>"}}, {"name": "monitoring-endpoint", "kind": "CephCluster", "data": {"MonitoringEndpoint": "xxx.xxx.xxx.xxxx", "MonitoringPort": "xxxx"}}, {"name": "rook-ceph-dashboard-link", "kind": "Secret", "data": {"userID": "ceph-dashboard-link", "userKey": "<user-key>"}}, {"name": "rook-csi-rbd-provisioner", "kind": "Secret", "data": {"userID": "csi-rbd-provisioner", "userKey": "<user-key>"}}, {"name": "rook-csi-cephfs-provisioner", "kind": "Secret", "data": {"adminID": "csi-cephfs-provisioner", "adminKey": "<admin-key>"}}, {"name": "rook-csi-cephfs-node", "kind": "Secret", "data": {"adminID": "csi-cephfs-node", "adminKey": "<admin-key>"}}, {"name": "cephfs", "kind": "StorageClass", "data": {"fsName": "cephfs", "pool": "cephfs_data"}}, {"name": "ceph-rgw", "kind": "StorageClass", "data": {"endpoint": "xxx.xxx.xxx.xxxx", "poolPrefix": "default"}}, {"name": "rgw-admin-ops-user", "kind": "Secret", "data": {"accessKey": "<access-key>", "secretKey": "<secret-key>"}}]
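    Before uploading, you can optionally verify that the saved output parses as valid JSON. This sketch uses python3's standard json.tool module; the file name external-cluster-details.json is illustrative:

```shell
# Optional: confirm the saved script output is valid JSON before uploading it.
# "external-cluster-details.json" is an illustrative file name.
if python3 -m json.tool external-cluster-details.json > /dev/null; then
  echo "valid JSON"
else
  echo "invalid JSON"
fi
```

    An invalid file typically indicates the script output was truncated or mixed with error messages when it was saved.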
  5. Upload the generated JSON file.

    1. Log in to the OpenShift Web Console.
    2. Click Workloads → Secrets.
    3. Set project to openshift-storage.
    4. Click rook-ceph-external-cluster-details.
    5. Click Actions (⋮) → Edit Secret.
    6. Click Browse and upload the JSON file.
    7. Click Save.

Verification steps

  • To verify that the OpenShift Data Foundation cluster is healthy and data is resilient, navigate to Storage → Data Foundation → Storage Systems tab and then click the storage system name.

    • On the Overview → Block and File tab, check the Details card to verify that the RHCS dashboard link is available, and check the Status card to confirm that the Storage Cluster has a green tick indicating it is healthy.
    • Click the Object tab and confirm that Object Service and Data resiliency have a green tick indicating they are healthy. The RADOS Object Gateway is listed only if RADOS Object Gateway endpoint details were included while deploying OpenShift Data Foundation in external mode.
  • If verification steps fail, contact Red Hat Support.