
Updating OpenShift Data Foundation

Red Hat OpenShift Data Foundation 4.17

Instructions for cluster and storage administrators regarding upgrading

Red Hat Storage Documentation Team

Abstract

This document explains how to update previous versions of Red Hat OpenShift Data Foundation.

Making open source more inclusive

Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright’s message.

Providing feedback on Red Hat documentation

We appreciate your input on our documentation. Let us know how we can make it better.

To give feedback, create a Bugzilla ticket:

  1. Go to the Bugzilla website.
  2. In the Component section, choose documentation.
  3. Fill in the Description field with your suggestion for improvement. Include a link to the relevant part(s) of documentation.
  4. Click Submit Bug.

Chapter 1. Overview of the OpenShift Data Foundation update process

This chapter helps you to upgrade between the minor releases and z-streams for all Red Hat OpenShift Data Foundation deployments (Internal, Internal-Attached and External). The upgrade process remains the same for all deployments.

You can upgrade OpenShift Data Foundation and its components either between minor releases, such as 4.16 and 4.17, or between z-stream updates, such as 4.16.0 and 4.16.1, by enabling automatic updates (if not done during operator installation) or by performing manual updates. When a new z-stream release becomes available, the upgrade process triggers automatically if the update strategy is set to Automatic.
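If you are unsure which update strategy is configured, you can check the OpenShift Data Foundation subscription from the CLI. This is a minimal sketch that assumes the subscription is named odf-operator; list the subscriptions in the openshift-storage namespace if yours differs:

    $ oc get subscription odf-operator -n openshift-storage -o jsonpath='{.spec.channel}{"\n"}{.spec.installPlanApproval}{"\n"}'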

Extended Update Support (EUS)

An EUS-to-EUS upgrade in OpenShift Data Foundation is sequential and is aligned with the OpenShift Container Platform upgrade. For more information, see Performing an EUS-to-EUS update and EUS-to-EUS update for layered products and Operators installed through Operator Lifecycle Manager.

For EUS upgrade of OpenShift Container Platform and OpenShift Data Foundation, make sure that OpenShift Data Foundation is upgraded along with OpenShift Container Platform and compatibility between OpenShift Data Foundation and OpenShift Container Platform is always maintained.

Example workflow of EUS upgrade:

  1. Pause the worker machine pools.
  2. Update OpenShift <4.y> → OpenShift <4.y+1>.
  3. Update OpenShift Data Foundation <4.y> → OpenShift Data Foundation <4.y+1>.
  4. Update OpenShift <4.y+1> → OpenShift <4.y+2>.
  5. Update OpenShift Data Foundation <4.y+1> → OpenShift Data Foundation <4.y+2>.
  6. Unpause the worker machine pools.
Note

You can update to ODF <4.y+2> either before or after worker machine pools are unpaused.
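As a hedged sketch of steps 1 and 6, worker machine config pools can be paused and unpaused from the CLI; verify that the pool name (worker here) matches your cluster before running these commands:

    $ oc patch machineconfigpool/worker --type merge --patch '{"spec":{"paused":true}}'
    $ oc patch machineconfigpool/worker --type merge --patch '{"spec":{"paused":false}}'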

Important

When you update OpenShift Data Foundation in external mode, make sure that the Red Hat Ceph Storage and OpenShift Data Foundation versions are compatible. For more information about supported Red Hat Ceph Storage versions in external mode, refer to the Red Hat OpenShift Data Foundation Supportability and Interoperability Checker. Provide the required OpenShift Data Foundation version in the checker to see the supported Red Hat Ceph Storage version corresponding to the version in use.

You also need to upgrade the different parts of Red Hat OpenShift Data Foundation in the following order for both internal and external mode deployments:

  1. Update OpenShift Container Platform according to the Updating clusters documentation for OpenShift Container Platform.
  2. Update Red Hat OpenShift Data Foundation.

    1. To prepare a disconnected environment for updates, see the Operators guide to using Operator Lifecycle Manager on restricted networks so that you can update OpenShift Data Foundation, as well as the Local Storage Operator when it is in use.
    2. For updating between minor releases, see Updating Red Hat OpenShift Data Foundation 4.16 to 4.17.
    3. For updating between z-stream releases, see Updating Red Hat OpenShift Data Foundation 4.17.x to 4.17.y.
    4. For updating external mode deployments, you must also perform the steps from section Updating the Red Hat OpenShift Data Foundation external secret.
    5. If you use local storage, update the Local Storage Operator. See Checking for Local Storage Operator deployments if you are unsure; a quick CLI check is shown below.
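      A quick, hedged way to check is to look for the Local Storage Operator ClusterServiceVersion from the CLI. The openshift-local-storage namespace is the usual default, but it might differ in your cluster:

      $ oc get csv -n openshift-local-storage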
Important

If you have an existing setup of OpenShift Data Foundation 4.12 with disaster recovery (DR) enabled, ensure that you update all the clusters in the environment at the same time rather than updating a single cluster. This avoids potential compatibility issues and maintains consistency across all OpenShift Data Foundation DR instances.

Update considerations

Review the following important considerations before you begin.

  • Ensure that the Red Hat OpenShift Container Platform version is the same as the Red Hat OpenShift Data Foundation version.

    See the Interoperability Matrix for more information about supported combinations of OpenShift Container Platform and Red Hat OpenShift Data Foundation.

  • To know whether your cluster was deployed in internal or external mode, refer to the knowledgebase article on How to determine if ODF cluster has storage in internal or external mode; a CLI check is shown below.
  • The Local Storage Operator is fully supported only when the Local Storage Operator version matches the Red Hat OpenShift Container Platform version.
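You can also check the deployment mode from the CLI by inspecting the CephCluster resource; external mode reports true. This sketch assumes the default openshift-storage namespace:

    $ oc get cephcluster -n openshift-storage -o jsonpath='{.items[0].spec.external.enable}{"\n"}'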
Important

The Multicloud Object Gateway has only a single copy of its database (NooBaa DB). If the NooBaa DB PVC becomes corrupted and cannot be recovered, all applicative data residing on the Multicloud Object Gateway can be lost. Because of this, Red Hat recommends taking regular backups of the NooBaa DB PVC. If the NooBaa DB fails and cannot be recovered, you can revert to the latest backed-up version. For instructions on backing up your NooBaa DB, follow the steps in this knowledgebase article.
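As a minimal sketch of such a backup, assuming the default NooBaa DB pod name noobaa-db-pg-0 and database name nbcore (both are assumptions; follow the knowledgebase article for the supported procedure):

    $ oc exec -n openshift-storage noobaa-db-pg-0 -- pg_dump -U postgres nbcore > noobaa-db-backup.sql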

Chapter 2. OpenShift Data Foundation upgrade channels and releases

In OpenShift Container Platform 4.1, Red Hat introduced the concept of channels for recommending the appropriate release versions for cluster upgrades. By controlling the pace of upgrades, these upgrade channels allow you to choose an upgrade strategy. As OpenShift Data Foundation gets deployed as an operator in OpenShift Container Platform, it follows the same strategy to control the pace of upgrades by shipping the fixes in multiple channels. Upgrade channels are tied to a minor version of OpenShift Data Foundation.

For example, OpenShift Data Foundation 4.17 upgrade channels recommend upgrades within 4.17. Upgrades to future releases are not recommended. This strategy ensures that administrators can explicitly decide to upgrade to the next minor version of OpenShift Data Foundation.

Upgrade channels control only release selection and do not impact the version of the cluster that you install; the odf-operator decides the version of OpenShift Data Foundation to be installed. By default, it always installs the latest OpenShift Data Foundation release maintaining the compatibility with OpenShift Container Platform. So, on OpenShift Container Platform 4.17, OpenShift Data Foundation 4.17 will be the latest version which can be installed.

OpenShift Data Foundation upgrades are tied to the OpenShift Container Platform upgrade to ensure that compatibility and interoperability are maintained with the OpenShift Container Platform. For OpenShift Data Foundation 4.17, OpenShift Container Platform 4.17 and 4.18 (when generally available) are supported. OpenShift Container Platform 4.18 is supported to maintain forward compatibility of OpenShift Data Foundation with OpenShift Container Platform. Keep the OpenShift Data Foundation version the same as OpenShift Container Platform in order to get the benefit of all the features and enhancements in that release.
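You can list the upgrade channels that the catalog currently offers for OpenShift Data Foundation by querying its package manifest; this sketch assumes the default openshift-marketplace namespace:

    $ oc get packagemanifest odf-operator -n openshift-marketplace -o jsonpath='{.status.channels[*].name}{"\n"}'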

Important

Due to fundamental Kubernetes design, all OpenShift Container Platform updates between minor versions must be serialized. You must update from OpenShift Container Platform 4.15 to 4.16 and then to 4.17. You cannot update from OpenShift Container Platform 4.15 to 4.17 directly. For more information, see Preparing to perform an EUS-to-EUS update of the Updating clusters guide in OpenShift Container Platform documentation.

OpenShift Data Foundation 4.17 offers the following upgrade channels:

  • stable-4.17
  • stable-4.16

stable-4.17 channel

Once a new version is Generally Available, the stable channel corresponding to the minor version gets updated with the new image, which can be used to upgrade. You can use the stable-4.17 channel to upgrade from OpenShift Data Foundation 4.16 and to apply upgrades within 4.17.

stable-4.16 channel

You can use the stable-4.16 channel to upgrade from OpenShift Data Foundation 4.15 and to apply upgrades within 4.16.
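Switching the subscription to a different channel can also be done from the CLI. This is a sketch that assumes the subscription is named odf-operator:

    $ oc patch subscription odf-operator -n openshift-storage --type merge -p '{"spec":{"channel":"stable-4.17"}}'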

Chapter 3. Updating Red Hat OpenShift Data Foundation 4.16 to 4.17

This chapter helps you to upgrade between the minor releases for all Red Hat OpenShift Data Foundation deployments (Internal, Internal-Attached, and External). The upgrade process remains the same for all deployments. The only difference is what gets upgraded and what does not.

  • For Internal and Internal-attached deployments, upgrading OpenShift Data Foundation upgrades all OpenShift Data Foundation services including the backend Red Hat Ceph Storage (RHCS) cluster.
  • For External mode deployments, upgrading OpenShift Data Foundation only upgrades the OpenShift Data Foundation service while the backend Ceph storage cluster remains untouched and needs to be upgraded separately.

    You must upgrade Red Hat Ceph Storage along with OpenShift Data Foundation to get new feature support, security fixes, and other bug fixes. Because there is no strict dependency on the RHCS upgrade, you can upgrade the OpenShift Data Foundation operator first and RHCS afterwards, or vice versa. For more information about RHCS releases, see the knowledgebase solution.

Important

Upgrading to 4.17 directly from any version older than 4.16 is not supported.

Prerequisites

  • Ensure that the OpenShift Container Platform cluster has been updated to the latest stable release of version 4.17.X. For more information, see Updating Clusters.
  • Ensure that the OpenShift Data Foundation cluster is healthy and data is resilient.

    • Navigate to Storage → Data Foundation → Storage Systems tab and then click on the storage system name.
    • Check for the green tick on the status card of both the Overview - Block and File and the Object tabs. A green tick indicates that the storage cluster, object service, and data resiliency are all healthy. A CLI alternative is shown after this list.
  • Ensure that all OpenShift Data Foundation Pods, including the operator pods, are in Running state in the openshift-storage namespace.

    To view the state of the pods, on the OpenShift Web Console, click Workloads → Pods. Select openshift-storage from the Project drop-down list.

    Note

    If the Show default projects option is disabled, use the toggle button to list all the default projects.

  • Ensure that you have sufficient time to complete the OpenShift Data Foundation update process, as the update time varies depending on the number of OSDs that run in the cluster.
  • Prerequisite relevant only for OpenShift Data Foundation deployments on AWS using AWS Security Token Service (STS)

    Add another entry in the trust policy for the noobaa-core account as follows:

    1. Log in to the AWS web console where the AWS role resides at https://console.aws.amazon.com/.
    2. Enter the IAM management tool and click Roles.
    3. Find the name of the role created for AWS STS to support Multicloud Object Gateway (MCG) authentication using the following command in OpenShift CLI:

      $ oc get deployment noobaa-operator -o yaml -n openshift-storage | grep ROLEARN -A1
                value: arn:aws:iam::123456789101:role/your-role-name-here
    4. Search for the role name that you obtained from the previous step in the tool and click on the role name.
    5. Under the role summary, click Trust relationships.
    6. In the Trusted entities tab, click Edit trust policy on the right.
    7. Under the “Action”: “sts:AssumeRoleWithWebIdentity” field, there are two fields to enable access for two NooBaa service accounts noobaa and noobaa-endpoint. Add another entry for the core pod’s new service account name, system:serviceaccount:openshift-storage:noobaa-core.
    8. Click Update policy at the bottom right of the page.

      The update might take about 5 minutes to take effect.
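As a CLI alternative to the console checks above, you can verify pod status and cluster health with the following commands; the resource names assume a default internal-mode deployment:

    $ oc get pods -n openshift-storage
    $ oc get storagecluster -n openshift-storage
    $ oc get cephcluster -n openshift-storage

The StorageCluster should report the Ready phase and, for internal mode, the CephCluster should report HEALTH_OK.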

Procedure

  1. On the OpenShift Web Console, navigate to Operators → Installed Operators.
  2. Select openshift-storage project.
  3. Click the OpenShift Data Foundation operator name.
  4. Click the Subscription tab and click the link under Update Channel.
  5. Select the stable-4.17 update channel and click Save.
  6. If the Upgrade status shows requires approval, click requires approval.

    1. On the Install Plan Details page, click Preview Install Plan.
    2. Review the install plan and click Approve.

      Wait for the Status to change from Unknown to Created.

  7. Navigate to Operators → Installed Operators.
  8. Select the openshift-storage project.

    Wait for the OpenShift Data Foundation Operator Status to change to Up to date.

  9. After the operator is successfully upgraded, a pop-up with the message Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to take effect.
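To confirm the upgrade from the CLI instead of the web console, you can list the ClusterServiceVersions and check that the new OpenShift Data Foundation version reports the Succeeded phase:

    $ oc get csv -n openshift-storage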
Note

After upgrading, if your cluster has five or more nodes, racks, or rooms, so that five or more failure domains are present in the deployment, you can configure the Ceph monitor count based on the number of racks or zones. An alert is displayed in the notification panel or Alert Center of the OpenShift Web Console to indicate the option to increase the number of Ceph monitors. You can use the Configure option in the alert to configure the Ceph monitor count. For more information, see Resolving low Ceph monitor count alert.

Verification steps

  • Check the Version below the OpenShift Data Foundation name and check the operator status.

    • Navigate to Operators → Installed Operators and select the openshift-storage project.
    • When the upgrade completes, the version updates to a new version number for OpenShift Data Foundation and status changes to Succeeded with a green tick.
  • Verify that the OpenShift Data Foundation cluster is healthy and data is resilient.

    • Navigate to Storage → Data Foundation → Storage Systems tab and then click on the storage system name.
    • Check for the green tick on the status card of the Overview - Block and File and the Object tabs. A green tick indicates that the storage cluster, object service, and data resiliency are healthy.
  • If verification steps fail, contact Red Hat Support.
Important

After updating external mode deployments, you must also update the external secret. For instructions, see Updating the OpenShift Data Foundation external secret.

Additional Resources

If you face any issues while updating OpenShift Data Foundation, see the Commonly required logs for troubleshooting section in the Troubleshooting guide.

Chapter 4. Updating Red Hat OpenShift Data Foundation 4.17.x to 4.17.y

This chapter helps you to upgrade between the z-stream releases for all Red Hat OpenShift Data Foundation deployments (Internal, Internal-Attached, and External). The upgrade process remains the same for all deployments. The only difference is what gets upgraded and what does not.

  • For Internal and Internal-attached deployments, upgrading OpenShift Data Foundation upgrades all OpenShift Data Foundation services including the backend Red Hat Ceph Storage (RHCS) cluster.
  • For External mode deployments, upgrading OpenShift Data Foundation only upgrades the OpenShift Data Foundation service while the backend Ceph storage cluster remains untouched and needs to be upgraded separately.

    Therefore, Red Hat recommends upgrading RHCS along with OpenShift Data Foundation to get new feature support, security fixes, and other bug fixes. Because there is no strict dependency on the RHCS upgrade, you can upgrade the OpenShift Data Foundation operator first and RHCS afterwards, or vice versa. For more information about RHCS releases, see the knowledgebase solution.

When a new z-stream release becomes available, the upgrade process triggers automatically if the update strategy was set to Automatic. If the update strategy is set to Manual, use the following procedure.

Prerequisites

  • Ensure that the OpenShift Container Platform cluster has been updated to the latest stable release of version 4.17.X. For more information, see Updating Clusters.
  • Ensure that the OpenShift Data Foundation cluster is healthy and data is resilient.

    • Navigate to Storage → Data Foundation → Storage Systems tab and then click on the storage system name.
    • Check for the green tick on the status card of the Overview - Block and File and the Object tabs. A green tick indicates that the storage cluster, object service, and data resiliency are healthy.
  • Ensure that all OpenShift Data Foundation Pods, including the operator pods, are in Running state in the openshift-storage namespace.

    To view the state of the pods, on the OpenShift Web Console, click Workloads → Pods. Select openshift-storage from the Project drop-down list.

    Note

    If the Show default projects option is disabled, use the toggle button to list all the default projects.

  • Ensure that you have sufficient time to complete the OpenShift Data Foundation update process, as the update time varies depending on the number of OSDs that run in the cluster.

Procedure

  1. On the OpenShift Web Console, navigate to Operators → Installed Operators.
  2. Select openshift-storage project.

    Note

    If the Show default projects option is disabled, use the toggle button to list all the default projects.

  3. Click the OpenShift Data Foundation operator name.
  4. Click the Subscription tab.
  5. If the Upgrade status shows requires approval, click the requires approval link.
  6. On the Install Plan Details page, click Preview Install Plan.
  7. Review the install plan and click Approve.
  8. Wait for the Status to change from Unknown to Created.
  9. After the operator is successfully upgraded, a pop-up with the message Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to take effect.
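If you prefer the CLI, a hedged equivalent of steps 5 through 8 is to list the pending install plans and approve the relevant one; replace <install_plan_name> with the name reported by the first command:

    $ oc get installplan -n openshift-storage
    $ oc patch installplan <install_plan_name> -n openshift-storage --type merge -p '{"spec":{"approved":true}}'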

Verification steps

  • Check the Version below the OpenShift Data Foundation name and check the operator status.

    • Navigate to Operators → Installed Operators and select the openshift-storage project.
    • When the upgrade completes, the version updates to a new version number for OpenShift Data Foundation and status changes to Succeeded with a green tick.
  • Verify that the OpenShift Data Foundation cluster is healthy and data is resilient.

    • Navigate to Storage → Data Foundation → Storage Systems tab and then click on the storage system name.
    • Check for the green tick on the status card of the Overview - Block and File and the Object tabs. A green tick indicates that the storage cluster, object service, and data resiliency are healthy.
  • If verification steps fail, contact Red Hat Support.

Chapter 5. Changing the update approval strategy

To ensure that the storage system gets updated automatically when a new update is available in the same channel, Red Hat recommends keeping the update approval strategy set to Automatic. Changing the update approval strategy to Manual requires manual approval for each upgrade.

Procedure

  1. Navigate to Operators → Installed Operators.
  2. Select openshift-storage from the Project drop-down list.

    Note

    If the Show default projects option is disabled, use the toggle button to list all the default projects.

  3. Click the OpenShift Data Foundation operator name.
  4. Go to the Subscription tab.
  5. Click the pencil icon to change the Update approval.
  6. Select the update approval strategy and click Save.
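The same change can be made from the CLI by patching the subscription's installPlanApproval field. This sketch assumes the subscription is named odf-operator:

    $ oc patch subscription odf-operator -n openshift-storage --type merge -p '{"spec":{"installPlanApproval":"Automatic"}}'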

Verification steps

  • Verify that the Update approval shows the newly selected approval strategy below it.

Chapter 6. Updating the OpenShift Data Foundation external secret

Update the OpenShift Data Foundation external secret after updating to the latest version of OpenShift Data Foundation.

Note

Updating the external secret is not required for batch updates. For example, when updating from OpenShift Data Foundation 4.17.x to 4.17.y.

Prerequisites

  • Update the OpenShift Container Platform cluster to the latest stable release of 4.17.z. For more information, see Updating Clusters.
  • Ensure that the OpenShift Data Foundation cluster is healthy and the data is resilient. Navigate to Storage → Data Foundation → Storage Systems tab and then click on the storage system name.

    • On the Overview - Block and File tab, check the Status card and confirm that the Storage cluster has a green tick indicating it is healthy.
    • Click the Object tab and confirm that Object Service and Data resiliency have a green tick indicating they are healthy. The RADOS Object Gateway is listed only if RADOS Object Gateway endpoint details were included while deploying OpenShift Data Foundation in external mode.
  • Red Hat Ceph Storage must have a Ceph dashboard installed and configured.

Procedure

  1. Download the ceph-external-cluster-details-exporter.py python script that matches your OpenShift Data Foundation version.

    # oc get csv $(oc get csv -n openshift-storage | grep rook-ceph-operator | awk '{print $1}') -n openshift-storage -o jsonpath='{.metadata.annotations.externalClusterScript}' | base64 --decode > ceph-external-cluster-details-exporter.py
  2. Update permission caps on the external Red Hat Ceph Storage cluster by running ceph-external-cluster-details-exporter.py on any client node in the external Red Hat Ceph Storage cluster. You may need to ask your Red Hat Ceph Storage administrator to do this.

    # python3 ceph-external-cluster-details-exporter.py --upgrade

    The updated permissions for the user are set as:

    client.csi-cephfs-node
    key: AQCYz0piYgu/IRAAipji4C8+Lfymu9vOrox3zQ==
    caps: [mds] allow rw
    caps: [mgr] allow rw
    caps: [mon] allow r, allow command 'osd blocklist'
    caps: [osd] allow rw tag cephfs *=*
    client.csi-cephfs-provisioner
    key: AQCYz0piDUMSIxAARuGUyhLXFO9u4zQeRG65pQ==
    caps: [mgr] allow rw
    caps: [mon] allow r, allow command 'osd blocklist'
    caps: [osd] allow rw tag cephfs metadata=*
    client.csi-rbd-node
    key: AQCYz0pi88IKHhAAvzRN4fD90nkb082ldrTaHA==
    caps: [mon] profile rbd, allow command 'osd blocklist'
    caps: [osd] profile rbd
    client.csi-rbd-provisioner
    key: AQCYz0pi6W8IIBAAgRJfrAW7kZfucNdqJqS9dQ==
    caps: [mgr] allow rw
    caps: [mon] profile rbd, allow command 'osd blocklist'
    caps: [osd] profile rbd
  3. Run the previously downloaded python script and save the JSON output that gets generated from the external Red Hat Ceph Storage cluster.

    1. Run the previously downloaded python script:

      Note
      • Make sure to use all the flags that you used in the original deployment including any optional argument that you have used.
      • Ensure that all the parameters, including the optional arguments, except for monitoring-endpoint and monitoring-endpoint-port, are the same as those you used during the original deployment of OpenShift Data Foundation in external mode.
      # python3 ceph-external-cluster-details-exporter.py --rbd-data-pool-name <rbd block pool name> --monitoring-endpoint <ceph mgr prometheus exporter endpoint> --monitoring-endpoint-port <ceph mgr prometheus exporter port> --rgw-endpoint <rgw endpoint> --run-as-user <ocs_client_name>  [optional arguments]
      --rbd-data-pool-name
      Is a mandatory parameter used for providing block storage in OpenShift Data Foundation.
      --rgw-endpoint
      Is optional. Provide this parameter if object storage is to be provisioned through Ceph RADOS Gateway for OpenShift Data Foundation. Provide the endpoint in the following format: <ip_address>:<port>.
      --monitoring-endpoint
      Is optional. It accepts a comma-separated list of IP addresses of active and standby mgrs reachable from the OpenShift Container Platform cluster. If not provided, the value is automatically populated.
      --monitoring-endpoint-port
      Is optional. It is the port associated with the ceph-mgr Prometheus exporter specified by --monitoring-endpoint. If not provided, the value is automatically populated.
      --run-as-user

      The client name used during OpenShift Data Foundation cluster deployment. Use the default client name client.healthchecker if a different client name was not set.

      Additional flags:

      --rgw-pool-prefix

      (Optional) The prefix of the RGW pools. If not specified, the default prefix is default.

      --rgw-tls-cert-path

      (Optional) The file path of the RADOS Gateway endpoint TLS certificate.

      --rgw-skip-tls

      (Optional) This parameter ignores the TLS certificate validation when a self-signed certificate is provided (NOT RECOMMENDED).

      --ceph-conf

      (Optional) The name of the Ceph configuration file.

      --cluster-name

      (Optional) The Ceph cluster name.

      --output

      (Optional) The file to which the output is written.

      --cephfs-metadata-pool-name

      (Optional) The name of the CephFS metadata pool.

      --cephfs-data-pool-name

      (Optional) The name of the CephFS data pool.

      --cephfs-filesystem-name

      (Optional) The name of the CephFS filesystem.

      --rbd-metadata-ec-pool-name

      (Optional) The name of the erasure coded RBD metadata pool.

      --dry-run

      (Optional) This parameter prints the commands that would be executed without running them.

    2. Save the JSON output generated after running the script in the previous step.

      Example output:

      [{"name": "rook-ceph-mon-endpoints", "kind": "ConfigMap", "data": {"data": "xxx.xxx.xxx.xxx:xxxx", "maxMonId": "0", "mapping": "{}"}}, {"name": "rook-ceph-mon", "kind": "Secret", "data": {"admin-secret": "admin-secret", "fsid": "<fs-id>", "mon-secret": "mon-secret"}}, {"name": "rook-ceph-operator-creds", "kind": "Secret", "data": {"userID": "<user-id>", "userKey": "<user-key>"}}, {"name": "rook-csi-rbd-node", "kind": "Secret", "data": {"userID": "csi-rbd-node", "userKey": "<user-key>"}}, {"name": "ceph-rbd", "kind": "StorageClass", "data": {"pool": "<pool>"}}, {"name": "monitoring-endpoint", "kind": "CephCluster", "data": {"MonitoringEndpoint": "xxx.xxx.xxx.xxxx", "MonitoringPort": "xxxx"}}, {"name": "rook-ceph-dashboard-link", "kind": "Secret", "data": {"userID": "ceph-dashboard-link", "userKey": "<user-key>"}}, {"name": "rook-csi-rbd-provisioner", "kind": "Secret", "data": {"userID": "csi-rbd-provisioner", "userKey": "<user-key>"}}, {"name": "rook-csi-cephfs-provisioner", "kind": "Secret", "data": {"adminID": "csi-cephfs-provisioner", "adminKey": "<admin-key>"}}, {"name": "rook-csi-cephfs-node", "kind": "Secret", "data": {"adminID": "csi-cephfs-node", "adminKey": "<admin-key>"}}, {"name": "cephfs", "kind": "StorageClass", "data": {"fsName": "cephfs", "pool": "cephfs_data"}}, {"name": "ceph-rgw", "kind": "StorageClass", "data": {"endpoint": "xxx.xxx.xxx.xxxx", "poolPrefix": "default"}}, {"name": "rgw-admin-ops-user", "kind": "Secret", "data": {"accessKey": "<access-key>", "secretKey": "<secret-key>"}}]
  4. Upload the generated JSON file.

    1. Log in to the OpenShift Web Console.
    2. Click Workloads → Secrets.
    3. Set project to openshift-storage.
    4. Click rook-ceph-external-cluster-details.
    5. Click Actions (⋮) → Edit Secret.
    6. Click Browse and upload the JSON file.
    7. Click Save.
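Alternatively, the secret can be updated from the CLI. This sketch assumes the script output was saved to a file named output.json and that the secret stores the payload under the external_cluster_details key; verify the key name in your cluster before running:

    $ oc set data secret/rook-ceph-external-cluster-details -n openshift-storage --from-file=external_cluster_details=output.json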

Verification steps

  • To verify that the OpenShift Data Foundation cluster is healthy and data is resilient, navigate to Storage → Data Foundation → Storage Systems tab and then click on the storage system name.

    • On the Overview - Block and File tab, check the Details card to verify that the RHCS dashboard link is available, and check the Status card to confirm that the Storage Cluster has a green tick indicating it is healthy.
    • Click the Object tab and confirm that Object Service and Data resiliency have a green tick indicating they are healthy. The RADOS Object Gateway is listed only if RADOS Object Gateway endpoint details were included while deploying OpenShift Data Foundation in external mode.
  • If verification steps fail, contact Red Hat Support.