Deploying OpenShift Data Foundation using Microsoft Azure


Red Hat OpenShift Data Foundation 4.19

Instructions on deploying OpenShift Data Foundation using Microsoft Azure

Red Hat Storage Documentation Team

Abstract

Read this document for instructions about how to install and manage Red Hat OpenShift Data Foundation using Red Hat OpenShift Container Platform on Microsoft Azure.

Providing feedback on Red Hat documentation

We appreciate your input on our documentation. Let us know how we can make it better.

To give feedback, create a Jira ticket:

  1. Log in to Jira.
  2. Click Create in the top navigation bar.
  3. Enter a descriptive title in the Summary field.
  4. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation.
  5. Select Documentation in the Components field.
  6. Click Create at the bottom of the dialog.

Preface

Red Hat OpenShift Data Foundation supports deployment on existing Red Hat OpenShift Container Platform (RHOCP) Azure clusters.

Note

Only internal OpenShift Data Foundation clusters are supported on Microsoft Azure. See Planning your deployment for more information about deployment requirements.

To deploy OpenShift Data Foundation, start with the requirements in the Preparing to deploy OpenShift Data Foundation chapter and then follow the appropriate deployment process based on your requirements:

Deploying OpenShift Data Foundation on OpenShift Container Platform using dynamic storage devices provides you with the option to create internal cluster resources. This will result in the internal provisioning of the base services, which helps to make additional storage classes available to applications.

Before you begin the deployment of OpenShift Data Foundation, follow these steps:

  1. Set up a chrony server. See Configuring chrony time service and use the knowledgebase solution to create rules allowing all traffic.
  2. Optional: If you want to enable cluster-wide encryption using the external Key Management System (KMS) HashiCorp Vault, see the sections on enabling token authentication and Kubernetes authentication with Vault later in this guide.

  3. Minimum starting node requirements

    An OpenShift Data Foundation cluster is deployed with minimum configuration when the standard deployment resource requirement is not met. See the Resource requirements section in the Planning guide.

  4. Disaster recovery requirements

    Disaster Recovery features supported by Red Hat OpenShift Data Foundation require all of the following prerequisites to successfully implement a disaster recovery solution.

    For detailed requirements, see the Configuring OpenShift Data Foundation Disaster Recovery for OpenShift Workloads guide, and the Requirements and recommendations section of the Install guide in the Red Hat Advanced Cluster Management for Kubernetes documentation.

You can deploy OpenShift Data Foundation on OpenShift Container Platform using dynamic storage devices provided by Microsoft Azure installer-provisioned infrastructure (IPI) (type: managed-csi) that enables you to create internal cluster resources. This results in internal provisioning of the base services, which helps to make additional storage classes available to applications.

Also, it is possible to deploy only the Multicloud Object Gateway (MCG) component with OpenShift Data Foundation. For more information, see Deploy standalone Multicloud Object Gateway.

Note

Only internal OpenShift Data Foundation clusters are supported on Microsoft Azure. See Planning your deployment for more information about deployment requirements.

Ensure that you have addressed the requirements in the Preparing to deploy OpenShift Data Foundation chapter before proceeding with the following steps to deploy using dynamic storage devices:

You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub.

Prerequisites

  • Access to an OpenShift Container Platform cluster using an account with cluster-admin and operator installation permissions.
  • You must have at least three worker or infrastructure nodes in the Red Hat OpenShift Container Platform cluster.
  • For additional resource requirements, see the Planning your deployment guide.
Important
  • When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command to specify a blank node selector for the openshift-storage namespace (create the openshift-storage namespace in this case):

    $ oc annotate namespace openshift-storage openshift.io/node-selector=
  • Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see the How to use dedicated worker nodes for Red Hat OpenShift Data Foundation section in the Managing and Allocating Storage Resources guide.
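
    A minimal sketch of labeling and tainting a node for OpenShift Data Foundation (the label and taint keys follow that article; verify them for your version, and replace <node_name> with your node):

    $ oc label node <node_name> cluster.ocs.openshift.io/openshift-storage=""
    $ oc adm taint node <node_name> node.ocs.openshift.io/storage="true":NoSchedule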

Procedure

  1. Log in to the OpenShift Web Console.
  2. Click Operators → OperatorHub.
  3. Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator.
  4. Click Install.
  5. Set the following options on the Install Operator page:

    1. Update Channel as stable-4.19.
    2. Installation Mode as A specific namespace on the cluster.
    3. Installed Namespace as Operator recommended namespace openshift-storage. If Namespace openshift-storage does not exist, it is created during the operator installation.
    4. Select Approval Strategy as Automatic or Manual.

      If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention.

      If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version.

    5. Ensure that the Enable option is selected for the Console plugin.
    6. Click Install.

Verification steps

  • After the operator is successfully installed, a pop-up with the message Web console update is available appears on the user interface. Click Refresh web console from this pop-up so that the console changes take effect.
  • In the Web Console:

    • Navigate to Installed Operators and verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation.
    • Navigate to Storage and verify if the Data Foundation dashboard is available.
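  • Optional command-line check: list the cluster service versions in the openshift-storage namespace and verify that the PHASE column shows Succeeded (an illustrative alternative to the console checks):

    $ oc get csv -n openshift-storage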

You can enable the key-value backend path and policy in Vault for token authentication.

Prerequisites

  • Administrator access to Vault.
  • A valid Red Hat OpenShift Data Foundation Advanced subscription. For more information, see the knowledgebase article on OpenShift Data Foundation subscriptions.
  • Carefully select a unique backend path name that follows the naming convention, because you cannot change it later.

Procedure

  1. Enable the Key/Value (KV) backend path in Vault.

    For Vault KV secret engine API, version 1:

    $ vault secrets enable -path=odf kv

    For Vault KV secret engine API, version 2:

    $ vault secrets enable -path=odf kv-v2
  2. Create a policy that restricts users to write and delete operations on the secret:

    $ echo '
    path "odf/*" {
      capabilities = ["create", "read", "update", "delete", "list"]
    }
    path "sys/mounts" {
      capabilities = ["read"]
    }' | vault policy write odf -
  3. Create a token that matches the above policy:

    $ vault token create -policy=odf -format json
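
    To confirm that the backend path, policy, and token work together, you can write and read a throwaway secret (an optional, illustrative check; odf/test is an arbitrary key):

    $ VAULT_TOKEN=<token_from_above> vault kv put odf/test key=value
    $ VAULT_TOKEN=<token_from_above> vault kv get odf/test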

You can enable the Kubernetes authentication method for cluster-wide encryption using the Key Management System (KMS).

Prerequisites

  • Administrator access to Vault.
  • A valid Red Hat OpenShift Data Foundation Advanced subscription. For more information, see the knowledgebase article on OpenShift Data Foundation subscriptions.
  • The OpenShift Data Foundation operator must be installed from the Operator Hub.
  • Carefully select a unique backend path name that follows the naming convention. You cannot change this path name later.

Procedure

  1. Create a service account:

    $ oc -n openshift-storage create serviceaccount <serviceaccount_name>

    where <serviceaccount_name> specifies the name of the service account.

    For example:

    $ oc -n openshift-storage create serviceaccount odf-vault-auth
  2. Create clusterrolebindings and clusterroles:

    $ oc -n openshift-storage create clusterrolebinding vault-tokenreview-binding --clusterrole=system:auth-delegator --serviceaccount=openshift-storage:<serviceaccount_name>

    For example:

    $ oc -n openshift-storage create clusterrolebinding vault-tokenreview-binding --clusterrole=system:auth-delegator --serviceaccount=openshift-storage:odf-vault-auth
  3. Create a secret for the serviceaccount token and CA certificate.

    $ cat <<EOF | oc create -f -
    apiVersion: v1
    kind: Secret
    metadata:
      name: odf-vault-auth-token
      namespace: openshift-storage
      annotations:
        kubernetes.io/service-account.name: <serviceaccount_name>
    type: kubernetes.io/service-account-token
    data: {}
    EOF

    where <serviceaccount_name> is the service account created in the earlier step.

  4. Get the token and the CA certificate from the secret.

    $ SA_JWT_TOKEN=$(oc -n openshift-storage get secret odf-vault-auth-token -o jsonpath="{.data['token']}" | base64 --decode; echo)
    $ SA_CA_CRT=$(oc -n openshift-storage get secret odf-vault-auth-token -o jsonpath="{.data['ca\.crt']}" | base64 --decode; echo)
  5. Retrieve the OCP cluster endpoint.

    $ OCP_HOST=$(oc config view --minify --flatten -o jsonpath="{.clusters[0].cluster.server}")
  6. Fetch the service account issuer:

    $ oc proxy &
    $ proxy_pid=$!
    $ issuer="$( curl --silent http://127.0.0.1:8001/.well-known/openid-configuration | jq -r .issuer)"
    $ kill $proxy_pid
  7. Use the information collected in the previous steps to set up the Kubernetes authentication method in Vault:

    $ vault auth enable kubernetes
    $ vault write auth/kubernetes/config \
              token_reviewer_jwt="$SA_JWT_TOKEN" \
              kubernetes_host="$OCP_HOST" \
              kubernetes_ca_cert="$SA_CA_CRT" \
              issuer="$issuer"
    Important

    To configure the Kubernetes authentication method in Vault when the issuer is empty:

    $ vault write auth/kubernetes/config \
              token_reviewer_jwt="$SA_JWT_TOKEN" \
              kubernetes_host="$OCP_HOST" \
              kubernetes_ca_cert="$SA_CA_CRT"
  8. Enable the Key/Value (KV) backend path in Vault.

    For Vault KV secret engine API, version 1:

    $ vault secrets enable -path=odf kv

    For Vault KV secret engine API, version 2:

    $ vault secrets enable -path=odf kv-v2
  9. Create a policy that restricts users to write and delete operations on the secret:

    $ echo '
    path "odf/*" {
      capabilities = ["create", "read", "update", "delete", "list"]
    }
    path "sys/mounts" {
      capabilities = ["read"]
    }' | vault policy write odf -
  10. Generate the roles:

    $ vault write auth/kubernetes/role/odf-rook-ceph-op \
            bound_service_account_names=rook-ceph-system,rook-ceph-osd,noobaa \
            bound_service_account_namespaces=openshift-storage \
            policies=odf \
            ttl=1440h

    The role odf-rook-ceph-op is later used while you configure the KMS connection details during the creation of the storage system.

    $ vault write auth/kubernetes/role/odf-rook-ceph-osd \
            bound_service_account_names=rook-ceph-osd \
            bound_service_account_namespaces=openshift-storage \
            policies=odf \
            ttl=1440h
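
    To confirm that both roles were created as intended, you can read them back (an optional, illustrative check):

    $ vault read auth/kubernetes/role/odf-rook-ceph-op
    $ vault read auth/kubernetes/role/odf-rook-ceph-osd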

Common security practices require periodic rotation of encryption keys. You can enable or disable key rotation when using KMS.

2.3.1.1. Enabling key rotation

To enable key rotation, add the annotation keyrotation.csiaddons.openshift.io/schedule: <value> to PersistentVolumeClaims, Namespace, or StorageClass (in the decreasing order of precedence).

<value> can be @hourly, @daily, @weekly, @monthly, or @yearly. If <value> is empty, the default is @weekly. The examples below use @weekly.

Important

Key rotation is only supported for RBD-backed volumes.

Annotating Namespace

$ oc get namespace default
NAME      STATUS   AGE
default   Active   5d2h
$ oc annotate namespace default "keyrotation.csiaddons.openshift.io/schedule=@weekly"
namespace/default annotated

Annotating StorageClass

$ oc get storageclass rbd-sc
NAME       PROVISIONER        RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
rbd-sc     rbd.csi.ceph.com   Delete          Immediate           true                   5d2h
$ oc annotate storageclass rbd-sc "keyrotation.csiaddons.openshift.io/schedule=@weekly"
storageclass.storage.k8s.io/rbd-sc annotated

Annotating PersistentVolumeClaims

$ oc get pvc data-pvc
NAME      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
data-pvc  Bound    pvc-f37b8582-4b04-4676-88dd-e1b95c6abf74   1Gi        RWO            default           20h
$ oc annotate pvc data-pvc "keyrotation.csiaddons.openshift.io/schedule=@weekly"
persistentvolumeclaim/data-pvc annotated
$ oc get encryptionkeyrotationcronjobs.csiaddons.openshift.io
NAME                  SCHEDULE   SUSPEND   ACTIVE   LASTSCHEDULE   AGE
data-pvc-1642663516   @weekly                                      3s
$ oc annotate pvc data-pvc "keyrotation.csiaddons.openshift.io/schedule=*/1 * * * *" --overwrite=true
persistentvolumeclaim/data-pvc annotated
$ oc get encryptionkeyrotationcronjobs.csiaddons.openshift.io
NAME                  SCHEDULE    SUSPEND   ACTIVE   LASTSCHEDULE   AGE
data-pvc-1642664617   */1 * * * *                                   3s
2.3.1.2. Disabling key rotation

You can disable key rotation for the following:

  • All the persistent volume claims (PVCs) of a storage class
  • A specific PVC

Disabling key rotation for all PVCs of a storage class

To disable key rotation for all PVCs, update the annotation of the storage class:

$ oc get storageclass rbd-sc
NAME       PROVISIONER        RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
rbd-sc     rbd.csi.ceph.com   Delete          Immediate           true                   5d2h
$ oc annotate storageclass rbd-sc "keyrotation.csiaddons.openshift.io/enable=false"
storageclass.storage.k8s.io/rbd-sc annotated

Disabling key rotation for a specific persistent volume claim

  1. Identify the EncryptionKeyRotationCronJob CR for the PVC you want to disable key rotation on:

    $ oc get encryptionkeyrotationcronjob -o jsonpath='{range .items[?(@.spec.jobTemplate.spec.target.persistentVolumeClaim=="<PVC_NAME>")]}{.metadata.name}{"\n"}{end}'

    Where <PVC_NAME> is the name of the PVC for which you want to disable key rotation.

  2. Apply the following to the EncryptionKeyRotationCronJob CR from the previous step to disable the key rotation:

    1. Update the csiaddons.openshift.io/state annotation from managed to unmanaged:

      $ oc annotate encryptionkeyrotationcronjob <encryptionkeyrotationcronjob_name> "csiaddons.openshift.io/state=unmanaged" --overwrite=true

      Where <encryptionkeyrotationcronjob_name> is the name of the EncryptionKeyRotationCronJob CR.

    2. Add suspend: true under the spec field:

      $ oc patch encryptionkeyrotationcronjob <encryptionkeyrotationcronjob_name> -p '{"spec": {"suspend": true}}' --type=merge
  3. Key rotation is now disabled for the PVC.
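
    To confirm that rotation is suspended, you can query the suspend field on the CR (an optional, illustrative check using standard jsonpath):

    $ oc get encryptionkeyrotationcronjob <encryptionkeyrotationcronjob_name> -o jsonpath='{.spec.suspend}'

    The command prints true when key rotation is disabled.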

2.4. Creating OpenShift Data Foundation cluster

Create an OpenShift Data Foundation cluster after you install the OpenShift Data Foundation operator.

Prerequisites

Procedure

  1. In the OpenShift Web Console, click Storage → Data Foundation → Storage Systems → Create StorageSystem.
  2. In the Backing storage page, select the following:

    1. Select Full Deployment for the Deployment type option.
    2. Select the Use an existing StorageClass option.
    3. Select the Storage Class.

      By default, it is set to managed-csi.

    4. Optional: Select the Use external PostgreSQL checkbox to use an external PostgreSQL [Technology preview].

      This provides a high availability solution for Multicloud Object Gateway, where the default PostgreSQL pod is otherwise a single point of failure.

      Important

      OpenShift Data Foundation ships PostgreSQL images maintained by Red Hat, which are used to store metadata for the Multicloud Object Gateway. This PostgreSQL usage is at the application level.

      As a result, OpenShift Data Foundation does not perform database-level optimizations or in-depth insights.

      If customers have their own PostgreSQL that is well-maintained and optimized, we recommend using it. OpenShift Data Foundation supports external PostgreSQL instances.

      Any PostgreSQL-related issues requiring code changes or deep technical analysis may need to be addressed upstream. This could result in longer resolution times.

      1. Provide the following connection details:

        • Username
        • Password
        • Server name and Port
        • Database name
      2. Select the Enable TLS/SSL checkbox to enable encryption for the Postgres server.
    5. Click Next.
  3. In the Capacity and nodes page, provide the necessary information:

    1. Select a value for Requested Capacity from the dropdown list. It is set to 2 TiB by default.

      Note

      After you select the initial storage capacity, cluster expansion is performed only using the selected usable capacity (raw storage is three times the usable capacity). For example, the default 2 TiB of usable capacity consumes 6 TiB of raw storage across three replicas.

    2. In the Select Nodes section, select at least three available nodes.
    3. In the Configure performance section, select one of the following performance profiles:

      • Lean

        Use this in a resource-constrained environment with minimum resources that are lower than recommended. This profile minimizes resource consumption by allocating fewer CPUs and less memory.

      • Balanced (default)

        Use this when recommended resources are available. This profile provides a balance between resource consumption and performance for diverse workloads.

      • Performance

        Use this in an environment with sufficient resources to get the best performance. This profile is tailored for high performance by allocating ample memory and CPUs to ensure optimal execution of demanding workloads.

        Note

        You have the option to configure the performance profile even after the deployment using the Configure performance option from the options menu of the StorageSystems tab.

        Important

        Before selecting a resource profile, make sure to check the current availability of resources within the cluster. Opting for a higher resource profile in a cluster with insufficient resources might lead to installation failures.

        For more information about resource requirements, see Resource requirement for performance profiles.

    4. Optional: Select the Taint nodes checkbox to dedicate the selected nodes for OpenShift Data Foundation.

      For cloud platforms with multiple availability zones, ensure that the Nodes are spread across different Locations/availability zones.

      If the nodes selected do not match the OpenShift Data Foundation cluster requirements of an aggregated 30 CPUs and 72 GiB of RAM, a minimal cluster is deployed. For minimum starting node requirements, see the Resource requirements section in the Planning guide.

    5. Optional: Select the Enable automatic capacity scaling for your cluster checkbox.

      When automatic capacity scaling is enabled, additional raw capacity equivalent to the configured deployment size is automatically added to the cluster when used capacity reaches 70%. This ensures your deployment scales seamlessly to meet demand.

      This option is disabled in lean profile mode, LSO deployment, and external mode deployment.

      Important

      This may incur additional costs for the underlying storage.

      1. Set the cluster expansion limit from the dropdown. This is the maximum capacity to which the cluster can expand in the cloud. Automatic scaling is suspended if this limit is exceeded.
    6. Click Next.
  4. Optional: In the Security and network page, configure the following based on your requirements:

    1. To enable encryption, select Enable data encryption for block and file storage.

      1. Select one or both of the encryption levels:

        • Cluster-wide encryption

          Encrypts the entire cluster (block and file).

        • StorageClass encryption

          Creates encrypted persistent volumes (block only) using an encryption-enabled storage class.

      2. Optional: Select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption.

        1. From the Key Management Service Provider drop-down list, select one of the following providers and provide the necessary details:

          • Vault

            1. Select an Authentication Method.

              • Using Token authentication method

                • Enter a unique Connection Name, host Address of the Vault server ('https://<hostname or ip>'), Port number and Token.
                • Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration:

                  • Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation.
                • Optional: Enter TLS Server Name and Vault Enterprise Namespace.
                • Upload the respective PEM encoded certificate file to provide the CA Certificate, Client Certificate and Client Private Key.
                • Click Save.
              • Using Kubernetes authentication method

                • Enter a unique Vault Connection Name, host Address of the Vault server ('https://<hostname or ip>'), Port number and Role name.
                • Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration:

                  • Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation.
                  • Optional: Enter TLS Server Name, Authentication Path, and Vault Enterprise Namespace if applicable.
                  • Upload the respective PEM encoded certificate file to provide the CA Certificate, Client Certificate and Client Private Key.
                • Click Save.

                  Note

                  If you need to enable key rotation for Vault KMS, run the following command from the CLI after the storage cluster is created:

                  $ oc patch storagecluster ocs-storagecluster -n openshift-storage --type=json -p '[{"op": "add", "path":"/spec/encryption/keyRotation/enable", "value": true}]'
          • Thales CipherTrust Manager (using KMIP)

            1. Enter a unique Connection Name for the Key Management service within the project.
            2. In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled. For example:

              • Address: 123.34.3.2
              • Port: 5696
            3. Upload the Client Certificate, CA certificate, and Client Private Key.
            4. If StorageClass encryption is enabled, enter the Unique Identifier, generated above, to be used for encryption and decryption.
            5. The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. For example, kmip_all_<port>.ciphertrustmanager.local.
          • Azure Key Vault

            For information about setting up client authentication and fetching the client credentials in Azure platform, see the Prerequisites section of this procedure.

            1. Enter a unique Connection name for the key management service within the project.
            2. Enter Azure Vault URL.
            3. Enter Client ID.
            4. Enter Tenant ID.
            5. Upload the Certificate file in PEM format. The certificate file must include both a client certificate and a private key.
    2. To enable in-transit encryption, select In-transit encryption.

      1. Select a Network.
      2. Click Next.
  5. In the Review and create page, review the configuration details.

    To modify any configuration settings, click Back.

  6. Click Create StorageSystem.
Note

When your deployment has five or more nodes, racks, or rooms, and five or more failure domains, you can configure the Ceph monitor count based on the number of racks or zones. An alert is displayed in the notification panel or Alert Center of the OpenShift Web Console to indicate the option to increase the number of Ceph monitors. You can use the Configure option in the alert to configure the Ceph monitor count. For more information, see Resolving low Ceph monitor count alert.

Verification steps

  • To verify the final Status of the installed storage cluster:

    1. In the OpenShift Web Console, navigate to Storage → Data Foundation → Storage Systems → ocs-storagecluster.
    2. Verify that the Status of the StorageCluster is Ready and has a green tick mark next to it.
  • To verify that all components for OpenShift Data Foundation are successfully installed, see Verifying your OpenShift Data Foundation deployment.
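  • Alternatively, you can check the cluster state from the command line (an illustrative check; the PHASE column mirrors the console status) and verify that the PHASE column for ocs-storagecluster shows Ready:

    $ oc get storagecluster -n openshift-storage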

Additional resources

To enable Overprovision Control alerts, see Alerts in the Monitoring guide.

The Azure Red Hat OpenShift service enables you to deploy fully managed OpenShift clusters. Red Hat OpenShift Data Foundation can be deployed on Azure Red Hat OpenShift service.

Important

OpenShift Data Foundation on Azure Red Hat OpenShift is not a managed service offering. Red Hat OpenShift Data Foundation subscriptions are required for the installation to be supported by the Red Hat support team. If you need assistance with Red Hat OpenShift Data Foundation on Azure Red Hat OpenShift, open a support case with Red Hat (not Microsoft) and choose Red Hat OpenShift Data Foundation as the product.

To install OpenShift Data Foundation on Azure Red Hat OpenShift, follow these sections:

A Red Hat pull secret enables the cluster to access Red Hat container registries along with additional content.

Prerequisites

  • A Red Hat portal account.
  • OpenShift Data Foundation subscription.

Procedure

To get a Red Hat pull secret for a new deployment of Azure Red Hat OpenShift, follow the steps in the section Get a Red Hat pull secret in the official Microsoft Azure documentation.

Note that while creating the Azure Red Hat OpenShift cluster, you may need larger worker nodes, controlled by --worker-vm-size, or more worker nodes, controlled by --worker-count. The recommended worker-vm-size is Standard_D16s_v3. You can also use dedicated worker nodes; for more information, see How to use dedicated worker nodes for Red Hat OpenShift Data Foundation in the Managing and allocating storage resources guide.

When you create an Azure Red Hat OpenShift cluster without adding a Red Hat pull secret, a pull secret is still created on the cluster automatically. However, this pull secret is not fully populated.

Use this section to update the automatically created pull secret with the additional values from the Red Hat pull secret.

Prerequisites

  • Existing Azure Red Hat OpenShift cluster without a Red Hat pull secret.

Procedure

To prepare a Red Hat pull secret for an existing Azure Red Hat OpenShift cluster, follow the steps in the section Prepare your pull secret in the official Microsoft Azure documentation.

3.3. Adding the pull secret to the cluster

Prerequisites

  • A Red Hat pull secret.

Procedure

  • Run the following command to update your pull secret.

    Note

    Running this command causes the cluster nodes to restart one by one as they are updated.

    $ oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=./pull-secret.json
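
    Because the nodes restart one by one, you may want to watch the machine config pools roll out before continuing (an optional, illustrative check):

    $ oc get machineconfigpool -w

    Wait until the UPDATED column reports True for the worker pool.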

After the secret is set, you can enable the Red Hat Certified Operators.

To modify the configuration files to enable Red Hat operators, follow the steps in the section Modify the configuration files in the official Microsoft Azure documentation.

After you add the pull secret and modify the configuration files, the cluster can take several minutes to get updated.

To check if the cluster has been updated, run the following command to show the Certified Operators and Red Hat Operators sources available:

$ oc get catalogsource -A
NAMESPACE               NAME               DISPLAY             TYPE   PUBLISHER   AGE
openshift-marketplace   redhat-operators   Red Hat Operators   grpc   Red Hat     11s

If you do not see the Red Hat Operators, wait for a few minutes and try again.

To ensure that your pull secret has been updated and is working correctly, open Operator Hub and check for any Red Hat verified Operator. For example, check if the OpenShift Data Foundation Operator is available, and see if you have permissions to install it.
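You can also check from the command line whether the operator is available in the catalog (an illustrative check; the grep pattern simply narrows the output):

$ oc get packagemanifests -n openshift-marketplace | grep odf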

You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub.

Prerequisites

  • Access to an OpenShift Container Platform cluster using an account with cluster-admin and operator installation permissions.
  • You must have at least three worker or infrastructure nodes in the Red Hat OpenShift Container Platform cluster.
  • For additional resource requirements, see the Planning your deployment guide.
Important
  • When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command to specify a blank node selector for the openshift-storage namespace (create the openshift-storage namespace in this case):

    $ oc annotate namespace openshift-storage openshift.io/node-selector=
  • Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see the How to use dedicated worker nodes for Red Hat OpenShift Data Foundation section in the Managing and Allocating Storage Resources guide.

Procedure

  1. Log in to the OpenShift Web Console.
  2. Click Operators → OperatorHub.
  3. Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator.
  4. Click Install.
  5. Set the following options on the Install Operator page:

    1. Update Channel as stable-4.19.
    2. Installation Mode as A specific namespace on the cluster.
    3. Installed Namespace as Operator recommended namespace openshift-storage. If Namespace openshift-storage does not exist, it is created during the operator installation.
    4. Select Approval Strategy as Automatic or Manual.

      If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention.

      If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version.

    5. Ensure that the Enable option is selected for the Console plugin.
    6. Click Install.

Verification steps

  • After the operator is successfully installed, a pop-up with the message Web console update is available appears on the user interface. Click Refresh web console from this pop-up so that the console changes take effect.
  • In the Web Console:

    • Navigate to Installed Operators and verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation.
    • Navigate to Storage and verify if the Data Foundation dashboard is available.

3.6. Creating OpenShift Data Foundation cluster

Create an OpenShift Data Foundation cluster after you install the OpenShift Data Foundation operator.

Prerequisites

Procedure

  1. In the OpenShift Web Console, click Storage → Data Foundation → Storage Systems → Create StorageSystem.
  2. In the Backing storage page, select the following:

    1. Select Full Deployment for the Deployment type option.
    2. Select the Use an existing StorageClass option.
    3. Select the Storage Class.

      By default, it is set to managed-csi.

    4. Optional: Select the Use external PostgreSQL checkbox to use an external PostgreSQL [Technology preview].

      This provides a high availability solution for Multicloud Object Gateway, where the default PostgreSQL pod is otherwise a single point of failure.

      Important

      OpenShift Data Foundation ships PostgreSQL images maintained by Red Hat, which are used to store metadata for the Multicloud Object Gateway. This PostgreSQL usage is at the application level.

      As a result, OpenShift Data Foundation does not perform database-level optimizations or in-depth insights.

      If customers have their own PostgreSQL that is well-maintained and optimized, we recommend using it. OpenShift Data Foundation supports external PostgreSQL instances.

      Any PostgreSQL-related issues requiring code changes or deep technical analysis may need to be addressed upstream. This could result in longer resolution times.

      1. Provide the following connection details:

        • Username
        • Password
        • Server name and Port
        • Database name
      2. Select the Enable TLS/SSL checkbox to enable encryption for the Postgres server.
    5. Click Next.
  3. In the Capacity and nodes page, provide the necessary information:

    1. Select a value for Requested Capacity from the dropdown list. It is set to 2 TiB by default.

      Note

      After you select the initial storage capacity, cluster expansion is performed only using the selected usable capacity (raw storage is three times the usable capacity). For example, the default 2 TiB of usable capacity consumes 6 TiB of raw storage across three replicas.

    2. In the Select Nodes section, select at least three available nodes.
    3. In the Configure performance section, select one of the following performance profiles:

      • Lean

        Use this in a resource-constrained environment with minimum resources that are lower than recommended. This profile minimizes resource consumption by allocating fewer CPUs and less memory.

      • Balanced (default)

        Use this when recommended resources are available. This profile provides a balance between resource consumption and performance for diverse workloads.

      • Performance

        Use this in an environment with sufficient resources to get the best performance. This profile is tailored for high performance by allocating ample memory and CPUs to ensure optimal execution of demanding workloads.

        Note

        You have the option to configure the performance profile even after the deployment using the Configure performance option from the options menu of the StorageSystems tab.

        Important

        Before selecting a resource profile, make sure to check the current availability of resources within the cluster. Opting for a higher resource profile in a cluster with insufficient resources might lead to installation failures.

        For more information about resource requirements, see Resource requirement for performance profiles.

    4. Optional: Select the Taint nodes checkbox to dedicate the selected nodes for OpenShift Data Foundation.

      For cloud platforms with multiple availability zones, ensure that the Nodes are spread across different Locations/availability zones.

      If the nodes selected do not match the OpenShift Data Foundation cluster requirements of an aggregated 30 CPUs and 72 GiB of RAM, a minimal cluster is deployed. For minimum starting node requirements, see the Resource requirements section in the Planning guide.

    5. Optional: Select the Enable automatic capacity scaling for your cluster checkbox.

      When automatic capacity scaling is enabled, additional raw capacity equivalent to the configured deployment size is automatically added to the cluster when used capacity reaches 70%. This ensures your deployment scales seamlessly to meet demand.

      This option is disabled in lean profile mode, LSO deployment, and external mode deployment.

      Important

      This may incur additional costs for the underlying storage.

      1. Set the cluster expansion limit from the dropdown. This is the maximum capacity to which the cluster can expand in the cloud. Automatic scaling is suspended if this limit is exceeded.
    6. Click Next.
  4. Optional: In the Security and network page, configure the following based on your requirements:

    1. To enable encryption, select Enable data encryption for block and file storage.

      1. Select one or both of the encryption levels:

        • Cluster-wide encryption

          Encrypts the entire cluster (block and file).

        • StorageClass encryption

          Creates encrypted persistent volumes (block only) using an encryption-enabled storage class.

      2. Optional: Select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption.

        1. From the Key Management Service Provider drop-down list, select one of the following providers and provide the necessary details:

          • Vault

            1. Select an Authentication Method.

              • Using Token authentication method

                • Enter a unique Connection Name, host Address of the Vault server ('https://<hostname or ip>'), Port number and Token.
                • Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration:

                  • Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation.
                • Optional: Enter TLS Server Name and Vault Enterprise Namespace.
                • Upload the respective PEM encoded certificate file to provide the CA Certificate, Client Certificate and Client Private Key.
                • Click Save.
              • Using Kubernetes authentication method

                • Enter a unique Vault Connection Name, host Address of the Vault server ('https://<hostname or ip>'), Port number and Role name.
                • Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration:

                  • Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation.
                  • Optional: Enter TLS Server Name, Authentication Path, and Vault Enterprise Namespace if applicable.
                  • Upload the respective PEM encoded certificate file to provide the CA Certificate, Client Certificate and Client Private Key.
                • Click Save.

                  Note

                  If you need to enable key rotation for Vault KMS, run the following command from the CLI after the storage cluster is created:

                  $ oc patch storagecluster ocs-storagecluster -n openshift-storage --type=json -p '[{"op": "add", "path":"/spec/encryption/keyRotation/enable", "value": true}]'
          • Thales CipherTrust Manager (using KMIP)

            1. Enter a unique Connection Name for the Key Management service within the project.
            2. In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled. For example:

              • Address: 123.34.3.2
              • Port: 5696
            3. Upload the Client Certificate, CA certificate, and Client Private Key.
            4. If StorageClass encryption is enabled, enter the Unique Identifier, generated above, to be used for encryption and decryption.
            5. The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. For example, kmip_all_<port>.ciphertrustmanager.local.
          • Azure Key Vault

            For information about setting up client authentication and fetching the client credentials in Azure platform, see the Prerequisites section of this procedure.

            1. Enter a unique Connection name for the key management service within the project.
            2. Enter Azure Vault URL.
            3. Enter Client ID.
            4. Enter Tenant ID.
            5. Upload the Certificate file in PEM format. The certificate file must include both a client certificate and a private key.
    2. To enable in-transit encryption, select In-transit encryption.

      1. Select a Network.
      2. Click Next.
  5. In the Review and create page, review the configuration details.

    To modify any configuration settings, click Back.

  6. Click Create StorageSystem.
Note

When your deployment has five or more nodes, racks, or rooms, and five or more failure domains, you can configure the Ceph monitor count based on the number of racks or zones. An alert is displayed in the notification panel or Alert Center of the OpenShift Web Console to indicate the option to increase the number of Ceph monitors. You can use the Configure option in the alert to configure the Ceph monitor count. For more information, see Resolving low Ceph monitor count alert.

Verification steps

  • To verify the final Status of the installed storage cluster:

    1. In the OpenShift Web Console, navigate to Storage → Data Foundation → Storage Systems → ocs-storagecluster.
    2. Verify that the Status of the StorageCluster is Ready and has a green tick mark next to it.
  • To verify that all components for OpenShift Data Foundation are successfully installed, see Verifying your OpenShift Data Foundation deployment.

Additional resources

To enable Overprovision Control alerts, see Alerts in the Monitoring guide.

Use this section to verify that OpenShift Data Foundation is deployed correctly.

4.1. Verifying the state of the pods

Procedure

  1. Click Workloads → Pods from the OpenShift Web Console.
  2. Select openshift-storage from the Project drop-down list.

    Note

    If the Show default projects option is disabled, use the toggle button to list all the default projects.

    For more information on the expected number of pods for each component and how it varies depending on the number of nodes, see the following table:

  3. Set the filter for Running and Completed pods to verify that the following pods are in Running and Completed state (an example command-line check follows the table below):

    Note

    The available pods depend on the cluster configuration. When the cluster is deployed as a standalone Multicloud Object Gateway, the rook-ceph-operator-* pods are not available. Similarly, when the cluster is deployed without the Multicloud Object Gateway, noobaa-* pods are not available.


Component

Corresponding pods

OpenShift Data Foundation Operator

  • ocs-operator-* (1 pod on any storage node)
  • ocs-metrics-exporter-* (1 pod on any storage node)
  • odf-operator-controller-manager-* (1 pod on any storage node)
  • odf-console-* (1 pod on any storage node)
  • csi-addons-controller-manager-* (1 pod on any storage node)
  • ux-backend-server-* (1 pod on any storage node)
  • ocs-client-operator-* (1 pod on any storage node)
  • ocs-client-operator-console-* (1 pod on any storage node)
  • ocs-provider-server-* (1 pod on any storage node)

Rook-ceph Operator

rook-ceph-operator-*

(1 pod on any storage node)

Multicloud Object Gateway

  • noobaa-operator-* (1 pod on any storage node)
  • noobaa-core-* (1 pod on any storage node)
  • noobaa-db-pg-cluster-1 and noobaa-db-pg-cluster-2 (2 instances of MCG DB pod on any storage node)
  • noobaa-endpoint-* (1 pod on any storage node)
  • cnpg-controller-manager-* (1 pod on any storage node)

MON

rook-ceph-mon-*

(3 pods distributed across storage nodes)

MGR

rook-ceph-mgr-*

(2 pods on different storage nodes, one active, one standby)

MDS

rook-ceph-mds-ocs-storagecluster-cephfilesystem-*

(2 pods distributed across storage nodes)

CSI

  • cephfs

    • openshift-storage.cephfs.csi.ceph.com-ctrlplugin-* (2 pods distributed across storage nodes)
    • openshift-storage.cephfs.csi.ceph.com-nodeplugin-* (1 pod on any storage node)
  • nfs

    • openshift-storage.nfs.csi.ceph.com-ctrlplugin-* (2 pods distributed across storage nodes)
    • openshift-storage.nfs.csi.ceph.com-nodeplugin-* (1 pod on any storage node)
  • rbd

    • openshift-storage.rbd.csi.ceph.com-ctrlplugin-* (2 pods distributed across storage nodes)
    • openshift-storage.rbd.csi.ceph.com-nodeplugin-* (1 pod on any storage node)

rook-ceph-crashcollector

rook-ceph-crashcollector-*

(1 pod on each storage node)

OSD

  • rook-ceph-osd-* (1 pod for each device)
  • rook-ceph-osd-prepare-ocs-* (1 pod for each device)

ceph-csi-operator

ceph-csi-controller-manager-* (1 pod on any storage node)
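
As a quick command-line alternative to the console filter, you can list the pods and their states directly (an illustrative check):

$ oc get pods -n openshift-storage

Verify that the STATUS column shows Running or Completed for the pods listed in the table above.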

Procedure

  1. In the OpenShift Web Console, click Storage → Data Foundation.
  2. In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears.
  3. In the Status card of the Block and File tab, verify that the Storage Cluster has a green tick.
  4. In the Details card, verify that the cluster information is displayed.

For more information on the health of the OpenShift Data Foundation cluster using the Block and File dashboard, see Monitoring OpenShift Data Foundation.

Procedure

  1. In the OpenShift Web Console, click Storage → Data Foundation.
  2. In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears.

    1. In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick.
    2. In the Details card, verify that the MCG information is displayed.

For more information on the health of the OpenShift Data Foundation cluster using the object service dashboard, see Monitoring OpenShift Data Foundation.

Important

To avoid data loss, it is recommended to take a backup of NooBaa DB PVC regularly. If NooBaa DB fails and cannot be recovered, then you can revert to the latest backed-up version. For instructions on backing up your NooBaa DB, follow the steps in the knowledgebase article, Perform a One-Time Backup of the Database for the Multicloud Object Gateway.

Procedure

  1. Click Storage → Storage Classes from the left pane of the OpenShift Web Console.
  2. Verify that the following storage classes are created with the OpenShift Data Foundation cluster creation:

    • ocs-storagecluster-ceph-rbd
    • ocs-storagecluster-cephfs
    • openshift-storage.noobaa.io
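
You can verify the same from the command line (an illustrative check; the grep pattern simply narrows the list):

$ oc get storageclass | grep -E 'ocs-storagecluster|openshift-storage'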

Deploying only the Multicloud Object Gateway component with OpenShift Data Foundation provides flexibility in deployment and helps to reduce resource consumption. After deploying the MCG component, you can create and manage buckets using the MCG object browser. For more information, see Creating and managing buckets using MCG object browser.

Use this section to deploy only the standalone Multicloud Object Gateway component, which involves the following steps:

  • Installing Red Hat OpenShift Data Foundation Operator
  • Creating standalone Multicloud Object Gateway
Important

To avoid data loss, it is recommended to take a backup of NooBaa DB PVC regularly. If NooBaa DB fails and cannot be recovered, then you can revert to the latest backed-up version. For instructions on backing up NooBaa DB, follow the steps in the knowledgebase article, Perform a One-Time Backup of the Database for the Multicloud Object Gateway.

You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub.

Prerequisites

  • Access to an OpenShift Container Platform cluster using an account with cluster-admin and operator installation permissions.
  • You must have at least three worker or infrastructure nodes in the Red Hat OpenShift Container Platform cluster.
  • For additional resource requirements, see the Planning your deployment guide.
Important
  • When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command to specify a blank node selector for the openshift-storage namespace (create the openshift-storage namespace in this case):

    $ oc annotate namespace openshift-storage openshift.io/node-selector=
  • Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see the How to use dedicated worker nodes for Red Hat OpenShift Data Foundation section in the Managing and Allocating Storage Resources guide.

Procedure

  1. Log in to the OpenShift Web Console.
  2. Click Operators → OperatorHub.
  3. Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator.
  4. Click Install.
  5. Set the following options on the Install Operator page:

    1. Update Channel as stable-4.19.
    2. Installation Mode as A specific namespace on the cluster.
    3. Installed Namespace as Operator recommended namespace openshift-storage. If Namespace openshift-storage does not exist, it is created during the operator installation.
    4. Select Approval Strategy as Automatic or Manual.

      If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention.

      If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version.

    5. Ensure that the Enable option is selected for the Console plugin.
    6. Click Install.

Verification steps

  • After the operator is successfully installed, a pop-up with the message Web console update is available appears on the user interface. Click Refresh web console from this pop-up so that the console changes take effect.
  • In the Web Console:

    • Navigate to Installed Operators and verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation.
    • Navigate to Storage and verify if the Data Foundation dashboard is available.

You can create only the standalone Multicloud Object Gateway (MCG) component while deploying OpenShift Data Foundation. After you create the MCG component, you can create and manage buckets using the MCG object browser. For more information, see Creating and managing buckets using MCG object browser.

Prerequisites

  • Ensure that the OpenShift Data Foundation Operator is installed.

Procedure

  1. In the OpenShift Web Console, click Storage → Data Foundation → Storage Systems → Create StorageSystem.
  2. In the Backing storage page, select the following:

    1. Select Multicloud Object Gateway for Deployment type.
    2. Select the Use an existing StorageClass option.
    3. Click Next.
  3. Optional: Select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption.

    1. From the Key Management Service Provider drop-down list, either select Vault or Thales CipherTrust Manager (using KMIP). If you selected Vault, go to the next step. If you selected Thales CipherTrust Manager (using KMIP), go to step iii.
    2. Select an Authentication Method.

      Using Token authentication method
      • Enter a unique Connection Name, host Address of the Vault server ('https://<hostname or ip>'), Port number and Token.
      • Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration:

        • Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation.
        • Optional: Enter TLS Server Name and Vault Enterprise Namespace.
        • Upload the respective PEM encoded certificate file to provide the CA Certificate, Client Certificate and Client Private Key.
        • Click Save and skip to step iv.
      Using Kubernetes authentication method
      • Enter a unique Vault Connection Name, host Address of the Vault server ('https://<hostname or ip>'), Port number and Role name.
      • Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration:

        • Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation.
        • Optional: Enter TLS Server Name, Authentication Path, and Vault Enterprise Namespace if applicable.
        • Upload the respective PEM encoded certificate file to provide the CA Certificate, Client Certificate and Client Private Key.
        • Click Save and skip to step iv.
    3. To use Thales CipherTrust Manager (using KMIP) as the KMS provider, follow the steps below:

      1. Enter a unique Connection Name for the Key Management service within the project.
      2. In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled. For example:

        • Address: 123.34.3.2
        • Port: 5696
      3. Upload the Client Certificate, CA certificate, and Client Private Key.
      4. If StorageClass encryption is enabled, enter the Unique Identifier, generated above, to be used for encryption and decryption.
      5. The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. For example, kmip_all_<port>.ciphertrustmanager.local.
    4. Select a Network.
    5. Click Next.
  4. In the Review and create page, review the configuration details:

    To modify any configuration settings, click Back.

  5. Click Create StorageSystem.

Verification steps

Verifying that the OpenShift Data Foundation cluster is healthy
  1. In the OpenShift Web Console, click Storage → Data Foundation.
  2. In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears.

    1. In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick.
    2. In the Details card, verify that the MCG information is displayed.
Verifying the state of the pods
  1. Click Workloads → Pods from the OpenShift Web Console.
  2. Select openshift-storage from the Project drop-down list and verify that the following pods are in Running state.

    Note

    If the Show default projects option is disabled, use the toggle button to list all the default projects.

    Component

    Corresponding pods

    OpenShift Data Foundation Operator

    • ocs-operator-* (1 pod on any storage node)
    • ocs-metrics-exporter-* (1 pod on any storage node)
    • odf-operator-controller-manager-* (1 pod on any storage node)
    • odf-console-* (1 pod on any storage node)
    • csi-addons-controller-manager-* (1 pod on any storage node)

    Rook-ceph Operator

    rook-ceph-operator-*

    (1 pod on any storage node)

    Multicloud Object Gateway

    • noobaa-operator-* (1 pod on any storage node)
    • noobaa-core-* (1 pod on any storage node)
    • noobaa-db-pg-cluster-1 and noobaa-db-pg-cluster-2 (2 instances of MCG DB pod on any storage node)
    • noobaa-endpoint-* (1 pod on any storage node)
    • cnpg-controller-manager-* (1 pod on any storage node)

The topology view shows a mapped visualization of the OpenShift Data Foundation storage cluster at various abstraction levels and lets you interact with these layers. The view also shows how the various elements together compose the storage cluster.

Procedure

  1. On the OpenShift Web Console, navigate to Storage → Data Foundation → Topology.

    The view shows the storage cluster and the zones inside it. You can see the nodes depicted by circular entities within the zones, which are indicated by dotted lines. The label of each item or resource contains basic information such as status and health or indication for alerts.

  2. Choose a node to view node details on the right-hand panel. You can also access resources or deployments within a node by clicking on the search/preview decorator icon.
  3. To view deployment details:

    1. Click the preview decorator on a node. A modal window appears above the node that displays all of the deployments associated with that node along with their statuses.
    2. Click the Back to main view button in the modal’s upper left corner to close and return to the previous view.
    3. Select a specific deployment to see more information about it. All relevant data is shown in the side panel.
  4. Click the Resources tab to view the pods information. This tab provides a deeper understanding of the problems and offers granularity that aids in better troubleshooting.
  5. Click the pod links to view the pod information page on OpenShift Container Platform. The link opens in a new window.

Chapter 7. Uninstalling OpenShift Data Foundation

To uninstall OpenShift Data Foundation in Internal mode, refer to the knowledgebase article on Uninstalling OpenShift Data Foundation.
