Chapter 1. Preparing to deploy OpenShift Data Foundation


Deploying OpenShift Data Foundation on OpenShift Container Platform using dynamic or local storage devices provides you with the option to create internal cluster resources. This results in the internal provisioning of the base services, which makes additional storage classes available to applications.

Before you begin the deployment of Red Hat OpenShift Data Foundation using dynamic or local storage, ensure that your resource requirements are met. See Planning your deployment.

  1. For Red Hat Enterprise Linux based hosts for worker nodes in a user provisioned infrastructure (UPI), enable container access to the underlying file system. Follow the instructions in Enabling file system access for containers on Red Hat Enterprise Linux based nodes.

    Note

    Skip this step for Red Hat Enterprise Linux CoreOS (RHCOS).

  2. Optional: If you want to enable cluster-wide encryption using an external Key Management System (KMS), ensure that a policy with a token exists and that the Key/Value (KV) backend path in Vault is enabled. See Enabling key value and policy in Vault.

  3. Minimum starting node requirements [Technology Preview]

    An OpenShift Data Foundation cluster is deployed with a minimum configuration when the standard deployment resource requirements are not met. See the Resource requirements section in the Planning guide.

  4. Regional-DR requirements [Developer Preview]

    Disaster Recovery features supported by Red Hat OpenShift Data Foundation require a valid Red Hat OpenShift Data Foundation Advanced entitlement and a valid Red Hat Advanced Cluster Management for Kubernetes subscription in order to successfully implement a Disaster Recovery solution.

    For detailed requirements, see Regional-DR requirements and RHACM requirements.

  5. For deploying using local storage devices, see requirements for installing OpenShift Data Foundation using local storage devices. These are not applicable for deployment using dynamic storage devices.

Enabling file system access for containers on Red Hat Enterprise Linux based nodes

Deploying OpenShift Data Foundation on OpenShift Container Platform with Red Hat Enterprise Linux based worker nodes in a user provisioned infrastructure (UPI) does not automatically provide container access to the underlying Ceph file system.

Note

Skip this step for hosts based on Red Hat Enterprise Linux CoreOS (RHCOS).

Procedure

  1. Log in to the Red Hat Enterprise Linux based node and open a terminal.
  2. For each node in your cluster:

    1. Verify that the node has access to the rhel-7-server-rpms and rhel-7-server-extras-rpms repositories.

      # subscription-manager repos --list-enabled | grep rhel-7-server

      If you do not see both rhel-7-server-rpms and rhel-7-server-extras-rpms in the output, or if there is no output, run the following commands to enable each repository:

      # subscription-manager repos --enable=rhel-7-server-rpms
      # subscription-manager repos --enable=rhel-7-server-extras-rpms
    2. Install the required packages.

      # yum install -y policycoreutils container-selinux
    3. Persistently enable container use of the Ceph file system in SELinux.

      # setsebool -P container_use_cephfs on
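
Optionally, you can verify on each node that the SELinux boolean is set and that the required packages are installed. The following commands are a verification sketch; getsebool should report container_use_cephfs as on, and rpm should report both packages as installed:

  # getsebool container_use_cephfs
  # rpm -q policycoreutils container-selinux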

Enabling key value and policy in Vault

Prerequisites

  • Administrator access to Vault.
  • Carefully choose a unique path name as the backend path that follows the naming convention, since it cannot be changed later.

Procedure

  1. Enable the Key/Value (KV) backend path in Vault.

    For Vault KV secret engine API, version 1:

    $ vault secrets enable -path=odf kv

    For Vault KV secret engine API, version 2:

    $ vault secrets enable -path=odf kv-v2
  2. Create a policy to restrict users to performing write or delete operations on the secret, using the following commands.

    echo '
    path "odf/*" {
      capabilities = ["create", "read", "update", "delete", "list"]
    }
    path "sys/mounts" {
    capabilities = ["read"]
    }'| vault policy write odf -
  3. Create a token matching the above policy.

    $ vault token create -policy=odf -format json
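
After completing these steps, you can optionally confirm the configuration. The following commands are a sketch: the first checks that the backend path is enabled, the second displays the policy, and the third extracts the client token from the JSON output of the token creation (this assumes the jq utility is available on the workstation):

    $ vault secrets list -detailed | grep odf
    $ vault policy read odf
    $ vault token create -policy=odf -format json | jq -r '.auth.client_token'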

Node requirements

The cluster must consist of at least three OpenShift Container Platform worker nodes with locally attached storage devices on each of them.

  • Each of the three selected nodes must have at least one raw block device available to be used by OpenShift Data Foundation.
  • The devices you use must be empty; the disks must not include physical volumes (PVs), volume groups (VGs), or logical volumes (LVs) remaining on the disk.

For more information, see the Resource requirements section in the Planning guide.
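
To confirm that a candidate device is empty, you can inspect it on the node before selecting it. The following commands are a sketch, with /dev/sdb as a placeholder device name: lsblk shows the device layout, wipefs lists any existing file system or partition-table signatures (no output means none were found), and pvs lists LVM physical volumes, in which the device should not appear:

  # lsblk /dev/sdb
  # wipefs /dev/sdb
  # pvs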

Regional-DR requirements [Developer Preview]

Disaster Recovery features supported by Red Hat OpenShift Data Foundation require all of the following prerequisites in order to successfully implement a Disaster Recovery solution:

  • A valid Red Hat OpenShift Data Foundation Advanced entitlement
  • A valid Red Hat Advanced Cluster Management for Kubernetes subscription

To know how subscriptions for OpenShift Data Foundation work, see the knowledgebase article on OpenShift Data Foundation subscriptions.

For detailed requirements, see Regional-DR requirements and RHACM requirements.

Arbiter stretch cluster requirements [Technology Preview]

In this case, a single cluster is stretched across two zones with a third zone as the location for the arbiter. This is a Technology Preview feature that is currently intended for deployment on on-premises OpenShift Container Platform clusters.

For detailed requirements and instructions, see Configuring OpenShift Data Foundation for Metro-DR stretch cluster.

Note

Flexible scaling and Arbiter cannot both be enabled at the same time because they have conflicting scaling logic. With Flexible scaling, you can add one node at a time to your OpenShift Data Foundation cluster, whereas in an Arbiter cluster you need to add at least one node in each of the two data zones.

Minimum starting node requirements [Technology Preview]

An OpenShift Data Foundation cluster is deployed with a minimum configuration when the standard deployment resource requirements are not met.

For more information, see the Resource requirements section in the Planning guide.
