Chapter 7. Infrastructure requirements
7.1. Platform requirements
Red Hat OpenShift Data Foundation 4.17 is supported only on OpenShift Container Platform version 4.17 and its next minor versions.
Bug fixes for previous versions of Red Hat OpenShift Data Foundation are released as bug fix versions. For more details, see the Red Hat OpenShift Container Platform Life Cycle Policy.
For external cluster subscription requirements, see the Red Hat Knowledgebase article OpenShift Data Foundation Subscription Guide.
For a complete list of supported platform versions, see the Red Hat OpenShift Data Foundation Supportability and Interoperability Checker.
7.1.1. Amazon EC2
Supports internal Red Hat OpenShift Data Foundation clusters only.
An internal cluster must meet both the storage device requirements and have a storage class that provides EBS storage via the aws-ebs provisioner.
OpenShift Data Foundation supports the gp2-csi and gp3-csi drivers introduced by Amazon Web Services (AWS). These drivers offer better storage expansion capabilities, and gp3-csi offers a reduced monthly price point. You can select either driver when choosing your storage class. If high throughput is required, gp3-csi is recommended when deploying OpenShift Data Foundation.
If you need high input/output operations per second (IOPS), the recommended EC2 instance types are D2 or D3.
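As an illustration, the following is a minimal sketch of a gp3-backed storage class, assuming the AWS EBS CSI driver is installed; the class name is hypothetical.

```yaml
# Minimal StorageClass sketch for gp3-backed volumes on AWS.
# The name is a placeholder; the provisioner and "type" parameter
# follow AWS EBS CSI driver conventions.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp3-csi-example        # hypothetical name
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
```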
7.1.2. Bare Metal
Supports internal clusters and consuming external clusters.
An internal cluster must meet both the storage device requirements and have a storage class that provides local SSDs (NVMe/SATA/SAS, SAN) via the Local Storage Operator.
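For illustration, the following is a minimal LocalVolume sketch for exposing local disks through the Local Storage Operator; the storage class name and device path are placeholders for your environment.

```yaml
# Minimal LocalVolume sketch for the Local Storage Operator.
# The storage class name and device path are environment-specific
# placeholders; only labeled storage nodes are selected.
apiVersion: local.storage.openshift.io/v1
kind: LocalVolume
metadata:
  name: local-block
  namespace: openshift-local-storage
spec:
  nodeSelector:
    nodeSelectorTerms:
      - matchExpressions:
          - key: cluster.ocs.openshift.io/openshift-storage
            operator: In
            values:
              - ""
  storageClassDevices:
    - storageClassName: localblock          # hypothetical name
      volumeMode: Block
      devicePaths:
        - /dev/disk/by-id/nvme-example      # placeholder device path
```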
7.1.3. VMware vSphere
Supports internal clusters and consuming external clusters.
Recommended versions:
- vSphere 7.0 or later
- vSphere 8.0 or later
For more details, see the VMware vSphere infrastructure requirements.
If VMware ESXi does not recognize its devices as flash, mark them as flash devices before deploying Red Hat OpenShift Data Foundation. For details, refer to Mark Storage Devices as Flash.
Additionally, an internal cluster must meet both the storage device requirements and have a storage class providing either:
- vSAN or VMFS datastore via the vsphere-volume provisioner
- VMDK, RDM, or DirectPath storage devices via the Local Storage Operator
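As a sketch of the first option, a storage class using the in-tree vsphere-volume provisioner might look like the following; the class name and datastore value are hypothetical.

```yaml
# Minimal StorageClass sketch using the in-tree vsphere-volume
# provisioner. The name and datastore value are placeholders.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: vsphere-example             # hypothetical name
provisioner: kubernetes.io/vsphere-volume
parameters:
  datastore: my-vsan-datastore      # placeholder datastore name
  diskformat: thin
```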
7.1.4. Microsoft Azure
Supports internal Red Hat OpenShift Data Foundation clusters only.
An internal cluster must meet both the storage device requirements and have a storage class that provides an Azure disk via the azure-disk provisioner.
7.1.5. Google Cloud
Supports internal Red Hat OpenShift Data Foundation clusters only.
An internal cluster must meet both the storage device requirements and have a storage class that provides a GCE Persistent Disk via the gce-pd provisioner.
7.1.6. Red Hat OpenStack Platform [Technology Preview]
Supports internal Red Hat OpenShift Data Foundation clusters and consuming external clusters.
An internal cluster must meet both the storage device requirements and have a storage class that provides a standard disk via the Cinder provisioner.
7.1.7. IBM Power
Supports internal Red Hat OpenShift Data Foundation clusters and consuming external clusters.
An internal cluster must meet both the storage device requirements and have a storage class providing local SSDs (NVMe/SATA/SAS, SAN) via the Local Storage Operator.
7.1.8. IBM Z and IBM® LinuxONE
Supports internal Red Hat OpenShift Data Foundation clusters. It also supports external mode where Red Hat Ceph Storage is running on x86.
An internal cluster must meet both the storage device requirements and have a storage class providing local SSDs (NVMe/SATA/SAS, SAN) via the Local Storage Operator.
7.1.9. ROSA with hosted control planes (HCP)
Supports internal Red Hat OpenShift Data Foundation clusters only.
An internal cluster must meet both the storage device requirements and have a storage class that provides AWS EBS volumes via the gp3-csi provisioner.
7.1.10. Any platform
Supports internal clusters and consuming external clusters.
An internal cluster must meet both the storage device requirements and have a storage class that provides local SSDs (NVMe/SATA/SAS, SAN) via the Local Storage Operator.
7.2. External mode requirement
7.2.1. Red Hat Ceph Storage
To check the supportability and interoperability of Red Hat Ceph Storage (RHCS) with Red Hat OpenShift Data Foundation in external mode, go to the Red Hat OpenShift Data Foundation Supportability and Interoperability Checker lab:
- Select Service Type as ODF as Self-Managed Service.
- Select the appropriate Version from the drop-down.
- On the Versions tab, click the Supported RHCS Compatibility tab.
For instructions regarding how to install an RHCS cluster, see the installation guide.
7.2.2. IBM FlashSystem
To use IBM FlashSystem as pluggable external storage on other providers, deploy it before you deploy OpenShift Data Foundation, which then uses the IBM FlashSystem storage class as its backing storage.
For the latest supported FlashSystem storage systems and versions, see IBM ODF FlashSystem driver documentation.
For instructions on how to deploy OpenShift Data Foundation, see Creating an OpenShift Data Foundation Cluster for external IBM FlashSystem storage.
7.3. Resource requirements
Red Hat OpenShift Data Foundation services consist of an initial set of base services, and can be extended with additional device sets. All of these Red Hat OpenShift Data Foundation services pods are scheduled by Kubernetes on OpenShift Container Platform nodes. Expanding the cluster in multiples of three, one node in each failure domain, is an easy way to satisfy the pod placement rules.
These requirements relate to OpenShift Data Foundation services only, and not to any other services, operators or workloads that are running on these nodes.
Deployment Mode | Base services | Additional device Set |
---|---|---|
Internal | | |
External | | Not applicable |
Example: For a 3 node cluster in an internal mode deployment with a single device set, a minimum of 3 x 10 = 30 units of CPU are required.
For more information, see Chapter 6, Subscriptions and CPU units.
For additional guidance with designing your Red Hat OpenShift Data Foundation cluster, see the ODF Sizing Tool.
CPU units
In this section, 1 CPU Unit maps to the Kubernetes concept of 1 CPU unit.
- 1 unit of CPU is equivalent to 1 core for non-hyperthreaded CPUs.
- 2 units of CPU are equivalent to 1 core for hyperthreaded CPUs.
- Red Hat OpenShift Data Foundation core-based subscriptions always come in pairs (2 cores).
Deployment Mode | Base services |
---|---|
Internal | |
External | |
Example: For a 3 node cluster in an internal-attached devices mode deployment, a minimum of 3 x 16 = 48 units of CPU and 3 x 64 = 192 GB of memory are required.
7.3.1. Resource requirements for IBM Z and IBM LinuxONE infrastructure
Red Hat OpenShift Data Foundation services consist of an initial set of base services, and can be extended with additional device sets.
All of these Red Hat OpenShift Data Foundation services pods are scheduled by Kubernetes on OpenShift Container Platform nodes. Expanding the cluster in multiples of three, one node in each failure domain, is an easy way to satisfy the pod placement rules.
Deployment Mode | Base services | Additional device Set | IBM Z and IBM® LinuxONE minimum hardware requirements |
---|---|---|---|
Internal | | | 1 IFL |
External | | Not applicable | Not applicable |
- CPU: the number of virtual cores defined in the hypervisor, IBM Z/VM, Kernel Virtual Machine (KVM), or both.
- IFL (Integrated Facility for Linux): the physical core for IBM Z and IBM® LinuxONE.
Minimum system environment
- In order to operate a minimal cluster with 1 logical partition (LPAR), one additional IFL is required on top of the 6 IFLs. OpenShift Container Platform consumes these IFLs.
7.3.2. Minimum deployment resource requirements
An OpenShift Data Foundation cluster is deployed with a minimum configuration when the standard deployment resource requirements are not met.
These requirements relate to OpenShift Data Foundation services only, and not to any other services, operators or workloads that are running on these nodes.
Deployment Mode | Base services |
---|---|
Internal | |
If you want to add additional device sets, we recommend converting your minimum deployment to a standard deployment.
7.3.3. Compact deployment resource requirements
Red Hat OpenShift Data Foundation can be installed on a three-node OpenShift compact bare metal cluster, where all the workloads run on three strong master nodes. There are no worker or storage nodes.
These requirements relate to OpenShift Data Foundation services only, and not to any other services, operators or workloads that are running on these nodes.
Deployment Mode | Base services | Additional device Set |
---|---|---|
Internal | | |
To configure OpenShift Container Platform on a compact bare metal cluster, see Configuring a three-node cluster and Delivering a Three-node Architecture for Edge Deployments.
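For reference, making the control plane (master) nodes schedulable is the key step for a compact cluster; the following is a minimal sketch of the cluster Scheduler resource.

```yaml
# Sketch of the cluster Scheduler resource with control plane
# (master) nodes made schedulable, as a compact three-node
# cluster requires.
apiVersion: config.openshift.io/v1
kind: Scheduler
metadata:
  name: cluster
spec:
  mastersSchedulable: true
```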
7.3.4. Resource requirements for MCG only deployment
An OpenShift Data Foundation cluster deployed only with the Multicloud Object Gateway (MCG) component provides flexibility in deployment and helps to reduce resource consumption.
Deployment Mode | Core | Database (DB) | Endpoint |
---|---|---|---|
Internal | | | Note: The default autoscale range is between 1 and 2. |
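As an illustration, a standalone MCG deployment is typically expressed on the StorageCluster resource; the following minimal sketch assumes the standalone reconcile strategy used for Multicloud Object Gateway-only deployments.

```yaml
# Minimal StorageCluster sketch for an MCG-only deployment.
# The "standalone" reconcile strategy is an assumption based on
# Multicloud Object Gateway standalone configurations.
apiVersion: ocs.openshift.io/v1
kind: StorageCluster
metadata:
  name: ocs-storagecluster
  namespace: openshift-storage
spec:
  multiCloudGateway:
    reconcileStrategy: standalone
```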
7.3.5. Resource requirements for using Network File system
You can create exports using Network File System (NFS) that can then be accessed externally from the OpenShift cluster. If you plan to use this feature, the NFS service consumes 3 CPUs and 8 GiB of RAM. NFS is optional and is disabled by default.
The NFS volume can be accessed in two ways:
- In-cluster: by an application pod inside the OpenShift cluster.
- Out of cluster: from outside the OpenShift cluster.
For more information about the NFS feature, see Creating exports using NFS.
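For illustration, enabling the NFS service is typically done on the StorageCluster resource; the exact field layout below is an assumption.

```yaml
# Sketch of enabling the optional NFS service on the
# StorageCluster; the spec.nfs.enable field layout is an
# assumption based on common ODF configurations.
apiVersion: ocs.openshift.io/v1
kind: StorageCluster
metadata:
  name: ocs-storagecluster
  namespace: openshift-storage
spec:
  nfs:
    enable: true
```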
7.3.6. Resource requirements for performance profiles
OpenShift Data Foundation provides three performance profiles to enhance the performance of the clusters. You can choose one of these profiles based on your available resources and desired performance level during deployment or post deployment.
Performance profile | CPU | Memory |
---|---|---|
Lean | 24 | 72 GiB |
Balanced | 30 | 72 GiB |
Performance | 45 | 96 GiB |
Make sure to select a profile based on the available free resources, as you might already be running other workloads.
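As a sketch, the performance profile is typically selected through a resourceProfile field on the StorageCluster resource; the field name and values below are assumptions based on common configurations.

```yaml
# Sketch of selecting a performance profile on the StorageCluster.
# The resourceProfile field and its values (lean, balanced,
# performance) are assumptions.
apiVersion: ocs.openshift.io/v1
kind: StorageCluster
metadata:
  name: ocs-storagecluster
  namespace: openshift-storage
spec:
  resourceProfile: balanced
```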
7.4. Pod placement rules
Kubernetes is responsible for pod placement based on declarative placement rules. The Red Hat OpenShift Data Foundation base service placement rules for an internal cluster can be summarized as follows:
- Nodes are labeled with the cluster.ocs.openshift.io/openshift-storage key (see the example at the end of this section)
- Nodes are sorted into pseudo failure domains if none exist
- Components requiring high availability are spread across failure domains
- A storage device must be accessible in each failure domain
This leads to the requirement that there be at least three nodes, and that nodes be in three distinct rack or zone failure domains in the case of pre-existing topology labels.
For additional device sets, there must be a storage device, and sufficient resources for the pod consuming it, in each of the three failure domains. Manual placement rules can be used to override default placement rules, but generally this approach is only suitable for bare metal deployments.
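For illustration, the storage label from the first rule can be applied with oc label node <node-name> cluster.ocs.openshift.io/openshift-storage='' and then appears on the Node object as sketched below; the node name and zone label are hypothetical.

```yaml
# Excerpt of a labeled Node object (node name and zone are
# placeholders). The empty-valued storage label marks the node
# for OpenShift Data Foundation pod scheduling.
apiVersion: v1
kind: Node
metadata:
  name: worker-0                                  # hypothetical node name
  labels:
    cluster.ocs.openshift.io/openshift-storage: ""
    topology.kubernetes.io/zone: zone-a           # pre-existing topology label, if any
```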
7.5. Storage device requirements
Use this section to understand the different storage capacity requirements that you can consider when planning internal mode deployments and upgrades. We generally recommend 12 devices or fewer per node. This recommendation ensures that nodes stay below cloud provider dynamic storage device attachment limits, and limits the recovery time after node failures with local storage devices. Expanding the cluster in multiples of three, one node in each failure domain, is an easy way to satisfy pod placement rules.
Storage nodes should have at least two disks, one for the operating system and the remaining disks for OpenShift Data Foundation components.
You can expand the storage capacity only in increments of the capacity selected at the time of installation.
7.5.1. Dynamic storage devices
Red Hat OpenShift Data Foundation permits the selection of 0.5 TiB, 2 TiB, or 4 TiB capacities as the request size for dynamic storage devices. The number of dynamic storage devices that can run per node is a function of the node size, underlying provisioner limits, and resource requirements.
7.5.2. Local storage devices
For local storage deployment, any disk size of 16 TiB or less can be used, and all disks should be of the same size and type. The number of local storage devices that can run per node is a function of the node size and resource requirements. Expanding the cluster in multiples of three, one node in each failure domain, is an easy way to satisfy pod placement rules.
Disk partitioning is not supported.
7.5.3. Capacity planning
Always ensure that available storage capacity stays ahead of consumption. Recovery is difficult if available storage capacity is completely exhausted, and requires more intervention than simply adding capacity or deleting or migrating content.
Capacity alerts are issued when cluster storage capacity reaches 75% (near-full) and 85% (full) of total capacity. Always address capacity warnings promptly, and review your storage regularly to ensure that you do not run out of storage space. When you get to 75% (near-full), either free up space or expand the cluster. When you get the 85% (full) alert, it indicates that you have run out of storage space completely and cannot free up space using standard commands. At this point, contact Red Hat Customer Support.
The following tables show example node configurations for Red Hat OpenShift Data Foundation with dynamic storage devices. The first table assumes a three-node cluster; the second assumes an expanded cluster of 30 nodes (N = 30).
Storage Device size | Storage Devices per node | Total capacity | Usable storage capacity |
---|---|---|---|
0.5 TiB | 1 | 1.5 TiB | 0.5 TiB |
2 TiB | 1 | 6 TiB | 2 TiB |
4 TiB | 1 | 12 TiB | 4 TiB |
Storage Device size (D) | Storage Devices per node (M) | Total capacity (D * M * N) | Usable storage capacity (D*M*N/3) |
---|---|---|---|
0.5 TiB | 3 | 45 TiB | 15 TiB |
2 TiB | 6 | 360 TiB | 120 TiB |
4 TiB | 9 | 1080 TiB | 360 TiB |