Chapter 3. System requirements
You must plan your Red Hat OpenStack Services on OpenShift (RHOSO) deployment to determine the system requirements for your environment.
3.1. Red Hat OpenShift Container Platform cluster requirements
The minimum requirements for the Red Hat OpenShift Container Platform (RHOCP) cluster that hosts your Red Hat OpenStack Services on OpenShift (RHOSO) control plane are as follows:
Hardware
- An operational, pre-provisioned 3-node RHOCP compact cluster, version 4.16.
Each node in the compact cluster must have the following resources:
- 64 GB RAM
- 16 CPU cores
- 120 GB NVMe or SSD for the root disk, plus 250 GB of storage (NVMe or SSD is strongly recommended)
Note: The images, volumes, and root disks for the virtual machine instances that run on the deployed environment are hosted on dedicated external storage nodes. However, the service logs, databases, and metadata are stored in a RHOCP Persistent Volume Claim (PVC). A minimum of 150 GB is required for testing.
- 2 physical NICs
Note: In a 6-node cluster with 3 controllers and 3 workers, only the worker nodes require 2 physical NICs.
- Persistent Volume Claim (PVC) storage on the cluster:
- 150 GB persistent volume (PV) pool for service logs, databases, file import conversion, and metadata.
Note:
- You must plan the size of the PV pool that you require for your RHOSO pods based on your RHOSO workload. For example, the Image service image conversion PVC must be large enough to host the largest image and that image after it is converted, as well as any other concurrent conversions. You must make similar considerations for the storage requirements if your RHOSO deployment uses the Object Storage service (swift).
- The PV pool is required for the Image service; however, the actual images are stored on the Image service back end, such as Red Hat Ceph Storage or a SAN.
- 5 GB of the available PVs must be backed by local SSDs for control plane services such as the Galera, OVN, and RabbitMQ databases.
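As an illustration of how the PV pool is consumed, a claim for the Image service conversion storage might look like the following sketch. The claim name, namespace, and the storage class name local-ssd are assumptions for your environment, not fixed RHOSO values:

```yaml
# Illustrative PVC drawing from the cluster's PV pool. The metadata names
# and the storageClassName are placeholders; substitute the storage class
# defined in your cluster.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: glance-conversion
  namespace: openstack
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 150Gi
  storageClassName: local-ssd
```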
Software
- The RHOCP environment supports Multus CNI.
The following Operators are installed on the RHOCP cluster:
- The Kubernetes NMState Operator. This Operator must be started by creating an nmstate instance. For information, see Installing the Kubernetes NMState Operator in the RHOCP Networking guide.
- The MetalLB Operator. This Operator must be started by creating a metallb instance. For information, see Installing the MetalLB Operator in the RHOCP Networking guide.
Note: When you start MetalLB with the MetalLB Operator, the Operator starts an instance of a speaker pod on each node in the cluster. When you use an extended architecture such as 3 OCP controller/master and 3 OCP compute/worker nodes, if your OCP controllers do not have access to the ctlplane and internalapi networks, you must limit the speaker pods to the OCP compute/worker nodes. For more information about speaker pods, see Limit speaker pods to specific nodes.
- The cert-manager Operator. For information, see cert-manager Operator for Red Hat OpenShift in the RHOCP Security and compliance guide.
- The Cluster Observability Operator. For information, see Installing the Cluster Observability Operator.
- The Cluster Baremetal Operator (CBO). The CBO deploys the Bare Metal Operator (BMO) component, which is required to provision bare-metal nodes as part of the data plane deployment process. For more information on planning for bare-metal provisioning, see Planning provisioning for bare-metal data plane nodes.
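The nmstate and metallb instance-creation steps above can be sketched with minimal custom resources such as the following. The resource names and the worker-only node selector follow the upstream operator examples; verify the API versions against your RHOCP release:

```yaml
# Minimal instances that start the two Operators (illustrative sketch).
apiVersion: nmstate.io/v1
kind: NMState
metadata:
  name: nmstate
---
apiVersion: metallb.io/v1beta1
kind: MetalLB
metadata:
  name: metallb
  namespace: metallb-system
spec:
  # Limits speaker pods to worker nodes, for clusters where the OCP
  # controllers lack access to the ctlplane and internalapi networks.
  nodeSelector:
    node-role.kubernetes.io/worker: ""
```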
The following tools are installed on the cluster workstation:
- The oc command line tool.
- The podman command line tool.
- The RHOCP storage back end is configured.
- The RHOCP storage class is defined, and has access to persistent volumes of type ReadWriteOnce.
- For installer-provisioned infrastructure, you must prepare an operating system image for use with bare-metal provisioning. You can use the following image as the bare-metal image: https://catalog.redhat.com/software/containers/rhel9/rhel-guest-image/6197bdceb4dcabca7fe351d5?container-tabs=overview
3.2. Data plane node requirements
You can use pre-provisioned nodes or unprovisioned bare-metal nodes to create the data plane. The minimum requirements for data plane nodes are as follows:
Pre-provisioned nodes:
- RHEL 9.4.
- Configured for SSH access with the SSH keys generated during data plane creation. The SSH user must either be root or have unrestricted and password-less sudo enabled. For more information, see Creating the data plane secrets in the Deploying Red Hat OpenStack Services on OpenShift guide.
- A routable IP address on the control plane network to enable Ansible access through SSH.
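The SSH key pair used for that access can be generated ahead of time; a minimal sketch follows, in which the key file name and comment are illustrative assumptions rather than required values:

```shell
# Generate an SSH key pair for Ansible access to the data plane nodes.
# The key path and comment are placeholders for your environment.
ssh-keygen -f ./ansibleee-ssh-key -N "" -t rsa -b 4096 -C "dataplane-ansible"
# Install the public key (./ansibleee-ssh-key.pub) for a user on each node
# that is either root or has unrestricted, password-less sudo.
```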
Some network architectures may require the following networking capabilities:
- A dedicated NIC on RHOCP worker nodes for RHOSP isolated networks.
- Port switches with VLANs for the required isolated networks.
Consult with your RHOCP and network administrators about whether these are requirements in your deployment.
For information on the required isolated networks, see Default Red Hat OpenStack Platform networks in the Deploying Red Hat OpenStack Services on OpenShift guide.
3.3. Compute node requirements
Compute nodes are responsible for running virtual machine instances after they are launched. Compute nodes require bare metal systems that support hardware virtualization. Compute nodes must also have enough memory and disk space to support the requirements of the virtual machine instances that they host.
Red Hat OpenStack Services on OpenShift (RHOSO) 18.0 does not support using QEMU architecture emulation.
- Processor
- 64-bit x86 processor with support for the Intel 64 or AMD64 CPU extensions, and with the AMD-V or Intel VT hardware virtualization extensions enabled. A minimum of 4 cores is recommended.
- Memory
A minimum of 6 GB of RAM for the host operating system, plus additional memory to accommodate the following considerations:
- Add additional memory that you intend to make available to virtual machine instances.
- Add additional memory to run special features or additional resources on the host, such as additional kernel modules, virtual switches, monitoring solutions, and other additional background tasks.
- If you intend to use non-uniform memory access (NUMA), Red Hat recommends 8 GB per CPU socket node, or 16 GB per socket node if you have more than 256 GB of physical RAM.
- Configure at least 4 GB of swap space.
For more information about planning for Compute node memory configuration, see Configuring the Compute service for instance creation.
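The memory considerations above can be combined into a rough back-of-the-envelope calculation; the instance count and per-instance memory below are illustrative assumptions, not recommendations:

```python
# Rough Compute node memory sizing sketch based on the considerations above.
host_os_gb = 6           # minimum RAM reserved for the host operating system
host_extra_gb = 4        # kernel modules, virtual switches, monitoring, etc.
instances = 20           # planned instances on this node (assumption)
ram_per_instance_gb = 4  # flavor memory per instance (assumption)

required_gb = host_os_gb + host_extra_gb + instances * ram_per_instance_gb
print(required_gb)  # 90 GB for this example workload
```

Swap space (at least 4 GB, per the list above) is in addition to this physical RAM estimate.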
- Disk space
- A minimum of 50 GB of available disk space.
- Network Interface Cards
- A minimum of one 1 Gbps Network Interface Card, although at least two NICs are recommended in a production environment. Use additional network interface cards for bonded interfaces or to delegate tagged VLAN traffic.
- Platform management
- Compute nodes that are installer-provisioned require a supported platform management interface, such as Intelligent Platform Management Interface (IPMI), on the server motherboard. This interface is not required for pre-provisioned nodes.