Chapter 3. Infrastructure and system requirements
You must plan your Red Hat OpenStack Services on OpenShift (RHOSO) deployment to determine the infrastructure and system requirements for your environment.
3.1. Red Hat OpenShift Container Platform cluster requirements
There are minimum hardware, network, and software requirements for the Red Hat OpenShift Container Platform (RHOCP) cluster that hosts your Red Hat OpenStack Services on OpenShift (RHOSO) control plane.
3.1.1. Minimum RHOCP hardware requirements
You can host your Red Hat OpenStack Services on OpenShift (RHOSO) control plane on a compact or a non-compact Red Hat OpenShift Container Platform (RHOCP) cluster. The minimum hardware requirements for the RHOCP cluster that hosts your RHOSO control plane are as follows:
An operational, pre-provisioned 3-node bare metal RHOCP cluster, version 4.18.
Note: If you are using a multi-node RHOCP cluster with dedicated control plane and worker nodes, you must have 3 dedicated RHOCP control plane nodes and 3 dedicated worker nodes for high availability.
Each worker node in the cluster must have the following resources:
- 64 GB RAM
- 16 CPU cores
- 120 GB NVMe or SSD for the root disk plus 250 GB storage (NVMe or SSD is strongly recommended)
Note: The images, volumes, and root disks for the virtual machine instances running on the deployed environment are hosted on dedicated external storage nodes. However, the service logs, databases, and metadata are stored in a RHOCP Persistent Volume Claim (PVC). A minimum of 150 GB is required for testing.
- 2 physical NICs for basic control
Note: In a 6-node cluster with 3 controllers and 3 workers, only the worker nodes require 2 physical NICs.
- (Optional) Additional dedicated NICs for OVN external gateways on the control plane. These are required if you plan to place OVN gateways on the control plane.
- The RHEL system timezone and the date and time of the system firmware (UEFI/BIOS) clock for each cluster node must be UTC.
- Each control plane node in the cluster must have the resources specified in Minimum resource requirements for cluster installation in the RHOCP Installing on any platform guide.
- Persistent Volume Claim (PVC) storage on the cluster: a 150 GB persistent volume (PV) pool for service logs, databases, file import conversion, and metadata.
Note:
- You must plan the size of the PV pool that you require for your RHOSO pods based on your RHOSO workload. For example, the Image service (glance) image conversion PVC must be large enough to host the largest image, the converted copy of that image, and any other concurrent conversions. Make similar considerations for the storage requirements if your RHOSO deployment uses the Object Storage service (swift).
- The PV pool is required for the Image service; however, the actual images are stored on the Image service back end, such as Red Hat Ceph Storage or a SAN.
- 5 GB of the available PVs must be backed by local SSDs for control plane services such as the Galera, OVN, and RabbitMQ databases.
3.1.2. RHOCP network requirements
The minimum network requirements for the Red Hat OpenShift Container Platform (RHOCP) cluster that hosts your Red Hat OpenStack Services on OpenShift (RHOSO) control plane are as follows:
- If you are using virtual media boot to provision bare-metal data plane nodes and the nodes are not connected to a provisioning network or to the RHOCP machine network, you must configure a route for the Baseboard Management Controller (BMC) and the node to reach the RHOCP machine network. The machine network is the network used by RHOCP cluster nodes to communicate with each other.
- Synchronize all nodes in your RHOCP cluster with one or more Network Time Protocol (NTP) servers. Clock synchronization between RHOCP nodes and data plane nodes ensures accurate timing across your cloud environment for reliable services, workloads, and timestamps.
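For reference, RHOCP cluster nodes are typically synchronized with NTP by applying a MachineConfig that manages the chrony configuration. The following is a minimal sketch only: the MachineConfig name, the worker role label, the Ignition version, and the placeholder for the base64-encoded chrony.conf are illustrative values that you must adapt to your environment.

```yaml
# Sketch: point chrony on RHOCP worker nodes at your NTP servers.
# Repeat with the master role label for control plane nodes if needed.
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 99-worker-chrony
  labels:
    machineconfiguration.openshift.io/role: worker
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
        - path: /etc/chrony.conf
          mode: 420          # 0644
          overwrite: true
          contents:
            # Replace the placeholder with the base64 encoding of a chrony.conf
            # that lists your NTP servers, for example: "pool ntp.example.com iburst"
            source: data:text/plain;charset=utf-8;base64,<BASE64_ENCODED_CHRONY_CONF>
```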
The following ports must be open between cluster nodes:
- Ports 67 and 68: When using a provisioning network, cluster nodes access the dnsmasq DHCP server over their provisioning network interfaces using ports 67 and 68.
- Port 69: When using a provisioning network, cluster nodes communicate with the TFTP server on port 69 using their provisioning network interfaces. The TFTP server runs on the bootstrap VM. The bootstrap VM runs on the provisioner node.
- Port 80: When not using the image caching option or when using virtual media, the provisioner node must have port 80 open on the baremetal machine network interface to stream the Red Hat Enterprise Linux CoreOS (RHCOS) image from the provisioner node to the cluster nodes.
- Port 123: The cluster nodes must access the NTP server on port 123 using the baremetal machine network.
- Port 5050: The Ironic Inspector API runs on the control plane nodes and listens on port 5050. The Inspector API is responsible for hardware introspection, which collects information about the hardware characteristics of the bare-metal nodes.
- Port 5051: Port 5050 uses port 5051 as a proxy.
- Port 6180: When deploying with virtual media and not using TLS, the provisioner node and the control plane nodes must have port 6180 open on the baremetal machine network interface so that the baseboard management controller (BMC) of the worker nodes can access the RHCOS image. Starting with OpenShift Container Platform 4.13, the default HTTP port is 6180.
- Port 6183: When deploying with virtual media and using TLS, the provisioner node and the control plane nodes must have port 6183 open on the baremetal machine network interface so that the BMC of the worker nodes can access the RHCOS image.
- Ports 6190-6220: The OpenStackProvisionServer custom resource (CR) is required to install RHEL. If there is a firewall between your RHOCP machine network and the RHOSO control plane, you must open ports 6190-6220. The OpenStackProvisionServer CR is automatically created by default during the installation and deployment of your Red Hat OpenStack Services on OpenShift (RHOSO) environment and it uses the port range 6190-6220. You can create a custom OpenStackProvisionServer CR to limit the ports that are opened.
- Port 6385: The Ironic API server runs initially on the bootstrap VM and later on the control plane nodes and listens on port 6385. The Ironic API allows clients to interact with Ironic for bare-metal node provisioning and management, including operations such as enrolling new nodes, managing their power state, deploying images, and cleaning the hardware.
- Port 6388: Port 6385 uses port 6388 as a proxy.
- Port 8080: When using image caching without TLS, port 8080 must be open on the provisioner node and accessible by the BMC interfaces of the cluster nodes.
- Port 8083: When using the image caching option with TLS, port 8083 must be open on the provisioner node and accessible by the BMC interfaces of the cluster nodes.
- Port 9999: By default, the Ironic Python Agent (IPA) listens on TCP port 9999 for API calls from the Ironic conductor service. Communication between the bare-metal node where IPA is running and the Ironic conductor service uses this port.
Some network architectures may require the following networking capabilities:
- A dedicated NIC on RHOCP worker nodes for RHOSO isolated networks.
- Switch ports configured with VLANs for the required isolated networks.
Consult with your RHOCP and network administrators about whether these are requirements in your deployment. For information on the required isolated networks, see Default Red Hat OpenStack Platform networks in the Deploying Red Hat OpenStack Services on OpenShift guide.
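If your deployment uses a dedicated NIC with VLANs for the RHOSO isolated networks, the Kubernetes NMState Operator can configure that NIC on the worker nodes. The following sketch is illustrative only: the interface name enp6s0, the VLAN ID 20, the policy name, and the addressing are placeholders, and the values must match the isolated networks that you plan with your network administrator.

```yaml
# Sketch: attach a VLAN interface for one isolated network to a dedicated NIC
# on the RHOCP worker nodes. All names, IDs, and addresses are placeholders.
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: osp-internalapi-vlan-worker
spec:
  nodeSelector:
    node-role.kubernetes.io/worker: ""
  desiredState:
    interfaces:
      - name: enp6s0.20            # VLAN 20 on the dedicated NIC enp6s0
        type: vlan
        state: up
        vlan:
          base-iface: enp6s0
          id: 20
        ipv4:
          enabled: true
          dhcp: false
          address:
            - ip: 172.17.0.10      # placeholder address on the isolated network
              prefix-length: 24
```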
3.1.3. RHOCP software requirements
The minimum software requirements for the Red Hat OpenShift Container Platform (RHOCP) cluster that hosts your Red Hat OpenStack Services on OpenShift (RHOSO) control plane are as follows:
- The RHOCP environment supports Multus CNI.
The following Operators are installed on the RHOCP cluster:
- The Kubernetes NMState Operator. This Operator must be started by creating an nmstate instance. For information, see Installing the Kubernetes NMState Operator in the RHOCP Networking guide.
- The MetalLB Operator. This Operator must be started by creating a metallb instance. For information, see Starting MetalLB on your cluster in the RHOCP Networking guide.
Note: When you start MetalLB with the MetalLB Operator, the Operator starts an instance of a speaker pod on each node in the cluster. When you use an extended architecture, such as 3 RHOCP masters and 3 RHOCP workers, and your RHOCP masters do not have access to the ctlplane and internalapi networks, you must limit the speaker pods to the RHOCP worker nodes. For more information about speaker pods, see Limit speaker pods to specific nodes. Example manifests for the nmstate and metallb instances follow this list.
- The cert-manager Operator. For information, see cert-manager Operator for Red Hat OpenShift in the RHOCP Security and compliance guide.
- The Cluster Observability Operator. For information, see Installing the Cluster Observability Operator.
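The following example manifests show one way to start both instances. They are a minimal sketch: they assume the default metallb-system namespace, and the nodeSelector that limits speaker pods to worker nodes is only needed if your RHOCP masters cannot reach the ctlplane and internalapi networks.

```yaml
# Minimal NMState instance; the resource is conventionally named "nmstate".
apiVersion: nmstate.io/v1
kind: NMState
metadata:
  name: nmstate
---
# Minimal MetalLB instance. The nodeSelector restricts speaker pods to worker
# nodes; omit it if your RHOCP masters can reach the ctlplane and internalapi
# networks. The metallb-system namespace is the Operator default.
apiVersion: metallb.io/v1beta1
kind: MetalLB
metadata:
  name: metallb
  namespace: metallb-system
spec:
  nodeSelector:
    node-role.kubernetes.io/worker: ""
```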
- The Cluster Baremetal Operator (CBO) is configured for provisioning. The CBO deploys the Bare Metal Operator (BMO) component, which is required to provision bare-metal nodes as part of the data plane deployment process. For more information on planning for bare-metal provisioning, see Planning provisioning for bare-metal data plane nodes.
The following tools are installed on the cluster workstation:
- The oc command line tool.
- The podman command line tool.
- The RHOCP storage back end is configured.
- The RHOCP storage class is defined and has access to persistent volumes of type ReadWriteOnce.
Note: If you use Logical Volume Manager (LVM) Storage (LVMS), which only provides local volumes, in the event of a node failure the attached volume is not mounted on a new node because it is already assigned to the failed node. This prevents the Self Node Remediation (SNR) Operator from automatically rescheduling pods with LVMS PVCs. Therefore, if you use LVMS for storage, you must detach volumes after a non-graceful node shutdown. For more information, see Detach volumes after non-graceful node shutdown.
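As a quick check that the storage class can serve the ReadWriteOnce volumes that the control plane services request, you can create and then delete a small test claim. The claim name, namespace, size, and storage class name in the following sketch are placeholders, not values required by RHOSO.

```yaml
# Hypothetical test claim to confirm that the cluster storage class can bind
# ReadWriteOnce persistent volumes. Name, namespace, size, and storage class
# are placeholders.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rhoso-storage-check
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: local-storage   # replace with your cluster's storage class
```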
3.2. Data plane node requirements
You can use pre-provisioned nodes or unprovisioned bare-metal nodes to create the data plane. The minimum requirements for data plane nodes are as follows:
Pre-provisioned nodes:
- RHEL 9.4.
- Configured for SSH access with the SSH keys generated during data plane creation. The SSH user must either be root or have unrestricted and password-less sudo enabled. For more information, see Creating the data plane secrets in the Deploying Red Hat OpenStack Services on OpenShift guide. An example Secret manifest follows this list.
- Routable IP address on the control plane network to enable Ansible access through SSH.
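The SSH keys that Ansible uses are provided to the control plane as a Secret, as described in Creating the data plane secrets. The following is a sketch only: the Secret name, namespace, and key names follow common examples but must match what your data plane resources reference, and the key material shown is a placeholder.

```yaml
# Sketch of a Secret holding the SSH key pair that Ansible uses to reach
# pre-provisioned data plane nodes. Name, namespace, and key names are
# assumptions; replace the placeholders with your generated keys.
apiVersion: v1
kind: Secret
metadata:
  name: dataplane-ansible-ssh-private-key-secret
  namespace: openstack
type: Opaque
stringData:
  ssh-privatekey: |
    -----BEGIN OPENSSH PRIVATE KEY-----
    <placeholder: contents of the generated private key>
    -----END OPENSSH PRIVATE KEY-----
  ssh-publickey: |
    <placeholder: contents of the generated public key>
```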
Unprovisioned nodes:
- The RHEL system timezone and the date and time of the system firmware (UEFI/BIOS) clock for each unprovisioned, bare-metal data plane node must be UTC.
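For unprovisioned nodes, the Bare Metal Operator provisions the hosts that you register as BareMetalHost resources, each pointing at the node's BMC. The following is a sketch only: the resource names, namespace, MAC address, IPMI address, and credentials are placeholders that you must replace with your own values.

```yaml
# Sketch: register an unprovisioned bare-metal data plane node with the
# Bare Metal Operator. All names, addresses, and credentials are placeholders.
apiVersion: v1
kind: Secret
metadata:
  name: edpm-compute-0-bmc-secret
  namespace: openstack
type: Opaque
stringData:
  username: admin                            # BMC user, placeholder
  password: password                         # BMC password, placeholder
---
apiVersion: metal3.io/v1alpha1
kind: BareMetalHost
metadata:
  name: edpm-compute-0
  namespace: openstack
spec:
  online: true
  bootMACAddress: 52:54:00:00:00:01          # placeholder MAC of the provisioning NIC
  bootMode: UEFI
  bmc:
    address: ipmi://192.168.111.10:623       # placeholder BMC address
    credentialsName: edpm-compute-0-bmc-secret
```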
3.3. Compute node requirements
Compute nodes are responsible for running virtual machine instances after they are launched. Compute nodes require bare metal systems that support hardware virtualization. Compute nodes must also have enough memory and disk space to support the requirements of the virtual machine instances that they host.
Red Hat OpenStack Services on OpenShift (RHOSO) 18.0 does not support using QEMU architecture emulation.
- Processor
- 64-bit x86 processor with support for the Intel 64 or AMD64 CPU extensions, and the AMD-V or Intel VT hardware virtualization extensions enabled. The processor must have a minimum of 4 cores.
- Memory
A minimum of 6 GB of RAM for the host operating system, plus additional memory to accommodate the following considerations:
- Add the memory that you intend to make available to virtual machine instances.
- Add additional memory to run special features or additional resources on the host, such as additional kernel modules, virtual switches, monitoring solutions, and other background tasks.
- If you intend to use non-uniform memory access (NUMA), designate 8 GB per CPU socket (NUMA node), or 16 GB per socket if you have more than 256 GB of physical RAM.
- Configure at least 4 GB of swap space.
For more information about planning for Compute node memory configuration, see Configuring the Compute service for instance creation.
- Disk space
- A minimum of 50 GB of available disk space.
- Network Interface Cards
- A minimum of one 1 Gbps Network Interface Card (NIC) for testing, and a minimum of two NICs in a production environment. Use additional NICs for bonded interfaces or to delegate tagged VLAN traffic.
- Platform management
- Compute nodes that are installer-provisioned require a supported platform management interface, such as Intelligent Platform Management Interface (IPMI), on the server motherboard. This interface is not required for pre-provisioned nodes.