Chapter 3. System requirements

You must plan your Red Hat OpenStack Services on OpenShift (RHOSO) deployment to determine the system requirements for your environment.

3.1. Red Hat OpenShift Container Platform cluster requirements

The minimum requirements for the Red Hat OpenShift Container Platform (RHOCP) cluster that hosts your Red Hat OpenStack Services on OpenShift (RHOSO) control plane are as follows:

Hardware

  • An operational, pre-provisioned 3-node RHOCP compact cluster, version 4.15 or later.
  • Each node in the compact cluster must have the following resources:

    • 32 GB RAM
    • 8+ CPUs
    • 250 GB storage

      Note

      The images, volumes, and root disks for the virtual machine instances that run on the deployed RHOSP environment are hosted on dedicated external storage nodes. However, the RHOSP service logs, databases, and metadata are stored in a RHOCP Persistent Volume Claim (PVC). A minimum of 150 GB is required for testing.

    • 2 physical NICs

      Note

      In a 6-node cluster with 3 controllers and 3 workers, only the worker nodes require 2 physical NICs.

  • Persistent Volume Claim (PVC) storage on the cluster:

    • 150 GB persistent volume (PV) pool for service logs, databases, file import conversion, and metadata.

      Note
      • You must plan the size of the PV pool that you require for your RHOSO pods based on your RHOSO workload. For example, the Image service image conversion PVC must be large enough to hold the largest image, the converted copy of that image, and any other concurrent conversions. Make similar storage considerations if your RHOSO deployment uses the Object Storage service (swift). A sizing sketch follows this list.
      • The PV pool is required for the Image service; however, the actual images are stored on the Image service back end, such as Red Hat Ceph Storage or a SAN.
    • 5 GB of the available PVs must be backed by local SSDs for control plane services such as the Galera, OVN, and RabbitMQ databases.
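
For illustration only, the following is a minimal sketch of a PersistentVolumeClaim sized for image conversion work. The claim name, the openstack namespace, and the local-storage storage class are assumptions for this example; in a RHOSO deployment the control plane Operators typically create the service PVCs for you, so treat this as a sizing aid rather than a required step.

```bash
# Hypothetical sizing sketch: the conversion volume must hold the largest
# source image plus its converted copy, multiplied by concurrent conversions.
oc apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: image-conversion-example   # hypothetical name
  namespace: openstack             # adjust to your deployment
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-storage  # replace with your cluster's storage class
  resources:
    requests:
      storage: 150Gi               # e.g. 50 GB largest image x 2, plus concurrency headroom
EOF
```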

Software

  • The RHOCP environment supports Multus CNI.
  • The following Operators are installed on the RHOCP cluster:

    • The Kubernetes NMState Operator. This Operator must be started by creating an nmstate instance.
    • The MetalLB Operator. This Operator must be started by creating a metallb instance.

      Note

      When you start MetalLB with the MetalLB Operator, the Operator starts an instance of a speaker pod on each node in the cluster. In an extended architecture, such as 3 OCP controller/master nodes and 3 OCP compute/worker nodes, if your OCP controllers do not have access to the ctlplane and internalapi networks, you must limit the speaker pods to the OCP compute/worker nodes. For more information, see Limit speaker pods to specific nodes. An example of the nmstate and metallb instance definitions follows this list.

    • The cert-manager Operator.
    • The Bare Metal Operator (BMO).
  • The following tools are installed on the cluster workstation:

    • The oc command line tool.
    • The podman command line tool.
  • Access to a private Red Hat Quay Container Registry account, https://quay.io/.
  • Access to a private repository in your registry. RHOSO code cannot be located on a public repository.
  • The RHOCP storage backend is configured.
  • The RHOCP storage class is defined and has access to persistent volumes of type ReadWriteOnce and ReadWriteMany. Commands to verify this follow this list.
  • For installer-provisioned infrastructure, you must prepare an operating system image for use with bare-metal provisioning. You can use the following image as the bare-metal image: https://catalog.redhat.com/software/containers/rhel9/rhel-guest-image/6197bdceb4dcabca7fe351d5?container-tabs=overview
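
As noted above, the NMState and MetalLB Operators are started by creating an instance of each. The following is a minimal sketch of the two instance definitions; the nodeSelector stanza is the optional restriction of speaker pods to worker nodes described in the note, and you can omit it in a 3-node compact cluster.

```bash
# Start the installed Operators by creating their instances.
# The NMState instance must be named "nmstate".
oc apply -f - <<'EOF'
apiVersion: nmstate.io/v1
kind: NMState
metadata:
  name: nmstate
---
apiVersion: metallb.io/v1beta1
kind: MetalLB
metadata:
  name: metallb
  namespace: metallb-system
spec:
  nodeSelector:                        # optional: limit speaker pods to workers
    node-role.kubernetes.io/worker: ""
EOF
```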
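
To confirm that the storage back end and storage class requirements are met, you can inspect the cluster with oc; the class and volume names reported depend on your environment.

```bash
# List the defined storage classes and the capacity and access modes of existing PVs.
oc get storageclass
oc get pv -o custom-columns=NAME:.metadata.name,CAPACITY:.spec.capacity.storage,ACCESSMODES:.spec.accessModes
```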

3.2. Data plane node requirements

You can use pre-provisioned nodes or unprovisioned bare-metal nodes to create the data plane. The minimum requirements for data plane nodes are as follows:

  • Pre-provisioned nodes:

    • RHEL 9.4.
    • Configured for SSH access with the SSH keys generated during data plane creation. The SSH user must either be root or have unrestricted, passwordless sudo enabled. For more information, see Creating the SSH key secrets in the Deploying Red Hat OpenStack Services on OpenShift guide. A sketch of this preparation follows this list.
    • Routable IP address on the control plane network to enable Ansible access through SSH.
  • A dedicated NIC on RHOCP worker nodes for RHOSP isolated networks.
  • Port switches with VLANs for the required isolated networks. For information on the required isolated networks, see Default Red Hat OpenStack Platform networks in the Deploying Red Hat OpenStack Services on OpenShift guide.
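
The following is a minimal sketch of that SSH preparation, assuming the openstack project and a secret name of dataplane-ansible-ssh-private-key-secret; confirm the exact names against Creating the SSH key secrets in the Deploying Red Hat OpenStack Services on OpenShift guide.

```bash
# Generate a key pair for Ansible access to the data plane nodes, then store it
# in a secret that the data plane deployment can reference.
ssh-keygen -f ./dataplane-key -t rsa -b 4096 -N ""
oc create secret generic dataplane-ansible-ssh-private-key-secret \
  --from-file=ssh-privatekey=dataplane-key \
  --from-file=ssh-publickey=dataplane-key.pub \
  -n openstack
```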

3.3. Compute node requirements

Compute nodes are responsible for running virtual machine instances after they are launched. Compute nodes require bare metal systems that support hardware virtualization. Compute nodes must also have enough memory and disk space to support the requirements of the virtual machine instances that they host.

Note

Red Hat OpenStack Services on OpenShift (RHOSO) 18.0 does not support using QEMU architecture emulation.

Processor
64-bit x86 processor with support for the Intel 64 or AMD64 CPU extensions and with the AMD-V or Intel VT hardware virtualization extensions enabled. A minimum of 4 cores is recommended.
Memory

A minimum of 6 GB of RAM for the host operating system, plus additional memory to accommodate the following considerations:

  • Add the memory that you intend to make available to virtual machine instances.
  • Add memory for special features or additional resources on the host, such as extra kernel modules, virtual switches, monitoring solutions, and other background tasks.
  • If you intend to use non-uniform memory access (NUMA), Red Hat recommends 8 GB of RAM per CPU socket node, or 16 GB per socket node if you have more than 256 GB of physical RAM.
  • Configure at least 4 GB of swap space. One way to do this is sketched below.
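
For reference, one way to provision a 4 GB swap file on a RHEL 9 Compute node is shown below; the file path is an example, and the commands must run as root.

```bash
# Create, activate, and persist a 4 GB swap file.
dd if=/dev/zero of=/swapfile bs=1M count=4096
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
echo '/swapfile none swap defaults 0 0' >> /etc/fstab
```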
Disk space
A minimum of 50 GB of available disk space.
Network Interface Cards
A minimum of one 1 Gbps network interface card (NIC), although at least two NICs are recommended in a production environment. Use additional NICs for bonded interfaces or to delegate tagged VLAN traffic.
Power management
Each Compute node requires a supported power management interface, such as Intelligent Platform Management Interface (IPMI) functionality, on the server motherboard.
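
As a quick check that a node's power management interface is reachable, you can query its baseboard management controller with ipmitool; the address and credentials below are placeholders.

```bash
# Query the power state of a node's BMC over IPMI (placeholder host and credentials).
ipmitool -I lanplus -H 192.0.2.10 -U admin -P password power status
```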