2.4. Overcloud Requirements
The following sections detail the requirements for individual systems and nodes in the Overcloud installation.
Note
Booting an overcloud node from the SAN (FC-AL, FCoE, iSCSI) is not yet supported.
2.4.1. Compute Node Requirements
Compute nodes are responsible for running virtual machine instances after they are launched. Compute nodes must support hardware virtualization. Compute nodes must also have enough memory and disk space to support the requirements of the virtual machine instances they host.
- Processor
- 64-bit x86 processor with support for the Intel 64 or AMD64 CPU extensions, and with the AMD-V or Intel VT hardware virtualization extensions enabled. It is recommended that this processor have a minimum of 4 cores. You can verify support for the virtualization extensions as shown in the example after this list.
- Memory
- A minimum of 6 GB of RAM. Add more RAM to this baseline based on the amount of memory that you intend to make available to virtual machine instances.
- Disk Space
- A minimum of 40 GB of available disk space.
- Network Interface Cards
- A minimum of one 1 Gbps Network Interface Card, although it is recommended to use at least two NICs in a production environment. Use additional network interface cards for bonded interfaces or to delegate tagged VLAN traffic.
- Intelligent Platform Management Interface (IPMI)
- Each Compute node requires IPMI functionality on the server's motherboard.
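To check whether a prospective Compute node exposes the required virtualization extensions, you can inspect the CPU flags. This is a generic Linux check rather than a director command:
# grep -E -c '(vmx|svm)' /proc/cpuinfo
A result greater than zero indicates that the processor reports the Intel VT (vmx) or AMD-V (svm) extension. Note that the extension must also be enabled in the system firmware for hardware virtualization to work.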
2.4.2. Controller Node Requirements
Controller nodes are responsible for hosting the core services in a RHEL OpenStack Platform environment, such as the Horizon dashboard, the back-end database server, Keystone authentication, and High Availability services.
- Processor
- 64-bit x86 processor with support for the Intel 64 or AMD64 CPU extensions.
- Memory
- A minimum of 32 GB of RAM for each Controller node. For optimal performance, it is recommended to use 64 GB for each Controller node.
Important
The recommended amount of memory depends on the number of CPU cores: a greater number of CPU cores requires more memory. For more information on measuring memory requirements, see "Red Hat OpenStack Platform Hardware Requirements for Highly Available Controllers" on the Red Hat Customer Portal.
- Disk Space
- A minimum of 40 GB of available disk space.
- Network Interface Cards
- A minimum of 2 x 1 Gbps Network Interface Cards. Use additional network interface cards for bonded interfaces or to delegate tagged VLAN traffic.
- Intelligent Platform Management Interface (IPMI)
- Each Controller node requires IPMI functionality on the server's motherboard.
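To confirm that a node's IPMI interface is reachable, you can query its power state with ipmitool. The address and credentials below are placeholders, not values from this guide:
# ipmitool -I lanplus -H 192.0.2.10 -U admin -P password power status
A response such as "Chassis Power is on" confirms that the node's baseboard management controller answers IPMI commands over the network.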
2.4.3. Ceph Storage Node Requirements
Ceph Storage nodes are responsible for providing object storage in a RHEL OpenStack Platform environment.
- Processor
- 64-bit x86 processor with support for the Intel 64 or AMD64 CPU extensions.
- Memory
- Memory requirements depend on the amount of storage space. Ideally, use at minimum 1 GB of memory per 1 TB of hard disk space. For example, a node with 48 TB of hard disk space should have at least 48 GB of memory.
- Disk Space
- Storage requirements depend on the amount of memory. Ideally, use at minimum 1 GB of memory per 1 TB of hard disk space.
- Disk Layout
- The recommended Red Hat Ceph Storage node configuration requires a disk layout similar to the following:
/dev/sda
- The root disk. The director copies the main Overcloud image to the disk.
/dev/sdb
- The journal disk. This disk divides into partitions for Ceph OSD journals. For example, /dev/sdb1, /dev/sdb2, /dev/sdb3, and onward. The journal disk is usually a solid state drive (SSD) to aid with system performance.
/dev/sdc and onward
- The OSD disks. Use as many disks as necessary for your storage requirements.
This guide contains the necessary instructions to map your Ceph Storage disks into the director.
- Network Interface Cards
- A minimum of one 1 Gbps Network Interface Card, although it is recommended to use at least two NICs in a production environment. Use additional network interface cards for bonded interfaces or to delegate tagged VLAN traffic. It is recommended to use a 10 Gbps interface for storage nodes, especially if creating an OpenStack Platform environment that serves a high volume of traffic.
- Intelligent Platform Management Interface (IPMI)
- Each Ceph node requires IPMI functionality on the server's motherboard.
Important
The director does not create partitions on the journal disk. You must manually create these journal partitions before the director can deploy the Ceph Storage nodes.
The Ceph Storage OSD and journal partitions require GPT disk labels, which you also configure prior to customization. For example, use the following command on the potential Ceph Storage host to create a GPT disk label for a disk or partition:
# parted [device] mklabel gpt
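As an illustrative sketch, the following commands then split the journal disk into three equal journal partitions, one per OSD. The device name /dev/sdb and the three-way split are assumptions matching the example layout above, not required values:
# parted /dev/sdb mklabel gpt
# parted /dev/sdb mkpart journal1 0% 33%
# parted /dev/sdb mkpart journal2 33% 66%
# parted /dev/sdb mkpart journal3 66% 100%
Using percentage boundaries lets parted align the partitions automatically. Adjust the number and size of partitions to match the number of OSD disks on the node.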