Chapter 5. Planning your overcloud


The following section contains guidelines for planning various aspects of your Red Hat OpenStack Platform environment, including defining node roles, planning your network topology, and planning storage.

5.1. Node roles

The director includes multiple default node types for building your overcloud. These node types are:

Controller

Provides key services for controlling your environment. This includes the dashboard (horizon), authentication (keystone), image storage (glance), networking (neutron), orchestration (heat), and high availability services. A highly available production-level Red Hat OpenStack Platform environment requires three Controller nodes.

Note

Environments with one node can only be used for testing purposes, not for production. Environments with two nodes or more than three nodes are not supported.

Compute
A physical server that acts as a hypervisor and contains the processing capabilities required for running virtual machines in the environment. A basic Red Hat OpenStack Platform environment requires at least one Compute node.
Ceph Storage
A host that provides Red Hat Ceph Storage. Additional Ceph Storage hosts scale into a cluster. This deployment role is optional.
Swift Storage
A host that provides an external object storage layer using the OpenStack Object Storage (swift) service. This deployment role is optional.

The following table contains examples of different overclouds and defines the node types for each scenario.

Table 5.1. Node Deployment Roles for Scenarios

| Scenario | Controller | Compute | Ceph Storage | Swift Storage | Total |
| --- | --- | --- | --- | --- | --- |
| Small overcloud | 3 | 1 | - | - | 4 |
| Medium overcloud | 3 | 3 | - | - | 6 |
| Medium overcloud with additional Object storage | 3 | 3 | - | 3 | 9 |
| Medium overcloud with Ceph Storage cluster | 3 | 3 | 3 | - | 9 |

In addition, consider whether to split individual services into custom roles. For more information about the composable roles architecture, see "Composable Services and Custom Roles" in the Advanced Overcloud Customization guide.

5.2. Overcloud networks

It is important to plan the networking topology and subnets in your environment so that roles and services are mapped correctly and can communicate with each other. Red Hat OpenStack Platform uses the OpenStack Networking (neutron) service, which operates autonomously and manages software-based networks, static and floating IP addresses, and DHCP.

By default, the director configures nodes to use the Provisioning / Control Plane network for connectivity. However, you can isolate network traffic into a series of composable networks, which you can customize and assign services to.

In a typical Red Hat OpenStack Platform installation, the number of network types often exceeds the number of physical network links. To connect all the networks to the correct hosts, the overcloud uses VLAN tagging to deliver more than one network per interface. Most of the networks are isolated subnets, but some networks require a Layer 3 gateway to provide routing for Internet access or infrastructure network connectivity. If you use VLANs to isolate your network traffic types, use a switch that supports 802.1Q standards to provide tagged VLANs.

Note

It is recommended that you deploy a project network (tunneled with GRE or VXLAN) even if you intend to use neutron VLAN mode (with tunneling disabled) at deployment time. This requires minor customization at deployment time and leaves the option available to use tunnel networks as utility networks or virtualization networks in the future. You still create tenant networks using VLANs, but you can also create VXLAN tunnels for special-use networks without consuming tenant VLANs. It is possible to add VXLAN capability to a deployment with a tenant VLAN, but it is not possible to add a tenant VLAN to an existing overcloud without causing disruption.

The director also includes a set of templates to configure NICs with isolated composable networks. The following configurations are the default configurations:

  • Single NIC configuration - One NIC for the Provisioning network on the native VLAN and tagged VLANs that use subnets for the different overcloud network types.
  • Bonded NIC configuration - One NIC for the Provisioning network on the native VLAN, and two NICs in a bond for tagged VLANs for the different overcloud network types.
  • Multiple NIC configuration - Each NIC uses a subnet for a different overcloud network type.

You can also create your own templates to map a specific NIC configuration.
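The network-to-NIC mapping is worth recording explicitly during planning. The following minimal Python sketch illustrates the single NIC and multiple NIC layouts described above; the interface names, VLAN IDs, network names, and subnets are hypothetical placeholders for planning purposes, not values that director generates.

```python
# Hypothetical planning sketch: map overcloud network types to interfaces and VLANs.
# Interface names, VLAN IDs, network names, and subnets are placeholders only.

SINGLE_NIC_LAYOUT = {
    # One NIC: Provisioning on the native VLAN, all other networks as tagged VLANs.
    "nic1": {
        "Provisioning": {"vlan": None, "subnet": "192.168.24.0/24"},  # native VLAN
        "InternalApi":  {"vlan": 20,   "subnet": "172.16.2.0/24"},
        "Storage":      {"vlan": 30,   "subnet": "172.16.1.0/24"},
        "StorageMgmt":  {"vlan": 40,   "subnet": "172.16.3.0/24"},
        "Tenant":       {"vlan": 50,   "subnet": "172.16.0.0/24"},
        "External":     {"vlan": 10,   "subnet": "10.0.0.0/24"},
    },
}

MULTIPLE_NIC_LAYOUT = {
    # One NIC per overcloud network type, each on its own subnet.
    "nic1": {"Provisioning": {"vlan": None, "subnet": "192.168.24.0/24"}},
    "nic2": {"InternalApi":  {"vlan": None, "subnet": "172.16.2.0/24"}},
    "nic3": {"Storage":      {"vlan": None, "subnet": "172.16.1.0/24"}},
    # ...additional NICs for the remaining network types
}

def vlans_on_interface(layout, nic):
    """Return the tagged VLAN IDs carried by a given interface."""
    return sorted(net["vlan"] for net in layout.get(nic, {}).values() if net["vlan"])

print(vlans_on_interface(SINGLE_NIC_LAYOUT, "nic1"))  # [10, 20, 30, 40, 50]
```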

The following details are also important when considering your network configuration:

  • During the overcloud creation, you refer to NICs using a single name across all overcloud machines. Ideally, you should use the same NIC on each overcloud node for each respective network to avoid confusion. For example, use the primary NIC for the Provisioning network and the secondary NIC for the OpenStack services.
  • Set all overcloud systems to PXE boot off the Provisioning NIC, and disable PXE boot on the External NIC and any other NICs on the system. Also ensure that the Provisioning NIC has PXE boot at the top of the boot order, ahead of hard disks and CD/DVD drives.
  • All overcloud bare metal systems require a supported power management interface, such as an Intelligent Platform Management Interface (IPMI). This allows the director to control the power management of each node.
  • Make a note of the following details for each overcloud system: the MAC address of the Provisioning NIC, the IP address of the IPMI NIC, the IPMI username, and the IPMI password. This information is useful later when you set up the overcloud nodes; see the sketch after this list.
  • If an instance must be accessible from the external internet, you can allocate a floating IP address from a public network and associate it with the instance. The instance retains its private IP address, and network traffic uses NAT to traverse to the floating IP address. Note that a floating IP address can be assigned only to a single instance, not to multiple private IP addresses. However, the floating IP address is reserved for use only by a single tenant, which means that the tenant can associate or disassociate it with a particular instance as required. This configuration exposes your infrastructure to the external internet, so ensure that you follow suitable security practices.
  • To mitigate the risk of network loops in Open vSwitch, only a single interface or a single bond may be a member of a given bridge. If you require multiple bonds or interfaces, you can configure multiple bridges.
  • Red Hat recommends using DNS hostname resolution so that your overcloud nodes can connect to external services, such as the Red Hat Content Delivery Network and network time servers.
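As a planning aid, you can record the per-node details noted in the list above (Provisioning NIC MAC address, IPMI IP address, IPMI username, and IPMI password) in a structure similar to the node registration file that director consumes later. The following Python sketch is illustrative only; all values are placeholders, and you should confirm the exact registration file format in the node registration documentation.

```python
import json

# Hypothetical planning sketch: record the details noted above for each overcloud node.
# All values are placeholders; check the director node registration documentation for
# the exact file format before registering nodes.
nodes = {
    "nodes": [
        {
            "name": "overcloud-controller-0",  # placeholder node name
            "mac": ["aa:bb:cc:dd:ee:01"],      # MAC address of the Provisioning NIC
            "pm_type": "ipmi",                 # power management driver (IPMI)
            "pm_addr": "192.168.24.201",       # IP address of the IPMI NIC
            "pm_user": "admin",                # IPMI username
            "pm_password": "p@55w0rd!",        # IPMI password
        },
        # ...one entry per overcloud node
    ]
}

with open("instackenv.json", "w") as f:
    json.dump(nodes, f, indent=2)
```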
Note

You can virtualize the overcloud control plane if you are using Red Hat Virtualization (RHV). See Creating virtualized control planes for details.

5.3. Overcloud storage

Note

Using LVM on a guest instance that uses a cinder volume of any back end or driver type results in issues with performance, volume visibility and availability, and data corruption. Use an LVM filter to mitigate these issues. For more information, see section 2.1, Back Ends, in the Storage Guide and KCS article 3213311, "Using LVM on a cinder volume exposes the data to the compute host."

The director includes different storage options for the overcloud environment:

Ceph Storage Nodes

The director creates a set of scalable storage nodes using Red Hat Ceph Storage. The overcloud uses these nodes for the following storage types:

  • Images - Glance manages images for VMs. Images are immutable. OpenStack treats images as binary blobs and downloads them accordingly. You can use glance to store images in a Ceph Block Device.
  • Volumes - Cinder volumes are block devices. OpenStack uses volumes to boot VMs, or to attach volumes to running VMs. OpenStack manages volumes using cinder services. You can use cinder to boot a VM using a copy-on-write clone of an image.
  • File Systems - Manila shares are backed by file systems. OpenStack users manage shares using manila services. You can use manila to manage shares backed by a CephFS file system with data on the Ceph Storage Nodes.
  • Guest Disks - Guest disks are guest operating system disks. By default, when you boot a virtual machine with nova, the virtual machine disk appears as a file on the file system of the hypervisor (usually under /var/lib/nova/instances/<uuid>/). You can boot every virtual machine inside Ceph without using cinder, which makes it easy to perform maintenance operations with the live-migration process. Additionally, if your hypervisor fails, it is also convenient to trigger nova evacuate and run the virtual machine elsewhere.

    Important

    For information about supported image formats, see the Image Service chapter in the Instances and Images Guide.

    See Red Hat Ceph Storage Architecture Guide for additional information.

Swift Storage Nodes
The director creates an external object storage node. This is useful in situations where you need to scale or replace controller nodes in your overcloud environment but need to retain object storage outside of a high availability cluster.

5.4. Overcloud security

Your OpenStack Platform implementation is only as secure as its environment. Follow good security principles in your networking environment to ensure that network access is properly controlled:

  • Use network segmentation to restrict lateral movement across the network and isolate sensitive data. A flat network is much less secure.
  • Restrict access to services and ports to a minimum.
  • Enforce proper firewall rules and password usage.
  • Ensure that SELinux is enabled.

For details about securing your system, see the Red Hat OpenStack Platform Security and Hardening Guide.

5.5. Overcloud high availability

To deploy a highly-available overcloud, the director configures multiple Controller, Compute and Storage nodes to work together as a single cluster. In case of node failure, an automated fencing and re-spawning process is triggered based on the type of node that failed. For information about overcloud high availability architecture and services, see High Availability Deployment and Usage.

You can also configure high availability for Compute instances with the director (Instance HA). This high availability mechanism automates evacuation and re-spawning of instances on Compute nodes in case of node failure. The requirements for Instance HA are the same as the general overcloud requirements, but you must perform a few additional steps to prepare your environment for the deployment. For information about how Instance HA works and installation instructions, see the High Availability for Compute Instances guide.

5.6. Controller node requirements

Controller nodes host the core services in a Red Hat OpenStack Platform environment, such as the Horizon dashboard, the back-end database server, Keystone authentication, and High Availability services.

Processor
64-bit x86 processor with support for the Intel 64 or AMD64 CPU extensions.
Memory

The minimum amount of memory is 32 GB. However, the recommended amount of memory depends on the number of vCPUs, which is based on the number of CPU cores multiplied by the hyper-threading value. Use the following calculations to determine your RAM requirements; a small calculation sketch follows this list:

  • Controller RAM minimum calculation:

    • Use 1.5 GB of memory per vCPU. For example, a machine with 48 vCPUs should have 72 GB of RAM.
  • Controller RAM recommended calculation:

    • Use 3 GB of memory per vCPU. For example, a machine with 48 vCPUs should have 144 GB of RAM.
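The following minimal Python sketch applies both calculations, assuming that the vCPU count equals the number of CPU cores multiplied by the hyper-threading value:

```python
def controller_ram_gb(cores, threads_per_core=2):
    """Return (minimum, recommended) RAM in GB for a Controller node.

    Based on 1.5 GB of memory per vCPU (minimum) and 3 GB per vCPU (recommended),
    where vCPUs = CPU cores x hyper-threading value.
    """
    vcpus = cores * threads_per_core
    return vcpus * 1.5, vcpus * 3

minimum, recommended = controller_ram_gb(cores=24, threads_per_core=2)  # 48 vCPUs
print(minimum, recommended)  # 72.0 144 -> 72 GB minimum, 144 GB recommended
```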

For more information about measuring memory requirements, see "Red Hat OpenStack Platform Hardware Requirements for Highly Available Controllers" on the Red Hat Customer Portal.

Disk Storage and Layout

A minimum of 40 GB of storage is required if the Object Storage service (swift) is not running on the Controller nodes. However, the Telemetry (gnocchi) and Object Storage services are both installed on the Controller nodes by default, and both are configured to use the root disk. These defaults are suitable for deploying small overclouds built on commodity hardware, which is typical of proof-of-concept and test environments. With these defaults, you can deploy an overcloud with minimal planning, but they offer little in terms of workload capacity and performance.

In an enterprise environment, however, this could cause a significant bottleneck, as Telemetry accesses storage constantly. This results in heavy disk I/O usage, which severely impacts the performance of all other Controller services. In this type of environment, you must plan your overcloud and configure it accordingly.

Red Hat provides several configuration recommendations for both Telemetry and Object Storage. See Deployment Recommendations for Specific Red Hat OpenStack Platform Services for details.

Network Interface Cards
A minimum of 2 x 1 Gbps Network Interface Cards. Use additional network interface cards for bonded interfaces or to delegate tagged VLAN traffic.
Power Management
Each Controller node requires a supported power management interface, such as Intelligent Platform Management Interface (IPMI) functionality, on the server motherboard.
Virtualization Support
Red Hat only supports virtualized controller nodes on Red Hat Virtualization platforms. See Virtualized control planes for details.

5.7. Compute node requirements

Compute nodes are responsible for running virtual machine instances after they are launched. Compute nodes must support hardware virtualization. Compute nodes must also have enough memory and disk space to support the requirements of the virtual machine instances they host.

Processor
  • 64-bit x86 processor with support for the Intel 64 or AMD64 CPU extensions, and with the AMD-V or Intel VT hardware virtualization extensions enabled. It is recommended that this processor has a minimum of 4 cores.
  • IBM POWER 8 processor.
Memory
A minimum of 6 GB of RAM. Add additional RAM to this requirement based on the amount of memory that you intend to make available to virtual machine instances.
Disk Space
A minimum of 40 GB of available disk space.
Network Interface Cards
A minimum of one 1 Gbps Network Interface Card, although it is recommended to use at least two NICs in a production environment. Use additional network interface cards for bonded interfaces or to delegate tagged VLAN traffic.
Power Management
Each Compute node requires a supported power management interface, such as Intelligent Platform Management Interface (IPMI) functionality, on the server motherboard.

5.8. Ceph Storage node requirements

Ceph Storage nodes are responsible for providing object storage in a Red Hat OpenStack Platform environment.

Placement Groups
Ceph uses Placement Groups to facilitate dynamic and efficient object tracking at scale. In the case of OSD failure or cluster rebalancing, Ceph can move or replicate a placement group and its contents, which means a Ceph cluster can rebalance and recover efficiently. The default Placement Group count that director creates is not always optimal, so it is important to calculate the correct Placement Group count according to your requirements. You can use the Placement Groups (PGs) per Pool Calculator to calculate the correct count.
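As a rough planning aid, the following Python sketch implements the commonly cited heuristic of approximately 100 placement groups per OSD, divided by the replica count and the number of pools and rounded up to a power of two. The OSD, replica, and pool counts in the example are hypothetical; treat the result as an approximation only, and use the linked calculator as the authoritative tool.

```python
def placement_groups_per_pool(osd_count, replica_count=3, pool_count=1,
                              target_pgs_per_osd=100):
    """Approximate PGs per pool using the common (OSDs * 100) / replicas heuristic,
    spread across the number of pools and rounded up to the nearest power of two.

    Planning approximation only; use the Placement Groups (PGs) per Pool Calculator
    for the authoritative value.
    """
    raw = (osd_count * target_pgs_per_osd) / (replica_count * pool_count)
    power = 1
    while power < raw:
        power *= 2
    return power

# Hypothetical example: 9 OSDs, 3 replicas, 4 pools.
print(placement_groups_per_pool(osd_count=9, replica_count=3, pool_count=4))  # 128
```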
Processor
64-bit x86 processor with support for the Intel 64 or AMD64 CPU extensions.
Memory
Red Hat typically recommends a baseline of 16 GB of RAM per OSD host, with an additional 2 GB of RAM per OSD daemon.
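A minimal sketch of this memory guideline, assuming the 16 GB baseline per OSD host plus 2 GB per OSD daemon:

```python
def ceph_osd_host_ram_gb(osd_count):
    """Planning estimate: 16 GB baseline per OSD host plus 2 GB per OSD daemon."""
    return 16 + 2 * osd_count

print(ceph_osd_host_ram_gb(osd_count=12))  # 40 -> 40 GB for a host with 12 OSDs
```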
Disk Layout

Sizing is dependent on your storage requirements. Red Hat recommends that your Ceph Storage node configuration includes three or more disks in a layout similar to the following example:

  • /dev/sda - The root disk. The director copies the main overcloud image to the disk. Ensure that the disk has a minimum of 40 GB of available disk space.
  • /dev/sdb - The journal disk. This disk divides into partitions for Ceph OSD journals. For example, /dev/sdb1, /dev/sdb2, and /dev/sdb3. The journal disk is usually a solid state drive (SSD) to aid with system performance.
  • /dev/sdc and onward - The OSD disks. Use as many disks as necessary for your storage requirements.

    Note

    Red Hat OpenStack Platform director uses ceph-ansible, which does not support installing the OSD on the root disk of Ceph Storage nodes. This means you need at least two disks for a supported Ceph Storage node.

Network Interface Cards
A minimum of one 1 Gbps Network Interface Card, although Red Hat recommends that you use at least two NICs in a production environment. Use additional network interface cards for bonded interfaces or to delegate tagged VLAN traffic. Red Hat recommends that you use a 10 Gbps interface for storage nodes, especially if you want to create an OpenStack Platform environment that serves a high volume of traffic.
Power Management
Each Ceph Storage node requires a supported power management interface, such as Intelligent Platform Management Interface (IPMI) functionality, on the motherboard of the server.

See the Deploying an Overcloud with Containerized Red Hat Ceph guide for more information about installing an overcloud with a Ceph Storage cluster.

5.9. Object Storage node requirements

Object Storage nodes provide an object storage layer for the overcloud. The Object Storage proxy is installed on Controller nodes. The storage layer requires bare metal nodes with multiple disks per node.

Processor
64-bit x86 processor with support for the Intel 64 or AMD64 CPU extensions.
Memory
Memory requirements depend on the amount of storage space. Ideally, use a minimum of 1 GB of memory per 1 TB of hard disk space. For optimal performance, it is recommended to use 2 GB per 1 TB of hard disk space, especially for workloads with files smaller than 100 GB.
Disk Space

Storage requirements depend on the capacity needed for the workload. It is recommended to use SSD drives to store the account and container data. The capacity ratio of account and container data to objects is approximately 1 percent. For example, for every 100 TB of hard drive capacity, provide 1 TB of SSD capacity for account and container data.

However, this depends on the type of stored data. If storing mostly small objects, provide more SSD space. For large objects (videos, backups), use less SSD space.
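The following Python sketch combines the memory and SSD sizing guidance above into a single planning estimate: 1 GB to 2 GB of RAM per 1 TB of disk, and roughly 1 percent of the object capacity reserved as SSD space for account and container data. The ratios are guideline values only; adjust them for your workload.

```python
def object_storage_sizing(disk_capacity_tb):
    """Return planning estimates for an Object Storage node based on the guidance
    above: 1 GB (minimum) to 2 GB (recommended) of RAM per 1 TB of disk, and
    approximately 1 percent of capacity as SSD space for account and container data."""
    return {
        "ram_min_gb": disk_capacity_tb * 1,
        "ram_recommended_gb": disk_capacity_tb * 2,
        "account_container_ssd_tb": disk_capacity_tb * 0.01,
    }

print(object_storage_sizing(disk_capacity_tb=100))
# {'ram_min_gb': 100, 'ram_recommended_gb': 200, 'account_container_ssd_tb': 1.0}
```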

Disk Layout

The recommended node configuration requires a disk layout similar to the following example:

  • /dev/sda - The root disk. The director copies the main overcloud image to the disk.
  • /dev/sdb - Used for account data.
  • /dev/sdc - Used for container data.
  • /dev/sdd and onward - The object server disks. Use as many disks as necessary for your storage requirements.
Network Interface Cards
A minimum of 2 x 1 Gbps Network Interface Cards. Use additional network interface cards for bonded interfaces or to delegate tagged VLAN traffic.
Power Management
Each Object Storage node requires a supported power management interface, such as Intelligent Platform Management Interface (IPMI) functionality, on the server motherboard.

5.10. Overcloud repositories

You must enable the following repositories to install and configure the overcloud.

Core repositories

The following table lists core repositories for installing the overcloud.

| Name | Repository | Description of requirement |
| --- | --- | --- |
| Red Hat Enterprise Linux 8 for x86_64 - BaseOS (RPMs) | rhel-8-for-x86_64-baseos-rpms | Base operating system repository for x86_64 systems. |
| Red Hat Enterprise Linux 8 for x86_64 - AppStream (RPMs) | rhel-8-for-x86_64-appstream-rpms | Contains Red Hat OpenStack Platform dependencies. |
| Red Hat Enterprise Linux 8 for x86_64 - High Availability (RPMs) | rhel-8-for-x86_64-highavailability-rpms | High availability tools for Red Hat Enterprise Linux. Used for Controller node high availability. |
| Red Hat Ansible Engine 2.8 for RHEL 8 x86_64 (RPMs) | ansible-2.8-for-rhel-8-x86_64-rpms | Ansible Engine for Red Hat Enterprise Linux. Used to provide the latest version of Ansible. |
| Advanced Virtualization for RHEL 8 x86_64 (RPMs) | advanced-virt-for-rhel-8-x86_64-rpms | Provides virtualization packages for OpenStack Platform. |
| Red Hat Satellite Tools for RHEL 8 Server RPMs x86_64 | satellite-tools-6.5-for-rhel-8-x86_64-rpms | Tools for managing hosts with Red Hat Satellite 6. |
| Red Hat OpenStack Platform 15 for RHEL 8 (RPMs) | openstack-15-for-rhel-8-x86_64-rpms | Core Red Hat OpenStack Platform repository. |
| Red Hat Fast Datapath for RHEL 8 (RPMs) | fast-datapath-for-rhel-8-x86_64-rpms | Provides Open vSwitch (OVS) packages for OpenStack Platform. |
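The following Python sketch shows one way to enable the core repositories listed in the table above with subscription-manager. It assumes the node is already registered and attached to a subscription that provides these repositories, and that the script runs with sufficient privileges; running the equivalent subscription-manager command directly is just as valid.

```python
import subprocess

# Core repositories from the table above. Enabling them assumes the system is already
# registered and attached to a subscription that provides these repositories, and that
# this runs as a user with privileges to manage subscriptions (typically root).
CORE_REPOS = [
    "rhel-8-for-x86_64-baseos-rpms",
    "rhel-8-for-x86_64-appstream-rpms",
    "rhel-8-for-x86_64-highavailability-rpms",
    "ansible-2.8-for-rhel-8-x86_64-rpms",
    "advanced-virt-for-rhel-8-x86_64-rpms",
    "satellite-tools-6.5-for-rhel-8-x86_64-rpms",
    "openstack-15-for-rhel-8-x86_64-rpms",
    "fast-datapath-for-rhel-8-x86_64-rpms",
]

def enable_repos(repos):
    """Enable each repository with 'subscription-manager repos --enable=<repo>'."""
    cmd = ["subscription-manager", "repos"] + [f"--enable={repo}" for repo in repos]
    subprocess.run(cmd, check=True)

enable_repos(CORE_REPOS)
```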

Real Time repositories

The following table lists repositories for Real Time Compute (RTC) functionality.

| Name | Repository | Description of requirement |
| --- | --- | --- |
| Red Hat Enterprise Linux 8 for x86_64 - Real Time (RPMs) | rhel-8-for-x86_64-rt-rpms | Repository for Real Time KVM (RT-KVM). Contains packages to enable the real time kernel. This repository should be enabled for all Compute nodes targeted for RT-KVM. NOTE: You need a separate subscription to a Red Hat OpenStack Platform for Real Time SKU before you can access this repository. |
| Red Hat Enterprise Linux 8 for x86_64 - Real Time for NFV (RPMs) | rhel-8-for-x86_64-nfv-rpms | Repository for Real Time KVM (RT-KVM) for NFV. Contains packages to enable the real time kernel. This repository should be enabled for all NFV Compute nodes targeted for RT-KVM. NOTE: You need a separate subscription to a Red Hat OpenStack Platform for Real Time SKU before you can access this repository. |

IBM POWER repositories

The following table lists repositories for OpenStack Platform on IBM POWER (ppc64le) architecture. Use these repositories in place of their equivalents in the Core repositories.

| Name | Repository | Description of requirement |
| --- | --- | --- |
| Red Hat Enterprise Linux 8 for IBM Power, little endian - BaseOS (RPMs) | rhel-8-for-ppc64le-baseos-rpms | Base operating system repository for ppc64le systems. |
| Red Hat Enterprise Linux 8 for IBM Power, little endian - AppStream (RPMs) | rhel-8-for-ppc64le-appstream-rpms | Contains Red Hat OpenStack Platform dependencies. |
| Red Hat Enterprise Linux 8 for IBM Power, little endian - High Availability (RPMs) | rhel-8-for-ppc64le-highavailability-rpms | High availability tools for Red Hat Enterprise Linux. Used for Controller node high availability. |
| Red Hat Ansible Engine 2.8 for RHEL 8 IBM Power, little endian (RPMs) | ansible-2.8-for-rhel-8-ppc64le-rpms | Ansible Engine for Red Hat Enterprise Linux. Used to provide the latest version of Ansible. |
| Red Hat OpenStack Platform 15 for RHEL 8 (RPMs) | openstack-15-for-rhel-8-ppc64le-rpms | Core Red Hat OpenStack Platform repository for ppc64le systems. |
