Chapter 5. Planning your overcloud

The following section contains guidelines for planning various aspects of your Red Hat OpenStack Platform (RHOSP) environment, including defining node roles, planning your network topology, and planning storage.

Important

Do not rename your overcloud nodes after they have been deployed. Renaming a node after deployment creates issues with instance management.

5.1. Node roles

Director includes the following default node types to build your overcloud:

Controller

Provides key services for controlling your environment. This includes the dashboard (horizon), authentication (keystone), image storage (glance), networking (neutron), orchestration (heat), and high availability services. A production-level Red Hat OpenStack Platform (RHOSP) environment requires three Controller nodes for high availability.

Note

Use environments with one Controller node only for testing purposes, not for production. Environments with two Controller nodes or more than three Controller nodes are not supported.

Compute
A physical server that acts as a hypervisor and contains the processing capabilities required to run virtual machines in the environment. A basic RHOSP environment requires at least one Compute node.
Ceph Storage
A host that provides Red Hat Ceph Storage. You can add more Ceph Storage hosts to scale the cluster. This deployment role is optional.
Swift Storage
A host that provides external object storage to the OpenStack Object Storage (swift) service. This deployment role is optional.

The following table contains some examples of different overclouds and defines the node types for each scenario.

Table 5.1. Node deployment roles for scenarios

Scenario                                          Controller  Compute  Ceph Storage  Swift Storage  Total
Small overcloud                                   3           1        -             -              4
Medium overcloud                                  3           3        -             -              6
Medium overcloud with additional object storage   3           3        -             3              9
Medium overcloud with Ceph Storage cluster        3           3        3             -              9

In addition, consider whether to split individual services into custom roles. For more information about the composable roles architecture, see Composable services and custom roles in the Customizing your Red Hat OpenStack Platform deployment guide.

Table 5.2. Node deployment roles for a proof of concept deployment

Scenario            Undercloud  Controller  Compute  Ceph Storage  Total
Proof of concept    1           1           1        1             4

Warning

Red Hat OpenStack Platform keeps the Ceph Storage cluster operational during day-2 operations. Therefore, some day-2 operations, such as upgrades or minor updates of the Ceph Storage cluster, are not possible in deployments with fewer than three MONs or three storage nodes. If you use a single Controller node or a single Ceph Storage node, day-2 operations fail.

5.2. Overcloud networks

It is important to plan the networking topology and subnets in your environment so that roles and services can communicate with each other correctly. Red Hat OpenStack Platform (RHOSP) uses the OpenStack Networking (neutron) service, which operates autonomously and manages software-based networks, static and floating IP addresses, and DHCP.

By default, director configures the nodes to use the Provisioning / Control Plane network for connectivity. However, you can isolate network traffic into a series of composable networks that you can customize and to which you can assign services.

In a typical RHOSP installation, the number of network types often exceeds the number of physical network links. To connect all the networks to the proper hosts, the overcloud uses VLAN tagging to deliver more than one network on each interface. Most of the networks are isolated subnets but some networks require a Layer 3 gateway to provide routing for Internet access or infrastructure network connectivity. If you use VLANs to isolate your network traffic types, you must use a switch that supports 802.1Q standards to provide tagged VLANs.

Note

You create project (tenant) networks using VLANs. You can create Geneve tunnels for special-use networks without consuming project VLANs. Red Hat recommends that you deploy a project network tunneled with Geneve, even if you intend to deploy your overcloud in neutron VLAN mode with tunneling disabled. If you deploy a project network tunneled with Geneve, you can still update your environment to use tunnel networks as utility networks or virtualization networks. It is possible to add Geneve capability to a deployment with a project VLAN, but it is not possible to add a project VLAN to an existing overcloud without causing disruption.

Director also includes a set of templates that you can use to configure NICs with isolated composable networks. The following NIC configurations are available by default:

  • Single NIC configuration - One NIC for the Provisioning network on the native VLAN and tagged VLANs that use subnets for the different overcloud network types.
  • Bonded NIC configuration - One NIC for the Provisioning network on the native VLAN and two NICs in a bond for tagged VLANs for the different overcloud network types.
  • Multiple NIC configuration - Each NIC uses a subnet for a different overcloud network type.

You can also create your own templates to map a specific NIC configuration.
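
The NIC templates themselves are not reproduced here, but the mapping idea can be sketched as plain data. The following Python sketch is illustrative only: the real director NIC templates are Jinja2/YAML files, and the device names and network lists below are hypothetical. It models the bonded NIC configuration described above and checks that the Provisioning network stays on the native VLAN while every other network is carried exactly once as a tagged VLAN:

    # Illustrative planning check only; not the director NIC template format.
    layout = {
        "nic1": {"native": "Provisioning", "tagged": []},
        "bond1": {
            "members": ["nic2", "nic3"],
            "native": None,
            "tagged": ["InternalApi", "Storage", "StorageMgmt", "Tenant", "External"],
        },
    }

    def check_layout(layout):
        # Provisioning must ride the native (untagged) VLAN on the first NIC.
        assert layout["nic1"]["native"] == "Provisioning", "Provisioning must stay on the native VLAN"
        # Every other overcloud network should be delivered on exactly one device.
        tagged = [net for dev in layout.values() for net in dev.get("tagged", [])]
        assert len(tagged) == len(set(tagged)), "each network should map to exactly one device"
        return tagged

    print(check_layout(layout))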

The following details are also important when you consider your network configuration:

  • During the overcloud creation, you refer to NICs using a single name across all overcloud machines. Ideally, you should use the same NIC on each overcloud node for each respective network to avoid confusion. For example, use the primary NIC for the Provisioning network and the secondary NIC for the OpenStack services.
  • Set all overcloud systems to PXE boot off the Provisioning NIC, and disable PXE boot on the External NIC and any other NICs on the system. Also ensure that the Provisioning NIC has PXE boot at the top of the boot order, ahead of hard disks and CD/DVD drives.
  • All overcloud bare metal systems require a supported power management interface, such as an Intelligent Platform Management Interface (IPMI), so that director can control the power management of each node.
  • Make a note of the following details for each overcloud system: the MAC address of the Provisioning NIC, the IP address of the IPMI NIC, IPMI username, and IPMI password. This information is useful later when you configure the overcloud nodes.
  • If an instance must be accessible from the external internet, you can allocate a floating IP address from a public network and associate the floating IP address with the instance. The instance retains its private IP address, and network traffic uses NAT to traverse to the floating IP address. A floating IP address can be assigned to only a single instance; it does not map to multiple private IP addresses. However, the floating IP address is reserved for use only by a single tenant, which means that the tenant can associate or disassociate the floating IP address with a particular instance as required. This configuration exposes your infrastructure to the external internet, so you must follow suitable security practices. A minimal example of this workflow appears after this list.
  • To mitigate the risk of network loops in Open vSwitch, only a single interface or a single bond can be a member of a given bridge. If you require multiple bonds or interfaces, you can configure multiple bridges.
  • Red Hat recommends using DNS hostname resolution so that your overcloud nodes can connect to external services, such as the Red Hat Content Delivery Network and network time servers.
  • Red Hat recommends that the Provisioning interface, External interface, and any floating IP interfaces be left at the default MTU of 1500. Connectivity problems are likely to occur otherwise. This is because routers typically cannot forward jumbo frames across Layer 3 boundaries.
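
The floating IP workflow mentioned in the list above can be sketched with openstacksdk. This is a minimal, hedged example; the cloud name, instance name, and external network name are placeholders for your environment:

    # Minimal openstacksdk sketch of allocating and attaching a floating IP.
    import openstack

    conn = openstack.connect(cloud="overcloud")    # entry from your clouds.yaml (placeholder)
    server = conn.get_server("my-instance")        # the instance keeps its private IP

    # Allocate a floating IP from the external network and attach it to the instance;
    # traffic to the floating IP is translated (NAT) to the instance's private address.
    conn.create_floating_ip(network="public", server=server)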

5.3. Overcloud storage

You can use Red Hat Ceph Storage nodes as the back end storage for your overcloud environment. You can configure your overcloud to use the Ceph nodes for the following types of storage:

Images
The Image service (glance) manages the images that are used for creating virtual machine instances. Images are immutable binary blobs. You can use the Image service to store images in a Ceph Block Device. For information about supported image formats, see The Image service (glance) in Creating and managing images.
Volumes
The Block Storage service (cinder) manages persistent storage volumes for instances. The Block Storage service volumes are block devices. You can use a volume to boot an instance, and you can attach volumes to running instances. You can use the Block Storage service to boot a virtual machine using a copy-on-write clone of an image.
Objects
The Ceph Object Gateway (RGW) provides the default overcloud object storage on the Ceph cluster when your overcloud storage back end is Red Hat Ceph Storage. If your overcloud does not have Red Hat Ceph Storage, then the overcloud uses the Object Storage service (swift) to provide object storage. You can dedicate overcloud nodes to the Object Storage service. This is useful in situations where you need to scale or replace Controller nodes in your overcloud environment but need to retain object storage outside of a high availability cluster.
File Systems
The Shared File Systems service (manila) manages shared file systems. You can use the Shared File Systems service to manage shares backed by a CephFS file system with data on the Ceph Storage nodes.
Instance disks
When you launch an instance, the instance disk is stored as a file in the instance directory of the hypervisor. The default file location is /var/lib/nova/instances.
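
When you plan local storage for instance disks, it can help to check the space available at that default location. The following sketch is a simple capacity check, assuming it runs on a Compute node that uses the default path:

    # Simple capacity check for the default instance disk location on a Compute node.
    import shutil

    usage = shutil.disk_usage("/var/lib/nova/instances")
    gib = 1024 ** 3
    print(f"total={usage.total / gib:.0f} GiB  used={usage.used / gib:.0f} GiB  free={usage.free / gib:.0f} GiB")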

For more information about Ceph Storage, see the Red Hat Ceph Storage Architecture Guide.

5.3.1. Configuration considerations for overcloud storage

Consider the following issues when planning your storage configuration:

Instance security and performance
Using LVM on an instance that uses a back-end Block Storage volume causes issues with performance, volume visibility and availability, and data corruption. Use an LVM filter to mitigate issues. For more information, see Enabling LVM2 filtering on overcloud nodes in Configuring persistent storage, and the Red Hat Knowledgebase solution Using LVM on a cinder volume exposes the data to the compute host.
Local disk partition sizes

Consider the storage and retention requirements for your deployment to determine if the following default disk partition sizes meet your requirements:

Partition        Default size
/                8 GB
/tmp             1 GB
/var/log         10 GB
/var/log/audit   2 GB
/home            1 GB
/var             Node role dependent:
                 • Object Storage nodes: 10% of the remaining disk size.
                 • Controller nodes: 90% of the remaining disk size.
                 • Non-Object Storage nodes: the remaining disk size after all other partitions are allocated.
/srv             On Object Storage nodes: the remaining disk size after all other partitions are allocated.

To change the allocated disk size for a partition, update the role_growvols_args extra Ansible variable in the ansible_playbooks definition in your overcloud-baremetal-deploy.yaml node definition file. For more information, see Configuring whole disk partitions for the Object Storage service.
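
As a worked example of these defaults, the following sketch computes the /var allocation for a hypothetical Controller node disk; the 500 GB disk size is an assumption for illustration only:

    # Worked example of the default partition sizes in the table above.
    DISK_GB = 500
    fixed = {"/": 8, "/tmp": 1, "/var/log": 10, "/var/log/audit": 2, "/home": 1}

    remaining = DISK_GB - sum(fixed.values())   # 478 GB left after the fixed partitions
    var_gb = 0.90 * remaining                   # Controller nodes: /var receives 90% of the remainder

    print(f"remaining after fixed partitions: {remaining} GB")
    print(f"/var on a Controller node: {var_gb:.0f} GB")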

5.4. Overcloud security

Your OpenStack Platform implementation is only as secure as your environment. Follow good security principles in your networking environment to ensure that you control network access properly:

  • Use network segmentation to mitigate lateral movement across the network and to isolate sensitive data. A flat network is much less secure.
  • Restrict service access and ports to a minimum.
  • Enforce proper firewall rules and password usage.
  • Ensure that SELinux is enabled.

For more information about securing your system, see the Red Hat security and hardening guides.

5.5. Overcloud high availability

To deploy a highly available overcloud, director configures multiple Controller, Compute, and Storage nodes to work together as a single cluster. In case of node failure, an automated fencing and re-spawning process is triggered, based on the type of node that failed. For more information about overcloud high availability architecture and services, see Managing high availability services.

Note

Deploying a highly available overcloud without STONITH is not supported. You must configure a STONITH device for each node that is a part of the Pacemaker cluster in a highly available overcloud. For more information on STONITH and Pacemaker, see Fencing in a Red Hat High Availability Cluster and Support Policies for RHEL High Availability Clusters.

You can also configure high availability for Compute instances with director (Instance HA). This high availability mechanism automates evacuation and re-spawning of instances on Compute nodes in case of node failure. The requirements for Instance HA are the same as the general overcloud requirements, but you must perform a few additional steps to prepare your environment for the deployment. For more information about Instance HA and installation instructions, see the Configuring high availability for instances guide.

5.6. Controller node requirements

Controller nodes host the core services in a Red Hat OpenStack Platform environment, such as the Dashboard (horizon), the back-end database server, the Identity service (keystone) authentication, and high availability services.

Processor
64-bit x86 processor with support for the Intel 64 or AMD64 CPU extensions.
Memory

The minimum amount of memory is 32 GB. However, the recommended amount of memory depends on the number of vCPUs, which is the number of CPU cores multiplied by the hyper-threading value. Use the following calculations to determine your RAM requirements (a sizing sketch follows the calculations):

  • Controller RAM minimum calculation:

    • Use 1.5 GB of memory for each vCPU. For example, a machine with 48 vCPUs should have 72 GB of RAM.
  • Controller RAM recommended calculation:

    • Use 3 GB of memory for each vCPU. For example, a machine with 48 vCPUs should have 144 GB of RAM.

For more information about measuring memory requirements, see "Red Hat OpenStack Platform Hardware Requirements for Highly Available Controllers" on the Red Hat Customer Portal.
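
These calculations can be expressed as a short sizing helper. The following sketch applies the 1.5 GB and 3 GB per-vCPU figures with the 32 GB floor; the core count and hyper-threading value are example assumptions:

    # Controller RAM sizing helper based on the calculations above.
    def controller_ram_gb(cores, threads_per_core=2):
        vcpus = cores * threads_per_core
        return {
            "vcpus": vcpus,
            "minimum_gb": max(32, vcpus * 1.5),      # 1.5 GB per vCPU, never below 32 GB
            "recommended_gb": max(32, vcpus * 3),    # 3 GB per vCPU, never below 32 GB
        }

    # Example: 24 cores with hyper-threading -> 48 vCPUs, 72 GB minimum, 144 GB recommended.
    print(controller_ram_gb(cores=24))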

Disk Storage and layout

A minimum of 50 GB of storage is required if the Object Storage service (swift) is not running on the Controller nodes. However, the Telemetry and Object Storage services are both installed on the Controller nodes, with both configured to use the root disk. These defaults are suitable for deploying small overclouds built on commodity hardware, which is typical of proof-of-concept and test environments. You can use these defaults to deploy overclouds with minimal planning, but they offer little in terms of workload capacity and performance.

In an enterprise environment, however, the defaults could cause a significant bottleneck because Telemetry accesses storage constantly. This results in heavy disk I/O usage, which severely impacts the performance of all other Controller services. In this type of environment, you must plan your overcloud and configure it accordingly.

Network Interface Cards
A minimum of 2 x 1 Gbps Network Interface Cards. Use additional network interface cards for bonded interfaces or to delegate tagged VLAN traffic.
Power management
Each Controller node requires a supported power management interface, such as Intelligent Platform Management Interface (IPMI) functionality, on the server motherboard.

5.6.1. Constraints when using NUMA

The Compute service (nova) enforces strict memory affinity for all virtual machines (VMs) that have a non-uniform memory access (NUMA) topology. This means that the memory of a NUMA VM is affined to the same host NUMA node as its CPUs. Do not run NUMA and non-NUMA VMs on the same hosts. If a non-NUMA VM is already running on a host and a NUMA-affined VM boots on that host, this might result in out-of-memory (OOM) events because the NUMA VM cannot access the host memory and is limited to its NUMA node. To avoid OOM events, ensure that NUMA-aware memory tracking is enabled on all NUMA-affined instances. To do this, configure the hw:mem_page_size flavor extra spec.
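
The following sketch shows one way to set that extra spec with the openstacksdk cloud layer, equivalent to running openstack flavor set --property hw:mem_page_size=large with your flavor name. The cloud name and flavor name are placeholders, and the call assumes a recent openstacksdk release:

    # Sketch: enable NUMA-aware memory tracking by setting hw:mem_page_size on a flavor.
    import openstack

    conn = openstack.connect(cloud="overcloud")      # entry from your clouds.yaml (placeholder)
    flavor = conn.get_flavor("numa.large")           # hypothetical NUMA flavor
    conn.set_flavor_specs(flavor.id, {"hw:mem_page_size": "large"})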

5.7. Compute node requirements

Compute nodes are responsible for running virtual machine instances after they are launched. Compute nodes require bare metal systems that support hardware virtualization. Compute nodes must also have enough memory and disk space to support the requirements of the virtual machine instances that they host.

Processor
64-bit x86 processor with support for the Intel 64 or AMD64 CPU extensions, and with the AMD-V or Intel VT hardware virtualization extensions enabled. A minimum of 4 cores is recommended for this processor.
Memory

A minimum of 6 GB of RAM for the host operating system, plus additional memory to accommodate the following considerations (a sizing sketch follows this list):

  • Add additional memory that you intend to make available to virtual machine instances.
  • Add additional memory to run special features or additional resources on the host, such as additional kernel modules, virtual switches, monitoring solutions, and other additional background tasks.
  • If you intend to use non-uniform memory access (NUMA), Red Hat recommends 8 GB per CPU socket node, or 16 GB per socket node if you have more than 256 GB of physical RAM.
  • Configure at least 4 GB of swap space.
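
The following sketch adds those components together for a rough total. The instance memory, host overhead, and NUMA node count are illustrative assumptions, not fixed requirements:

    # Rough Compute node RAM total based on the considerations above.
    def compute_ram_gb(instance_ram_gb, overhead_gb=4, numa_nodes=0, per_numa_gb=8):
        host_os_gb = 6                          # minimum for the host operating system
        numa_reserve = numa_nodes * per_numa_gb # ~8 GB per socket node (16 GB above 256 GB RAM)
        return host_os_gb + instance_ram_gb + overhead_gb + numa_reserve

    # Example: 256 GB planned for instances, 8 GB of host extras, 2 NUMA nodes -> 286 GB.
    print(compute_ram_gb(instance_ram_gb=256, overhead_gb=8, numa_nodes=2))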
Disk space
A minimum of 50 GB of available disk space.
Network Interface Cards
A minimum of one 1 Gbps Network Interface Card, although it is recommended to use at least two NICs in a production environment. Use additional network interface cards for bonded interfaces or to delegate tagged VLAN traffic.
Power management
Each Compute node requires a supported power management interface, such as Intelligent Platform Management Interface (IPMI) functionality, on the server motherboard.

5.8. Red Hat Ceph Storage node requirements

There are additional node requirements when you use director to create a Ceph Storage cluster:

  • Hardware requirements, including processor, memory, network interface card selection, and disk layout, are available in the Red Hat Ceph Storage Hardware Guide.
  • Each Ceph Storage node requires a supported power management interface, such as Intelligent Platform Management Interface (IPMI) functionality, on the motherboard of the server.
  • Each Ceph Storage node must have at least two disks. RHOSP director uses cephadm to deploy the Ceph Storage cluster. The cephadm functionality does not support installing Ceph OSD on the root disk of the node.

5.8.1. Red Hat Ceph Storage nodes and RHEL compatibility

RHOSP 17.1 is supported on RHEL 9.2. However, hosts that are mapped to the Red Hat Ceph Storage role update to the latest major RHEL release. Before upgrading, review the Red Hat Knowledgebase article Red Hat Ceph Storage: Supported configurations.

5.9. Object Storage node requirements

Object Storage nodes provide an object storage layer for the overcloud. The Object Storage proxy is installed on Controller nodes. The storage layer requires bare metal nodes with multiple disks on each node.

Processor
64-bit x86 processor with support for the Intel 64 or AMD64 CPU extensions.
Memory
Memory requirements depend on the amount of storage space. Use a minimum of 1 GB of memory for each 1 TB of hard disk space. For optimal performance, it is recommended to use 2 GB for each 1 TB of hard disk space, especially for workloads with files smaller than 100 GB.
Disk space

Storage requirements depend on the capacity needed for the workload. It is recommended to use SSD drives to store the account and container data. The capacity ratio of account and container data to objects is approximately 1 percent. For example, for every 100 TB of hard drive capacity, provide 1 TB of SSD capacity for account and container data.

However, this depends on the type of stored data. If you want to store mostly small objects, provide more SSD space. For large objects (videos, backups), use less SSD space.
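
Both rules of thumb can be combined into a quick sizing helper; the 100 TB figure in the example is an assumption for illustration:

    # Object Storage sizing helper: 1-2 GB of RAM per 1 TB of disk, and roughly
    # 1 percent of object capacity as SSD space for account and container data.
    def swift_node_sizing(disk_tb):
        return {
            "ram_min_gb": disk_tb * 1,
            "ram_recommended_gb": disk_tb * 2,
            "ssd_for_account_container_tb": disk_tb * 0.01,
        }

    # Example: a node with 100 TB of object disk capacity.
    print(swift_node_sizing(disk_tb=100))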

Disk layout

The recommended node configuration requires a disk layout similar to the following example:

  • /dev/sda - The root disk. Director copies the main overcloud image to the disk.
  • /dev/sdb - Used for account data.
  • /dev/sdc - Used for container data.
  • /dev/sdd and onward - The object server disks. Use as many disks as necessary for your storage requirements.
Network Interface Cards
A minimum of 2 x 1 Gbps Network Interface Cards. Use additional network interface cards for bonded interfaces or to delegate tagged VLAN traffic.
Power management
Each Object Storage node requires a supported power management interface, such as Intelligent Platform Management Interface (IPMI) functionality, on the server motherboard.

5.10. Overcloud repositories

You run Red Hat OpenStack Platform (RHOSP) 17.1 on Red Hat Enterprise Linux (RHEL) 9.2.

Note

If you synchronize repositories with Red Hat Satellite, you can enable specific versions of the Red Hat Enterprise Linux repositories. However, the repository name remains the same regardless of the version that you choose. For example, you can enable the 9.2 version of the BaseOS repository, but the repository name is still rhel-9-for-x86_64-baseos-eus-rpms.

Warning

Only the repositories specified here are supported. Unless specifically recommended, do not enable any products or repositories other than the ones listed in the following tables, otherwise you might encounter package dependency issues. Do not enable Extra Packages for Enterprise Linux (EPEL).

Controller node repositories

The following table lists core repositories for Controller nodes in the overcloud.

Red Hat Enterprise Linux 9 for x86_64 - BaseOS (RPMs) Extended Update Support (EUS)
Repository: rhel-9-for-x86_64-baseos-eus-rpms
Description: Base operating system repository for x86_64 systems.

Red Hat Enterprise Linux 9 for x86_64 - AppStream (RPMs)
Repository: rhel-9-for-x86_64-appstream-eus-rpms
Description: Contains Red Hat OpenStack Platform dependencies.

Red Hat Enterprise Linux 9 for x86_64 - High Availability (RPMs) Extended Update Support (EUS)
Repository: rhel-9-for-x86_64-highavailability-eus-rpms
Description: High availability tools for Red Hat Enterprise Linux.

Red Hat OpenStack Platform for RHEL 9 x86_64 (RPMs)
Repository: openstack-17.1-for-rhel-9-x86_64-rpms
Description: Core Red Hat OpenStack Platform repository.

Red Hat Fast Datapath for RHEL 9 (RPMs)
Repository: fast-datapath-for-rhel-9-x86_64-rpms
Description: Provides Open vSwitch (OVS) packages for OpenStack Platform.

Red Hat Ceph Storage Tools 6 for RHEL 9 x86_64 (RPMs)
Repository: rhceph-6-tools-for-rhel-9-x86_64-rpms
Description: Tools for Red Hat Ceph Storage 6 for Red Hat Enterprise Linux 9.
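
As a convenience, the Controller node repository IDs listed above can be enabled in one pass with subscription-manager. The following sketch assumes a registered system where you manage subscriptions directly and that you run it with root privileges:

    # Sketch: enable the Controller node repositories listed above.
    import subprocess

    CONTROLLER_REPOS = [
        "rhel-9-for-x86_64-baseos-eus-rpms",
        "rhel-9-for-x86_64-appstream-eus-rpms",
        "rhel-9-for-x86_64-highavailability-eus-rpms",
        "openstack-17.1-for-rhel-9-x86_64-rpms",
        "fast-datapath-for-rhel-9-x86_64-rpms",
        "rhceph-6-tools-for-rhel-9-x86_64-rpms",
    ]

    # Build: subscription-manager repos --enable=<repo> ... and run it.
    cmd = ["subscription-manager", "repos"] + [f"--enable={repo}" for repo in CONTROLLER_REPOS]
    subprocess.run(cmd, check=True)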

Compute and ComputeHCI node repositories

The following table lists core repositories for Compute and ComputeHCI nodes in the overcloud.

Red Hat Enterprise Linux 9 for x86_64 - BaseOS (RPMs) Extended Update Support (EUS)
Repository: rhel-9-for-x86_64-baseos-eus-rpms
Description: Base operating system repository for x86_64 systems.

Red Hat Enterprise Linux 9 for x86_64 - AppStream (RPMs)
Repository: rhel-9-for-x86_64-appstream-eus-rpms
Description: Contains Red Hat OpenStack Platform dependencies.

Red Hat Enterprise Linux 9 for x86_64 - High Availability (RPMs) Extended Update Support (EUS)
Repository: rhel-9-for-x86_64-highavailability-eus-rpms
Description: High availability tools for Red Hat Enterprise Linux.

Red Hat OpenStack Platform for RHEL 9 x86_64 (RPMs)
Repository: openstack-17.1-for-rhel-9-x86_64-rpms
Description: Core Red Hat OpenStack Platform repository.

Red Hat Fast Datapath for RHEL 9 (RPMs)
Repository: fast-datapath-for-rhel-9-x86_64-rpms
Description: Provides Open vSwitch (OVS) packages for OpenStack Platform.

Red Hat Ceph Storage Tools 6 for RHEL 9 x86_64 (RPMs)
Repository: rhceph-6-tools-for-rhel-9-x86_64-rpms
Description: Tools for Red Hat Ceph Storage 6 for Red Hat Enterprise Linux 9.

Ceph Storage node repositories

The following table lists Ceph Storage related repositories for the overcloud.

Red Hat Enterprise Linux 9 for x86_64 - BaseOS (RPMs)
Repository: rhel-9-for-x86_64-baseos-rpms
Description: Base operating system repository for x86_64 systems.

Red Hat Enterprise Linux 9 for x86_64 - AppStream (RPMs)
Repository: rhel-9-for-x86_64-appstream-rpms
Description: Contains Red Hat OpenStack Platform dependencies.

Red Hat OpenStack Platform Deployment Tools for RHEL 9 x86_64 (RPMs)
Repository: openstack-17.1-deployment-tools-for-rhel-9-x86_64-rpms
Description: Packages to help director configure Ceph Storage nodes. This repository is included with standalone Ceph Storage subscriptions. If you use a combined OpenStack Platform and Ceph Storage subscription, use the openstack-17.1-for-rhel-9-x86_64-rpms repository.

Red Hat OpenStack Platform for RHEL 9 x86_64 (RPMs)
Repository: openstack-17.1-for-rhel-9-x86_64-rpms
Description: Packages to help director configure Ceph Storage nodes. This repository is included with combined Red Hat OpenStack Platform and Red Hat Ceph Storage subscriptions. If you use a standalone Red Hat Ceph Storage subscription, use the openstack-17.1-deployment-tools-for-rhel-9-x86_64-rpms repository.

Red Hat Ceph Storage Tools 6 for RHEL 9 x86_64 (RPMs)
Repository: rhceph-6-tools-for-rhel-9-x86_64-rpms
Description: Provides tools for nodes to communicate with the Ceph Storage cluster.

Red Hat Fast Datapath for RHEL 9 (RPMs)
Repository: fast-datapath-for-rhel-9-x86_64-rpms
Description: Provides Open vSwitch (OVS) packages for OpenStack Platform. If you are using OVS on Ceph Storage nodes, add this repository to the network interface configuration (NIC) templates.

5.11. Node provisioning and configuration

You provision the overcloud nodes for your Red Hat OpenStack Platform (RHOSP) environment by using either the OpenStack Bare Metal (ironic) service, or an external tool. When your nodes are provisioned, you configure them by using director.

Provisioning with the OpenStack Bare Metal (ironic) service
Provisioning overcloud nodes by using the Bare Metal service is the standard provisioning method. For more information, see Provisioning bare metal overcloud nodes.
Provisioning with an external tool
You can use an external tool, such as Red Hat Satellite, to provision overcloud nodes. This is useful if you want to create an overcloud without power management control, use networks that have DHCP/PXE boot restrictions, or if you want to use nodes that have a custom partitioning layout that does not rely on the overcloud-hardened-uefi-full.qcow2 image. This provisioning method does not use the OpenStack Bare Metal service (ironic) for managing nodes. For more information, see Configuring a basic overcloud with pre-provisioned nodes.