Chapter 2. Top new features
This section provides an overview of the top new features in this release of Red Hat OpenStack Platform.
2.1. Bare Metal Service
This section outlines the top new features for the Bare Metal (ironic) service.
- Provision hardware before deploying the overcloud
- In Red Hat OpenStack Platform 17.0, you must provision the bare metal nodes and the physical network resources for the overcloud before deploying the overcloud. The openstack overcloud deploy command no longer provisions the hardware. For more information, see Provisioning and deploying your overcloud.
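For example, the new order of operations looks like the following; the node definition file name, stack name, and output file name are illustrative and depend on your environment:

    # Provision the bare metal nodes before the overcloud deployment and
    # capture the result in a generated environment file
    openstack overcloud node provision \
      --stack overcloud \
      --output overcloud-baremetal-deployed.yaml \
      overcloud-baremetal-deploy.yaml

    # Pass the generated environment file to the overcloud deployment,
    # which no longer provisions the hardware itself
    openstack overcloud deploy --templates \
      -e overcloud-baremetal-deployed.yaml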
- New network definition file format
- In Red Hat OpenStack Platform 17.0, you configure your network definition files by using ansible jinja2 templates instead of heat templates. For more information, see Configuring overcloud networking.
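As an illustration, the new workflow provisions the networks from a definition file before the overcloud deployment; the excerpt and command below are examples only, and the exact format and file names depend on your environment:

    # network_data.yaml (illustrative excerpt of a network definition entry):
    #   - name: InternalApi
    #     name_lower: internal_api
    #     vip: true
    #     subnets:
    #       internal_api_subnet:
    #         ip_subnet: 172.16.2.0/24
    #         vlan: 20
    # Provision the networks and generate a deployed-networks environment file
    openstack overcloud network provision \
      --output networks-deployed.yaml network_data.yaml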
- Whole disk images are the default overcloud image
- The default overcloud-full flat partition images have been updated to the overcloud-hardened-uefi-full whole disk images. The whole disk image is a single compressed qcow2 image that contains the following elements:
  - A partition layout containing UEFI boot, legacy boot, and a root partition. The root partition contains a single LVM group with logical volumes of different sizes that are mounted at /, /tmp, /var, /var/log, and so on.
  When you deploy a whole-disk image, ironic-python-agent copies the whole image to the disk without any bootloader or partition changes.
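For example, after you deploy a node with the whole disk image, you can inspect the resulting layout; the exact volume group and logical volume names depend on the image build:

    # List block devices, partitions, and mount points on the deployed node
    lsblk -o NAME,FSTYPE,TYPE,MOUNTPOINT
    # List the LVM logical volumes that back /, /tmp, /var, /var/log, and so on
    sudo lvs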
- UEFI Boot by default
- The default boot mode of bare metal nodes is now UEFI boot, because the Legacy BIOS boot feature is unavailable in new hardware.
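If you need to control the boot mode of a node explicitly, the following hedged example uses the node capabilities property; the node name and values are placeholders:

    # Pin a node to UEFI boot (now the default)
    openstack baremetal node set <node> --property capabilities='boot_mode:uefi'
    # Keep a node on legacy BIOS boot where the hardware still supports it
    openstack baremetal node set <node> --property capabilities='boot_mode:bios'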
2.2. Block Storage
This section outlines the top new features for the Block Storage (cinder) service.
- Support for automating multipath deployments
- You can specify the location of your multipath configuration file for your overcloud deployment.
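A minimal sketch of such a deployment follows; the environment file name is a placeholder, and you should verify the multipath parameter names against your version of the deployment templates:

    # multipath-env.yaml (illustrative) contains, under parameter_defaults:
    #   MultipathdEnable: true
    #   MultipathdCustomConfigFile: /home/stack/multipath.conf
    openstack overcloud deploy --templates -e multipath-env.yaml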
- Project-specific default volume types
For complex deployments, project administrators can define a default volume type for each project (tenant).
If you create a volume and do not specify a volume type, then Block Storage uses the default volume type. You can use the Block Storage (cinder) configuration file to define the general default volume type that applies to all your projects (tenants). But if your deployment uses project-specific volume types, ensure that you define default volume types for each project. In this case, Block Storage uses the project-specific volume type instead of the general default volume type. For more information, see Defining a project-specific default volume type.
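For example, an administrator can set and verify a project-specific default with the cinder client; this assumes Block Storage API microversion 3.62 or later, and the volume type and project ID are placeholders:

    # Set a project-specific default volume type
    cinder --os-volume-api-version 3.62 default-type-set <volume-type> <project-id>
    # Confirm the project-specific defaults that are in effect
    cinder --os-volume-api-version 3.62 default-type-list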
2.3. Ceph Storage
This section outlines the top new features for Ceph Storage.
- Greater security for Ceph client Shared File Systems service (manila) permissions
- The Shared File Systems service CephFS drivers (native CephFS and CephFS through NFS) now interact with Ceph clusters through the Ceph Manager Volumes interface. The Ceph client user configured for the Shared File Systems service no longer needs to be as permissive. This feature makes Ceph client user permissions for the Shared File Systems service more secure.
- Ceph Object Gateway (RGW) replaces Object Storage service (swift)
- When you use Red Hat OpenStack Platform (RHOSP) director to deploy Ceph, director enables Ceph Object Gateway (RGW) object storage, which replaces the Object Storage service (swift). All other services that normally use the Object Storage service can start using RGW instead without additional configuration.
- Red Hat Ceph Storage cluster deployment in new environments
In new environments, the Red Hat Ceph Storage cluster is deployed first, before the overcloud, using director and the openstack overcloud ceph deploy command (see the example at the end of this entry). You now use cephadm to deploy Ceph, because deployment with ceph-ansible is deprecated. For more information about deploying Ceph, see Deploying Red Hat Ceph Storage and Red Hat OpenStack Platform together with director. This document replaces Deploying an overcloud with containerized Red Hat Ceph.
A Red Hat Ceph Storage cluster that you deployed without RHOSP director is also supported. For more information, see Integrating an Overcloud with an Existing Red Hat Ceph Storage Cluster.
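The following sketch illustrates the director-based order of operations, assuming that the bare metal nodes were provisioned into overcloud-baremetal-deployed.yaml as described in section 2.1; file names are illustrative:

    # Deploy the Red Hat Ceph Storage cluster first, on the provisioned nodes
    openstack overcloud ceph deploy \
      overcloud-baremetal-deployed.yaml \
      --output deployed-ceph.yaml

    # Then deploy the overcloud, passing in the generated Ceph environment file
    openstack overcloud deploy --templates \
      -e overcloud-baremetal-deployed.yaml \
      -e deployed-ceph.yaml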
- Support for creating shares from snapshots
- You can create a new share from a snapshot to restore the snapshot data by using the Shared File Systems service (manila) CephFS back ends: native CephFS and CephFS through NFS.
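A hedged example with the manila client follows; the share names, snapshot ID, protocol, and size are placeholders:

    # Create a snapshot of an existing share, then restore it into a new share
    manila snapshot-create my-share --name my-share-snap
    manila create --snapshot-id <snapshot-id> --name restored-share CephFS 10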
2.4. Compute
This section outlines the top new features for the Compute service.
- Support for attaching and detaching SR-IOV devices to an instance
- Cloud users can create a port that has an SR-IOV vNIC, and attach the port to an instance when there is a free SR-IOV device on the host on the appropriate physical network, and the instance has a free PCIe slot. For more information, see Attaching a port to an instance.
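For example, assuming a provider network named sriov-net that maps to an SR-IOV physical network, the workflow looks like the following; the instance and port names are placeholders:

    # Create a port with an SR-IOV vNIC
    openstack port create --network sriov-net --vnic-type direct sriov-port
    # Attach the port to a running instance, and detach it again later
    openstack server add port my-instance sriov-port
    openstack server remove port my-instance sriov-port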
- Support for creating an instance with NUMA-affinity on the port
- Cloud users can create a port that has a NUMA affinity policy, and attach the port to an instance. For more information, see Creating an instance with NUMA affinity on the port.
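A hedged example follows, assuming that the Networking service port NUMA affinity policy extension is enabled; the network, flavor, image, and instance names are placeholders:

    # Create a port that requires NUMA affinity, then boot an instance with it
    openstack port create --network my-net --numa-policy-required numa-port
    openstack server create --flavor m1.large --image rhel-9 \
      --port numa-port numa-instance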
- Q35 is the default machine type
- The default machine type for each host architecture is Q35 (pc-q35-rhel9.0.0) for new Red Hat OpenStack Platform 17.0 deployments. The Q35 machine type provides several benefits and improvements, including live migration of instances between different RHEL 9.x minor releases, and native PCIe hotplug, which is faster than the ACPI hotplug used by the i440fx machine type.
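If specific workloads need a different machine type, you can override the default per image; the following hedged example uses the hw_machine_type image property and a placeholder image name:

    # Pin instances booted from this image to an explicit machine type
    openstack image set --property hw_machine_type=pc-q35-rhel9.0.0 my-image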
2.5. Networking
This section outlines the top new features for the Networking service.
- Active/Active clustered database service model improves OVS database read performance and fault tolerance
Starting in RHOSP 17.0, RHOSP ML2/OVN deployments use a clustered database service model that applies the Raft consensus algorithm to enhance performance of OVS database protocol traffic and provide faster, more reliable failover handling. The clustered database service model replaces the pacemaker-based, active/backup model.
A clustered database operates on a cluster of at least three database servers on different hosts. Servers use the Raft consensus algorithm to synchronize writes and share network traffic continuously across the cluster. The cluster elects one server as the leader. All servers in the cluster can handle database read operations, which mitigates potential bottlenecks on the control plane. Write operations are handled by the cluster leader.
If a server fails, a new cluster leader is elected and the traffic is redistributed among the remaining operational servers. The clustered database service model handles failovers more efficiently than the pacemaker-based model did. This mitigates related downtime and complications that can occur with longer failover times.
The leader election process requires a majority, so fault tolerance is determined by the largest odd number of servers in the cluster. For example, a three-server cluster continues to operate if one server fails. A five-server cluster tolerates up to two failures. Increasing the number of servers to an even number does not increase fault tolerance. For example, a four-server cluster cannot tolerate more failures than a three-server cluster.
Most RHOSP deployments use three servers.
Clusters larger than five servers also work, with every two added servers allowing the cluster to tolerate an additional failure, but write performance decreases.
The clustered database model is the default in RHOSP 17.0 deployments. You do not need to perform any configuration steps.
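To observe the cluster, you can query the Raft status of the OVN databases on a Controller node; the container name and control socket path in this example are illustrative and can differ between deployments:

    # Show the Raft role, term, and membership of the OVN southbound database
    podman exec ovn_cluster_south_db_server \
      ovs-appctl -t /var/run/ovn/ovnsb_db.ctl cluster/status OVN_Southbound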
- Designate DNSaaS
- In Red Hat OpenStack Platform (RHOSP) 17.0, the DNS service (designate) is now fully supported. Designate is an official OpenStack project that provides a DNS-as-a-Service (DNSaaS) implementation and enables you to manage DNS records and zones in the cloud. The DNS service provides a REST API, and is integrated with the RHOSP Identity service (keystone) for user management. Using RHOSP director, you can deploy BIND instances to contain DNS records, or you can integrate the DNS service into an existing BIND infrastructure. (Integration with an existing BIND infrastructure is a technology preview feature.) In addition, director can configure DNS service integration with the RHOSP Networking service (neutron) to automatically create records for virtual machine instances, network ports, and floating IPs. For more information, see Using Designate for DNS-as-a-Service.
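For example, after the DNS service is deployed, cloud users can manage zones and records with the OpenStack client; the zone name, email address, and IP address below are placeholders:

    # Create a zone, then add an A record to it
    openstack zone create --email admin@example.com example.com.
    openstack recordset create --type A --record 198.51.100.10 \
      example.com. www.example.com.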
2.6. Validation Framework
This section outlines the top new features for the Validation Framework.
- User-created validations through the CLI
- In Red Hat OpenStack Platform (RHOSP) 17.0, you can create your own personalized validation with the validation init command. Running this command generates a template for a new validation. You can edit the new validation role to suit your requirements.
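For example, the following commands scaffold a new validation role and confirm that it is available; the validation name is a placeholder:

    # Generate a template for a new validation role, then list validations
    validation init my-new-validation
    validation list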
2.7. Technology previews
This section provides an overview of the top new technology previews in this release of Red Hat OpenStack Platform.
For more information on the support scope for features marked as technology previews, see Technology Preview Features Support Scope.
- Border Gateway Protocol (BGP)
- In Red Hat OpenStack Platform (RHOSP) 17.0, a technology preview is available for Border Gateway Protocol (BGP) to route the control plane, floating IPs, and workloads in provider networks. By using BGP advertisements, you do not need to configure static routes in the fabric, and RHOSP can be deployed in a pure Layer 3 data center. RHOSP uses Free Range Routing (FRR) as the dynamic routing solution to advertise and withdraw routes to control plane endpoints, to VMs in provider networks, and to floating IPs.
- Integrating existing BIND servers with the DNS service
- In Red Hat OpenStack Platform (RHOSP) 17.0, a technology preview is available for integrating the RHOSP DNS service (designate) with an existing BIND infrastructure. For more information, see Configuring existing BIND servers for the DNS service.