Chapter 2. Top New Features


This section provides an overview of the top new features in this release of Red Hat OpenStack Platform.

2.1. Red Hat OpenStack Platform Director

This section outlines the top new features for the director.

Ansible-driven deployment using director

With this release, Ansible is now integrated into the deployment process. This allows you to use standard Ansible tooling for tasks such as dry runs and targeted task execution:

  • Ansible now performs software configuration in director, through a feature named config-download.

    • This capability was previously Tech Preview in Red Hat OpenStack Platform 13, and is now entering General Availability in Red Hat OpenStack Platform 14.
  • Heat still defines the software configuration, but does not apply it:

    • The configuration is made available by Heat.
    • ansible-playbook downloads the configuration and then applies it. The undercloud serves as the Ansible control node.
Advanced subscription manager

You can now define which roles consume a particular subscription or pool, so that you attach only the subscriptions you need; see the sketch after this list.

  • New Ansible role added to manage subscriptions.
  • Richer management options.
  • Ability to assign subscriptions/pools per role.
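For example, a minimal environment file sketch, assuming the new role's rhsm_* variables and per-role parameter overrides (the activation key, organization ID, and repository below are placeholders):

parameter_defaults:
  RhsmVars:
    rhsm_activation_key: "overcloud-key"   # placeholder activation key
    rhsm_org_id: "1234567"                 # placeholder organization ID
    rhsm_repos:
      - rhel-7-server-rpms
  # Assumed per-role override: Ceph nodes attach a different pool
  CephStorageParameters:
    RhsmVars:
      rhsm_activation_key: "ceph-key"      # placeholder activation key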
Removal of Ceph and OpenStack services from Overcloud images

As a result of the move to containers, the overcloud images have changed as follows:

  • OpenStack service packages are removed.
  • Ceph packages are removed.
  • OpenStack clients are still installed.
  • Only the minimal OpenStack content required for deployment remains.

    • Note that python-heat-agents are still installed.
  • Ceph entitlements are no longer needed for all nodes (an alternative product SKU is available).
Automated container image building

You can use director to build a customized container image based on your own definition, avoiding extra manual steps before deployment; see the sketch after this list.

  • New Ansible role to automate image customization.
  • The operator supplies a Dockerfile that defines the customizations.
  • Director can build an extended container image based on a given definition, and push it to the registry.
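For example, a minimal sketch of such a definition, assuming the ContainerImagePrepare interface and the tripleo-modify-image Ansible role (the tag suffix and directory path are placeholders; the directory is expected to contain your Dockerfile):

parameter_defaults:
  ContainerImagePrepare:
    - push_destination: true                    # push the result to the local registry
      modify_role: tripleo-modify-image
      modify_append_tag: "-custom"              # placeholder tag suffix
      modify_vars:
        tasks_from: modify_image.yml
        modify_dir_path: /home/stack/image-mods # placeholder Dockerfile directory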
Containerized and unified undercloud

This release uses a unified installation procedure for the undercloud and overcloud, letting you take advantage of overcloud deployment features.

  • No need to learn or maintain separate procedures.
  • The undercloud runs in containers.
  • The undercloud benefits from improvements added to the overcloud deploy process.
  • You can define the required set of services.
  • You may find that this approach makes it easier to evaluate Red Hat OpenStack Platform.

2.2. Bare Metal Service

This section outlines the top new features for the Bare Metal (ironic) service.

Bare metal deployment options
In OSP 14, director can deploy OpenShift Container Platform on RHEL bare metal nodes, using the openshift-ansible templates under the hood. This is transparent to the operator, who interacts only with director. Director also allows you to add and remove OCP nodes as needed.

2.3. Ceph Storage

This section outlines the top new features for Ceph Storage.

Create and manage multi-tier Ceph storage via director

Using OpenStack director, you can deploy different Red Hat Ceph Storage performance tiers by adding new Ceph nodes dedicated to a specific tier in a Ceph cluster.

For example, you can add new object storage daemon (OSD) nodes with SSD drives to an existing Ceph cluster to create a Block Storage (cinder) back end exclusively for storing data on these nodes. A user creating a new Block Storage volume can then choose the desired performance tier: either HDDs or the new SSDs.

This type of deployment requires Red Hat OpenStack Platform director to pass a customized CRUSH map to ceph-ansible. The CRUSH map allows you to split OSD nodes based on disk performance, but you can also use this feature for mapping physical infrastructure layout.
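For example, a sketch of the kind of CRUSH customization director can pass through to ceph-ansible, assuming ceph-ansible's crush_rule_config variables (the rule and root names are placeholders):

parameter_defaults:
  CephAnsibleExtraConfig:
    crush_rule_config: true
    crush_rules:
      - name: fast_ssd       # placeholder rule name for the SSD tier
        root: ssd_root       # placeholder CRUSH root holding the SSD OSD nodes
        type: host
        default: false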

Improved integration with ceph-ansible
This release rewrites director’s ceph-ansible integration to work with the new config-download feature, providing a better user experience. Users can more easily troubleshoot director Ceph deployments by using the Ansible external_deploy_steps tag.

2.4. Compute

This section outlines the top new features for the Compute service.

TX/RX Queue Sizing
You can configure the TX and RX queue sizes for libvirt and virtio interfaces. You can define the queue size for each host or each guest as needed, to improve performance and handle increased-traffic use cases. The parameter for the TX/RX queue size is available in the relevant role data file before deployment, and in the nova.conf file after deployment.
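For example, a minimal sketch, assuming the NovaLibvirtRxQueueSize and NovaLibvirtTxQueueSize role parameters:

parameter_defaults:
  ComputeParameters:
    NovaLibvirtRxQueueSize: 1024   # virtio RX queue depth
    NovaLibvirtTxQueueSize: 1024   # virtio TX queue depth
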
Trusted Virtual Functions (VFs) for SR-IOV
You can designate instances as trusted, which enables you to change the MAC address of the VF and enable promiscuous mode directly from the guest instance. These capabilities help you configure failover VFs directly from the instance.
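For example, a sketch of marking the VFs of a physical function as trusted, assuming the NovaPCIPassthrough parameter and an SR-IOV Compute role (the device and network names are placeholders):

parameter_defaults:
  ComputeSriovParameters:
    NovaPCIPassthrough:
      - devname: "ens2f0"               # placeholder PF device name
        physical_network: "sriov_net"   # placeholder physical network
        trusted: "true"                 # allow MAC changes and promiscuous mode from the guest
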
NFS backend for Nova
You can store Compute instance disks on an NFS export, and maintain a shared NFS storage back end for instances. This functionality works in a similar way to the NFS storage back ends for Glance and Cinder.
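For example, a minimal sketch, assuming the NovaNfsEnabled and NovaNfsShare parameters (the export path is a placeholder):

parameter_defaults:
  NovaNfsEnabled: true
  NovaNfsShare: "192.168.122.1:/export/nova"   # placeholder NFS export for instance disks
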
Reserved huge pages
You can allocate huge pages to specific Compute nodes to support high-performance workloads. To reserve huge pages on specific nodes, set the reserved_huge_pages parameter in a director environment file before deployment. The configuration is then available in the nova.conf file after deployment.
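For example, a sketch of such a reservation, assuming a NovaReservedHugePages director parameter that maps to reserved_huge_pages entries in nova.conf (the counts are placeholders):

parameter_defaults:
  ComputeParameters:
    NovaReservedHugePages:
      - node:0,size:2048,count:64   # reserve 64 x 2 MiB pages on NUMA node 0
      - node:1,size:1GB,count:1     # reserve 1 x 1 GiB page on NUMA node 1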

2.5. Metrics and Monitoring

This section outlines the top new features and changes for the metrics and monitoring components.

2.6. Network Functions Virtualization

This section outlines the top new features for Network Functions Virtualization (NFV).

Configure emulator threading per host

You can achieve deterministic performance and avoid spurious packet drops by not over-committing vCPUs in QEMU. In a given OSP-d composable role, you can now choose which host CPUs run the QEMU emulator threads. For example:

parameter_defaults:
  ComputeOvsDpdkParameters:
    NovaComputeCpuSharedSet: "0-1"   # host CPUs that run the QEMU emulator threads

Red Hat’s recommendation is to use the same CPU set as the host (non-isolated CPUs):

HostCpusList: "0-1"

This is then activated per VM flavor:

hw:emulator_threads_policy=share
Use introspection to calculate NFV parameters

You can use introspection to calculate certain SR-IOV and OVS-DPDK director parameters, which eases deployment of the NFV infrastructure (NFVi). For example:

workflow_parameters:
  tripleo.derive_params.v1.derive_parameters:
    num_phy_cores_per_numa_node_for_pmd: 2
    huge_page_allocation_percentage: 90

2.7. OpenStack Networking

This section outlines the top new features for the Networking service.

ML2/OVS to ML2/OVN Migration
This update provides an in-place migration strategy from ML2/OVS to ML2/OVN in either ovs-firewall or ovs-hybrid mode for an OpenStack deployment with director.
Neutron internal DNS resolution
The DHCP agent now passes dns_domain to the network’s dnsmasq process, in turn passing it to the instances.
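For example, a minimal sketch, assuming the NeutronDnsDomain parameter (the domain is a placeholder):

parameter_defaults:
  NeutronDnsDomain: "cloud.example.internal"   # placeholder dns_domain passed to dnsmasq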
OVN services status report
The openstack network agent list command now reports on all OVN services and their status.
Octavia (LBaaS) improved deployment
The latest Octavia images are automatically pushed during update or upgrade.
Octavia controller container health monitoring
This release introduces the ability to monitor Octavia container service VM health.
Multi-tenant BMaaS with new Ansible Networking ML2 plugin
This release adds the Ansible Networking ML2 plug-in, which allows multiple tenants to use bare metal nodes in an isolated fashion.

2.8. Storage

Block Storage - Support for signed Glance images
The Block Storage Service (cinder) automatically validates the signature of any downloaded, signed image during volume creation. The signature is validated before the image is written to the volume. Users now have stronger assurances of the integrity of the image data they are using to create volumes. This feature does not work with Ceph storage.
Block Storage - Migration between cinder availability zones
Volume migration across availability zones was added to the Block Storage service (cinder), so users can migrate volumes from one availability zone to another.
Block Storage - Cinder backup NFS support
Prior to this release, the Red Hat OpenStack Platform director could only deploy the Object Storage service (swift) or Red Hat Ceph Storage as a backup back end. This release adds NFS support to the Block Storage service (cinder) backup service in director, so Red Hat OpenStack Platform deployments can use Ceph, NFS, or Swift as backup targets. Director can now deploy NFS as the back end for the backup service by using the CinderBackupBackend parameter in the cinder-backup.yaml Heat template.
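A minimal environment file sketch (CinderBackupNfsShare is an assumed companion parameter; the export path is a placeholder):

parameter_defaults:
  CinderBackupBackend: nfs
  CinderBackupNfsShare: "192.168.122.1:/export/cinder_backup"   # placeholder NFS export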
Block Storage - Optimized RBD to RBD migration
This release implements an optimized Ceph RBD to RBD block-level volume migration to take advantage of the underlying Ceph back end capabilities when both back ends (source and target) reside on the same Ceph cluster. This feature enables faster and more efficient data migration operations, such as when you retire old hardware, move between tiers, and so forth.
Data Processing - S3 compatible object stores
This release introduces Hadoop support for S3-compatible object stores in the Data Processing service (sahara). This feature builds on efforts to make data sources and job binaries "pluggable". The S3 support is an additional alternative to the existing HDFS, swift, MapR-FS, and manila storage options.
Image Service - Transparent image conversion
When importing a new image, the Image service (glance) now automatically converts the image from QCOW2 to RAW as the destination format, without user intervention, when Ceph is used as the back end for the Image service.
Object Storage - Object Storage S3 API by default
The S3 API is widely regarded as the de facto standard object storage API. The Red Hat OpenStack Object Storage service (swift) previously supported the S3 API through the Swift3 middleware, which had to be enabled manually after deployment. Starting with this release, the Swift3 middleware is enabled by default on an overcloud deployment.
Shared File System - Manila share-type quotas support
Cloud administrators can now define a quota for the number of shares of a given share type. This functionality is similar to the per-volume-type quotas offered by the Block Storage service (cinder). In setups with multiple share types, per-share-type quotas give resource providers better control over provisioned resources.
Shared File System - User message support
Until this release, if manila operations (for example, create share or create share group) failed asynchronously, the user did not receive any detailed information. This new capability provides users with more information about failed asynchronous operations, so that they can troubleshoot errors and possibly recover without cloud administrator intervention.

2.9. Technology Previews

This section outlines features that are in technology preview in Red Hat OpenStack Platform 14.

Note

For more information on the support scope for features marked as technology previews, see Technology Preview Features Support Scope.

2.9.1. New Technology Previews

The following new features are provided as technology previews:

Virtual GPU (vGPU) support for instances

To access GPU-based rendering on your guest instances, you can define and manage virtual GPU (vGPU) resources according to your available physical GPU devices. This configuration allows you to more effectively divide the rendering workloads between all your physical GPU devices, and to have more control over scheduling, tuning, and monitoring your vGPU-enabled guest instances.

Note
  • Currently, vGPU support is provided as a Technology Preview only for NVIDIA GRID vGPU devices. You must comply with the NVIDIA GRID licensing requirements.
  • Only one vGPU type is supported per physical GPU, and only one vGPU resource is supported per guest instance.
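To request a vGPU for an instance, the scheduler consumes a VGPU resource through a flavor extra spec; a minimal sketch, assuming a vGPU type has already been enabled on the Compute nodes:

resources:VGPU=1
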
NUMA-aware vSwitches
OpenStack Compute now takes into account the NUMA node location of physical NICs when launching a Compute instance. This helps to reduce latency and improve performance when managing DPDK-enabled interfaces.
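For example, a sketch of the related director parameters, assuming NeutronPhysnetNUMANodesMapping and NeutronTunnelNUMANodes (the physical network name and NUMA node IDs are placeholders):

parameter_defaults:
  NeutronPhysnetNUMANodesMapping: {dpdk_net: [0]}   # placeholder physnet mapped to NUMA node 0
  NeutronTunnelNUMANodes: [0]                       # NUMA node(s) where tunnel interfaces reside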
OpenDaylight - VXLAN DSCP inheritance
OpenDaylight supports DSCP inheritance, whereby DSCP markings on the inner IP header are replicated to the DSCP markings on the outer IP header for VXLAN encapsulated packets. With this feature, tenant traffic is forwarded over VXLAN tunnels based on DSCP markings from the tenant.
Automatic restart of instances on Compute node reboot

You can now configure automatic restart of instances on a Compute node even if you do not migrate the instances first. The Compute service and the libvirt-guests agent can be configured to gracefully shut down the instances and then start the instances again after the Compute node reboots.

The following parameters are available:

  • NovaResumeGuestsStateOnHostBoot (True/False)
  • NovaResumeGuestsShutdownTimeout (default 300s)
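For example, a minimal environment file sketch using these parameters:

parameter_defaults:
  NovaResumeGuestsStateOnHostBoot: true    # restart instances after the Compute node reboots
  NovaResumeGuestsShutdownTimeout: 300     # seconds to wait for a graceful shutdown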
Skydive - network visualization suite

Skydive is a complete network visualization and monitoring suite, targeted at the cloud operator. Features include the following:

  • Network topology discovery
  • Live and historical analysis
  • Metrics and alerting system
  • Packet generator for tracing and validating network infrastructure

    Skydive is fully integrated with OSP director. It supports all OVS-based systems, including OVN and OpenDaylight, and exposes a REST API, a command-line interface (CLI), and a web UI.

Metrics and Monitoring - Service Assurance Framework

This release adds a Technology Preview of the Service Assurance Framework, which allows metrics and events monitoring at scale. This is a platform-based approach to metrics and monitoring, based on the following elements:

  • Collectd plug-ins for infrastructure and OpenStack service monitoring.
  • AMQ Interconnect direct routing (QDR) message bus.
  • Prometheus Operator database/management cluster.
  • Ceilometer/Gnocchi for chargeback/capacity planning only.
Block Storage - Attach a volume to multiple hosts
This release adds the ability to attach a volume to multiple hosts or servers simultaneously, in both cinder and nova, in read/write (RW) mode when the back end driver supports it. This feature addresses the clustered application workload use case that typically requires active/active or active/standby scenarios.
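For example, a sketch of the volume type extra spec that requests this behavior, applied to a volume type whose back end driver reports multi-attach support:

multiattach="<is> True"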