Release notes
Release notes for the Red Hat OpenStack Services on OpenShift 18.0 release
Abstract
The release notes provide high-level coverage of the improvements and additions that have been implemented in Red Hat OpenStack Services on OpenShift 18.0 and document known problems in this release, as well as notable bug fixes, technology previews, deprecated functionality, and other details.
Preface
Providing feedback on Red Hat documentation
We appreciate your input on our documentation. Tell us how we can make it better.
- Providing documentation feedback in Jira
Use the Create Issue form to provide feedback on the documentation. The Jira issue will be created in the Red Hat OpenStack Platform Jira project, where you can track the progress of your feedback.
- Ensure that you are logged in to Jira. If you do not have a Jira account, create an account to submit feedback.
- Click the following link to open the Create Issue page: Create Issue
- Complete the Summary and Description fields. In the Description field, include the documentation URL, chapter or section number, and a detailed description of the issue. Do not modify any other fields in the form.
- Click Create.
Chapter 1. New and enhanced features
Review features that have been added to or significantly enhanced in Red Hat OpenStack Services on OpenShift (RHOSO) releases through RHOSO 18.0.14 (Feature Release 4).
Red Hat stopped updating this chapter after RHOSO 18.0.14 (Feature Release 4). Starting with RHOSO 18.0.17 (Feature Release 5), you can find new and enhanced feature content under the heading "New features" in "Chapter 2: Release Information RHOSO 18.0."
RHOSO improves substantially over previous versions of Red Hat OpenStack Platform (RHOSP). The RHOSO control plane is natively hosted on Red Hat OpenShift Container Platform (RHOCP), and the external RHEL-based data plane and workloads are managed with Ansible. This shift in architecture aligns with Red Hat’s platform infrastructure strategy. You can future-proof your existing investments by using RHOCP as a hosting platform for all of your infrastructure services.
For information about mapping RHOSO versions to OpenStack Operators and OpenStackVersion Custom Resources (CRs), see the Red Hat knowledge base article at https://access.redhat.com/articles/7125383.
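As an illustrative sketch of how a deployment's version is pinned, the target version lives in the OpenStackVersion CR. The metadata name and version string below are placeholders, and the API group shown is an assumption to verify against the knowledge base article referenced above:

```yaml
# Minimal OpenStackVersion sketch (illustrative; verify field names
# and the API group against the product documentation)
apiVersion: core.openstack.org/v1beta1   # API group is an assumption
kind: OpenStackVersion
metadata:
  name: openstack        # placeholder; typically matches the control plane name
spec:
  # Setting targetVersion initiates a minor update to that RHOSO version
  targetVersion: "18.0.14"
```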
RHOSP 17.1 is the last version of the product to use the director-based OpenStack on OpenStack form-factor for the control plane.
1.1. New and enhanced features in 18.0.14 (FR4)
Review features that have been added to or significantly enhanced in Red Hat OpenStack Services on OpenShift.
1.1.1. Compute
- Erase all data on an NVMe device by using the NVMe cleanup agent
- You can now deploy and configure the NVMe cleanup agent on data plane nodes to securely erase all data on the NVMe device before it is reallocated to the next instance.
1.1.2. Data plane
- Deploy data plane nodes with Image Mode (bootc) images
- Deploying data plane nodes using Image Mode (bootc) images is provided as a Technology Preview feature in this release. Technology Preview features are not fully supported by Red Hat. Use this feature only for testing; do not deploy it in a production environment.
1.1.3. Documentation
- Documentation library restructured
The documentation library page was restructured to align better with the user life cycle and top-level jobs. The new structure includes the following enhancements:
- Validated Architectures were moved to a new category "Deploy a Validated Architecture environment".
- Several guides were moved to different categories to align with the actual user life cycle needs.
- The guide Deploying RHOSO at scale was renamed to Planning a large-scale RHOSO deployment.
- The new guide Migrating VMs to a Red Hat OpenStack Services on OpenShift deployment was added.
1.1.4. High availability
- Configuring authentication for the memcached service
- Starting with RHOSO 18.0.14 (Feature Release 4), you can configure the cache maintained by the memcached service to require authentication, which increases the security of your cloud by restricting access to cached data. For more information, see Configuring authentication for the memcached service in Customizing the Red Hat OpenStack Services on OpenShift deployment.
- Configuring quorum queues for RabbitMQ in new deployments
-
Starting with RHOSO 18.0.14 (Feature Release 4), RabbitMQ supports the use of quorum queues for new RHOSO deployments. A quorum queue is a durable, replicated queue based on the Raft consensus algorithm, providing increased data safety and high availability. For more information, see step 5 of Creating the control plane in Deploying Red Hat OpenStack Services on OpenShift.
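Mechanically, a quorum queue is simply a queue whose type is `quorum`. The sketch below shows the underlying upstream RabbitMQ Cluster Operator expression of that idea, not the RHOSO control-plane CR syntax; for the supported procedure, see step 5 of Creating the control plane as referenced above:

```yaml
# Upstream RabbitMQ Cluster Operator sketch (field names from the upstream
# operator; the RHOSO control-plane CR wraps this differently)
apiVersion: rabbitmq.com/v1beta1
kind: RabbitmqCluster
metadata:
  name: rabbitmq
spec:
  replicas: 3                         # quorum queues need an odd-sized cluster
  rabbitmq:
    additionalConfig: |
      default_queue_type = quorum     # new queues become Raft-based quorum queues
```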
1.1.5. Migration
- Migrate VMs with VMware Migration Toolkit
- In RHOSO 18.0.14 (Feature Release 4) and RHOSP 17.1, you can now migrate workloads from VMware to OpenStack using the VMware Migration Toolkit.
1.1.6. Networking
- Observability metrics expanded from data plane nodes to data plane and control plane pods
- The Prometheus OVS/OVN Exporter was previously available only on data plane nodes. Starting with RHOSO 18.0.14 (Feature Release 4), Prometheus OVS/OVN Exporter is also available on control plane pods. New groups of metrics are also included. For more information, see Network observability in Managing networking resources.
- Firewall-as-a-Service (FWaaS) Technology Preview
In RHOSO 18.0.14 (Feature Release 4), you can test a Technology Preview of Firewall-as-a-Service (FWaaS). Do not use Technology Preview features in production environments. As more OpenStack-based clouds are adopted for multi-tenant applications, security remains a top priority. Network-level isolation and traffic control become critical, especially in public or hybrid cloud environments.
Although security groups can specify security policy at the VM instance level or VM port level, they cannot specify policy at the network or router port level. The FWaaS project adds the ability to specify security policies at the router port level, supports multiple policy rules within the same policy group, and supports applying L3 or L2 policy at the router port level. With the FWaaS Technology Preview, you can also test third-party NGFW plugins for integration with NGFW vendor solutions, enabling firewall capabilities beyond the ACL level, including capabilities such as DPI, malware protection, IPS, and IDP.
- TAP-as-a-Service (TAPaaS) Technology Preview
In RHOSO 18.0.14 (Feature Release 4), you can test a Technology Preview of TAP-as-a-Service (TAPaaS). Do not use Technology Preview features in production environments. As modern cloud infrastructure becomes increasingly complex and multi-tenant, observability and security monitoring have become foundational requirements for OpenStack operators. One key network diagnostic technique used in traditional and virtualized environments is port mirroring, which allows administrators to capture and analyze traffic flowing through a particular interface. Mirrored traffic can be redirected to third-party analytics tools and solutions hosted on the same or a different host as the mirror port. Typically, the mirrored traffic is carried over overlay tunnels established between the source and destination of the mirror.
You can perform the following tasks with port mirroring:
- Security monitoring: Capture mirrored traffic for inspection by IDS/IPS tools.
- Performance analysis: Monitor bottlenecks, latency, and packet loss in real-time.
- Troubleshooting: Debug issues without logging into tenant VMs or affecting production traffic.
- Compliance auditing: Log and analyze data flows for regulatory purposes.
- Lawful intercept: In jurisdictions that require service providers to support legal requests for targeted surveillance, TAPaaS offers a programmable, isolated way to mirror traffic for specific endpoints without impacting other tenants or violating privacy constraints.
Port mirroring is available at the OVS and OVN levels through a CLI interface. However, in highly dynamic, software-defined environments like OpenStack, traditional port mirroring does not scale well and does not offer tenant-level abstraction and isolation. TAPaaS provides an OpenStack-integrated framework for scalable port mirroring in a multi-tenant shared environment while maintaining the tenant isolation boundaries of OpenStack deployments. TAPaaS is a Neutron extension that enables on-demand traffic mirroring for tenant or administrator purposes. It allows users to create TAP services that mirror traffic from one or more Neutron ports and redirect it to a TAP destination, often a virtual Network Packet Broker (NPB), intrusion detection system (IDS), or traffic analyzer instance.
- Load-balancing service (Octavia) support for DCN deployments
- In RHOSO 18.0.14, creating load balancers in availability zones (AZs) is now fully supported. For more information, see Creating availability zones for load balancing of network traffic at the edge in Configuring load balancing as a service.
- DNS service (designate)
- In RHOSO 18.0.14, the DNS service (designate) is now fully supported. For more information, see Configuring DNS as a service.
- Dynamic routing with BGP support for IPv6 networks
- In RHOSO 18.0.14, you can configure your dynamic routing environment using IPv6 networks. For more information, see Preparing RHOCP for BGP networks on RHOSO.
- Avoiding taskflow interruptions by using flow resumption
- In RHOSO 18.0.14, you can use Load-balancing service (octavia) flow resumption, which automatically reassigns the flow to an alternate controller if the original controller shuts down unexpectedly. For more information, see Avoiding taskflow interruptions by using flow resumption.
- OVN provider driver for Load-balancing service (octavia) is now fully supported
- In RHOSO 18.0.14, the OVN provider driver for the Load-balancing service is no longer a Technology Preview and is now fully supported. For more information, see Load-balancing service provider drivers.
1.1.7. Security
- Multi-realm federation support
- Starting with RHOSO 18.0.14 (Feature Release 4), you can configure RHOSO to allow users to log in to the OpenStack Dashboard by using single sign-on (SSO) and select from one of several external Identity Providers (IdPs). For more information, see Configuring multi-realm federated authentication in Configuring security services.
1.1.8. Storage
- Notifications for events in the Block Storage service and Shared File Systems service
-
In RHOSO 18.0.14 (Feature Release 4), you can enable notifications in the Block Storage service (cinder) and Shared File Systems service (manila) by using the notificationsBusInstance parameter, allowing integration with either the existing RabbitMQ instance or a dedicated RabbitMQ instance.
- Deployment of Object Storage service on data plane nodes
-
In RHOSO 18.0.14 (Feature Release 4), you can deploy the Object Storage service (swift) on external data plane nodes, improving scalability and performance for large storage clusters. By enabling DNS forwarding and creating an OpenStackDataPlaneNodeSet CR with specified properties, including disks for storage, you can customize the service configuration through additional ConfigMap or Secret CRs in the OpenStackDataPlaneService CR.
- Shared File Systems service now supports transferring shares between tenants
- In RHOSO 18.0.14 (Feature Release 4), the Shared File Systems service (manila) now supports transferring shares across projects. To ensure security and non-repudiation, a one-time transfer secret key is generated when you initiate a transfer. The key must be conveyed out-of-band so that a user in the recipient project can complete the transfer.
1.1.9. Upgrades and updates
- Prevent minor update from proceeding when the custom container images have not been updated
This enhancement ensures correct version tracking and validation during minor updates by preventing the side effects and inconsistencies that result from custom container images not being updated when the target version is updated.
With this update, when a minor update is initiated by setting the targetVersion, the minor update is halted if the customImages version for the associated custom container images is not also updated. Users have the option to force the update if necessary.
- Adopt RHOSP 17.1 Instance HA environments to RHOSO
- Starting with RHOSO 18.0.14 (Feature Release 4), you can adopt Red Hat OpenStack Platform (RHOSP) 17.1 environments with Instance HA enabled to RHOSO 18.0. For more information about adopting Instance HA environments, see Preparing an Instance HA deployment for adoption and Enabling the high availability for Compute instances service in Adopting a Red Hat OpenStack Platform 17.1 deployment.
- Shared File Systems service (manila) with CephFS through NFS adoption is fully supported
Adopting the Shared File Systems service (manila) with CephFS through NFS is now generally available. Previously, these adoption instructions were provided as a Technology Preview.
This enhancement allows you to migrate your existing Red Hat OpenStack Platform 17.1 deployment that uses CephFS through NFS as a back end for the Shared File Systems service to RHOSO 18.0 with full support.
The adoption process includes:
- Creating a new clustered NFS Ganesha service managed directly on the Red Hat Ceph cluster
-
Migrating export locations from the standalone Pacemaker-controlled ceph-nfs service to the new clustered service
- Decommissioning the previous standalone NFS service
For more information, see Changes to CephFS through NFS and Creating an NFS Ganesha cluster in Adopting a Red Hat OpenStack Platform 17.1 deployment.
- Full support for adopting environments that use iSCSI back ends for the Block Storage service (cinder)
- Starting with RHOSO 18.0.14 (Feature Release 4), the procedure to adopt RHOSO 18.0 is fully supported for Red Hat OpenStack Platform 17.1 environments that use iSCSI as a back end for the Block Storage service (cinder). For more information, see Adopting the Block Storage service in Adopting a Red Hat OpenStack Platform 17.1 deployment.
- Full support for adopting environments that use Block Storage service (cinder) back ends for the Image service (glance)
- Starting with RHOSO 18.0.14 (Feature Release 4), RHOSO 18.0 adoption is fully supported for Red Hat OpenStack Platform 17.1 environments that use Block Storage service (cinder) as a back end for the Image service (glance). For more information, see Adopting the Image service that is deployed with a Block Storage service back end in Adopting a Red Hat OpenStack Platform 17.1 deployment.
1.2. New and enhanced features in 18.0.10 (FR3)
Review features that have been added to or significantly enhanced in Red Hat OpenStack Services on OpenShift.
1.2.1. Bare Metal Provisioning
- Layer 2 network configuration using Networking Generic Switch in the Bare Metal Provisioning service (Technology Preview)
- RHOSO 18.0.10 (Feature Release 3) introduces support for the configuration of L2 networks on non-provisioning NIC interfaces when using Baremetal as a Service (BMaaS) through the Bare Metal Provisioning service (ironic). This feature allows network configuration on switches by leveraging the networking-generic-switch Modular Layer 2 Neutron Mechanism driver.
1.2.2. Compute
- PCI device tracking in the Placement service is now generally available
- Previously, this feature was available as a Technology Preview. You can use the Placement service to observe the PCI resource availability and usage across the whole cloud through the Placement API. The administrator can also reserve PCI devices for maintenance through the Placement API.
- Configuration of notifications to the Telemetry service
- Starting with RHOSO 18.0.10 (Feature Release 3), you can configure the Compute service (nova) to provide notifications to Telemetry services in your RHOSO environment.
- Setting the maximum number of vGPUs that an SR-IOV NVIDIA GPU can create
- Starting with RHOSO 18.0.10 (Feature Release 3), you can define the maximum number of vGPUs that an SR-IOV NVIDIA GPU can create.
- Reserving One Time Use devices
- Starting with RHOSO 18.0.10 (Feature Release 3), you can tag PCI devices as One Time Use (OTU) to reserve them for a single use by a single instance.
1.2.3. Control plane
- Multiple RHOSO deployments on a single RHOCP cluster by using namespace separation
- Starting with RHOSO 18.0.10 (Feature Release 3), you can deploy multiple RHOSO environments on a single RHOCP cluster by using namespace (project) isolation.
Do not deploy multiple RHOSO environments on a single cluster with namespace separation in production. Multiple deployments are suitable only for development, staging, and testing environments.
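For example, namespace separation means each RHOSO environment lives in its own RHOCP namespace; the namespace names below are purely illustrative:

```yaml
# Two isolated namespaces, one per RHOSO environment (names are examples)
apiVersion: v1
kind: Namespace
metadata:
  name: openstack-dev
---
apiVersion: v1
kind: Namespace
metadata:
  name: openstack-staging
```

Each environment's control plane resources are then created inside its own namespace, keeping the deployments isolated from each other.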
- Documentation: Guidance for deploying Red Hat OpenStack Services on OpenShift in a disconnected environment
- RHOSO 18.0.10 (Feature Release 3) introduces documentation support for deploying Red Hat OpenStack Services on OpenShift (RHOSO) in a disconnected environment. For more information, see Deploying Red Hat OpenStack Services on OpenShift in a disconnected environment.
1.2.4. Dashboard
- The horizon-operator creates an additional sidecar container for logging
-
Starting with RHOSO 18.0.10 (Feature Release 3), the Dashboard service horizon-operator implements a separate sidecar container to ensure the availability of logs for debugging. If you use a custom container image, you might need to rebuild your custom image when updating.
1.2.5. Networking
- DNS service (designate) (Technology Preview)
- With this technology preview, you can test the management of DNS records, names, and zones using the DNS service (designate). For more information, see Configuring DNS as a service.
- Vertical scaling for load-balancing service (Octavia) instances (amphorae)
-
Starting with RHOSO 18.0.10 (Feature Release 3), RHOSO supports vertical scaling for load-balancing service instances. Users can scale up their load balancers, increasing the CPU and RAM of the load-balancing instance, to improve performance and capacity. Vertical scaling increases the volume of network traffic processed. To scale up a load balancer, use the appropriate load-balancing flavor when you create the load balancer. RHOSO ships with amphora-4vcpus, which creates an instance that contains 4 vCPUs, 4 GB RAM, and 3 GB of disk space. Your RHOSO administrator can create other custom load-balancing flavors that meet the load-balancing needs of your particular environment. For more information, see Creating Load-balancing service flavors in Configuring load balancing as a service.
- Load-balancing service (Octavia) support for DCN deployments (Technology Preview)
- With this technology preview, you can create load balancers in a distributed compute node (DCN) environment to increase traffic throughput and reduce latency. For more information, see Creating availability zones for load balancing of network traffic at the edge in Configuring load balancing as a service.
- Load-balancing service (Octavia) TLS client authentication
- Starting with RHOSO 18.0.10 (Feature Release 3), you can secure your web client communication with a load balancer by using two-way TLS authentication. For more information, see Creating a TLS-terminated HTTPS load balancer with client authentication in Configuring load balancing as a service.
- BGP-EVPN support for provider network workloads without FDP support (Developer Preview)
- Starting with RHOSO 18.0.10 (Feature Release 3), you can test a Developer Preview of BGP-EVPN support for provider network workloads without FDP support. OpenStack provides a mature infrastructure platform for virtualized workloads, focusing on on-premises environments. With most Telco 4G workloads running on virtualized platforms, and with the landscape expanding to multiple sites and clusters, there is an imperative need for connectivity across clusters that enables tenant workload deployment across multiple clusters. In addition to providing control plane and data plane isolation in a shared environment, there is a need for multi-tenancy extending to the compute nodes. RHOSO 18 FR3 adds support for BGP-EVPN, enabling multi-tenant, multi-VRF support with overlapping IP addresses for provider network workloads. The feature is available as a Developer Preview in RHOSO 18 FR3 and is suitable for functional operation and testing in lab environments only.
- Prometheus Exporter for OVN logical routers and logical switches
- Starting with RHOSO 18.0.10 (Feature Release 3), you can use the Prometheus Exporter for OVN logical routers and logical switches. Network observability requires metrics and KPIs to be available at the OVN layer, exposing packet statistics within the networking infrastructure orchestrated by OVN. RHOSO 18 FR3 adds support for monitoring metrics at the OVN layer (logical routers and switches) through a Prometheus exporter, allowing correlation between the top-level content management system (CMS), logical OVN, and physical representations of networking elements.
- New OVN database synchronization tool to fix OVN load balancers
-
RHOSO 18.0.10 (Feature Release 3) introduces an OVN database synchronization tool to fix OVN load balancers that experience problems. The new tool, octavia-ovn-db-sync-util, is run on the command line to synchronize the state of Load-balancing service (octavia) resources with the OVN databases. For more information, see https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/configuring_load_balancing_as_a_service/troubleshoot-maintain-lb-service_rhoso-lbaas#synch-lbs-ovn-provider_trbls-lbs
1.2.6. NFV
- OVS-DPDK is now supported for all workloads
- RHOSO 18.0.10 (Feature Release 3) introduces support for OVS-DPDK for all workloads. Previously, OVS-DPDK was only supported in NFV workloads.
- TCP segmentation offloading (TSO) for OVS-DPDK is now generally available
- Previously, TSO for OVS-DPDK was available as a technology preview. Now it is generally available. TSO offloads segmentation to NICs, freeing up host CPU resources and improving overall performance.
- OVS-DPDK on networker nodes for acceleration of gateway traffic
- RHOSO 18.0.10 (Feature Release 3) introduces support of DPDK-accelerated Open vSwitch (OVS-DPDK) on Networker nodes. The DPDK datapath provides lower latency and higher performance than the standard kernel OVS datapath. OVS-DPDK is a high-performance, user-space solution that bundles Open vSwitch with the Data Plane Development Kit (DPDK). This technology is designed to process packets quickly by running mostly in the user-space, allowing applications to directly handle packet processing to or from the Network Interface Card (NIC).
1.2.7. Observability
- Database and Compute metrics available to Prometheus for telemetry data collection and storage
- Starting with RHOSO 18.0.10 (Feature Release 3), the Telemetry service collects both database and Compute metrics and makes them available to Prometheus, enabling database telemetry and Compute node telemetry to be stored in the telemetry storage system.
1.2.8. Security
- LDAP Support
- RHOSO 18.0.10 (Feature Release 3) introduces support for connecting the Identity service (keystone) to LDAP for authentication.
- Proteccio HSM support
- In RHOSO 18.0.10 (Feature Release 3), the Key Manager service (barbican) supports the Proteccio HSM as a back end to store secrets.
1.2.9. Storage
- Distributed zones with third-party storage
- RHOSO 18.0.10 (Feature Release 3) introduces support for using certified third-party storage with distributed zones.
- Adopting the Image service (glance) with an NFS storage back end
- RHOSO 18.0.10 (Feature Release 3) introduces support for adopting the Image service from Red Hat OpenStack Platform (RHOSP) 17.1 with an NFS back end.
- Improved parallel image upload performance with load distribution
-
Starting with RHOSO 18.0.10 (Feature Release 3), you can improve parallel image upload performance by using the mod_wsgi package to distribute the load across workers.
- Image service (glance) notifications for events in image lifecycle
-
Starting with RHOSO 18.0.10 (Feature Release 3), you can enable notifications in the Image service by using the notificationBusInstance parameter, allowing integration with either the existing RabbitMQ instance or a dedicated one.
- Adopting the Block Storage service (cinder) with an NFS storage back end
- RHOSO 18.0.10 (Feature Release 3) introduces support for adopting the Block Storage service from Red Hat OpenStack Platform (RHOSP) 17.1 with an NFS back end.
- Remote ring storage supports larger deployments of the Object Storage service (Technology Preview)
- Starting with RHOSO 18.0.10 (Feature Release 3), you can use remotely stored rings to manage larger deployments of the Object Storage service (swift).
- CephFS file name added to CephFS share metadata
-
Starting with RHOSO 18.0.10 (Feature Release 3), you can check a CephFS file name when mounting a native CephFS share by viewing the mount_options metadata of the share.
- Adopting the Shared File Systems service (manila) with a third-party back end
- RHOSO 18.0.10 (Feature Release 3) introduces support for adopting the Shared File Systems service from Red Hat OpenStack Platform (RHOSP) 17.1 with a third-party back end, for example, NetApp or Dell.
1.2.10. Upgrades and updates
- Granular package update workflow for RHOSO Compute nodes during the RHOSO update process (Technology Preview)
- RHOSO 18.0.10 (Feature Release 3) introduces a mechanism to break down the update process for RHOSO Compute nodes running RHEL 9.4 into two distinct phases: updating OpenStack-related RPM packages and updating system-related RPM packages. By enabling this separation, operators gain finer control over the update process, reducing risks and simplifying troubleshooting in the event of issues.
1.2.11. Resource optimization
- Optimize service (watcher) strategies for resource optimization (Technology Preview)
- RHOSO 18.0.10 (Feature Release 3) introduces support for three new supported strategies in the Optimize service: host maintenance, zone migration for instances, and workload balance. For more information about supported strategies to achieve resource optimization goals, see Sample Optimize service workflows in Optimizing infrastructure resource utilization.
1.3. New and enhanced features in 18.0.6 (FR2)
Review features that have been added to or significantly enhanced in Red Hat OpenStack Services on OpenShift.
1.3.1. Bare Metal Provisioning
- RHOSO environment with a routed spine-leaf network topology
- RHOSO 18.0.6 (Feature Release 2) introduces support for deploying a RHOSO environment with a routed spine-leaf network topology. For more information, see Deploying a RHOSO environment with a routed spine-leaf network topology.
1.3.2. Control plane
- Streamlined RHOSO service Operators installation and initialization
-
RHOSO 18.0.6 (Feature Release 2) introduces a new initialization resource that streamlines the management of the RHOSO service Operators under a single Operator Lifecycle Manager (OLM) bundle. After you install the OpenStack Operator and before creating the control plane, you now create the new OpenStack initialization resource, which installs all the RHOSO service Operators.
- Distributed zones
OpenStackinitialization resource, which installs all the RHOSO service Operators. - Distributed zones
- RHOSO 18.0.6 (Feature Release 2) introduces support for deploying a distributed control plane across multiple RHOCP cluster nodes that are located in distributed low latency L3 connected data centers.
- Custom environment variables for the OpenStackClient pod
-
Starting with RHOSO 18.0.6 (Feature Release 2), you can customize the OpenStackClient pod environment variables to set the API version to use when connecting to the service API endpoints with the openstackclient CLI.
- Multiple RHOSO deployments on a single RHOCP cluster using namespace separation (Technology Preview)
- Starting with RHOSO 18.0.6 (Feature Release 2), you can test a technology preview of using namespace separation to deploy multiple RHOSO environments on a single RHOCP cluster. To deploy each RHOSO environment, create multiple isolated namespaces, then use the procedures in Deploying Red Hat OpenStack Services on OpenShift.
Ensure that the NMState Operator on each host worker node provides for the multiple VLANs that are required to enable network isolation for each namespace.
1.3.3. High availability
- Instance high availability
- Starting with RHOSO 18.0.6 (Feature Release 2), you can use instance high availability (instance HA) to automatically evacuate and re-create instances on a different Compute node if a Compute node fails.
1.3.4. Networking
- DNS service (Technology Preview)
- Starting with RHOSO 18.0.6 (Feature Release 2), you can test a technology preview of the RHOSO DNS service (designate), a multi-tenant service that enables you to manage DNS records, names, and zones.
- OVS-DPDK on networker nodes for OVN gateway acceleration
- Starting with RHOSO 18.0.6 (Feature Release 2), you can enable OVS-DPDK on networker nodes for improved forwarding performance.
- Support for nmstate provider in new greenfield deployments
-
Starting with RHOSO 18.0.6 (Feature Release 2), the nmstate provider is supported for new RHOSO deployments. The default os-net-config provider for new (greenfield) RHOSO deployments is ifcfg. For limitations and other details, see https://issues.redhat.com/browse/OSPRH-11309.
- TCP segmentation offload for RHOSO environments with OVS-DPDK (Technology Preview)
- Starting with RHOSO 18.0.6 (Feature Release 2), you can test a technology preview of TCP segmentation offload (TSO) for RHOSO environments with OVS-DPDK. For details, see OVS-DPDK with TCP segmentation offload (Technology Preview).
1.3.5. Observability
- Power consumption monitoring
- Starting with RHOSO 18.0.6 (Feature Release 2), the visualization of IPMI power metrics is available in the dashboard. For more information, see https://issues.redhat.com/browse/OSPRH-10808.
- Enhanced OpenStack Observability
-
Starting with RHOSO 18.0.6 (Feature Release 2), you can use the openstack-network-exporter to expose metrics from OVS or OVS-DPDK, OVN, and DPDK (PMD), and a dashboard has been added for these metrics.
- Container health check
Starting with RHOSO 18.0.6 (Feature Release 2), you can use new metrics for monitoring the health of RHOSO services, including the following:
- kube_pod_status_phase
- kube_pod_status_ready
- node_systemd_unit_state
- podman_container_state
- podman_container_health
You can use kube_pod_status_phase and kube_pod_status_ready to monitor control plane services. For more information, see https://issues.redhat.com/browse/OSPRH-1052.
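As an illustration of how these metrics can be consumed, the following is a minimal sketch of an RHOCP PrometheusRule that fires when a control plane pod reports not-ready for five minutes. The resource name, namespace, and duration are illustrative assumptions, not RHOSO defaults.

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: rhoso-pod-health        # illustrative name
  namespace: openstack          # assumed control plane namespace
spec:
  groups:
  - name: rhoso-health.rules
    rules:
    - alert: RHOSOPodNotReady
      # kube_pod_status_ready{condition="true"} reports 0 while a pod is not ready
      expr: kube_pod_status_ready{condition="true"} == 0
      for: 5m
      labels:
        severity: warning
```

A similar rule over podman_container_state or node_systemd_unit_state can cover data plane services.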
1.3.6. Security
- Key Manager (barbican) support for Luna
- Starting with RHOSO 18.0.6 (Feature Release 2), when you install RHOSO, you have the option of using it with a Luna hardware security module (HSM). Using a hardware security module provides hardened protection for storing keys.
- Identity service (keystone) support for Federation
- RHOSO 18.0.6 (Feature Release 2) introduces Red Hat support for Red Hat Single Sign-On (RH-SSO) or Active Directory Federation Services (ADFS) as identity providers for RHOSO.
1.3.7. Storage
- Integration with external Red Hat Ceph Storage (RHCS) clusters
- Starting with RHOSO 18.0.6 (Feature Release 2), you can integrate RHOSO with external Red Hat Ceph Storage 8 clusters (as well as Red Hat Ceph Storage 7 clusters) to include Red Hat Ceph Storage capabilities with your deployment. Due to known issues, not all Red Hat Ceph Storage 8 functionality is supported. For more information about these issues, see the Known Issues section.
- Image service (glance) support for S3 back end
- Starting with RHOSO 18.0.6 (Feature Release 2), you can configure the Image service with an S3 back end.
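The exact configuration depends on your deployment, but an S3 back end is typically wired in through the Image service customServiceConfig, in the same pattern as other back ends. The following is a minimal, hypothetical sketch; the endpoint, credentials, and bucket are placeholders, and you should verify the option names against the Image service configuration reference for your release.

```yaml
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: unused
spec:
  glance:
    template:
      customServiceConfig: |
        [DEFAULT]
        enabled_backends = s3:s3
        [glance_store]
        default_backend = s3
        [s3]
        # Placeholder endpoint and credentials for the S3-compatible store
        s3_store_host = http://s3.example.com:8080
        s3_store_access_key = <access-key>
        s3_store_secret_key = <secret-key>
        s3_store_bucket = glance
```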
1.3.8. Upgrades and updates
OpenStack Operator 18.0.6 now requires you to install a new initialization resource called OpenStack. You must create this resource when you update your RHOSO deployment from a version older than 18.0.6, or when you perform a new installation of 18.0.6. Also, if you deployed RHOSO 18.0.4 or earlier on RHOCP 4.16, you must create the OpenStack initialization resource before upgrading your RHOCP cluster to RHOCP 4.18.
RHOSO environments installed earlier than 18.0.6 have individual Operators, such as horizon-operator, nova-operator, and so on, in the openstack-operators namespace. Creation of the OpenStack resource automatically cleans up these unnecessary resources in the OpenShift environment. For more information about creating the OpenStack resource, see Installing the OpenStack Operator in Deploying Red Hat OpenStack Services on OpenShift.
- Baremetal as a service (ironic) adoption from RHOSP 17.1 to RHOSO 18.0 (Technology Preview)
- RHOSO 18.0.6 (Feature Release 2) introduces a technology preview of the ability to adopt Baremetal as a service (ironic) from RHOSP 17.1 to RHOSO 18.0. For details, see Adopting the Bare Metal Provisioning service in Adopting a Red Hat OpenStack Platform 17.1 deployment.
- IPv6 stack adoption from RHOSP 17.1 to RHOSO 18.0 (Technology Preview)
- Starting with RHOSO 18.0.6 (Feature Release 2), you can test a technology preview of configuring IPv6 networking for adoption. For more information, see Adopting a Red Hat OpenStack Platform 17.1 deployment.
- Kernel live patching for RHOSO environments (Technology Preview)
- Starting with RHOSO 18.0.6 (Feature Release 2), you can test a technology preview of kernel live patching support for RHOSO environments. With this feature, you can apply critical security updates and bug fixes to the kernel without requiring a system reboot. You cannot use this feature to apply custom live patches or third-party live patching solutions.
1.4. New and enhanced features in 18.0.3 (FR1)
Review features that have been added to or significantly enhanced in Red Hat OpenStack Services on OpenShift.
1.4.1. Distributed Compute nodes (DCN)
- DCN with Red Hat Ceph storage
- RHOSO 18.0.3 (Feature Release 1) introduces support for Distributed Compute Nodes (DCN) with persistent storage backed by Red Hat Ceph Storage.
1.4.2. Networking
- Dynamic routing on data plane with FRR and BGP
- RHOSO 18.0.3 (Feature Release 1) introduces support of Free Range Routing (FRR) border gateway protocol (BGP) to provide dynamic routing capabilities on the RHOSO data plane.
- Limitations
- If you use dynamic routing, you must also use distributed virtual routing (DVR).
- If you use dynamic routing, you must also use dedicated networker nodes.
- You cannot use dynamic routing in an IPv6 deployment or a deployment that uses the Load-balancing service (octavia).
- Custom ML2 mechanism driver and SDN backend (Technology Preview)
- RHOSO 18.0.3 (Feature Release 1) allows you to test integration of the Networking service (neutron) with a custom ML2 mechanism driver and software defined networking (SDN) back end components, instead of the default OVN mechanism driver and back end components. Do not use this feature in a production environment.
- IPv6 metadata
- RHOSO 18.0.3 (Feature Release 1) introduces support of the IPv6 metadata service.
- NMstate provider for os-net-config (Development Preview)
- RHOSO 18.0.3 (Feature Release 1) allows you to test a Development Preview of the NMstate provider for os-net-config. To test the NMstate provider, set edpm_network_config_nmstate: true. Do NOT use this Development Preview setting in a production environment.
- Forwarding database (FDB) learning and aging controls
RHOSO 18.0.3 (Feature Release 1) introduces FDB learning and related FDB aging parameters.
You can use FDB learning to prevent traffic flooding on ports that have port security disabled. Set localnet_learn_fdb to true.
Use the fdb_age_threshold parameter to set the maximum time (seconds) that learned MACs stay in the FDB table. Use the fdb_removal_limit parameter to prevent OVN from removing a large number of FDB table entries at the same time.
- Example configuration

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    metadata:
      name: unused
    spec:
      neutron:
        template:
          customServiceConfig: |
            [ovn]
            localnet_learn_fdb = true
            fdb_age_threshold = 300
            [ovn_nb_global]
            fdb_removal_limit = 50
1.4.3. Observability
- Power consumption monitoring (Technology Preview)
RHOSO 18.0.3 (Feature Release 1) introduces technology previews of power consumption monitoring capability for VM instances and virtual networking functions (VNFs).
See Jira Issue OSPRH-10006: Kepler Power Monitoring Metrics Visualization in RHOSO (Tech Preview) and Jira Issue OSPRH-46549: As a service provider I need a comprehensive dashboard that provides a power consumption matrix per VNF (Tech Preview).
- RabbitMQ metrics dashboard
- Starting in RHOSO 18.0.3 (Feature Release 1), RabbitMQ metrics are collected and stored in Prometheus. A new dashboard for displaying these metrics was added.
1.4.4. Upgrades and updates
- Adoption from RHOSP 17.1
- RHOSO 18.0.3 (Feature Release 1) introduces the ability to use the adoption mechanism to upgrade from RHOSP 17.1 to RHOSO 18.0 while minimizing impacts to your workloads.
1.5. New and enhanced features in 18.0 (GA)
Review features that have been added to or significantly enhanced in Red Hat OpenStack Services on OpenShift.
- Control plane
- Control plane deployed on Red Hat OpenShift Container Platform (RHOCP)
In RHOSO 18.0 (GA), the director-based undercloud is replaced by a control plane that is natively hosted on an RHOCP cluster and managed with the OpenStack Operator. The Red Hat OpenStack Services on OpenShift (RHOSO) control plane features include:
- Deployed in pods and governed by Kubernetes Operators.
- Deploys in minutes, consuming only a fraction of the CPU and RAM footprint required by earlier RHOSP releases.
- Takes advantage of native Kubernetes mechanisms for high availability.
- Features built-in monitoring based on RHOCP Observability.
- Dashboard
- Pinned CPUs: Starting with RHOSO 18.0 (GA), the OpenStack Dashboard service (horizon) shows how many pinned CPUs (pCPUs) are used and available to use in your environment.
- Data plane
Ansible-managed data plane: In RHOSO 18.0 (GA), the director-deployed overcloud is replaced by a data plane driven by the OpenStack Operator and executed by Ansible. RHOSO data plane features include:
- The OpenStackDataPlaneNodeSet custom resource definition (CRD), which provides a highly parallel deployment model.
- Micro failure domains based on the OpenStackDataPlaneNodeSet CRD. If one or more node sets fail, the other node sets run to completion because there is no interdependency between node sets.
- Faster deployment times compared to previous RHOSP versions.
- Highly configurable data plane setup based on the OpenStackDataPlaneNodeSet and OpenStackDataPlaneService CRDs.
- Documentation
In RHOSO 18.0 (GA), the documentation library has been restructured to align with the user lifecycle of RHOSO. Each guide incorporates content from one or more product areas that work together to cover end-to-end tasks. The titles are organized in categories for each stage in the user lifecycle of RHOSO.
The following categories are published with RHOSO 18.0:
Plan: Information about the release, requirements, and how to get started before deployment. This category includes the following guides:
- Release notes
- Planning your deployment
- Integrating partner content
Prepare, deploy, configure, test: Procedures for deploying an initial RHOSO environment, customizing the control plane and data plane, configuring validated architectures, storage, and testing the deployed environment. This category includes the following guides:
- Deploying Red Hat OpenStack Services on OpenShift
- Customizing the Red Hat OpenStack Services on OpenShift deployment
- Deploying a Network Functions Virtualization environment
- Deploying a hyper-converged infrastructure environment
- Configuring persistent storage
- Validating and troubleshooting the deployed cloud
Adopt and update: Information about performing minor updates to the latest maintenance release of RHOSO, and procedures for adopting a Red Hat OpenStack Platform 17.1 cloud. This category includes the following guides:
- Adopting a Red Hat OpenStack Platform 17.1 overcloud to a Red Hat OpenStack Services on OpenShift 18.0 data plane
- Updating your environment to the latest maintenance release
Customize and scale: Procedures for configuring and customizing specific components of the deployed environment. These procedures must be done before you start to operate the deployment. This category includes the following guides:
- Configuring the Compute service for instance creation
- Configuring data plane networking
- Configuring load balancing as a service
- Customizing persistent storage
- Configuring security services
- Auto-scaling for instances
Manage resources and maintain the cloud: Procedures that you can perform during ongoing operation of the RHOSO environment. This category includes the following guides:
- Maintaining the Red Hat OpenStack Services on OpenShift deployment
- Creating and managing instances
- Performing storage operations
- Performing security operations
- Managing networking resources
- Managing cloud resources with the Dashboard
- Monitoring high availability services
- Documentation in progress
In RHOSO 18.0 (GA), the following titles are being reviewed and will be published asynchronously:
- Configuring the Bare Metal Provisioning service
- Configuring load balancing as a service (Technology Preview)
- RHOCP feature documentation
- Starting with RHOSO 18.0 (GA), features that are supported and managed natively in RHOCP are documented in the RHOCP documentation library. The RHOSO documentation includes links to relevant RHOCP documentation where needed.
- Earlier documentation versions
- The RHOSO documentation page shows documentation for version 18.0 and later. For earlier supported versions of RHOSP, see Product Documentation for Red Hat OpenStack Platform 17.1.
- High availability
- High availability managed natively in RHOCP: Starting with RHOSO 18.0 (GA), RHOSO high availability (HA) uses RHOCP primitives instead of RHOSP services to manage failover and recovery deployment.
- Networking
- Egress QoS support at NIC level using DCB (Technology Preview)
Starting with RHOSO 18.0 (GA), egress quality of service (QoS) at the network interface controller (NIC) level uses the Data Center Bridging Capability Exchange (DCBX) protocol to configure egress QoS at the NIC level in the host. DCBX triggers the configuration and provides the information directly from the top-of-rack (ToR) switch that peers with the host NIC. This capability, combined with egress QoS for OVS/OVN, enables end-to-end egress QoS.
This is a Technology Preview feature. A Technology Preview feature might not be fully implemented and tested. Some features might be absent, incomplete, or not work as expected.
For more information on this feature, see Feature Integration document - DCB for E2E QoS.
- Configuring and deploying networking with Kubernetes NMState Operator and the RHEL NetworkManager service (Technology preview)
- Starting with RHOSO 18.0 (GA), the RHOSO bare-metal network deployment uses os-net-config with a Kubernetes NMState Operator and NetworkManager back end. Therefore, administrators can use the Kubernetes NMState Operator, nmstate, and the RHEL NetworkManager CLI tool nmcli to configure and deploy networks on the data plane, instead of legacy ifcfg files and network-init-scripts.
- NFV
- Power optimization enhancements: RHOSO 18.0 (GA) features a Tuned power saving profile, cpu-partitioning-powersave. You can use this profile to reduce CPU power consumption by shutting down idle CPU cores or associated sub-systems. Additionally, support for adaptive nano sleep enables power saving for low packet rates.
- Observability
Enhanced Openstack Observability:
- In RHOSO 18.0 (GA), enhanced dashboards provide unified observability with visualizations that are natively integrated into the RHOCP Observability UI. These include the node_exporter agent that exposes metrics to the Prometheus monitoring system.
- In RHOSO 18.0 (GA), the node_exporter agent replaces the collectd daemon, and Prometheus replaces the time series database (Gnocchi).
- Logging: In RHOSO 18.0 (GA), the OpenStack logging capability is significantly enhanced. You can now collect logs from the control plane and Compute nodes, and use RHOCP Logging to store them in-cluster via Loki log store or forward them off-cluster to an external log store. Logs that are stored in-cluster with Loki can be visualized in the RHOCP Observability UI console.
- Service Telemetry Framework deprecation: The Observability product for previous versions of RHOSP is Service Telemetry Framework (STF). With the release of RHOSO 18.0 (GA), STF is deprecated and in maintenance mode. There are no feature enhancements for STF after STF 1.5.4, and STF status reaches end of life at the end of the RHOSP 17.1 lifecycle. Maintenance versions of STF will be released on new EUS versions of RHOCP until the end of the RHOSP 17.1 lifecycle.
- Security
FIPS enabled by default:
- Starting with RHOSO 18.0 (GA), Federal Information Processing Standard (FIPS) is enabled by default when RHOSO is installed on a FIPS enabled RHOCP cluster in new deployments.
- You do not enable or disable FIPS in your RHOSO configuration. You control the FIPS state in the underlying RHOCP cluster.
- TLS-everywhere enabled by default: In RHOSO 18.0 (GA), after deployment, you can configure public services with your own certificates. You can deploy without TLS-everywhere and enable it later. You cannot disable TLS-everywhere after you enable it.
- Secure RBAC enabled by default: The Secure Role-Based Access Control (RBAC) policy framework is enabled by default in RHOSO 18.0 (GA) deployments.
- Key Manager (barbican) enabled by default: The Key Manager is enabled by default in RHOSO 18.0 (GA) deployments.
- Storage
- Integration with external Red Hat Ceph Storage (RHCS) 7 clusters: You can integrate RHOSO 18.0 (GA) with external RHCS 7 clusters to include RHCS capabilities with your deployment.
- Distributed image import: RHOSO 18.0 (GA) introduces distributed image import for the Image service (glance). With this feature, you do not need to configure a shared staging area for different API workers to access images that are imported to the Image service. Now the API worker that owns the image data is the same API worker that performs the image import.
- Block Storage service (cinder) backup and restore for thin volumes: Starting with RHOSO 18.0 (GA), the backup service for the Block Storage service preserves sparseness when restoring a backup to a new volume. This feature ensures that restored volumes use the same amount of storage as the backed-up volume. It does not apply to RBD backups, which use a different mechanism to preserve sparseness.
- Support for RHCS RBD deferred deletion: RHOSO 18.0 (GA) introduces Block Storage service and Image service RBD deferred deletion, which improves flexibility in the way RBD snapshot dependencies are managed. With deferred deletion, you can delete a resource such as an image, volume, or snapshot even if there are active dependencies.
- Shared File Systems service (manila) CephFS NFS driver with Ganesha Active/Active: In RHOSO 18.0 (GA), the CephFS-NFS driver for the Shared File Systems service consumes an active/active Ganesha cluster by default, improving both the scalability and high availability of the Ceph NFS service.
- Unified OpenStack client parity with native Shared File Systems service client: Starting with RHOSO 18.0 (GA), the Shared File Systems service fully supports the openstack client command line interface.
Chapter 2. Release information RHOSO 18.0
These release notes highlight selected updates in some or all of the Red Hat OpenStack Services on OpenShift (RHOSO) components. Consider these updates when you deploy this release of RHOSO. Each of the notes in this section refers to the Jira issue used to track the update. If the Jira issue security level is public, you can click the link to see the Jira issue. If the security level is restricted, the Jira issue ID does not have a link to the Jira issue.
2.1. Release information RHOSO 18.0.17
Review changes for the new release of Red Hat OpenStack Services on OpenShift (RHOSO).
2.1.1. Advisory list
This release of Red Hat OpenStack Services on OpenShift (RHOSO) includes the following advisories:
- RHBA-2026:4934
- Release of components for RHOSO 18.0.17
- RHSA-2026:4936
- Release of containers for RHOSO 18.0.17 security update
2.1.2. Compute
Understand the Compute updates introduced in RHOSO 18.0.17 before you deploy the release.
2.1.2.1. New features
Understand the new features and major enhancements introduced in RHOSO 18.0.17 before you deploy the release.
- Enable multiple connections per live migration
By default, live migration uses one network connection to transfer an instance to a destination Compute node. Factors such as single-threaded TLS speed can limit performance. To increase migration speed, enable multiple connections per live migration.
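The source does not give a configuration snippet for this feature, but conceptually it maps to the upstream Nova libvirt driver option for parallel migration connections. The following nova sketch is an assumption based on that upstream option name (live_migration_parallel_connections) and an illustrative connection count; verify both against the RHOSO Compute configuration reference before use.

```yaml
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: unused
spec:
  nova:
    template:
      customServiceConfig: |
        [libvirt]
        # Assumed upstream option name; the value 4 is illustrative
        live_migration_parallel_connections = 4
```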
- Replace an NVMe device without interrupting running instances
You can physically replace an NVMe device that uses PCI passthrough without interrupting running instances or affecting the data plane host.
2.1.2.2. Known issues
Understand the known issues that are present in RHOSO 18.0.17 before you deploy the release.
- notificationsBusInstance configuration for RabbitMqCluster CR causes service downtime
Due to a bug in nova-operator, if notificationsBusInstance is configured in the nova section of the OpenStackControlPlane custom resource (CR) to point to a RabbitMqCluster CR, and a pod in that cluster is restarted, then nova-operator reconfigures all Compute services twice. This results in unnecessary service downtime.
Temporary workaround:
- Remove the notificationsBusInstance configuration from the OpenStackControlPlane CR. Do not remove the notification RabbitMqCluster definition from the OpenStackControlPlane CR.
- Gather the transport_url from the RabbitMqCluster:

    $ oc get secret <name-rabbitmq-notifications>-default-user -o json | jq '.data | map_values(@base64d) | "rabbit://\(.username):\(.password)@\(.host):\(.port)/?ssl=1"'

  Replace <name-rabbitmq-notifications> with the name of the RabbitMqCluster for notifications.
novaservice in theOpenStackControlPlaneCR, add the following to thecustomerServiceConfigfield:[oslo_messaging_notifications] transport_url = <transport_url value from the previous step> driver = messagingv2[notifications] notify_on_state_change = vm_and_task_state notification_format=bothFor each
novaOpenStackDataPlaneServiceCR, add the above config snippet to the related nova extra config map and then create anOpenStackDataPlaneDeploymentCR to apply the config changes on the data plane nodes.This makes the notification message bus configuration in nova static. If the RabbitMqCluster is changed in a way that affects the effective
transport_urlof the cluster, then you must perform the above nova configuration procedure again.The
customServiceConfigstores the configuration in plain text, and thetransport_urlcontains the user and password of the RabbitMqCluster. Applying this workaround decreases the security of the notification RabbitMqCluster.
- Subset of image properties not being used in image properties weigher
The image properties weigher does not take the os_version or os_admin_user properties into account when calculating the raw weights, even when these properties are included in the configuration.
- Compute service power management feature disabled by default
The Compute service (nova) power management feature is disabled by default. You can enable it with the following nova-compute configuration:

    [libvirt]
    cpu_power_management = true
    cpu_power_management_strategy = governor

The default cpu_power_management_strategy, cpu_state, is currently unsupported. Restarting nova-compute causes all dedicated PCPUs on that host to be powered down, including those used by instances. If the cpu_state strategy is used, the CPUs of those instances become unpinned.
2.1.3. Control plane
Understand the Control Plane updates introduced in RHOSO 18.0.17 before you deploy the release.
2.1.3.1. Bug fixes
Understand the bug fixes introduced in RHOSO 18.0.17 before you deploy the release.
- Empty node sets excluded from minor update process
Before this update, an empty node set prevented the completion of the minor update process. With this update, an empty node set no longer prevents the minor update process from completing successfully.
- Route status included in OpenStackControlPlane CR conditions list
Before this update, when a control plane route failed, it was not included in the OpenStackControlPlane CR conditions list, which made it difficult to identify the root cause. With this update, the OpenStackControlPlane CR conditions list has been extended to include the status of the routes.
2.1.4. Hardware Provisioning
Understand the Hardware Provisioning updates introduced in RHOSO 18.0.17 before you deploy the release.
2.1.4.1. Known issues
Understand the known issues that are present in RHOSO 18.0.17 before you deploy the release.
- Error replacing unprovisioned nodes
Red Hat OpenStack Services on OpenShift (RHOSO) uses metal3 for provisioning unprovisioned data plane nodes. An error state occurs when you must replace a node where the bootMacAddress cannot be updated. The result is that the node is stuck in a state where it must be completely removed from the deployment and provisioned as if it is a new node. If the automatedCleaningMode attribute is set to automatedCleaningMode: disabled, this error state does not occur.
Workaround: When provisioning unprovisioned data plane nodes, ensure that the automatedCleaningMode attribute is set to automatedCleaningMode: disabled.
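As a sketch of where this attribute lives, the following shows a metal3 BareMetalHost resource with automated cleaning disabled; the name, namespace, and BMC details are illustrative placeholders.

```yaml
apiVersion: metal3.io/v1alpha1
kind: BareMetalHost
metadata:
  name: compute-0        # illustrative node name
  namespace: openstack   # assumed namespace
spec:
  online: true
  # Prevents the error state described above when replacing a node
  automatedCleaningMode: disabled
  bmc:
    address: redfish://192.0.2.10/redfish/v1/Systems/1  # illustrative BMC address
    credentialsName: compute-0-bmc-secret               # illustrative secret name
```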
2.1.5. High availability
Understand the High Availability updates introduced in RHOSO 18.0.17 before you deploy the release.
2.1.5.1. New features
Understand the new features and major enhancements introduced in RHOSO 18.0.17 before you deploy the release.
- Quorum queues refactoring
By default, RabbitMQ clusters use quorum queues for new RHOSO deployments to provide increased data safety and high availability at the expense of a slight increase in latency.
Warning: You must not configure the RabbitMQ clusters of an existing RHOSO deployment to use quorum queues. If you do, your existing RHOSO services will not start or work properly.
2.1.5.2. Bug fixes
Understand the bug fixes introduced in RHOSO 18.0.17 before you deploy the release.
- HA services and RabbitMQ improvements
Starting with RHOSO 18.0.17 (Feature Release 5), you can improve the high availability of services by configuring them to better deal with temporary RabbitMQ issues by talking to each RabbitMQ pod individually.
To do so, you can configure the RabbitMQ cluster to expose each pod individually by using a dedicated IP address. Add a podOverride section as shown in the following example:

    rabbitmq:
      delayStartSeconds: 30
      override:
        service:
          metadata:
            annotations:
              metallb.universe.tf/address-pool: internalapi
              metallb.universe.tf/loadBalancerIPs: 172.17.0.80
          spec:
            type: LoadBalancer
        podOverride:
          services:
          - metadata:
              annotations:
                metallb.universe.tf/address-pool: internalapi
                metallb.universe.tf/loadBalancerIPs: 172.17.0.81
            spec:
              type: LoadBalancer
          - metadata:
              annotations:
                metallb.universe.tf/address-pool: internalapi
                metallb.universe.tf/loadBalancerIPs: 172.17.0.82
            spec:
              type: LoadBalancer
          - metadata:
              annotations:
                metallb.universe.tf/address-pool: internalapi
                metallb.universe.tf/loadBalancerIPs: 172.17.0.83
            spec:
              type: LoadBalancer
      persistence:
        storage: 10Gi

After applying this change to the control plane, you must redeploy the data plane to update all the Compute nodes.
2.1.6. Network Functions Virtualization
Understand the Network Functions Virtualization updates introduced in RHOSO 18.0.17 before you deploy the release.
2.1.6.1. New features
Understand the new features and major enhancements introduced in RHOSO 18.0.17 before you deploy the release.
- Create network alerts
Starting with RHOSO 18.0.17 (Feature Release 5), you can create and manage alerts to notify you of specific conditions in your RHOSO networking operations.
For more information, see Networking alerts in Configuring observability.
2.1.6.2. Known issues
Understand the known issues that are present in RHOSO 18.0.17 before you deploy the release.
- Nmstate migration does not support switchdev mode
You cannot migrate from the ifcfg provider of os-net-config to the Nmstate provider with switchdev mode enabled.
Workaround: Disable switchdev mode before migrating to the Nmstate provider.
2.1.7. Networking
Understand the Networking updates introduced in RHOSO 18.0.17 before you deploy the release.
2.1.7.1. New features
Understand the new features and major enhancements introduced in RHOSO 18.0.17 before you deploy the release.
- Adopt a dynamic routing environment to RHOSO
You can now upgrade an OpenStack deployment in place from RHOSP 17.1 to RHOSO 18 by deploying a parallel control plane based on RHOSO 18 and then pointing the Compute nodes to the new control plane. For more information, see Adopting a Red Hat OpenStack Platform 17.1 deployment.
- Adopt a Load-balancing service to a RHOSO environment
You can now adopt a director-deployed Load-balancing service (Octavia) control plane service from Red Hat OpenStack Platform 17.1 to Red Hat OpenStack Services on OpenShift (RHOSO) 18.0. This feature allows you to preserve existing configurations and to ensure seamless operation in the new RHOSO environment.
For more information about adopting the Load-balancing service, see Adopting the load-balancing service in Adopting a Red Hat OpenStack Platform 17.1 deployment.
2.1.7.2. Bug fixes
Understand the bug fixes introduced in RHOSO 18.0.17 before you deploy the release.
- Fixed log failure when using the neutron-fwaas driver and log API plugins in the same neutron deployment
Before this update, when deploying RHOSO with both the neutron-fwaas driver and log API plugins, end users experienced log creation failures. The issue is fixed, and you can deploy with both the neutron-fwaas driver and log API plugins in the same neutron deployment. Note that Firewall as a Service is a Technology Preview in RHOSO 18.0.17 (Feature Release 5).
2.1.7.3. Technology Previews
Understand the Technology Previews introduced in RHOSO 18.0.17 before you deploy the release.
For information on the scope of support for Technology Preview features, see Technology Preview Features - Scope of Support.
- Data plane adoption support for Designate (Technology Preview)
In RHOSO 18.0.17 (Feature Release 5), you can test a Technology Preview of data plane adoption of the DNS service (designate). Do not use Technology Preview features in production environments.
2.1.7.4. Known issues
Understand the known issues that are present in RHOSO 18.0.17 before you deploy the release.
- neutron-ovn-db-sync-util sync tool fails on SB DB
There is a known issue where the OVN database sync tool, neutron-ovn-db-sync-util, always reports the following warning: ValueError: driver cannot be None. This warning is caused by a regression introduced in RHOSO 18.0.17 that prevents the OVN database sync tool from syncing the OVN southbound database (SB DB).
Workaround: Execute the sync tool on the OVN northbound database only by using the --sync_plugin neutron_nb_sync argument:
Example

    $ oc rsh -n openstack -c neutron-api deploy/neutron \
        neutron-ovn-db-sync-util --config-file \
        /usr/share/neutron/neutron-dist.conf --config-file \
        /etc/neutron/neutron.conf --config-dir /etc/neutron/neutron.conf.d \
        --ovn-neutron_sync_mode=log --debug --sync_plugin neutron_nb_sync

This workaround does not execute the sync tool for the SB DB. Therefore, the following checks are not executed:
- Sync the hosts (chassis) and update the segments host mappings.
- Re-schedule the unhosted gateway Logical_Router_Ports.
- Sync the placement API resources.
- neutron-ovn-db-sync-util sync tool fails with FWaaS
There is a known issue where the OVN database sync tool, neutron-ovn-db-sync-util, reports the following error and then crashes: No providers specified for 'FIREWALL_V2' service, exiting. This error is caused by a regression introduced in RHOSO 18.0.17.
Workaround: You can work around this problem by either specifying the value FIREWALL_V2 in the service_providers configuration variable, or by running the required plug-ins separately:
Example

    $ oc rsh -n openstack -c neutron-api deploy/neutron \
        neutron-ovn-db-sync-util --config-file \
        /usr/share/neutron/neutron-dist.conf --config-file \
        /etc/neutron/neutron.conf --config-dir /etc/neutron/neutron.conf.d \
        --ovn-neutron_sync_mode=log --sync_plugin neutron_nb_sync --debug

This workaround does not execute the sync tool for the SB DB. Therefore, the following checks are not executed:
- Sync the hosts (chassis) and update the segments host mappings.
- Re-schedule the unhosted gateway Logical_Router_Ports.
- Sync the placement API resources.
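The first workaround option for the FWaaS failure sets a FWaaS v2 provider in the neutron configuration. A minimal sketch of that setting follows, assuming a standard oslo.config INI layout; the provider name and driver class are placeholders, not values taken from this document:

```ini
[service_providers]
# Hypothetical entry: substitute the FWaaS v2 driver used in your deployment
# for <provider_name> and <driver_class>.
service_provider = FIREWALL_V2:<provider_name>:<driver_class>:default
```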
- DNS service unsupported in dynamic routing environments
In RHOSO 18.0, the RHOSO DNS service (designate) is unsupported in dynamic routing environments that use BGP. The designate-worker pods cannot configure the designate-backendbind9 pods, and as a consequence the DNS service does not work properly. The cause is that the worker pods try to connect to the back-end BIND 9 pods by using their common network attachment definition, with a common subnet range, which does not work properly when BGP is configured, because pods should use different subnet ranges to connect to each other.
Workaround: There is no known workaround.
- FWaaS rules are malfunctioning
There is a known issue where network traffic erroneously passes both in and out when only one firewall rule is set. This problem can occur when you use Firewall as a service (FWaaS) and enable security groups. FWaaS should only allow incoming or outgoing traffic when two rules are set.
Workaround: Currently, there is no workaround.
2.1.8. Observability
Understand the Observability updates introduced in RHOSO 18.0.17 before you deploy the release.
2.1.8.1. New features
Understand the new features and major enhancements introduced in RHOSO 18.0.17 before you deploy the release.
- Chargeback and rating for RHOSO clouds with the Rating service (cloudkitty)
RHOSO administrators can enable the Rating service (cloudkitty) in the Telemetry Operator to provide chargeback and rating capabilities to RHOSO clouds.
The Rating service collects data from the RHOSO Telemetry service to generate cloud resource usage through rating reports. These reports can be consumed by external financial operations or billing systems. The Rating service does not provide a billing interface.
2.1.8.2. Bug fixes
Understand the bug fixes introduced in RHOSO 18.0.17 before you deploy the release.
- The default rating metrics to collect were configured for an unsupported collector
The metrics.yaml configuration file defines the rating metrics to collect for a RHOSO cloud. Before this update, the Rating service did not work because the default metrics.yaml configuration file was configured for the Gnocchi collector, which is not supported. With this update, the default metrics.yaml file is updated to define the metrics that are collected by the supported collector, Prometheus.
- Cloud admin received empty summaries due to a hard coded key
Before this update, the cloud admin would receive empty summaries from the Summary API. This was because the key used by the Summary API was hard-coded to the string "project_id", which did not match the default key "project". With this update, the key is configurable, enabling the cloud admin to specify the key to use for generating summaries.
- Autoscaling controller was not recreating the aodh pod in response to updated Aodh configuration
Before this update, the autoscaling controller was not reacting to changes to Alarming service (aodh) resources, such as a changed password or TLS certificate. This caused the aodh pod to continue using old configuration or data because the aodh pod had not been restarted to apply the changes. With this update, the autoscaling controller now watches for changes to the Alarming service and reacts to them by restarting the aodh pod as required to update the resources and consume the latest data.
- The Ceilometer service topology was not being applied to mysqld-exporter or the kube-state-metrics agent
Before this update, topology configured for the Ceilometer service was not applied to mysqld-exporter or the kube-state-metrics agent. With this update, if you configure a topology for the Ceilometer service, it is applied to mysqld-exporter and the kube-state-metrics agent.
- Incorrect service label applied to Telemetry resources
Before this update, topology configured for the Ceilometer service was not being applied because the service label was assigned the incorrect value of "metricStorage". With this update, the service label of the Ceilometer resource is set to the correct value.
- Prometheus now scrapes metrics from all RabbitMQ nodes as expected
Before this update, Prometheus did not scrape metrics from all RabbitMQ nodes as expected. Instead, it scraped metrics only from one node at a time. The selection of that one node was not predictable.
Now, Prometheus scrapes metrics from all RabbitMQ nodes as expected.
2.1.8.3. Known issues
Understand the known issues that are present in RHOSO 18.0.17 before you deploy the release.
- Metrics not being scraped from the Ceilometer compute agent on data plane nodes
In RHOSO 18.0.17, the Ceilometer compute agent, ceilometer_agent_compute, fails to start because the data plane does not provision TLS certificates for the Ceilometer prometheus_exporter. This results in Prometheus being unable to scrape metrics from the Ceilometer compute agent on data plane nodes.
Workaround: Configure the missing property as customServiceConfig in the OpenStackControlPlane CR:
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack-control-plane
spec:
  telemetry:
    template:
      ceilometer:
        customServiceConfig: |
          [service_credentials]
          cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem
2.1.9. Optimize Service
Understand the Optimize service updates introduced in RHOSO 18.0.17 before you deploy the release.
2.1.9.1. Known issues
Understand the known issues that are present in RHOSO 18.0.17 before you deploy the release.
- OpenStack Optimize service not supported in multi-RHEL mode
The OpenStack Optimize service is not supported in multi-RHEL (Red Hat Enterprise Linux) mode. Running the Optimize service in multi-RHEL mode leads to service disruption, specifically on Compute nodes that are running on RHEL 9.2 and RHEL 9.4.
- Workflow Engine does not revert actions for failed Action Plans
In RHOSO 18, the Optimize service (watcher) Engine does not automatically revert failed actions when an Action Plan fails, even when configured to do so by enabling the watcher_applier.rollback_when_actionplan_failed configuration option.
Workaround: Manually revert each failed action in the Action Plan. Alternatively, you can diagnose and fix the root cause of the failure and then run the Audit again to propose a new solution.
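The option named in this known issue would normally be enabled in the watcher applier configuration. A minimal sketch, assuming a standard oslo.config INI layout; note that in RHOSO 18 the automatic revert still does not run even with this set:

```ini
[watcher_applier]
# Intended to trigger an automatic revert of failed Action Plans;
# affected by the known issue described above.
rollback_when_actionplan_failed = true
```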
2.1.10. Security and hardening
Understand the Security and Hardening updates introduced in RHOSO 18.0.17 before you deploy the release.
2.1.10.1. New features
Understand the new features and major enhancements introduced in RHOSO 18.0.17 before you deploy the release.
- Single Keystone Multiple OpenStacks deployments
The Single Keystone Multiple OpenStacks (SKMO) multi-region deployment of RHOSO simplifies user management and configuration for multiple regions.
In standard multi-region RHOSO deployments, each region is isolated with its own Identity (keystone) and Dashboard (horizon) services. This requires separate user accounts for each region, making credential management and rotation difficult.
In a multi-region Single Keystone Multiple OpenStacks (SKMO) deployment of RHOSO, administrators create a centralized Dashboard (horizon) and Identity (keystone) service to provide a single pane of glass for simplified configuration and management of users. Each end user has a single set of credentials, and their access to every workload region can be enabled or disabled in the central region.
For more information about SKMO deployments, see Single Keystone Multiple OpenStacks deployments in Configuring security services.
2.1.10.2. Technology Previews
Understand the Technology Previews introduced in RHOSO 18.0.17 before you deploy the release.
For information on the scope of support for Technology Preview features, see Technology Preview Features - Scope of Support.
- Data plane adoption support for Key Manager service with Proteccio HSM integration
In RHOSO 18.0.17 (Feature Release 5), you can test a Technology Preview of the data plane adoption of the Key Manager service (barbican) with Proteccio hardware security module (HSM) integration. Do not use Technology Preview features in production environments.
- Near Zero Downtime Password Rotation (nZDPR)
In RHOSO 18.0.17 (Feature Release 5), a Technology Preview is available for near Zero Downtime Password Rotation (nZDPR), which eliminates the service interruptions typically caused by credential updates in RHOSO. While standard user credential authentication often fails during propagation, nZDPR uses application credentials that provide a configurable grace period. This allows both old and new credentials to remain valid simultaneously during the transition, ensuring continuous service availability across the control plane and the data plane while maintaining strict security compliance.
2.1.11. Storage
Understand the Storage updates introduced in RHOSO 18.0.17 before you deploy the release.
2.1.11.1. New features
Understand the new features and major enhancements introduced in RHOSO 18.0.17 before you deploy the release.
- Red Hat Ceph Storage 9 support
This enhancement adds support for integration with external Red Hat Ceph Storage 9.
For active/active (A/A) CephFS-NFS deployments, reserve at least one standby node in your Ceph Storage cluster for successful failover. For environments that require automated CephFS-NFS recovery without additional standby capacity, use active/passive (A/P) mode. A/A failover recovery will be enhanced in Red Hat Ceph Storage 9.1.
If you use CephFS-NFS and plan to upgrade from Red Hat Ceph Storage 8.x to 9.0.1, see the Known Issues section.
- iSCSI and multipath services now run on data plane host
The iSCSI daemon (iscsid) and multipath daemon (multipathd) now run directly on data plane host systems instead of in containers. This change enables additional capabilities such as booting from iSCSI disks. iSCSI multipath functionality continues to work as before for both new and existing deployments.
- FC-based drivers for Block Storage service (cinder) are fully supported
Red Hat OpenStack Services on OpenShift adoption now supports Block Storage service (cinder) back ends that use Fibre Channel (FC). For more information about Block Storage service adoption, see RHOCP preparation for Block Storage service adoption in Adopting a Red Hat OpenStack Platform 17.1 deployment.
- RHOSO 18 adoption for RHOSP 17.1 DCN
Red Hat OpenStack Platform (RHOSP) 17.1 Distributed Compute Node (DCN) deployments can now be adopted to RHOSO 18. The adoption procedure maps TripleO stacks to data plane node sets with site-specific network configurations. This update supports DCN deployments that have edge sites with Red Hat Ceph Storage or edge sites without storage.
- Custom default retention for FlexVol shares using NetApp ONTAP driver
With this update, administrators can configure a default retention period for shares when using the NetApp ONTAP driver with share server management (driver_handles_share_servers set to True) in the Shared File Systems service (manila).
ONTAP firmware version 9.11 and later enables volume retention by default, which might not suit all cloud use cases. Use the netapp_delete_retention_hours configuration option to specify how long, in hours, a deleted share remains on the storage system before being purged. Set the configuration option to 0 to disable retention. This option only applies when the Shared File Systems service manages the share server life cycle.
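The two options described above can be combined in a manila back-end configuration. A minimal sketch, where the back-end section name is a placeholder and the retention value is illustrative:

```ini
[netapp_backend]  # hypothetical back-end section name
driver_handles_share_servers = True
# Keep deleted shares for 12 hours before purging; set to 0 to disable retention.
netapp_delete_retention_hours = 12
```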
2.1.11.2. Bug fixes
Understand the bug fixes introduced in RHOSO 18.0.17 before you deploy the release.
- Quota overflow detection prevents database errors during volume creation
Before this update, setting a quota limit to -1 (unlimited) and attempting to create a large volume caused quota reservation overflow due to database integer limits. This prevented users from creating new volumes due to database errors. With this update, the Block Storage service (cinder) includes quota overflow detection to prevent database errors and allow volume operations to complete successfully.
- NFS driver now correctly manages volume resize operations with snapshots
Before this update, when you resized an NFS volume that had snapshots, the operation failed to update the virtual size of the active image file, causing a discrepancy between the Block Storage service (cinder) database and the actual image size. This update ensures that the NFS driver updates both the Block Storage service database and the virtual size of the active image during resize operations with snapshots.
- Image ownership unchanged when accepting shared images
Before this update, using openstack image set --project <project_id> --accept <image_id> to accept a shared image incorrectly transferred image ownership to the accepting project. The original image owner lost access to the image. With this update, accepting shared images only updates the membership status and preserves the original image ownership.
- Image service correctly parses forward slashes in S3 secret keys during credential rotation
Before this update, credential rotation could produce corrupted URLs in the database when AWS secret access keys contained forward slashes (/). The URL parser misinterpreted forward slashes as path delimiters, causing S3 image downloads to fail with authentication errors. With this update, the Image service (glance) correctly parses these URLs.
- Repeated instance starts with iSCSI multipath storage now succeed
Before this update, when you repeatedly stopped and started instances on data plane nodes with iSCSI multipath storage, instance starts frequently failed. This issue occurred because of a race condition between volume disconnect and reconnect operations. This update improves multipath device management during these operations, and instance starts now complete successfully when using iSCSI multipath-enabled storage.
- Destination device check before qemu-img convert
Before this update, if a block device was unavailable in /dev/, the disk image conversion utility (qemu-img convert) would create and write data without checking whether the destination device existed. This resulted in a file being written to an unavailable device, leading to data loss. This update introduces a check for the existence of the destination device before running qemu-img convert, preventing file creation in /dev/ when a block device is unavailable.
- RHEL system.devices file for LVM device management
Before this update, the RHEL system.devices file that is used for Logical Volume Management (LVM) device management in the RHOSO 18 data plane was not always present, and the file must be present for LVM filtering to mask Block Storage service (cinder) volumes. With this update, the system.devices file is created if it is absent.
- Accurate quota management in the Block Storage service
Before this update, the cinder-manage quota check and cinder-manage quota sync commands failed when no project-id argument was specified, preventing accurate quota management in the Block Storage service (cinder). This update corrects the error in the quota management commands, ensuring accurate management of quota usage, even when no project-id argument is specified.
2.1.11.3. Known issues
Understand the known issues that are present in RHOSO 18.0.17 before you deploy the release.
- CephFS-NFS service unavailable after upgrading from Red Hat Ceph Storage 8.x to 9.0.1
Upgrading from RHCS 8.x to RHCS 9.0.1 causes the CephFS-NFS service to fail to start after the upgrade completes, making it unavailable to all clients. If you use CephFS-NFS with RHCS 8.x in your RHOSO deployment, delay upgrading RHCS until a fix is available.
- Image service uploads image that exceeds size quota before rejecting further uploads
When you upload an image to the Image service (glance) that is larger than the configured image size limit (image_size_total), the upload succeeds because the Image service does not verify the image size before upload. After the image is uploaded and stored, the Image service determines the image size, which might exceed the quota. The Image service then rejects any subsequent uploads because the quota is exceeded.
2.1.12. Upgrades and updates
Understand the Upgrades and Updates changes introduced in RHOSO 18.0.17 before you deploy the release.
2.1.12.1. New features
Understand the new features and major enhancements introduced in RHOSO 18.0.17 before you deploy the release.
- Full support for separate OpenStack and RHEL RPM updates
Updating your OpenStack service containers and Red Hat Enterprise Linux (RHEL) packages separately provides you with more control over the update process. You can identify and troubleshoot issues more easily by isolating issues to specific packages, and reduce downtime during the update.
You can update your Red Hat OpenStack Services on OpenShift (RHOSO) Compute nodes that are running RHEL 9.2, 9.4, or 9.6 in the following phases:
- Update OpenStack service containers and required RPM dependencies
- Update system-related RPM packages
To update your RHOSO environment to the latest minor version, see Updating your environment to the latest maintenance release.
- Kernel live patching for RHOSO environments is fully supported
Kernel live patching (kpatch) for RHOSO environments is now generally available. Previously, the kpatch instructions were provided as a Technology Preview.
With this feature, you can apply critical security updates and bug fixes to the kernel without requiring a system reboot. You cannot use this feature to apply custom live patches or third-party live patching solutions.
For more information about enabling kpatch, see Enabling kernel live patching on Compute nodes in Updating your environment to the latest maintenance release.
2.1.12.2. Bug fixes
Understand the bug fixes introduced in RHOSO 18.0.17 before you deploy the release.
- Minor updates no longer hang if OVN is disabled
The update process now accommodates network controller integrations that do not use OVN. If OVN is disabled, the minor update process continues without waiting for OVN to be deployed to the data plane.
2.2. Release information RHOSO 18.0.16
Understand the updates introduced in RHOSO 18.0.16 before you deploy the release.
2.2.1. Advisory list
This release of Red Hat OpenStack Services on OpenShift (RHOSO) includes the following advisories:
- RHSA-2026:1958
- Important: Red Hat OpenStack Services on OpenShift 18.0 (openstack-keystone) security update
- RHSA-2026:1959
- Moderate: Red Hat OpenStack Services on OpenShift 18.0 (python-eventlet) security update
- RHBA-2026:1960
- Release of components for RHOSO 18.0.16
2.2.2. Observability
Understand the Observability updates introduced in RHOSO 18.0.16 before you deploy the release.
2.2.3. Compute
Understand the Compute updates introduced in RHOSO 18.0.16 before you deploy the release.
2.2.3.1. Known issues
Understand the known issues that are present in RHOSO 18.0.16 before you deploy the release.
- notificationsBusInstance configuration for RabbitMqCluster CR causes service downtime
Due to a bug in nova-operator, if notificationsBusInstance is configured in the nova section of the OpenStackControlPlane custom resource (CR) to point to a RabbitMqCluster CR, and a pod in that cluster is restarted, then nova-operator reconfigures all Compute services twice. This results in unnecessary service downtime.
Temporary workaround:
1. Remove the notificationsBusInstance configuration from the OpenStackControlPlane CR. Do not remove the notification RabbitMqCluster definition from the OpenStackControlPlane CR.
2. Gather the transport_url from the RabbitMqCluster:
$ oc get secret <name-rabbitmq-notifications>-default-user -o json | \
    jq '.data | map_values(@base64d) | "rabbit://\(.username):\(.password)@\(.host):\(.port)/?ssl=1"'
Replace <name-rabbitmq-notifications> with the name of the RabbitMqCluster for notifications.
3. For each nova service in the OpenStackControlPlane CR, add the following to the customServiceConfig field:
[oslo_messaging_notifications]
transport_url = <transport_url value from the previous step>
driver = messagingv2
[notifications]
notify_on_state_change = vm_and_task_state
notification_format = both
4. For each nova OpenStackDataPlaneService CR, add the above config snippet to the related nova extra config map and then create an OpenStackDataPlaneDeployment CR to apply the config changes on the data plane nodes.
This makes the notification message bus configuration in nova static. If the RabbitMqCluster is changed in a way that affects the effective transport_url of the cluster, then you must perform the above nova configuration procedure again.
The customServiceConfig field stores the configuration in plain text, and the transport_url contains the user and password of the RabbitMqCluster. Applying this workaround decreases the security of the notification RabbitMQ cluster.
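The jq filter in the workaround assembles the transport_url from four base64-decoded fields of the RabbitMqCluster default-user secret. The same assembly can be sketched in Python; the secret contents below are illustrative placeholders, not real values:

```python
import base64

# Illustrative secret data as it appears in the Secret's .data map
# (base64-encoded values); real values come from
# `oc get secret <name-rabbitmq-notifications>-default-user -o json`.
data = {
    "username": base64.b64encode(b"default_user_abc").decode(),
    "password": base64.b64encode(b"s3cret").decode(),
    "host": base64.b64encode(b"rabbitmq-notifications.openstack.svc").decode(),
    "port": base64.b64encode(b"5671").decode(),
}

# Decode every field, then build the URL in the same shape the jq filter emits.
decoded = {key: base64.b64decode(value).decode() for key, value in data.items()}
transport_url = (
    f"rabbit://{decoded['username']}:{decoded['password']}"
    f"@{decoded['host']}:{decoded['port']}/?ssl=1"
)
print(transport_url)
```

The resulting value is what you paste into the transport_url option of the customServiceConfig snippet.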
- Subset of image properties not being used in image properties weigher
The image properties weigher does not take the os_version or os_admin_user properties into account when calculating the raw weights, even when these properties are included in the configuration.
- Compute service power management feature disabled by default
The Compute service (nova) power management feature is disabled by default. You can enable it with the following nova-compute configuration:
[libvirt]
cpu_power_management = true
cpu_power_management_strategy = governor
The default cpu_power_management_strategy, cpu_state, is currently unsupported. Restarting nova-compute causes all dedicated PCPUs on that host to be powered down, including those used by instances. If the cpu_state strategy is used, the CPUs of those instances become unpinned.
2.2.4. Hardware Provisioning
Understand the Hardware Provisioning updates introduced in RHOSO 18.0.16 before you deploy the release.
2.2.4.1. Known issues
Understand the known issues that are present in RHOSO 18.0.16 before you deploy the release.
- Error replacing unprovisioned nodes
Red Hat OpenStack Services on OpenShift (RHOSO) uses metal3 for provisioning unprovisioned data plane nodes. An error state occurs when you must replace a node where the bootMacAddress cannot be updated. The result is that the node is stuck in a state where it must be completely removed from the deployment and provisioned as if it were a new node. If the automatedCleaningMode attribute is set to automatedCleaningMode: disabled, this error state does not occur.
Workaround: When provisioning unprovisioned data plane nodes, ensure that the automatedCleaningMode attribute is set to automatedCleaningMode: disabled.
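The workaround above is applied per node on the metal3 BareMetalHost resource. A minimal sketch, with a hypothetical node name:

```yaml
apiVersion: metal3.io/v1alpha1
kind: BareMetalHost
metadata:
  name: edpm-compute-0   # hypothetical node name
spec:
  # Disable automated cleaning so node replacement does not enter the error state.
  automatedCleaningMode: disabled
```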
2.2.5. Networking
Understand the Networking updates introduced in RHOSO 18.0.16 before you deploy the release.
2.2.6. Control plane
Understand the Control Plane updates introduced in RHOSO 18.0.16 before you deploy the release.
2.2.6.1. Known issues
Understand the known issues that are present in RHOSO 18.0.16 before you deploy the release.
- Control plane temporarily unavailable during minor update
During minor updates, the RHOSO control plane temporarily becomes unavailable. API requests might fail with HTTP error codes, such as error 500. Alternatively, the API requests might succeed but the underlying life cycle operation fails. For example, a virtual machine instance created with the openstack server create command during the minor update never reaches the ACTIVE state. The control plane outage is temporary and automatically recovers after the minor update is finished. The control plane outage does not affect already running workloads.
Workaround: To prevent this disruption, see the Red Hat Knowledgebase article How to enable mirrored queues in Red Hat OpenStack Services on OpenShift.
2.2.7. Storage
Understand the Storage updates introduced in RHOSO 18.0.16 before you deploy the release.
2.2.7.1. Known issues
Understand the known issues that are present in RHOSO 18.0.16 before you deploy the release.
- Image service uploads image that exceeds size quota before rejecting further uploads
When you upload an image to the Image service (glance) that is larger than the configured image size limit (image_size_total), the upload succeeds because the Image service does not verify the image size before upload. After the image is uploaded and stored, the Image service determines the image size, which might exceed the quota. The Image service then rejects any subsequent uploads because the quota is exceeded.
2.2.8. Optimize Service
Understand the Optimize service updates introduced in RHOSO 18.0.16 before you deploy the release.
2.2.8.1. Known issues
Understand the known issues that are present in RHOSO 18.0.16 before you deploy the release.
- Workflow Engine does not revert actions for failed Action Plans
In RHOSO 18, the Optimize service (watcher) Engine does not automatically revert failed actions when an Action Plan fails, even when configured to do so by enabling the watcher_applier.rollback_when_actionplan_failed configuration option.
Workaround: Manually revert each failed action in the Action Plan. Alternatively, you can diagnose and fix the root cause of the failure and then run the Audit again to propose a new solution.
- OpenStack Optimize service not supported in multi-RHEL mode
The OpenStack Optimize service is not supported in multi-RHEL (Red Hat Enterprise Linux) mode. Running the Optimize service in multi-RHEL mode leads to service disruption, specifically on Compute nodes that are running on RHEL 9.2 and RHEL 9.4.
2.3. Release information RHOSO 18.0.15
Understand the updates introduced in RHOSO 18.0.15 before you deploy the release.
2.3.1. Advisory list
This release of Red Hat OpenStack Services on OpenShift (RHOSO) includes the following advisories:
- RHBA-2025:23151
- Release of components for RHOSO 18.0.15
- RHBA-2025:23179
- Release of containers for RHOSO 18.0.15
2.3.2. Observability
Understand the Observability updates introduced in RHOSO 18.0.15 before you deploy the release.
2.3.2.1. Technology Previews
Understand the Technology Preview updates introduced in RHOSO 18.0.15 before you deploy the release.
For information on the scope of support for Technology Preview features, see Technology Preview Features - Scope of Support.
- Chargeback and rating for RHOSO clouds with the Rating service (cloudkitty)
RHOSO administrators can evaluate the chargeback and rating capabilities of RHOSO clouds by enabling the Rating service (cloudkitty) in the Telemetry Operator.
The Rating service collects data from the RHOSO Telemetry service to generate cloud resource usage through rating reports. These reports can be consumed by external financial operations or billing systems. The Rating service does not provide a billing interface.
2.3.2.2. Known issues
Understand the known issues introduced in RHOSO 18.0.15 before you deploy the release.
- Incomplete RabbitMQ cluster view
This problem occurs because a single RabbitMQ node is selected in the scrape configuration, rather than the entire cluster. This leads to an incomplete RabbitMQ cluster view for clusters with multiple nodes, providing limited user insight. There is no workaround.
2.3.3. Compute
Understand the Compute updates introduced in RHOSO 18.0.15 before you deploy the release.
2.3.3.1. Known issues
Understand the known issues introduced in RHOSO 18.0.15 before you deploy the release.
- Potential scaling issue in PCI in Placement
If there are many similar child providers defined under the same root provider, the allocation candidate generation algorithm in the Placement service scales poorly with the default configuration of placement.
For example, if a Compute node has 8 or more child resource providers, each providing one resource, and an instance requests 8 or more such resources, each in independent request groups, then without further optimization enabled, the GET allocation_candidates query takes too long to calculate and the scheduling of the instance fails.
In this situation, make the following configuration changes in the OpenStackControlPlane CR:
spec:
  placement:
    template:
      customServiceConfig: |
        [workarounds]
        optimize_for_wide_provider_trees = True
        [placement]
        max_allocation_candidates = 1000
        allocation_candidates_generation_strategy = breadth-first
- Compute service power management feature disabled by default
The Compute service (nova) power management feature is disabled by default. You can enable it with the following nova-compute configuration:
[libvirt]
cpu_power_management = true
cpu_power_management_strategy = governor
The default cpu_power_management_strategy, cpu_state, is currently unsupported. Restarting nova-compute causes all dedicated PCPUs on that host to be powered down, including those used by instances. If the cpu_state strategy is used, the CPUs of those instances become unpinned.
- Subset of image properties not being used in image properties weigher
The image properties weigher does not take the os_version or os_admin_user properties into account when calculating the raw weights, even when these properties are included in the configuration.
2.3.4. Data plane
Understand the Data plane updates introduced in RHOSO 18.0.15 before you deploy the release.
2.3.4.1. Bug fixes
Understand the bug fixes introduced in RHOSO 18.0.15 before you deploy the release.
- The edpm_bootstrap service no longer fails when a data plane node is rebooted
When a data plane node is rebooted, the edpm_bootstrap service no longer fails with the error The conditional check 'not boot_file_entry_present' failed.
2.3.5. Hardware Provisioning
Understand the Hardware Provisioning updates introduced in RHOSO 18.0.15 before you deploy the release.
2.3.5.1. Known issues
Understand the known issues introduced in RHOSO 18.0.15 before you deploy the release.
- Error replacing unprovisioned nodes
Red Hat OpenStack Services on OpenShift (RHOSO) uses metal3 for provisioning unprovisioned data plane nodes. An error state occurs when you must replace a node where the bootMacAddress cannot be updated. The result is that the node is stuck in a state where it must be completely removed from the deployment and provisioned as if it were a new node. If the automatedCleaningMode attribute is set to automatedCleaningMode: disabled, this error state does not occur.
Workaround: When provisioning unprovisioned data plane nodes, ensure that the automatedCleaningMode attribute is set to automatedCleaningMode: disabled.
2.3.6. Networking
Understand the Networking updates introduced in RHOSO 18.0.15 before you deploy the release.
2.3.6.1. Bug fixes Copy linkLink copied to clipboard!
Understand the bug fixes introduced in RHOSO 18.0.15 before you deploy the release.
- Bond interfaces can now be configured in RHOSO 18
Before this update, bond interfaces configured in the OpenStackControlPlane custom resource (CR) for OVN gateways failed. The ovn-operator threw an error similar to the following:
… Path:"" ERRORED: error configuring pod [openstack/ovn-controller-ovs-5rjnm] networking: [openstack/ovn-controller-ovs-5rjnm/68204fb3-394d-4990-80d3-fc2e388c2ee3:datacentre]: error adding container to network "datacentre": failed to move link invalid argument ': StdinData: …
With this update, the problem has been resolved, and the ovn-operator correctly creates bond interfaces on the ovn-controller and ovn-controller-ovs pods.
2.3.6.2. Known issues
Understand the known issues introduced in RHOSO 18.0.15 before you deploy the release.
- Log failure when using neutron-fwaas driver and log API plugins in the same neutron deployment
When deploying RHOSO with both the neutron-fwaas driver and log API plugins, end users might experience log creation failure. To work around this issue, do not use both the neutron-fwaas driver and log API plugins in the same neutron deployment.
2.3.7. Control plane
Understand the Control plane updates introduced in RHOSO 18.0.15 before you deploy the release.
2.3.7.1. Known issues
Understand the known issues introduced in RHOSO 18.0.15 before you deploy the release.
- Control plane temporarily unavailable during minor update
During minor updates, the RHOSO control plane temporarily becomes unavailable. API requests might fail with HTTP error codes, such as error 500. Alternatively, the API requests might succeed but the underlying life cycle operation fails. For example, a virtual machine instance created with the openstack server create command during the minor update never reaches the ACTIVE state. The control plane outage is temporary and recovers automatically after the minor update is finished. The outage does not affect already running workloads.
Workaround: To prevent this disruption, see the Red Hat Knowledgebase article How to enable mirrored queues in Red Hat OpenStack Services on OpenShift.
2.3.8. Storage
Understand the Storage updates introduced in RHOSO 18.0.15 before you deploy the release.
2.3.8.1. Known issues
Understand the known issues introduced in RHOSO 18.0.15 before you deploy the release.
- Image service uploads image that exceeds size quota before rejecting further uploads
When you upload an image to the Image service (glance) that is larger than the configured image size limit (image_size_total), the upload succeeds because the Image service does not verify the image size before the upload. After the image is uploaded and stored, the Image service determines the image size, which might exceed the quota. However, the Image service rejects any subsequent uploads because the quota is exceeded.
- Commands for quota usage failing with errors
In RHOSO 18, the cinder-manage quota check and cinder-manage quota sync commands fail when no project-id argument is specified, preventing accurate management of quota usage in the Block Storage service (cinder).
Workaround: There is currently no workaround for this issue.
2.3.9. Optimize Service
Understand the Optimize Service updates introduced in RHOSO 18.0.15 before you deploy the release.
2.3.9.1. Known issues
Understand the known issues introduced in RHOSO 18.0.15 before you deploy the release.
- Volume migration failures in watcher
Currently, if you attempt to migrate a volume when the volume type has a volume_backend_name parameter value that does not match the destination pool volume_backend_name parameter value, an error is raised.
Workaround: Configure all volume types and cinder pools that participate in volume migrations to use a common value for volume_backend_name.
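As a sketch of the workaround, assuming a shared back-end name of backend-common and a volume type named gold (both hypothetical):

```
# Check the volume_backend_name reported by each destination pool
openstack volume backend pool list

# Align the volume type with the destination pool's back-end name
openstack volume type set --property volume_backend_name=backend-common gold
```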
- Workflow Engine does not revert actions for failed Action Plans
In RHOSO 18, the Optimize service (watcher) Engine does not automatically revert failed actions when an Action Plan fails, even when it is configured to do so by enabling the watcher_applier.rollback_when_actionplan_failed configuration option.
Workaround: Manually revert each failed action in the Action Plan. To avoid the manual rollback, you can diagnose and fix the root cause of the failure and then run the Audit again to propose a new solution.
2.4. Release information RHOSO 18.0.14
Understand the updates introduced in RHOSO 18.0.14 before you deploy the release.
2.4.1. Advisory list
This release of Red Hat OpenStack Services on OpenShift (RHOSO) includes the following advisories:
- RHBA-2025:20964
- Release of components for RHOSO 18.0.14 (Feature Release 4)
- RHSA-2025:21132
- Release of containers for RHOSO 18.0.14
2.4.2. Compute
Understand the Compute updates introduced in RHOSO 18.0.14 before you deploy the release.
2.4.2.1. New features
Understand the new features introduced in RHOSO 18.0.14 before you deploy the release.
- Full support for adoption of Compute hosts with /var/lib/nova/instances on NFS
Previously, this feature was available as a Technology Preview. With this release, you can migrate Compute hosts with /var/lib/nova/instances on NFS as part of the adoption of a RHOSP 17.1 deployment.
2.4.2.2. Bug fixes
Understand the bug fixes introduced in RHOSO 18.0.14 before you deploy the release.
- Enhanced os-vif OVS plugin improves network performance on OVS interfaces
Previously, a bug fix in OVS changed how the kernel's default QoS policy is applied to OVS ports. This fix was applied to 17.1 but not to 18.0. As a result, a regression in the network configuration for OVS interfaces negatively impacted the network performance of OpenStack instances when using kernel OVS. With this update, the os-vif OVS plugin has been enhanced to improve network performance on OVS interfaces by using the linux-noop QoS policy by default. Neutron QoS policies can still override this default. To apply the update, recreate the port by hard rebooting the instance, performing a detach followed by an attach operation, or performing a live migration.
2.4.2.3. Known issues
Understand the known issues introduced in RHOSO 18.0.14 before you deploy the release.
- Compute service power management feature disabled by default
The Compute service (nova) power management feature is disabled by default. You can enable it with the following nova-compute configuration:

[libvirt]
cpu_power_management = true
cpu_power_management_strategy = governor

The default cpu_power_management_strategy, cpu_state, is currently unsupported. Restarting nova-compute causes all dedicated PCPUs on that host to be powered down, including those used by instances. If the cpu_state strategy is used, the CPUs of those instances become unpinned.
- Potential scaling issue in PCI in Placement
If there are many similar child providers defined under the same root provider, the allocation candidate generation algorithm in the Placement service scales poorly with the default configuration of placement.
For example, if a Compute node has 8 or more child resource providers, each providing one resource, and an instance requests 8 or more such resources, each in an independent request group, then without further optimization enabled, the GET /allocation_candidates query takes too long to calculate and the scheduling of the instance fails.
In this situation, make the following configuration changes in the OpenStackControlPlane CR:

spec:
  placement:
    template:
      customServiceConfig: |
        [workarounds]
        optimize_for_wide_provider_trees = True
        [placement]
        max_allocation_candidates = 1000
        allocation_candidates_generation_strategy = breadth-first
- Subset of image properties not being used in image properties weigher
The image properties weigher does not take the os_version or os_admin_user properties into account when calculating the raw weights, even when these properties are included in the configuration.
2.4.3. Networking
Understand the Networking updates introduced in RHOSO 18.0.14 before you deploy the release.
2.4.3.1. New features
Understand the new features introduced in RHOSO 18.0.14 before you deploy the release.
- DNS service (designate) now fully supported
In RHOSO 18.0.14, the DNS service (designate) is now fully supported. For more information, see Configuring DNS as a service.
- Avoiding taskflow interruptions by using flow resumption
In RHOSO 18.0.14, the Load-balancing service (octavia) supports flow resumption, which automatically reassigns the flow to an alternate controller if the original controller shuts down unexpectedly. For more information, see Avoiding taskflow interruptions by using flow resumption.
- Load-balancing service (octavia) support for DCN now fully supported
In RHOSO 18.0.14, creating load balancers in availability zones (AZs) is now fully supported. For more information, see Creating availability zones for load balancing of network traffic at the edge.
- OVN provider driver for Load-balancing service (octavia) now fully supported
In RHOSO 18.0.14, the OVN provider driver for the Load-balancing service is no longer a Technology Preview and is now fully supported. For more information, see Load-balancing service provider drivers.
- Collect OCP gateway, ovn-controller, and ovs-vswitchd metrics on data plane nodes and control plane pods
With this update, the Prometheus OVS/OVN exporter collects additional metrics related to OCP gateways, ovn-controller, and ovs-vswitchd. In addition, OVS/OVN metrics collection is now available on control plane pods. Previously, collection was only available on data plane nodes. For more information, see Network observability.
- Collect northd and RAFT metrics on data plane nodes and control plane pods
With this update, the Prometheus OVS/OVN exporter collects additional metrics related to northd and RAFT. In addition, OVS/OVN metrics collection is now available on control plane pods. Previously, collection was only available on data plane nodes. For more information, see Network observability.
- Dynamic routing with BGP support for IPv6 networks
In RHOSO feature release 4, you can configure your dynamic routing environment using IPv6 networks. For more information, see Preparing RHOCP for BGP networks on RHOSO.
2.4.3.2. Bug fixes
Understand the bug fixes introduced in RHOSO 18.0.14 before you deploy the release.
- Fixes load-balancer member re-enablement behavior
Before this update, disabled load-balancer members remained in an ERROR state even after being re-enabled, causing incorrect load balancer status reporting and potential traffic distribution issues. This problem was caused by the health monitor incorrectly reporting the disabled member status as ERROR instead of OFFLINE. This incorrect status prevented the disabled member from transitioning back to ONLINE when it was re-enabled. With this update, the health monitor correctly sets the disabled member status to OFFLINE and the problem has been resolved. Disabled members are now correctly marked as OFFLINE and automatically transition back to ONLINE when they are re-enabled.
2.4.3.3. Technology Previews
Understand the Technology Previews introduced in RHOSO 18.0.14 before you deploy the release.
For information on the scope of support for Technology Preview features, see Technology Preview Features - Scope of Support.
- TAP-as-a-Service (TAPaaS) [Technology Preview]
In this release, you can test a Technology Preview of TAPaaS.
TAPaaS provides an OpenStack-integrated framework for scalable port mirroring in a multi-tenant shared environment while maintaining tenant isolation boundaries in OpenStack deployments. TAPaaS is a Neutron extension that enables on-demand traffic mirroring for tenant or administrator purposes. It allows users to create TAP services that mirror traffic from one or more Neutron ports and redirect it to a TAP destination, often a virtual Network Packet Broker (NPB), intrusion detection system (IDS), or traffic analyzer instance.
- Firewall-as-a-Service (Technology Preview)
In RHOSO 18.0.14 (Feature Release 4), you can test a Technology Preview of Firewall-as-a-Service (FWaaS). Do not use Technology Preview features in production environments.
As more OpenStack-based clouds are adopted for multi-tenant applications, security remains a top priority. Network-level isolation and traffic control become critical, especially in public or hybrid cloud environments.
Although security groups provide sufficient capability to specify security policy at a VM instance level or VM port level, they do not support specifying policy at a network or router port level.
The FWaaS project provides this additional capability to specify security policies at the router port level. It enables specifying multiple policy rules within the same policy group and also supports applying L3 or L2 policy at the router port level.
FWaaS also supports third-party NGFW plugins for integration with NGFW vendor solutions, enabling firewall capabilities beyond the ACL level, such as DPI, malware protection, IPS, and IDP.
To enable the FWaaS service plugin, add firewall_v2 to service_plugins in your control plane custom resource (CR) file as shown in the following example. The example includes other services for context; these are not required for enabling FWaaS. To configure the Technology Preview of FWaaS, add the following settings in your control plane CR:

customServiceConfig: |
  [DEFAULT]
  service_plugins = qos,ovn-router,trunk,segments,port_forwarding,log,firewall_v2
  [service_providers]
  service_provider = FIREWALL_V2:fwaas_db:neutron_fwaas.services.firewall.service_drivers.ovn.firewall_l3_driver.OVNFwaasDriver:default

For FWaaS usage examples, see "Configure Firewall-as-a-Service v2" in Firewall-as-a-Service (FWaaS) v2 scenario [1].
[1] https://docs.openstack.org/neutron/latest/admin/fwaas-v2-scenario.html
- Support Data plane adoption for BGP control plane (Technology Preview)
With this technology preview, you can upgrade in-place an OpenStack deployment from RHOSP 17.1 to RHOSO 18 by deploying a parallel control plane based on RHOSO 18 and then pointing the compute nodes to the new control plane.
2.4.4. Network Functions Virtualization
Understand the Network Functions Virtualization updates introduced in RHOSO 18.0.14 before you deploy the release.
2.4.4.1. New features
Understand the new features introduced in RHOSO 18.0.14 before you deploy the release.
- Cleanup of obsolete network configurations
A NIC on a data plane node can be used for bare-metal provisioning and other initial configuration tasks. You can then use os-net-config to reconfigure the NIC for operational use in your deployment. In these cases, obsolete remnants of the initial configuration might cause IP address conflicts. To avoid such conflicts, you can use remove_config to clean up the obsolete configuration files from the initial configuration. For more information, see Cleaning up obsolete host network configurations.
2.4.4.2. Bug fixes
Understand the bug fixes introduced in RHOSO 18.0.14 before you deploy the release.
- The default os-net-config provider is nmstate
In previous RHOSO releases, Red Hat did not support nmstate as the os-net-config provider. It is now supported, and the default configuration sets the os-net-config provider to nmstate.
The parameter is edpm_network_config_nmstate. The default value is true. If a specific limitation of the nmstate provider requires you to use the ifcfg provider, change edpm_network_config_nmstate to false.
For more information, see "The nmstate provider for os-net-config" in the guide Planning your deployment.
2.4.5. Control plane
Understand the Control plane updates introduced in RHOSO 18.0.14 before you deploy the release.
2.4.5.1. New features
Understand the new features introduced in RHOSO 18.0.14 before you deploy the release.
- Prevent minor update from proceeding when the custom container images have not been updated
This enhancement ensures correct version tracking and validation during minor updates by preventing the side effects and inconsistencies that result from custom container images not being updated when the target version is updated.
With this update, when a minor update is initiated by setting the targetVersion, the minor update is halted if the customImages version for the associated custom container images is not also updated. Users have the option to force the update if necessary.
- Update rabbitmq-cluster-operator to v2.16.0
This enhancement updates the RabbitMQ service Operator, rabbitmq-cluster-operator, to version 2.16.0. With this update, the RabbitMQ clusters are restarted. If you want to control when the RabbitMQ clusters are updated, you can pause reconciliation before performing the update, then resume reconciliation when it is safe to update the RabbitMQ cluster.
- The OpenStack Operator supports customization of the controller manager for service Operators
The OpenStack Operator initialization resource creates each service Operator with default CPU and memory resource limits, and default tolerations. This enhancement adds the ability to customize the configuration of the resource limits and tolerations of each service Operator.
2.4.5.2. Known issues
Understand the known issues introduced in RHOSO 18.0.14 before you deploy the release.
- Control plane temporarily unavailable during minor update
During minor updates, the RHOSO control plane temporarily becomes unavailable. API requests might fail with HTTP error codes, such as error 500. Alternatively, the API requests might succeed but the underlying life cycle operation fails. For example, a virtual machine instance created with the openstack server create command during the minor update never reaches the ACTIVE state. The control plane outage is temporary and recovers automatically after the minor update is finished. The outage does not affect already running workloads.
Workaround: To prevent this disruption, see the Red Hat Knowledgebase article How to enable mirrored queues in Red Hat OpenStack Services on OpenShift.
2.4.6. High availability
Understand the High availability updates introduced in RHOSO 18.0.14 before you deploy the release.
2.4.6.1. New features
Understand the new features introduced in RHOSO 18.0.14 before you deploy the release.
- Configuring authentication for the memcached service
Starting with RHOSO 18.0.14 (Feature Release 4), you can configure the cache maintained by the memcached service to require authentication to increase the security of your cloud by restricting access to the cached data of your cloud. For more information, see Configuring authentication for the memcached service in Customizing the Red Hat OpenStack Services on OpenShift deployment.
- Adopt RHOSP 17.1 Instance HA environments to RHOSO
Starting with RHOSO 18.0.14 (Feature Release 4), you can adopt Red Hat OpenStack Platform (RHOSP) 17.1 environments with Instance HA enabled to RHOSO 18.0. For more information about adopting Instance HA environments, see Preparing an Instance HA deployment for adoption and Enabling the high availability for Compute instances service in Adopting a Red Hat OpenStack Platform 17.1 deployment.
- Configuring quorum queues for RabbitMQ in new deployments
Starting with RHOSO 18.0.14 (Feature Release 4), RabbitMQ supports the use of quorum queues for new RHOSO deployments. A quorum queue is a durable, replicated queue based on the Raft consensus algorithm, providing increased data safety and high availability. For more information, see step 5 of Creating the control plane in the Deploying Red Hat OpenStack Services on OpenShift guide.
2.4.7. Security and hardening
Understand the Security and hardening updates introduced in RHOSO 18.0.14 before you deploy the release.
2.4.7.1. New features
Understand the new features introduced in RHOSO 18.0.14 before you deploy the release.
- Multi-realm federation support
Starting with RHOSO 18.0.14 (Feature Release 4), you can configure RHOSO to allow users to log in to the OpenStack Dashboard by using single sign-on (SSO) and select from one of several external Identity Providers (IdPs). For more information, see Configuring multi-realm federated authentication in RHOSO in Configuring security services.
2.4.8. Storage
Understand the Storage updates introduced in RHOSO 18.0.14 before you deploy the release.
2.4.8.1. New features
Understand the new features introduced in RHOSO 18.0.14 before you deploy the release.
- Image service support for HTTPd customization
With this update, the Image service (glance) now supports the customization of HTTPd configuration files. You can use the extraMounts parameter to include and load a custom httpd.conf file.
- Shared File Systems service support for HTTPd customization
With this update, the Shared File Systems service (manila) now supports the customization of HTTPd configuration files. You can use the extraMounts parameter to include and load a custom httpd.conf file.
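A minimal sketch of the extraMounts pattern for the Image service, assuming a ConfigMap named glance-httpd-conf that carries the custom httpd.conf; the names, mount path, and propagation target are illustrative, and the same pattern applies to the Shared File Systems service:

```yaml
spec:
  extraMounts:
    - name: glance-httpd-override        # hypothetical name
      region: r1
      extraVol:
        - propagation:
            - Glance                     # propagate only to Glance pods
          volumes:
            - name: glance-httpd
              configMap:
                name: glance-httpd-conf  # hypothetical ConfigMap with httpd.conf
          mounts:
            - name: glance-httpd
              mountPath: /etc/httpd/conf/httpd.conf
              subPath: httpd.conf
              readOnly: true
```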
- Image service support for per-tenant quotas
With this update, the Image service (glance) supports per-tenant quotas for improved resource management in private clouds.
- Shared File Systems service (manila) with CephFS through NFS adoption is fully supported
Adopting the Shared File Systems service (manila) with CephFS through NFS is now generally available. Previously, these adoption instructions were provided as a Technology Preview.
This enhancement allows you to migrate your existing Red Hat OpenStack Platform 17.1 deployment that uses CephFS through NFS as a back end for the Shared File Systems Service to RHOSO 18.0 with full support.
The adoption process includes:
- Creating a new clustered NFS Ganesha service managed directly on the Red Hat Ceph cluster
- Migrating export locations from the standalone Pacemaker-controlled ceph-nfs service to the new clustered service
- Decommissioning the previous standalone NFS service
For more information, see Changes to CephFS through NFS and Creating an NFS Ganesha cluster in Adopting a Red Hat OpenStack Platform 17.1 deployment.
- Full support for adopting environments that use iSCSI back ends for the Block Storage service (cinder)
Starting with RHOSO 18.0.14 (Feature Release 4), the procedure to adopt RHOSO 18.0 is fully supported for Red Hat OpenStack Platform 17.1 environments that use iSCSI as a back end for the Block Storage service (cinder). For more information, see Adopting the Block Storage service in Adopting a Red Hat OpenStack Platform 17.1 deployment.
- Image service Cache API support
With this update, the Image service (glance) adds Cache API support. The Image service Cache API provides centralized management of cache nodes, eliminating the requirement for individual SSH connections. Key functionalities include listing cached images of a specific node, queuing images for caching, and deleting cached images or images queued for caching. With this update, administrators can call the API and integrate API calls into workflows and automated tasks.
- Full support for adopting environments that use Block Storage service (cinder) back ends for the Image service (glance)
Starting with RHOSO 18.0.14 (Feature Release 4), RHOSO 18.0 adoption is fully supported for Red Hat OpenStack Platform 17.1 environments that use Block Storage service (cinder) as a back end for the Image service (glance). For more information, see Adopting the Image service that is deployed with a Block Storage service back end in Adopting a Red Hat OpenStack Platform 17.1 deployment.
- Deployment of Object Storage service on data plane nodes
With this update, you can deploy the Object Storage service (swift) on external data plane nodes, improving scalability and performance for large storage clusters. By enabling DNS forwarding and creating an OpenStackDataPlaneNodeSet CR with specified properties, including disks for storage, you can customize the service configuration through additional ConfigMap or Secret CRs in the OpenStackDataPlaneService CR.
- Shared File Systems service now supports transferring shares between tenants
The Shared File Systems service (manila) now supports transferring shares across projects. To ensure security and non-repudiation, a one-time transfer secret key is generated when you initiate a transfer. The key must be conveyed out-of-band so that a user in the recipient project can complete the transfer.
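A sketch of the transfer flow with the OpenStack client, assuming share transfer commands are available in your client version; the share name and placeholders are illustrative:

```
# In the source project: create the transfer and note the transfer id and auth key
openstack share transfer create my-share

# Convey the auth key out-of-band, then in the recipient project:
openstack share transfer accept <transfer-id> <auth-key>
```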
- Integration of Shared File Systems service with Dell PowerScale
The Shared File Systems service (manila) now includes integration with Dell PowerScale storage systems (formerly Dell Isilon). The driver supports provisioning and managing the lifecycle of NFS and CIFS shared file systems, controlling client access to them, resizing them, creating snapshots, and cloning these snapshots into new shared file systems.
- Notifications for events in the Block Storage service and Shared File Systems service
With this update, you can enable notifications in the Block Storage service (cinder) and Shared File Systems service (manila) by using the notificationsBusInstance parameter, allowing integration with either the existing RabbitMQ instance or a dedicated RabbitMQ instance.
2.4.8.2. Bug fixes
Understand the bug fixes introduced in RHOSO 18.0.14 before you deploy the release.
- Creation of snapshots for Block Storage service volumes succeeds in ONTAP FlexGroup pools
Before this update, the creation of snapshots for Block Storage service (cinder) volumes did not succeed in ONTAP FlexGroup pools. With this update, the creation of volume snapshots succeeds, ensuring data backup for end users.
- Image volume cache deletes PowerFlex volumes when the snapshot limit is reached
Before this update, when a PowerFlex volume in the image volume cache reached its snapshot limit, the Block Storage service (cinder) replaced the cache entry. However, the volume itself was not deleted, resulting in unusable volumes that consumed quota. Now, when a PowerFlex volume in the image volume cache reaches its snapshot limit, the Block Storage service replaces the cache entry and deletes the PowerFlex volume for the original cache entry.
- Improved clone deletion management
Before this update, NetApp ONTAP storage systems on version 9.13.1 or later rejected clone deletion requests for FlexVol snapshots and FlexClones if they were still processing. With this update, the NetApp ONTAP driver in the Shared File Systems service (manila) manages clone splitting operations before share or snapshot deletion, preventing deletion failures. Users can now delete Shared File Systems service snapshots and shares created from snapshots any time after creation without encountering clone deletion failures.
2.4.8.3. Known issues
Understand the known issues introduced in RHOSO 18.0.14 before you deploy the release.
- Commands for quota usage failing with errors
In RHOSO 18, the cinder-manage quota check and cinder-manage quota sync commands fail when no project-id argument is specified, preventing accurate management of quota usage in the Block Storage service (cinder).
Workaround: There is currently no workaround for this issue.
- Image service uploads image that exceeds size quota before rejecting further uploads
When you upload an image to the Image service (glance) that is larger than the configured image size limit (image_size_total), the upload succeeds because the Image service does not verify the image size before the upload. After the image is uploaded and stored, the Image service determines the image size, which might exceed the quota. However, the Image service rejects any subsequent uploads because the quota is exceeded.
2.4.9. Dashboard
Understand the Dashboard updates introduced in RHOSO 18.0.14 before you deploy the release.
2.4.9.1. New features
Understand the new features introduced in RHOSO 18.0.14 before you deploy the release.
- Customizing the Dashboard httpd timeout value
With this update, you can customize the httpd timeout value of your Dashboard by using extraMounts in the horizon-operator to load an httpd.conf file. For more information, see https://github.com/openstack-k8s-operators/horizon-operator/blob/main/config/samples/httpd-overrides/README.md
- Direct upload method and CORS enabled by default for image uploads to the Dashboard service
With this update, the direct upload method is the default upload method in the Dashboard service (horizon). The direct upload method requires Cross-Origin Resource Sharing (CORS) to be enabled in the Image service (glance), and CORS is now enabled by default in the glance-operator.
As a result, you can use the web UI in the Dashboard service to upload images that are greater than 1 GiB in size. The direct upload method sends the image directly from the web browser to the Image service instead of storing the image in the Dashboard service first and then sending it to the Image service.
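CORS handling in the Image service comes from oslo.middleware; a hedged sketch of the kind of configuration the glance-operator now enables by default (the origin value is illustrative, not the operator's actual default):

```
[cors]
allowed_origin = https://horizon.example.com
allow_methods = GET,POST,PUT,DELETE,OPTIONS
```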
2.4.10. Optimize Service
Understand the Optimize Service updates introduced in RHOSO 18.0.14 before you deploy the release.
2.4.10.1. New features
Understand the new features introduced in RHOSO 18.0.14 before you deploy the release.
- The Optimize service (watcher) is now integrated into the OpenStack Operator
Before this update, the Optimize service (watcher) was provided as a Technology Preview that was installed with its own Operator. With this update, the Optimize service is integrated into the OpenStack Operator and is fully supported.
If you installed the Optimize service as a Technology Preview, you must remove the Optimize service Operator and any custom resource of kind Watcher before updating your RHOSO deployment to 18.0.14 or later.
- Improve Optimize service accuracy with OpenStack services notifications
With this update, you can now enable the Optimize service (watcher) to receive notification updates from other OpenStack services, which improves Optimize service accuracy.
- Start and end time fields available for CONTINUOUS audits in the OpenStack Dashboard (horizon)
With this update, you can now set the start and end times when creating CONTINUOUS audits in the OpenStack Dashboard (horizon).
- Parameters field included in the OpenStack Dashboard (horizon)
With this update, you can now include parameters as JSON values when creating an Audit in the OpenStack Dashboard.
- Filtered strategy selection in the OpenStack Dashboard (horizon)
With this update, the OpenStack Dashboard limits visible strategies to those applicable to the selected goal.
2.4.10.2. Bug fixes
Understand the bug fixes introduced in RHOSO 18.0.14 before you deploy the release.
- Live migration fails after migrating the volume that an instance has attached
Before this update, legacy code in the Optimize service (watcher) related to the volume move operation could result in two instances having access to the same host block device, which could lead to corruption or incorrect data access between tenants after a volume was migrated.
This update resolves the data corruption and data access issues by removing legacy code and delegating volume migrations to the Block storage service (cinder).
2.4.10.3. Known issues
Understand the known issues introduced in RHOSO 18.0.14 before you deploy the release.
- Workflow Engine does not revert actions for failed Action Plans
In RHOSO 18, the Optimize service (watcher) Engine does not automatically revert failed actions when an Action Plan fails, even when configured to do so by enabling the `watcher_applier.rollback_when_actionplan_failed` configuration option.
Workaround: Manually revert each failed action in the Action Plan. Alternatively, you can diagnose and fix the root cause of the failure and then run the Audit again to propose a new solution.
- Volume migration failures in watcher
Currently, if you attempt to migrate a volume when a volume type has a
volume_backend_nameparameter value that does not match the destination poolvolume_backend_nameparameter value, an error is raised.Workaround: Configure all volume types and cinder pools that will participate in volume migrations to a common value for the
volume_backend_name.
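One possible shape for the workaround, sketched as a Block Storage service back-end fragment in the `OpenStackControlPlane` CR; the back-end name `ceph` and the value `shared_backend` are illustrative assumptions, not values from this release note:

```yaml
# Sketch only: align volume_backend_name across the cinder back end
# and every volume type that participates in migrations.
# The back-end name "ceph" and value "shared_backend" are hypothetical.
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack
spec:
  cinder:
    template:
      cinderVolumes:
        ceph:
          customServiceConfig: |
            [ceph]
            volume_backend_name = shared_backend
```

Each participating volume type would then carry the same value, for example with `openstack volume type set --property volume_backend_name=shared_backend <type>`.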
2.4.11. Migration
Understand the Migration updates introduced in RHOSO 18.0.14 before you deploy the release.
2.4.11.1. New features
Understand the new features introduced in RHOSO 18.0.14 before you deploy the release.
- Migrate VMs with VMware Migration Toolkit
In RHOSO 18.0.14 (Feature Release 4) and RHOSP 17.1, you can now migrate workloads from VMware to OpenStack using the VMware Migration Toolkit.
2.4.12. Hardware Provisioning
Understand the Hardware Provisioning updates introduced in RHOSO 18.0.14 before you deploy the release.
2.4.12.1. Known issues
Understand the known issues introduced in RHOSO 18.0.14 before you deploy the release.
- Error replacing unprovisioned nodes
Red Hat OpenStack Services on OpenShift (RHOSO) uses `metal3` for provisioning unprovisioned data plane nodes. An error state occurs when you must replace a node where the `bootMacAddress` cannot be updated. The result is that the node is stuck in a state where it must be completely removed from the deployment and provisioned as if it is a new node. If the `automatedCleaningMode` attribute is set to `automatedCleaningMode: disabled`, this error state does not occur.
Workaround: When provisioning unprovisioned data plane nodes, ensure that the `automatedCleaningMode` attribute is set to `automatedCleaningMode: disabled`.
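As a sketch, the workaround maps to the `BareMetalHost` resource that `metal3` consumes; the name, namespace, and MAC address below are placeholders, not values from this release note:

```yaml
# Sketch: disable automated cleaning on the BareMetalHost that backs
# the data plane node. Names and addresses are hypothetical.
apiVersion: metal3.io/v1alpha1
kind: BareMetalHost
metadata:
  name: edpm-compute-0
  namespace: openstack
spec:
  automatedCleaningMode: disabled
  online: true
  bootMACAddress: "00:11:22:33:44:55"
```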
2.5. Release information RHOSO 18.0.13
Understand the updates introduced in RHOSO 18.0.13 before you deploy the release.
2.5.1. Advisory list
This release of Red Hat OpenStack Services on OpenShift (RHOSO) includes the following advisories:
- RHBA-2025:17561
- Release of components for RHOSO 18.0.13
- RHBA-2025:17990
- Release of containers for RHOSO 18.0.13
2.5.2. Compute
Understand the Compute updates introduced in RHOSO 18.0.13 before you deploy the release.
2.5.2.1. Known issues
Understand the known issues introduced in RHOSO 18.0.13 before you deploy the release.
- Compute service power management feature disabled by default
The Compute service (nova) power management feature is disabled by default. You can enable it with the following `nova-compute` configuration:

[libvirt]
cpu_power_management = true
cpu_power_management_strategy = governor

The default `cpu_power_management_strategy`, `cpu_state`, is currently unsupported. Restarting `nova-compute` causes all dedicated PCPUs on that host to be powered down, including those used by instances. If the `cpu_state` strategy is used, the CPUs of those instances become unpinned.
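If you do enable the feature, one way the configuration might be delivered on a RHOSO data plane is through a ConfigMap consumed by the nova data plane service; the ConfigMap name and key below are assumptions, not part of this release note:

```yaml
# Sketch: a ConfigMap carrying the power management options for
# nova-compute. The ConfigMap and key names are hypothetical.
apiVersion: v1
kind: ConfigMap
metadata:
  name: nova-power-mgmt-config
  namespace: openstack
data:
  25-nova-power-mgmt.conf: |
    [libvirt]
    cpu_power_management = true
    cpu_power_management_strategy = governor
```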
2.5.3. Control plane
Understand the Control plane updates introduced in RHOSO 18.0.13 before you deploy the release.
2.5.3.1. Known issues
Understand the known issues introduced in RHOSO 18.0.13 before you deploy the release.
- Control plane temporarily unavailable during minor update
During minor updates, the RHOSO control plane temporarily becomes unavailable. API requests might fail with HTTP error codes, such as error 500. Alternatively, the API requests might succeed but the underlying life cycle operation fails. For example, a virtual machine instance created with the `openstack server create` command during the minor update never reaches the `ACTIVE` state. The control plane outage is temporary and automatically recovers after the minor update is finished. The control plane outage does not affect the already running workload.
Workaround: To prevent this disruption, see the Red Hat Knowledgebase article How to enable mirrored queues in Red Hat OpenStack Services on OpenShift.
2.5.4. Storage
Understand the Storage updates introduced in RHOSO 18.0.13 before you deploy the release.
2.5.4.1. Known issues
Understand the known issues introduced in RHOSO 18.0.13 before you deploy the release.
- IPv6 export locations cannot be used with Shared File Systems service shares that have a CephFS-NFS back end
An issue with Red Hat Ceph Storage prevents the use of IPv6 export locations with Shared File Systems service (manila) shares that have a CephFS-NFS back end. Workaround: Currently, there is no workaround.
2.5.5. Upgrades and updates
Understand the upgrades and updates introduced in RHOSO 18.0.13 before you deploy the release.
2.5.5.1. Deprecated functionality
Understand the deprecated functionality introduced in RHOSO 18.0.13 before you deploy the release.
Deprecated functionality will likely not be supported in future major releases of this product and is not recommended for new deployments.
- Deprecated the `update` service
RHOSO 18.0.10 (Feature Release 3) introduced a new update workflow that splits the OpenStack-related package updates and system-related package updates. With this new feature, the data plane `update` service is deprecated in favor of the split `update-services` and `update-system` services. The `update` service will be removed in a future release. Customers should transition to using the split update feature. For more information about the split update feature, see the Red Hat Knowledgebase article Performing a minor update of OpenStack service containers and RHEL RPMs separately.
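As a hedged sketch, the split update flow can be expressed as two `OpenStackDataPlaneDeployment` resources that override the service list; the node set and deployment names here are assumptions:

```yaml
# Sketch: run the OpenStack container update first, then the
# system (RHEL RPM) update as a separate deployment.
# Node set and deployment names are hypothetical.
apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneDeployment
metadata:
  name: edpm-update-services
spec:
  nodeSets:
    - openstack-edpm
  servicesOverride:
    - update-services
---
apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneDeployment
metadata:
  name: edpm-update-system
spec:
  nodeSets:
    - openstack-edpm
  servicesOverride:
    - update-system
```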
2.5.6. Optimize Service
Understand the Optimize Service updates introduced in RHOSO 18.0.13 before you deploy the release.
2.5.6.1. Known issues
Understand the known issues introduced in RHOSO 18.0.13 before you deploy the release.
- Volume migration failures in watcher
Currently, if you attempt to migrate a volume when a volume type has a `volume_backend_name` parameter value that does not match the destination pool `volume_backend_name` parameter value, an error is raised.
Workaround: Configure all volume types and cinder pools that will participate in volume migrations with a common value for the `volume_backend_name` parameter.
- Workflow Engine does not revert actions for failed Action Plans
In RHOSO 18, the Optimize service (watcher) Engine does not automatically revert failed actions when an Action Plan fails, even when configured to do so by enabling the `watcher_applier.rollback_when_actionplan_failed` configuration option.
Workaround: Manually revert each failed action in the Action Plan. Alternatively, you can diagnose and fix the root cause of the failure and then run the Audit again to propose a new solution.
2.6. Release information RHOSO 18.0.12
Understand the updates introduced in RHOSO 18.0.12 before you deploy the release.
2.6.1. Advisory list
This release of Red Hat OpenStack Services on OpenShift (RHOSO) includes the following advisories:
- RHBA-2025:15803
- Release of containers for RHOSO 18.0.12
- RHBA-2025:15804
- Control plane Operators for RHOSO 18.0.12
- RHBA-2025:15805
- Release of components for RHOSO 18.0.12
- RHBA-2025:15806
- Data plane Operators for RHOSO 18.0.12
- RHBA-2025:16120
- Containers bug fix advisory for RHOSO 18.0.12
2.6.2. Control plane
Understand the Control plane updates introduced in RHOSO 18.0.12 before you deploy the release.
2.6.2.1. Known issues
Understand the known issues introduced in RHOSO 18.0.12 before you deploy the release.
- Control plane temporarily unavailable during minor update
During minor updates, the RHOSO control plane temporarily becomes unavailable. API requests might fail with HTTP error codes, such as error 500. Alternatively, the API requests might succeed but the underlying life cycle operation fails. For example, a virtual machine instance created with the `openstack server create` command during the minor update never reaches the `ACTIVE` state. The control plane outage is temporary and automatically recovers after the minor update is finished. The control plane outage does not affect the already running workload.
Workaround: To prevent this disruption, see the Red Hat Knowledgebase article How to enable mirrored queues in Red Hat OpenStack Services on OpenShift.
2.6.3. Storage
Understand the Storage updates introduced in RHOSO 18.0.12 before you deploy the release.
2.6.3.1. Known issues
Understand the known issues introduced in RHOSO 18.0.12 before you deploy the release.
- IPv6 export locations cannot be used with Shared File Systems service shares that have a CephFS-NFS back end
An issue with Red Hat Ceph Storage prevents the use of IPv6 export locations with Shared File Systems service (manila) shares that have a CephFS-NFS back end. Workaround: Currently, there is no workaround.
2.7. Release information RHOSO 18.0.11
Understand the updates introduced in RHOSO 18.0.11 before you deploy the release.
2.7.1. Advisory list
This release of Red Hat OpenStack Services on OpenShift (RHOSO) includes the following advisories:
- RHBA-2025:14763
- Control plane Operators for RHOSO 18.0.11
- RHBA-2025:14747
- Data plane Operators for RHOSO 18.0.11
- RHBA-2025:14762
- Release of containers for RHOSO 18.0.11
- RHBA-2025:14745
- Release of components for RHOSO 18.0.11
2.7.2. Compute
Understand the Compute updates introduced in RHOSO 18.0.11 before you deploy the release.
2.7.2.1. Bug fixes
Understand the bug fixes introduced in RHOSO 18.0.11 before you deploy the release.
- Enhanced `os-vif` OVS plugin improves network performance on OVS interfaces
Previously, a bug fix was made to OVS that changed the application of the kernel's default QoS policy to OVS ports. This fix was applied to 17.1 but not to 18.0. As a result, a regression in the network configuration for OVS interfaces negatively impacted the network performance of OpenStack instances when using kernel OVS. With this update, the `os-vif` OVS plugin has been enhanced to improve network performance on OVS interfaces by using the `linux-noop` QoS policy by default. This default can still be overridden by neutron QoS policies. To apply the update, recreate the port by performing a hard reboot of the instance, a detach followed by an attach operation, or a live migration.
- Compute instance with ISO image boots correctly
Previously, a Compute instance with an ISO image booted via a block device instead of a CD-ROM. This prevented the RHEL Kickstart installation from initiating from the CD-ROM. With this bug fix, Compute correctly boots the instance from the ISO image via CD-ROM.
2.7.2.2. Known issues
Understand the known issues introduced in RHOSO 18.0.11 before you deploy the release.
- Compute service power management feature disabled by default
The Compute service (nova) power management feature is disabled by default. You can enable it with the following `nova-compute` configuration:

[libvirt]
cpu_power_management = true
cpu_power_management_strategy = governor

The default `cpu_power_management_strategy`, `cpu_state`, is currently unsupported. Restarting `nova-compute` causes all dedicated PCPUs on that host to be powered down, including those used by instances. If the `cpu_state` strategy is used, the CPUs of those instances become unpinned.
2.7.3. Control plane
Understand the Control plane updates introduced in RHOSO 18.0.11 before you deploy the release.
2.7.3.1. Bug fixes
Understand the bug fixes introduced in RHOSO 18.0.11 before you deploy the release.
- OpenStack Operator checks namespace field for upgrade of Operators
This update fixes an issue where upgrades from OpenStack Operator version 1.0.6 or earlier sometimes failed when OpenShift Lifecycle Manager (OLM) Operator resources contained data with no namespace field defined.
With this update, the OpenStack Operator checks that the namespace field is implemented for Operator references in the OpenStack controller and the OpenStack Service Operators upgrade is not affected.
2.7.3.2. Known issues
Understand the known issues introduced in RHOSO 18.0.11 before you deploy the release.
- Control plane temporarily unavailable during minor update
During minor updates, the RHOSO control plane temporarily becomes unavailable. API requests might fail with HTTP error codes, such as error 500. Alternatively, the API requests might succeed but the underlying life cycle operation fails. For example, a virtual machine instance created with the `openstack server create` command during the minor update never reaches the `ACTIVE` state. The control plane outage is temporary and automatically recovers after the minor update is finished. The control plane outage does not affect the already running workload.
Workaround: To prevent this disruption, see the Red Hat Knowledgebase article How to enable mirrored queues in Red Hat OpenStack Services on OpenShift.
2.7.4. Storage
Understand the Storage updates introduced in RHOSO 18.0.11 before you deploy the release.
2.7.4.1. Bug fixes
Understand the bug fixes introduced in RHOSO 18.0.11 before you deploy the release.
- Multipart image upload with S3 back end
Before this update, you had to use the image import workflow to upload multipart images if you had an S3 back end for the Image service (glance). With this update, you can set `s3_store_large_object_size` to `0` to force multipart upload when you create an image in the S3 back end from a Block Storage service (cinder) volume.
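A minimal sketch of that setting, assuming an S3 back end section named `s3` in the Image service part of the control plane CR (the section name is an illustrative assumption):

```yaml
# Sketch: force multipart upload for the glance S3 back end.
# The back-end section name "s3" is hypothetical.
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack
spec:
  glance:
    template:
      customServiceConfig: |
        [s3]
        s3_store_large_object_size = 0
```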
2.7.4.2. Known issues
Understand the known issues introduced in RHOSO 18.0.11 before you deploy the release.
- IPv6 export locations cannot be used with Shared File Systems service shares that have a CephFS-NFS back end
An issue with Red Hat Ceph Storage prevents the use of IPv6 export locations with Shared File Systems service (manila) shares that have a CephFS-NFS back end. Workaround: Currently, there is no workaround.
2.7.5. Optimize Service
Understand the Optimize Service updates introduced in RHOSO 18.0.11 before you deploy the release.
2.7.5.1. Known issues
Understand the known issues introduced in RHOSO 18.0.11 before you deploy the release.
- Volume migration operations are technology preview
Volume migration operations that are a part of the zone migration strategy are provided as a technology preview only, and should not be used in production.
2.8. Release information RHOSO 18.0.10
Understand the updates introduced in RHOSO 18.0.10 before you deploy the release.
2.8.1. Advisory list
This release of Red Hat OpenStack Services on OpenShift (RHOSO) includes the following advisories:
- RHBA-2025:12089
- Release of components for RHOSO 18.0.10 (Feature Release 3)
- RHBA-2025:12090
- Release of containers for RHOSO 18.0.10 (Feature Release 3)
- RHSA-2025:12091
- Security release of Control plane Operators for RHOSO 18.0.10 (Feature Release 3)
- RHBA-2025:12092
- Data plane Operators for RHOSO 18.0.10 (Feature Release 3)
2.8.2. Observability
Understand the Observability updates introduced in RHOSO 18.0.10 before you deploy the release.
2.8.2.1. New features
Understand the new features introduced in RHOSO 18.0.10 before you deploy the release.
- The Telemetry service collects telemetry related to RHOSO database services
This enhancement implements a new exporter that enables observability of the MariaDB databases that run within RHOSO.
- Compute metrics are available to Prometheus for telemetry data collection and storage
Telemetry data for Compute nodes is now collected directly from Prometheus rather than transiting the internal message bus, enabling storage of Compute node telemetry in the telemetry storage system.
Note: You cannot collect Compute node metrics from Prometheus in an IPv6 environment.
2.8.2.2. Bug fixes
Understand the bug fixes introduced in RHOSO 18.0.10 before you deploy the release.
- Collection of telemetry data no longer disrupted by DNS search domains
This update fixes an issue where DNS search domains (`dns_search_domains`) shorter than 8 characters that appeared alphabetically before the control plane DNS domain caused disruption in the collection of telemetry data.
2.8.3. Compute
Understand the Compute updates introduced in RHOSO 18.0.10 before you deploy the release.
2.8.3.1. New features
Understand the new features introduced in RHOSO 18.0.10 before you deploy the release.
- Enablement of Nova notifications in RHOSO
This update adds support for configuring a dedicated notifications message bus in the `nova-operator`. By setting the `notificationsBusInstance` in the Nova custom resource (CR), operators can specify an external RabbitMQ for emitting versioned and unversioned notifications. The `[notification]` and `[oslo_messaging_notifications]` sections are rendered in `nova.conf`. When `novaEnabledNotification` is set and a `transport_url` is provided via an OpenShift secret, `nova-compute` emits structured notifications to external systems, improving integration and observability in RHOSO environments.
To enable Nova notifications in RHOSO, update the `OpenStackControlPlane` CR to add a new RabbitMQ instance and reference it in the Nova CR by using `notificationsBusInstance`. The `nova-operator` configures the control plane services automatically. For data plane updates, redeploy the data plane nodes.
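The steps above might look like the following control plane fragment; the additional RabbitMQ instance name and replica counts are assumptions, not values from this release note:

```yaml
# Sketch: add a dedicated RabbitMQ instance and point nova at it
# for notifications. "rabbitmq-notifications" is a hypothetical name.
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack
spec:
  rabbitmq:
    templates:
      rabbitmq:
        replicas: 3
      rabbitmq-notifications:
        replicas: 3
  nova:
    template:
      notificationsBusInstance: rabbitmq-notifications
```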
- Support for data plane adoption of the source cloud with multiple nova cells
The cloud operator can now adopt an existing 17.1 multi-cell nova deployment with a common network from TripleO management to the new installer for RHOSO.
2.8.3.2. Known issues
Understand the known issues introduced in RHOSO 18.0.10 before you deploy the release.
- Compute service power management feature disabled by default
The Compute service (nova) power management feature is disabled by default. You can enable it with the following `nova-compute` configuration:

[libvirt]
cpu_power_management = true
cpu_power_management_strategy = governor

The default `cpu_power_management_strategy`, `cpu_state`, is currently unsupported. Restarting `nova-compute` causes all dedicated PCPUs on that host to be powered down, including those used by instances. If the `cpu_state` strategy is used, the CPUs of those instances become unpinned.
2.8.4. Hardware Provisioning
Understand the Hardware Provisioning updates introduced in RHOSO 18.0.10 before you deploy the release.
2.8.4.1. Bug fixes
Understand the bug fixes introduced in RHOSO 18.0.10 before you deploy the release.
- Improved logging and error handling for cross-controller packet loss
Before this update, cross-controller packet loss could impact request handling by the python-networking-baremetal agent and prevent physical network mapping updates from occurring in the Networking service (neutron) for bare-metal nodes. With this update, there is additional logging and error handling so that the service provided by python-networking-baremetal exits and the container can automatically restart if packet loss occurs. Physical network mappings for bare-metal nodes continue to be updated if network interruptions for Controller nodes occur.
- Workflow operations persist through interruptions in connectivity
This update solves an issue in the Bare Metal Provisioning service (ironic) that caused the deployment process to loop and time out because of interruptions in connectivity while the deployment agent was starting. The issue occurred because only one attempt was made to evaluate if a RAM drive was recently booted. When this issue occurred, the bare metal nodes would fail to clean, deploy, or perform other workflow actions.
2.8.5. Networking
Understand the Networking updates introduced in RHOSO 18.0.10 before you deploy the release.
2.8.5.1. New features
Understand the new features introduced in RHOSO 18.0.10 before you deploy the release.
- Adoption of combined Networker/Controller nodes
Adoption of RHOSP 17.1 environments that use combined Controller/Networker nodes is verified to work as documented in Adopting a Red Hat OpenStack Platform 17.1 deployment.
2.8.5.2. Bug fixes
Understand the bug fixes introduced in RHOSO 18.0.10 before you deploy the release.
- North/south fragmentation fix
Before this update, OpenStack did not fragment north-south packets as expected when the external maximum transmission unit (MTU) was less than the internal MTU, which resulted in packets being dropped silently. With this update, fragmentation happens as expected, and packets are not dropped silently.
- Fix improves BGP recovery time
Before this update, disabling Bidirectional Forwarding Detection (BFD) in Free Range Routing (FRR) on RHEL 9.4 could cause traffic disruptions during error recovery.
With this release, you no longer need to disable BFD for BGP peers. Operating with BFD for BGP peers enhances BGP time recovery and minimizes traffic disruption.
2.8.5.3. Technology Previews
Understand the Technology Previews updates introduced in RHOSO 18.0.10 before you deploy the release.
For information on the scope of support for Technology Preview features, see Technology Preview Features - Scope of Support.
- DNS as a service
With this technology preview, you can test the management of DNS records, names, and zones using the DNS service (designate). For more information, see Configuring DNS as a service.
- Load-balancing service (octavia) support for DCN
With this technology preview, you can test creating load balancers in availability zones (AZs) to increase traffic throughput, reduce latency, and enhance security. For more information, see Creating availability zones for load balancing of network traffic at the edge.
- Create Load-balancing service resources for a specific project
Load-balancing service (octavia) resources are created by default within a project (tenant) service. RHOSO 18.0.10 (Feature Release 3) introduces a technology preview of a new `TenantName` parameter for the Octavia Operator, which restricts the use of the resource to a specific project. RHOSO administrators can also change the domain of the project.
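A hedged sketch of where such a parameter might sit in the control plane CR; the field spelling `tenantName`, its placement, and the project name are assumptions drawn only from the parameter name above:

```yaml
# Sketch only: restrict Octavia-created resources to one project.
# The field spelling "tenantName" and the project name are hypothetical.
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack
spec:
  octavia:
    template:
      tenantName: service-lb-project
```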
2.8.5.4. Deprecated functionality
Understand the deprecated functionality introduced in RHOSO 18.0.10 before you deploy the release.
Deprecated functionality will likely not be supported in future major releases of this product and is not recommended for new deployments.
- Deprecation of `ovn-bgp-agent`
Since RHOSO 18.0.10 (Feature Release 3), the OVN BGP Agent (`ovn-bgp-agent`) is deprecated. `ovn-bgp-agent` is the BGP integration component in RHOSO. An alternative BGP integration mechanism is scheduled for a future release. Until then, Red Hat will provide only bug fixes and support for this feature.
2.8.6. Network Functions Virtualization
Understand the Network Functions Virtualization updates introduced in RHOSO 18.0.10 before you deploy the release.
2.8.6.1. New features
Understand the new features introduced in RHOSO 18.0.10 before you deploy the release.
- TSO for OVS-DPDK promoted from technology preview to general availability
RHOSO 18.0.6 (Feature Release 2) introduced a technology preview of TCP segmentation offload (TSO) for RHOSO environments with OVS-DPDK.
As of RHOSO 18.0.10 (Feature Release 3), TCP segmentation offload (TSO) for RHOSO environments with OVS-DPDK is a general availability feature.
2.8.6.2. Bug fixes
Understand the bug fixes introduced in RHOSO 18.0.10 before you deploy the release.
- Adoption no longer fails when physical function is attached to an instance
Previously, when the physical function (PF) was attached to the instance, if `os-net-config` was re-run, `os-net-config` could not find the SR-IOV PF in the host, and the adoption failed. With this release, the adoption does not fail.
- Fixes `NetworkManager-dispatcher` failures
Before this update, the `NetworkManager-dispatcher` service was blocked by an SELinux permission denial, causing the service to fail when SELinux was enforced. With this release, NetworkManager has been updated to allow running the `NetworkManager-dispatcher` service with SELinux in enforcing mode. As a result, the `NetworkManager-dispatcher` service now runs with SELinux in enforcing mode, eliminating the failures.
- Data plane deployment no longer fails when using the `nmstate` provider to pre-provision Compute nodes over VLAN
Before this update, when pre-provisioning Compute nodes for communicating with the control plane over VLANs, the NetworkManager CLI (`nmcli`) connection was not always created with the proper interface name. This caused deployment failures.
With this release, the issue with the `nmstate` provider for handling VLAN interfaces in pre-provisioned nodes has been resolved. As a result, data plane deployments using the `nmstate` provider succeed.
- Fixes `edpm_network_config_nonconfigured_cleanup` parameter issue
The flag `edpm_network_config_nonconfigured_cleanup: true` was introduced as the default in Feature Release 2 and caused some new deployments to fail.
With this update, appropriate use of the flag `edpm_network_config_nonconfigured_cleanup: true` no longer causes deployment failures.
You can now set `edpm_network_config_nonconfigured_cleanup: true` in the following configurations:
- Use unprovisioned or pre-provisioned nodes with a VLAN-tagged interface using either the ifcfg or nmstate provider.
- Have multiple data planes with separate namespaces and a tagged VLAN on the control plane interface.
Set `edpm_network_config_nonconfigured_cleanup: false` in the following configurations:
- Use an unprovisioned or pre-provisioned physical interface with a flat network or bond using either the ifcfg or nmstate provider.
- Perform network updates or RHOSO minor updates.
- Perform a data plane adoption.
- Have multiple data planes with separate namespaces and a flat network on the control plane interface.
- Bandwidth limit now applied to instances with VLAN and flat ports with the `nmstate` provider
Previously, in environments using the `os-net-config` `nmstate` provider, QoS bandwidth limit rules were not properly applied to the physical NIC attached to the `br-ex` bridge. With this update, the QoS bandwidth limit rules are applied.
- Patch ports no longer cause network update failures
This update fixes an issue in environments with the nmstate provider where network updates failed on Compute nodes that hosted active instances with patch ports present in `br-ex`.
2.8.7. Control plane
Understand the Control plane updates introduced in RHOSO 18.0.10 before you deploy the release.
2.8.7.1. New features
Understand the new features introduced in RHOSO 18.0.10 before you deploy the release.
- Multiple RHOSO deployments on a single RHOCP cluster by using namespace separation
This feature enables you to deploy multiple RHOSO environments on a single RHOCP cluster by using namespace (project) isolation for development, staging and testing environments.
Note: Multiple RHOSO environments on a single cluster are not supported for production environments.
For more information, see Deploying multiple RHOSO environments on a single RHOCP cluster.
- Minimizing downtime during a minor update
During a minor update, the control plane services are updated concurrently. This enhancement isolates the `galera`, `rabbitmq`, `memcached`, and `keystone` services to perform their updates consecutively, in order, within the minor control plane services update phase.
- Documentation: Updated "Installing the OpenStack Operator" procedure
The Installing the OpenStack Operator procedure has been updated to include changing the default automatic Operator update approvals to manual approvals. Using manual update approval enables RHOSO administrators to schedule RHOSO Operator updates.
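The manual-approval setting described above corresponds to the standard OLM `Subscription` field `installPlanApproval`; the channel is omitted here and the namespace and source values are illustrative assumptions:

```yaml
# Sketch: an OLM Subscription with manual update approval so that
# administrators can schedule Operator updates. Namespace and source
# values are illustrative.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: openstack-operator
  namespace: openstack-operators
spec:
  name: openstack-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  installPlanApproval: Manual
```

With `installPlanApproval: Manual`, OLM creates an InstallPlan for each pending update and waits for an administrator to approve it before applying the update.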
2.8.7.2. Known issues
Understand the known issues introduced in RHOSO 18.0.10 before you deploy the release.
- Control plane temporarily unavailable during minor update
During minor updates, the RHOSO control plane temporarily becomes unavailable. API requests might fail with HTTP error codes, such as error 500. Alternatively, the API requests might succeed but the underlying life cycle operation fails. For example, a virtual machine instance created with the `openstack server create` command during the minor update never reaches the `ACTIVE` state. The control plane outage is temporary and automatically recovers after the minor update is finished. The control plane outage does not affect the already running workload.
Workaround: To prevent this disruption, see the Red Hat Knowledgebase article How to enable mirrored queues in Red Hat OpenStack Services on OpenShift.
- Upgrading the OpenStack Operator can fail due to Operators that are not namespaced
Upgrading from OpenStack Operator version 1.0.6 or earlier sometimes fails when OpenShift Lifecycle Manager (OLM) Operator resources contain data with no namespace field defined.
With the fix, the OpenStack Operator checks that the namespace field is implemented for Operator references in the OpenStack controller, and the OpenStack Service Operators upgrade is not affected.
2.8.8. High availability
Understand the High availability updates introduced in RHOSO 18.0.10 before you deploy the release.
2.8.8.1. Bug fixes
Understand the bug fixes introduced in RHOSO 18.0.10 before you deploy the release.
- NodeName string updated in JSON tag for BGPConfiguration parameter
Before this update, the BGPConfiguration parameter spec.frrNodeConfigurationSelector.nodename had an inconsistency in its JSON tag: the NodeName string json:"frrConfigurationNamespace,omitempty" was incorrect, because frrConfigurationNamespace is a node name. With this update, the NodeName string in the JSON tag is correctly set as json:"nodeName,omitempty". You can now configure the frrNodeConfigurationSelector by using the following spec:

frrNodeConfigurationSelector:
- nodeName: nodeA
  nodeSelector:
    matchLabels:
      foo: bar

During an update to the fixed version, any node names that you previously specified by using the frrConfigurationNamespace JSON tag are removed, and you must use the correct nodeName JSON tag to reconfigure your node names.
2.8.9. Storage
Understand the Storage updates introduced in RHOSO 18.0.10 before you deploy the release.
2.8.9.1. New features
Understand the new features introduced in RHOSO 18.0.10 before you deploy the release.
- S3 driver for Image service (glance) has option to specify path to CA bundle
With this update, the S3 driver for the Image service has a new s3_store_cacert option that allows users to specify the path to a Certificate Authority (CA) bundle to use.
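In glance-api configuration terms, the option belongs to the S3 back-end section. A minimal sketch, assuming a back end named default_backend; the host, bucket, and certificate paths are illustrative:

```ini
[default_backend]
s3_store_host = https://s3.example.com
s3_store_access_key = <access-key>
s3_store_secret_key = <secret-key>
s3_store_bucket = glance
s3_store_cacert = /etc/pki/ca-trust/source/anchors/s3-ca.pem
```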
- Red Hat Ceph Storage 8 NFS is supported
Before this update, NFS was not supported when integrating with Red Hat Ceph Storage 8. With this update, NFS is now supported with Red Hat Ceph Storage 8 integrations.
- API token based authentication with the VAST Data storage driver in the Shared File Systems Service (manila)
With this update, cloud administrators can use either vast_mgmt_user and vast_mgmt_password, or vast_api_token, when configuring authentication in the Shared File Systems service for their VAST Data storage systems. API-based authentication is useful in RHOSO deployments if cloud administrators need an alternative to passwords when specifying VAST Data API users.
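In back-end configuration terms, the token replaces the user and password pair. A minimal sketch of a manila back-end section; the driver path and management host shown here are illustrative assumptions, not confirmed values:

```ini
[vast]
share_backend_name = vast
share_driver = manila.share.drivers.vastdata.driver.VASTShareDriver  # path illustrative
vast_mgmt_host = vast.example.com
# Either user/password credentials:
#vast_mgmt_user = admin
#vast_mgmt_password = <password>
# or, with this update, an API token instead:
vast_api_token = <token>
```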
- Improved Fibre Channel performance when detaching a volume
With this update, there is improved Fibre Channel performance when detaching a volume because there is no longer a requirement to call the lsscsi command.
- Distributed zones with third-party storage
RHOSO 18.0.10 (Feature Release 3) supports the integration of third-party storage appliances within distributed zone environments. The NFS and Fibre Channel protocols in distributed zone environments are provided as a technology preview and are not yet recommended for production use.
- Image service (glance) notifications for events in image lifecycle
With this update, you can enable notifications in the Image service by using the notificationBusInstance parameter, which allows integration with either the existing RabbitMQ instance or a dedicated one.
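A minimal sketch of enabling the parameter in the OpenStackControlPlane CR; the exact placement under the Image service template and the instance name rabbitmq are illustrative assumptions:

```yaml
spec:
  glance:
    template:
      notificationBusInstance: rabbitmq  # name of the RabbitMQ instance that receives image lifecycle events
```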
- CephFS file name added to CephFS share metadata
With this update, you can check a CephFS file name when mounting a native CephFS share by viewing the __mount_options metadata of the share in the output of the following command:

$ openstack share show <share_id>
2.8.9.2. Bug fixes
Understand the bug fixes introduced in RHOSO 18.0.10 before you deploy the release.
- Improved reliability for Fibre Channel volume attachments
Before this update, Fibre Channel volume attachments failed intermittently with a NoFibreChannelVolumeDeviceFound error due to partial scanning of devices. With this update, a broader scan results in better discovery of devices and successful attach operations.
2.8.9.3. Technology Previews
Understand the Technology Preview features introduced in RHOSO 18.0.10 before you deploy the release.
For information on the scope of support for Technology Preview features, see Technology Preview Features - Scope of Support.
- Added options for customizing the Object Storage service (swift)
With this update, you can test two new options to customize deployments of the Object Storage service by using externally-managed rings. With this technology preview, you can now disable automatic ring management and spread large rings over multiple configmaps.
2.8.9.4. Known issues
Understand the known issues introduced in RHOSO 18.0.10 before you deploy the release.
- Multipart image upload does not work with S3 back end
If you upload multipart images with an S3 back end, you must use the import workflow.
- Red Hat Ceph Storage 8 Object Gateway is not supported
The Red Hat Ceph Storage Object Gateway (RGW) is currently not supported when integrating with Red Hat Ceph Storage 8.
Workaround: There is no current workaround.
2.8.10. Upgrades and updates
Understand the upgrade and update changes introduced in RHOSO 18.0.10 before you deploy the release.
2.8.10.1. New features
Understand the new features introduced in RHOSO 18.0.10 before you deploy the release.
- Granular package update workflow
RHOSO 18.0.10 (Feature Release 3) introduces a feature to separate the update process for RHOSO EDPM nodes into two distinct phases:
- Updating OpenStack (containers and essential packages)
- Updating the system (all packages)
This separation gives operators finer control over the update process, reducing risks and simplifying troubleshooting in the event of issues.
2.8.11. Optimize Service
Understand the Optimize Service updates introduced in RHOSO 18.0.10 before you deploy the release.
2.8.11.1. New features
Understand the new features introduced in RHOSO 18.0.10 before you deploy the release.
- The dst_node parameter is now optional for the Zone migration strategy
Before this update, the Zone migration strategy required the dst_node parameter. Now the implementation is in line with the API schema, and the dst_node parameter is optional. If you do not specify a value for dst_node, the Nova scheduler selects an appropriate host automatically.
2.8.11.2. Bug fixes
Understand the bug fixes introduced in RHOSO 18.0.10 before you deploy the release.
- Fix for RHOSO Watcher Action Plans status
This update fixes an issue where RHOSO Watcher did not correctly report the state of Action Plans after all Actions finished, for example reporting SUCCESS if some Actions actually finished with a state of FAILED.
2.8.11.3. Technology Previews
Understand the Technology Preview features introduced in RHOSO 18.0.10 before you deploy the release.
For information on the scope of support for Technology Preview features, see Technology Preview Features - Scope of Support.
- Support for new strategies for Optimize service (watcher)
RHOSO 18.0.10 (Feature Release 3) introduces support for three new supported strategies in the Optimize service: host maintenance, zone migration for instances, and workload balance. For more information about supported strategies to achieve resource optimization goals, see Sample Optimize service workflows in Optimizing infrastructure resource utilization.
2.8.11.4. Known issues
Understand the known issues introduced in RHOSO 18.0.10 before you deploy the release.
- Volume migration operations are technology preview
Volume migration operations that are a part of the zone migration strategy are provided as a technology preview only, and should not be used in production.
2.9. Release information RHOSO 18.0.9
Review the release information for RHOSO 18.0.9 before you deploy the release.
2.9.1. Advisory list
This release of Red Hat OpenStack Services on OpenShift (RHOSO) includes the following advisories:
- RHBA-2025:9211
- Control plane Operators for RHOSO 18.0.9
- RHBA-2025:9212
- Data plane Operators for RHOSO 18.0.9
- RHBA-2025:9213
- Release of containers for RHOSO 18.0.9
- RHBA-2025:9214
- Release of components for RHOSO 18.0.9
2.9.2. Compute
Understand the Compute updates introduced in RHOSO 18.0.9 before you deploy the release.
2.9.2.1. Known issues
Understand the known issues introduced in RHOSO 18.0.9 before you deploy the release.
- Compute service power management feature disabled by default
The Compute service (nova) power management feature is disabled by default. You can enable it with the following nova-compute configuration:

[libvirt]
cpu_power_management = true
cpu_power_management_strategy = governor

The default cpu_power_management_strategy, cpu_state, is currently unsupported: restarting nova-compute causes all dedicated PCPUs on that host to be powered down, including ones used by instances. If the cpu_state strategy is used, the CPUs of those instances become unpinned.
2.9.3. Data plane
Understand the Data plane updates introduced in RHOSO 18.0.9 before you deploy the release.
2.9.3.1. Bug fixes
Understand the bug fixes introduced in RHOSO 18.0.9 before you deploy the release.
- The redhat service is restored to the default list of data plane services
Before this update, the redhat service was removed temporarily from the default list of data plane services, and users had to manually add the redhat service to the list of services in the OpenStackDataPlaneNodeSet CR. With this update, the redhat service is restored to the default list of data plane services.
2.9.4. Networking
Understand the Networking updates introduced in RHOSO 18.0.9 before you deploy the release.
2.9.4.1. Bug fixes
Understand the bug fixes introduced in RHOSO 18.0.9 before you deploy the release.
- QoS information for VLAN or flat network ports persists through port updates
Any VLAN or flat network port with egress QoS policy rules (maximum and/or minimum bandwidth) stores this information in the Logical_Switch_Port options dictionary. Before this update, any update on this port, from a port name change to a live migration, deleted this QoS information. With this update, the QoS information persists through port updates.
2.9.4.2. Known issues
Understand the known issues introduced in RHOSO 18.0.9 before you deploy the release.
- Packets silently dropped when external MTU is greater than internal MTU
RHOSO does not fragment north-south packets as expected when the external MTU is greater than the internal MTU. Instead, the ingress packets are dropped with no notification. Also, fragmentation does not work on east/west traffic between tenant networks.
Workaround:
Until these issues are resolved, perform the following steps to ensure that the external MTU settings are less than or equal to the internal MTU settings, and that all MTU settings on east/west paths are equal.
- Set ovn_emit_need_to_frag to true.
- Set global_physnet_mtu to a size that is at least 58 bytes larger than the external network MTU, to accommodate the Geneve tunnel encapsulation overhead.
- Set physical_network_mtus value pairs to describe the MTU of each physical network.
- Ensure that the MTU setting on every device on the external network is less than the internal MTU setting.
- Ensure that all tenant networks that use the OVN router have the same MTU.
- To apply the changes to an existing router, delete the router and re-create it.
- Example
For example, suppose that the external network datacentre MTU is 1500. Enter the following neutron settings in your OpenStackControlPlane CR:

neutron:
  enabled: true
  template:
    customServiceConfig: |
      [DEFAULT]
      global_physnet_mtu=1558
      [ml2]
      physical_network_mtus = ["datacentre:1500"]
      [ovn]
      ovn_emit_need_to_frag = true
2.9.5. Network Functions Virtualization
Understand the Network Functions Virtualization updates introduced in RHOSO 18.0.9 before you deploy the release.
2.9.5.1. Bug fixes
Understand the bug fixes introduced in RHOSO 18.0.9 before you deploy the release.
- Data plane deployment no longer fails when using the nmstate provider to pre-provision Compute nodes over VLAN
Before this update, when pre-provisioning Compute nodes for communicating with the control plane over VLANs, the NetworkManager CLI (nmcli) connection was not always created with the proper interface name. This caused deployment failures. With this release, the issue with the nmstate provider for handling VLAN interfaces in pre-provisioned nodes has been resolved. As a result, data plane deployments that use the nmstate provider succeed.
2.9.6. Control plane
Understand the Control plane updates introduced in RHOSO 18.0.9 before you deploy the release.
2.9.6.1. Known issues
Understand the known issues introduced in RHOSO 18.0.9 before you deploy the release.
- Control plane temporarily unavailable during minor update
During minor updates, the RHOSO control plane temporarily becomes unavailable. API requests might fail with HTTP error codes, such as error 500. Alternatively, the API requests might succeed but the underlying life cycle operation fails. For example, a virtual machine (VM) created with the openstack server create command during the minor update never reaches the ACTIVE state. The control plane outage is temporary and automatically recovers after the minor update is finished. The control plane outage does not affect already running workloads.
Workaround: To prevent this disruption, see the Red Hat Knowledgebase article How to enable mirrored queues in Red Hat Openstack Services on OpenShift.
2.9.7. High availability
Understand the High availability updates introduced in RHOSO 18.0.9 before you deploy the release.
2.9.7.1. New features
Understand the new features introduced in RHOSO 18.0.9 before you deploy the release.
- The Instance HA service supports a new parameter
This enhancement adds the TAGGED_AGGREGATES parameter to the RHOSO high availability for Compute instances (Instance HA) service. By default, this parameter is set to true, so that the Instance HA service checks for tagged host aggregates. If you set this parameter to false, the Instance HA service does not check for tagged host aggregates and therefore evacuates all eligible Compute nodes.
2.9.8. Storage
Understand the Storage updates introduced in RHOSO 18.0.9 before you deploy the release.
2.9.8.1. Known issues
Understand the known issues introduced in RHOSO 18.0.9 before you deploy the release.
- Multipart image upload does not work with S3 back end
If you upload multipart images with an S3 back end, you must use the import workflow.
- Red Hat Ceph Storage 8 NFS is not supported
NFS is currently not supported when integrating with Red Hat Ceph Storage 8.
Workaround: There is no current workaround.
- Red Hat Ceph Storage 8 Object Gateway is not supported
The Red Hat Ceph Storage Object Gateway (RGW) is currently not supported when integrating with Red Hat Ceph Storage 8.
Workaround: There is no current workaround.
2.10. Release information RHOSO 18.0.8
Review the release information for RHOSO 18.0.8 before you deploy the release.
2.10.1. Advisory list
This release of Red Hat OpenStack Services on OpenShift (RHOSO) includes the following advisories:
- RHBA-2025:8036
- Control plane Operators for RHOSO 18.0.8
- RHBA-2025:8037
- Data plane Operators for RHOSO 18.0.8
- RHBA-2025:8038
- Release of containers for RHOSO 18.0.8
- RHBA-2025:8039
- Release of components for RHOSO 18.0.8
2.10.2. Compute
Understand the Compute updates introduced in RHOSO 18.0.8 before you deploy the release.
2.10.2.1. Known issues
Understand the known issues introduced in RHOSO 18.0.8 before you deploy the release.
- Compute service power management feature disabled by default
The Compute service (nova) power management feature is disabled by default. You can enable it with the following nova-compute configuration:

[libvirt]
cpu_power_management = true
cpu_power_management_strategy = governor

The default cpu_power_management_strategy, cpu_state, is currently unsupported: restarting nova-compute causes all dedicated PCPUs on that host to be powered down, including ones used by instances. If the cpu_state strategy is used, the CPUs of those instances become unpinned.
2.10.3. Data plane
Understand the Data plane updates introduced in RHOSO 18.0.8 before you deploy the release.
2.10.3.1. Bug fixes
Understand the bug fixes introduced in RHOSO 18.0.8 before you deploy the release.
- Default policy ensures nftables reload at the end of deployment
Before this update, iptables default tables were added to nftables to ensure backwards compatibility. However, there was a default ALLOW INPUT rule instead of a default DROP rule, and nftables were not reloaded at the end of the deployment. With this update, the correct rules are applied to ensure that nftables are reloaded at the end of the deployment.
2.10.3.2. Known issues
Understand the known issues introduced in RHOSO 18.0.8 before you deploy the release.
- Manually add the redhat service to the default list of data plane services
The redhat service has been removed temporarily from the default list of data plane services. As a result, when you attach subscriptions or repositories to Compute nodes and use the documented rhc_* parameters when creating the data plane secrets, the nodes are not registered and the data plane deployment fails.
Workaround: Override the services list in your OpenStackDataPlaneNodeSet CR, and ensure that you add the redhat service as the first service in the list. You can copy the default list shown in Data plane services in Customizing the Red Hat OpenStack Services on OpenShift deployment.
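The override is a spec-level services list on the node set. A minimal sketch with redhat first; the service names below are an illustrative subset of the documented default list, so copy the full, current list from the Data plane services reference for your release:

```yaml
apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneNodeSet
metadata:
  name: openstack-edpm
spec:
  services:
    - redhat            # must be first so nodes register before other services run
    - bootstrap
    - download-cache
    - configure-network
    - validate-network
    - install-os
    - configure-os
    - run-os
    - install-certs
    - ovn
    - libvirt
    - nova
    - telemetry
```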
2.10.4. Hardware Provisioning
Understand the Hardware Provisioning updates introduced in RHOSO 18.0.8 before you deploy the release.
2.10.4.1. Bug fixes
Understand the bug fixes introduced in RHOSO 18.0.8 before you deploy the release.
- Bare-metal data plane node with multi-path block storage boots after being provisioned
Before this update, pre-built whole disk images did not include the device-mapper-multipath package, which prevented the paired boot ramdisk from supporting multi-path block storage. This caused bare-metal nodes with multi-path block storage to fail to boot after deployment and instead be stuck in an emergency shell. With this update, pre-built whole disk images include the device-mapper-multipath package, and deployed bare-metal nodes no longer enter an emergency shell after being deployed.
2.10.5. Networking
Understand the Networking updates introduced in RHOSO 18.0.8 before you deploy the release.
2.10.5.1. Bug fixes
Understand the bug fixes introduced in RHOSO 18.0.8 before you deploy the release.
- Logs now available for FRR service
Before this update, no logs were available for the Free Range Routing (FRR) service, which is deployed on the data plane nodes when RHOSO is configured to use Dynamic Routing with BGP. With this update, these logs are available.
- Legacy tripleo Networking services are removed after adoption
Before this update, there were legacy tripleo Networking service (neutron) services left after the edpm_tripleo_cleanup task, which needed to be removed manually. These services were stopped after adoption, so the RHOSO services were not affected. With this update, all tripleo Networking services are removed from data plane nodes after adoption.
2.10.5.2. Known issues
Understand the known issues introduced in RHOSO 18.0.8 before you deploy the release.
- Packets silently dropped when external MTU is greater than internal MTU
RHOSO does not fragment north-south packets as expected when the external MTU is greater than the internal MTU. Instead, the ingress packets are dropped with no notification. Also, fragmentation does not work on east/west traffic between tenant networks.
Workaround:
Until these issues are resolved, perform the following steps to ensure that the external MTU settings are less than or equal to the internal MTU settings, and that all MTU settings on east/west paths are equal.
- Set ovn_emit_need_to_frag to true.
- Set global_physnet_mtu to a size that is at least 58 bytes larger than the external network MTU, to accommodate the Geneve tunnel encapsulation overhead.
- Set physical_network_mtus value pairs to describe the MTU of each physical network.
- Ensure that the MTU setting on every device on the external network is less than the internal MTU setting.
- Ensure that all tenant networks that use the OVN router have the same MTU.
- To apply the changes to an existing router, delete the router and re-create it.
- Example
For example, suppose that the external network datacentre MTU is 1500. Enter the following neutron settings in your OpenStackControlPlane CR:

neutron:
  enabled: true
  template:
    customServiceConfig: |
      [DEFAULT]
      global_physnet_mtu=1558
      [ml2]
      physical_network_mtus = ["datacentre:1500"]
      [ovn]
      ovn_emit_need_to_frag = true
- Port updates delete QoS information for VLAN or flat network ports
Any VLAN or flat network port with egress QoS policy rules (maximum and/or minimum bandwidth) stores this information in the Logical_Switch_Port options dictionary. Any update on this port, from a port name change to a live migration, deletes this QoS information.
Workaround: To restore the QoS information, you must remove the QoS policy for this port and set it again.
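The remove-and-reapply workaround maps to two OpenStack client calls; the port and policy names here are illustrative:

```shell
# Detach the QoS policy from the affected port, then attach it again
# to regenerate the QoS options on the Logical_Switch_Port.
openstack port unset --qos-policy vm1-port
openstack port set --qos-policy gold vm1-port
```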
2.10.6. Network Functions Virtualization
Understand the Network Functions Virtualization updates introduced in RHOSO 18.0.8 before you deploy the release.
2.10.6.1. Known issues
Understand the known issues introduced in RHOSO 18.0.8 before you deploy the release.
- Adoption fails when physical function is attached to a VM instance
When the physical function (PF) is attached to the instance, if os-net-config is re-run, os-net-config cannot find the SR-IOV PF in the host, and thus the deployment, update, or adoption fails.
Workaround: Before performing an adoption or network update, migrate the instances to another Compute host.
- NetworkManager-dispatcher scripts fail to run when SELinux is enabled
The os-net-config configuration tool uses NetworkManager-dispatcher scripts for driver bindings. When SELinux is enabled, these scripts fail to run, and the os-net-config network deployment fails.
Workaround: There is currently no workaround.
2.10.7. Control plane
Understand the Control plane updates introduced in RHOSO 18.0.8 before you deploy the release.
2.10.7.1. Bug fixes
Understand the bug fixes introduced in RHOSO 18.0.8 before you deploy the release.
- Failed service updates are reflected accurately in the deployment status
Before this update, when updates to service configurations failed, the failure was not reflected in the condition status of the deployment. Instead, the Ready condition showed as "True" because the new pods created by the update were not considered when checking the deployment readiness. With this update, any new pods created during a configuration update are considered when assessing deployment readiness. If rolling out new pods fails, the deployment reflects that it is stuck in Deployment in progress.
2.10.7.2. Known issues
Understand the known issues introduced in RHOSO 18.0.8 before you deploy the release.
- Control plane temporarily unavailable during minor update
During minor updates, the RHOSO control plane temporarily becomes unavailable. API requests might fail with HTTP error codes, such as error 500. Alternatively, the API requests might succeed but the underlying life cycle operation fails. For example, a virtual machine (VM) created with the openstack server create command during the minor update never reaches the ACTIVE state. The control plane outage is temporary and automatically recovers after the minor update is finished. The control plane outage does not affect already running workloads.
Workaround: To prevent this disruption, see the Red Hat Knowledgebase article How to enable mirrored queues in Red Hat Openstack Services on OpenShift.
2.10.8. Security and hardening
Understand the Security and hardening updates introduced in RHOSO 18.0.8 before you deploy the release.
2.10.8.1. Bug fixes
Understand the bug fixes introduced in RHOSO 18.0.8 before you deploy the release.
- Generated CA bundle gets installed on data plane nodes
Before this update, the CA bundle that was generated by the RHOSO control plane was deployed on the data plane node for deployed or running services, but it did not get installed as the CA bundle on the data plane node itself. The CA bundle can include custom third-party CA files, for example, to access a satellite. With this update, the CA bundle gets installed on the data plane node.
2.11. Release information RHOSO 18.0.7
Review the release information for RHOSO 18.0.7 before you deploy the release.
Review the known issues, bug fixes, and other release notes for this release of Red Hat OpenStack Services on OpenShift.
RHOSO 18.0.7 introduces the Optimize service (watcher) to provide a flexible and scalable resource optimization service for multi-tenant RHOSO-based clouds. For more information about the Optimize service, see https://issues.redhat.com/browse/OSPRH-15037 and Optimizing infrastructure resource utilization.
2.11.1. Advisory list
This release of Red Hat OpenStack Services on OpenShift (RHOSO) includes the following advisories:
- RHBA-2025:4083
- Release of components for RHOSO 18.0.7
- RHBA-2025:4084
- Release of containers for RHOSO 18.0.7
- RHBA-2025:4085
- Data plane Operators for RHOSO 18.0.7
- RHBA-2025:4086
- Control plane Operators for RHOSO 18.0.7
2.11.2. Compute
Understand the Compute updates introduced in RHOSO 18.0.7 before you deploy the release.
2.11.2.1. Bug fixes
Understand the bug fixes introduced in RHOSO 18.0.7 before you deploy the release.
- Compute service fails a ready check for a deployment with an invalid configuration
Before this update, if the Compute service (nova) API raised a configuration error, it returned a 500 error once, and then continued to run with a broken configuration after a reload. This issue occurred because mod_wsgi reloaded the WSGI application into the same Python interpreter when an error was raised during application initialization. With this update, the Compute service has been modified to reraise the configuration error until the application can restart cleanly. Now, if you deploy with an invalid configuration, the Compute service API CR fails the ready check and updates the Status field in the OpenShift CR to prompt you to review the log files for the configuration error.
2.11.2.2. Known issues
Understand the known issues introduced in RHOSO 18.0.7 before you deploy the release.
- Compute service power management feature disabled by default
The Compute service (nova) power management feature is disabled by default. You can enable it with the following nova-compute configuration:

[libvirt]
cpu_power_management = true
cpu_power_management_strategy = governor

The default cpu_power_management_strategy, cpu_state, is currently unsupported: restarting nova-compute causes all dedicated PCPUs on that host to be powered down, including ones used by instances. If the cpu_state strategy is used, the CPUs of those instances become unpinned.
2.11.3. Data plane
Understand the Data plane updates introduced in RHOSO 18.0.7 before you deploy the release.
2.11.3.1. Known issues
Understand the known issues introduced in RHOSO 18.0.7 before you deploy the release.
- Manually add the redhat service to the default list of data plane services
The redhat service has been removed temporarily from the default list of data plane services. As a result, when you attach subscriptions or repositories to Compute nodes and use the documented rhc_* parameters when creating the data plane secrets, the nodes are not registered and the data plane deployment fails.
Workaround: Override the services list in your OpenStackDataPlaneNodeSet CR, and ensure that you add the redhat service as the first service in the list. You can copy the default list shown in Data plane services in Customizing the Red Hat OpenStack Services on OpenShift deployment.
2.11.4. Networking
Understand the Networking updates introduced in RHOSO 18.0.7 before you deploy the release.
2.11.4.1. Bug fixes
Understand the bug fixes introduced in RHOSO 18.0.7 before you deploy the release.
- BFD now works as expected in RHOSO deployments with dynamic routing
Before this update, when you deployed RHOSO with Dynamic Routing with Border Gateway Protocol (BGP), Bidirectional Forwarding Detection (BFD) did not work as expected because there was no nft rule to permit the BFD and BGP ports. With this update, an nft rule has been added for the following ports, and BFD works as expected:

BGP: 179 tcp
BFD: 3784 udp, 3785 udp, 4784 udp, 49152 udp, 49153 udp
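The added rule has roughly the following shape; the table and chain names here are illustrative, and on deployed nodes the rule set is managed by the deployment rather than added by hand:

```shell
# Accept BGP peering traffic and BFD control/echo traffic.
nft add rule inet filter input tcp dport 179 accept
nft add rule inet filter input udp dport { 3784, 3785, 4784, 49152, 49153 } accept
```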
2.11.4.2. Known issues
Understand the known issues introduced in RHOSO 18.0.7 before you deploy the release.
- No logs available for FRR service
No logs are available for the FRR service, which is deployed on the data plane nodes when RHOSO is configured to use Dynamic Routing with BGP.
Workaround:
To obtain FRR logs after the OpenstackDataplaneDeployment is complete, perform the following actions on all the networker and Compute nodes that are running FRR:
- Edit the /var/lib/config-data/ansible-generated/frr/etc/frr/frr.conf file and replace log file with log file /var/log/frr/frr.log.
- Edit the /var/lib/kolla/config_files/frr.json file and replace sleep infinity with tail -f /var/log/frr/frr.log.
- Restart FRR: systemctl restart edpm_frr
- Legacy tripleo Networking service (neutron) services after adoption
After the edpm_tripleo_cleanup task, there are still legacy tripleo Networking service (neutron) services. These services are stopped after adoption, so the RHOSO services are not affected.
Workaround:
Perform the following steps to remove the legacy services manually:
- Check the tripleo neutron services list: systemctl list-unit-files --type service
- Remove the tripleo services from /etc/systemd/system/
- Packets silently dropped when external MTU is greater than internal MTU
RHOSO does not fragment north-south packets as expected when the external MTU is greater than the internal MTU. Instead, the ingress packets are dropped with no notification.
Also, fragmentation does not work on east/west traffic between tenant networks.
Workaround: Until these issues are resolved, perform the following steps to ensure that the external MTU settings are less than or equal to the internal MTU settings, and that all MTU settings on east/west paths are equal:
- Set ovn_emit_need_to_frag to true.
- Set global_physnet_mtu to a size that is at least 58 bytes larger than the external network MTU, to accommodate the geneve tunnel encapsulation overhead.
- Set physical_network_mtus value pairs to describe the MTU of each physical network.
- Ensure that the MTU setting on every device on the external network is less than the internal MTU setting.
- To apply the changes to an existing router, delete the router and re-create it.
- Example
For example, suppose that the external network datacentre MTU is 1500. Enter the following neutron settings in your OpenStackControlPlane CR:

neutron:
  enabled: true
  template:
    customServiceConfig: |
      [DEFAULT]
      global_physnet_mtu=1558
      [ml2]
      physical_network_mtus = ["datacentre:1500"]
      [ovn]
      ovn_emit_need_to_frag = true

Ensure that all tenant networks that use the OVN router have the same MTU. To apply the changes to an existing router, delete the router and re-create it.
- Port updates delete QoS information for VLAN or flat network ports
Any VLAN or flat network port with egress QoS policy rules (maximum and/or minimum bandwidth) stores this information in the Logical_Switch_Port options dictionary. Any update on this port, from a port name change to a live migration, deletes this QoS information.
Workaround: To restore the QoS information, remove the QoS policy for this port and set it again.
2.11.5. Network Functions Virtualization
Understand the Network Functions Virtualization updates introduced in RHOSO 18.0.7 before you deploy the release.
2.11.5.1. Bug fixes
Understand the bug fixes introduced in RHOSO 18.0.7 before you deploy the release.
- Fixes minor update failures starting with updates from 18.0.6
This update fixes a bug that caused minor update failures during updates from RHOSO 18.0.1 through 18.0.5 to 18.0.6 or later. The failure no longer occurs if you update from RHOSO 18.0.6 or later to any later version.
Important: If you update from 18.0.1 through 18.0.5 to any version, the update fails because the edpm_openstack_network_exporter.service cannot be found. Before you perform these updates, you must perform the following workaround.
Workaround: Add the telemetry service to the servicesOverride field in the openstack-edpm-update-services.yaml file before you update the OpenStackDataplaneService custom resource. For example:

apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneDeployment
metadata:
  name: edpm-deployment-ipam-update-dataplane-services
spec:
  nodeSets:
    - openstack-edpm-ipam
  servicesOverride:
    - telemetry
    - update
2.11.5.2. Deprecated functionality
Understand the deprecated functionality introduced in RHOSO 18.0.7 before you deploy the release.
Deprecated functionality will likely not be supported in future major releases of this product and is not recommended for new deployments.
- Deprecated the edpm_ovs_dpdk_lcore_list variable
Stop using the edpm_ovs_dpdk_lcore_list Ansible variable in RHOSO deployments. Previously, it was used in nodeset CR definition files to enable OVS DPDK in data plane deployments in NFV environments. It is no longer required or supported, and its use now causes deployment errors.
2.11.5.3. Known issues
Understand the known issues introduced in RHOSO 18.0.7 before you deploy the release.
- Adoption fails when physical function is attached to a VM instance
When the physical function (PF) is attached to an instance and os-net-config is re-run, os-net-config cannot find the SR-IOV PF on the host, and the deployment, update, or adoption fails.
- NetworkManager-dispatcher scripts fail to run when SELinux is enabled
The os-net-config configuration tool uses NetworkManager-dispatcher scripts for driver bindings. When SELinux is enabled, these scripts fail to run, and the os-net-config network deployment fails.
Workaround: There is currently no workaround.
2.11.6. Control plane
Understand the Control plane updates introduced in RHOSO 18.0.7 before you deploy the release.
2.11.6.1. Bug fixes
Understand the bug fixes introduced in RHOSO 18.0.7 before you deploy the release.
- TraceEnable parameter disabled by default in httpd configuration
Before this update, HTTP TRACE was enabled by default from the OpenStackProvisionServer CR, which resulted in security scanners creating an alert. With this update, the TraceEnable parameter has been set to "off" by default in the httpd configuration.
2.11.6.2. Known issues
Understand the known issues introduced in RHOSO 18.0.7 before you deploy the release.
- Control plane temporarily unavailable during minor update
During minor updates, the RHOSO control plane temporarily becomes unavailable. API requests might fail with HTTP error codes, such as error 500. Alternatively, the API requests might succeed but the underlying life cycle operation fails. For example, a virtual machine (VM) created with the openstack server create command during the minor update never reaches the ACTIVE state. The control plane outage is temporary and automatically recovers after the minor update is finished. The control plane outage does not affect already running workloads.
Workaround: To prevent this disruption, see the Red Hat Knowledgebase article How to enable mirrored queues in Red Hat Openstack Services on OpenShift.
2.11.7. Security and hardening
Understand the Security and hardening updates introduced in RHOSO 18.0.7 before you deploy the release.
2.11.7.1. Known issues
Understand the known issues introduced in RHOSO 18.0.7 before you deploy the release.
- Generated CA bundle does not get installed on data plane nodes
The CA bundle that is generated by the RHOSO control plane gets deployed on the data plane node for deployed or running services, but it does not get installed as the CA bundle on the data plane node itself. The CA bundle can include custom third-party CA files, for example, to access a Satellite server.
Workaround: There is currently no workaround.
2.11.8. Optimize Service
Understand the Optimize Service updates introduced in RHOSO 18.0.7 before you deploy the release.
2.11.8.1. Technology Previews
Understand the Technology Previews updates introduced in RHOSO 18.0.7 before you deploy the release.
For information on the scope of support for Technology Preview features, see Technology Preview Features - Scope of Support.
- Optimize service (watcher) for resource optimization
The Red Hat OpenStack Services on OpenShift (RHOSO) Optimize service (watcher) provides a flexible and scalable resource optimization service for multi-tenant RHOSO-based clouds. The Optimize service provides a framework to help you set and manage goals for infrastructure resource utilization.
The Optimize service is focused on helping users realize a wide range of infrastructure resource utilization goals toward reducing data center operating costs. It includes a metrics receiver, complex event processor and profiler, optimization processor, and an action plan applier.
This feature is currently being delivered in Technology Preview and supports a limited number of optimization strategies in this first release. For more information about the Optimize service, see https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/optimizing_infrastructure_resource_utilization/index.
The Optimize service in RHOSO, which was released as a Technology Preview in RHOSO 18.0.6, is now functional as a Technology Preview for the supported strategies in 18.0.7.
2.12. Release information RHOSO 18.0.6
Review the known issues, bug fixes, and other release notes for this release of Red Hat OpenStack Services on OpenShift.
2.12.1. Advisory list
This release of Red Hat OpenStack Services on OpenShift (RHOSO) includes the following advisories:
- RHBA-2025:3029
- Release of components for RHOSO 18.0.6 (Feature Release 2)
- RHBA-2025:3030
- Data plane Operators for RHOSO 18.0.6 (Feature Release 2)
- RHBA-2025:3031
- Release of Operators for RHOSO 18.0.6 (Feature Release 2)
- RHBA-2025:3032
- Control plane Operators for RHOSO 18.0.6 (Feature Release 2)
- RHBA-2025:3033
- Release of containers for RHOSO 18.0.6 (Feature Release 2)
2.12.2. Observability
Understand the Observability updates introduced in RHOSO 18.0.6 before you deploy the release.
2.12.2.1. New features
Understand the new features introduced in RHOSO 18.0.6 before you deploy the release.
- Improved metrics for RHOSO Observability
You can now use new metrics for monitoring the health of RHOSO services, including the following:

- kube_pod_status_phase
- kube_pod_status_ready
- node_systemd_unit_state
- podman_container_state
- podman_container_health

You can use kube_pod_status_phase and kube_pod_status_ready to monitor control plane services.

- kube_pod_status_phase: The relevant parameter is Phase, with values of Pending, Running, Succeeded, Failed, or Unknown, and corresponding Boolean values of 1 or 0.
- kube_pod_status_ready: This metric also has Boolean values, with 1 indicating that all the containers in the pod are running and the readiness probes are succeeding, and 0 indicating that not all the containers are running or that a readiness probe did not succeed.

You can use node_systemd_unit_state to monitor the running state of data plane services.

- node_systemd_unit_state: The relevant parameter is State, with values of activating, active, deactivating, failed, and inactive, and corresponding Boolean values of 1 or 0.

You can use podman_container_state and podman_container_health to monitor the health of data plane containerized services.

- podman_container_state: This metric can have the following values: -1=unknown, 0=created, 1=initialized, 2=running, 3=stopped, 4=paused, 5=exited, 6=removing, 7=stopping.
- podman_container_health: This metric can have the following values: -1=unknown, 0=healthy, 1=unhealthy, 2=starting.
- An additional Ceilometer metric is available
You can now retrieve the ceilometer_power_state metric for VMs to indicate the libvirt power state.
- Additional VM metrics are available
You can now use the dashboards to view a dedicated VM Network Traffic Dashboard, as well as monitor the power state of VMs.
- Visualize hardware sensor metrics with Ceilometer IPMI
You can now use the dashboards to view the available IPMI sensor hardware metrics from your compute nodes.
- Kepler dashboard is more user-friendly (Technology Preview)
You can now view Compute nodes by human-readable hostnames, instead of with Compute service UUIDs.
- Improved compatibility between Telemetry Operator and OpenShift Logging
You can now use Telemetry Operator with OpenShift Logging versions 6.1 and newer.
- Prometheus connection information is exposed in a secret
Telemetry Operator now creates a secret with the internal Prometheus connection details. Other OpenShift services can use that secret as a service discovery mechanism to connect to Prometheus.
2.12.2.2. Bug fixes
Understand the bug fixes introduced in RHOSO 18.0.6 before you deploy the release.
- Scrape Kepler metrics without manual intervention (Technology Preview)
Before this update, a firewall was not applied to Compute nodes, which left port 8888 open. However, access to port 8888 might be lost unexpectedly if a firewall is enabled. With this update, the Ansible role checks whether the firewall is enabled and then opens port 8888. As a result, Prometheus can scrape Kepler metrics without manual intervention.
- Capture GPU metrics with Kepler (Technology Preview)
With this update to Red Hat OpenStack Services on OpenShift (RHOSO), you can now capture GPU metrics with Kepler.
- TLS errors caused node exporter scraping issues
This release of Red Hat OpenStack Services on OpenShift (RHOSO) fixes an issue with scraping metrics in specific data plane configurations.
- Missing square brackets around IPv6 addresses
This release of Red Hat OpenStack Services on OpenShift (RHOSO) rectifies a potential issue with scraping data because of missing square brackets around IPv6 addresses.
- IPv6 connection refused with RabbitMQ metrics
With this update to Red Hat OpenStack Services on OpenShift (RHOSO), the RabbitMQ metrics exporter now listens on the correct interface on the IPv6 control plane network.
2.12.3. Compute
Understand the Compute updates introduced in RHOSO 18.0.6 before you deploy the release.
2.12.3.1. New features
Understand the new features introduced in RHOSO 18.0.6 before you deploy the release.
- Unified limits introduced
This update introduces unified limits to RHOSO 18.0. Unified limits is a modern quota system in which quota limits are stored centrally in the Identity service. You can enable unified limits by following the documented procedure.
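As a sketch only: upstream nova exposes unified limits through the [quota] driver option, which in a RHOSO control plane could be set through customServiceConfig in the style of the other examples in this document. Follow the documented procedure for the full steps, including registering the limits in the Identity service:

```yaml
nova:
  template:
    customServiceConfig: |
      [quota]
      # Upstream nova driver for unified limits; the quota limits
      # themselves are registered centrally in the Identity service.
      driver = nova.quota.UnifiedLimitsDriver
```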
- Verify systemd-container package installation on hypervisors
You can now verify that the systemd-container package is installed on the hypervisors before starting the final steps of the data plane adoption. You cannot adopt the source Red Hat OpenStack Platform cloud into Red Hat OpenStack Services on OpenShift (RHOSO) until the package is installed on all hypervisors.
- Periodic healing of Nova internal instance info cache is disabled
By default, heal_instance_info_cache_interval is now disabled to improve performance of the neutron API server by removing the load created by the nova-compute agent. This does not affect cache correctness, because the cache is updated during most VM operations.
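If you need to restore the previous periodic healing behavior, a hypothetical nova-compute configuration fragment might look like the following. The option name is the upstream nova one; the interval value shown is illustrative:

```ini
[DEFAULT]
# 0 disables the periodic healing task (the new default);
# a positive value in seconds re-enables it.
heal_instance_info_cache_interval = 60
```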
- Adoption of nodes with hugepages now supported
With this update, data plane adoption now supports importing hypervisors with OSP workloads configured for use of hugepages.
- Nova enables the live migration of NVIDIA vGPU instances
Nova enables the live migration of instances using vGPU resources between hosts if the target host uses the same NVIDIA driver version and the same mediated device types.
To live-migrate, operators need to modify the configuration for each of the hosts:

[libvirt]
live_migration_completion_timeout = 0
live_migration_downtime = 500000
live_migration_downtime_steps = 3
live_migration_downtime_delay = 3
- Topology support for Compute service (nova) and placement service
Implemented a new custom resource definition for scheduling RHOSO Nova and Placement services' pods based on TopologySpreadConstraints and Affinity/Anti-Affinity rules.
- Fixed update failure related to nova_statedir_ownership.py
Before this fix, updates from RHOSO 18.0.3 to later releases failed with errors related to the missing nova_statedir_ownership.py script. With this fix, updates from RHOSO 18.0.3 to RHOSO 18.0.6 (Feature Release 2) do not generate these errors.
2.12.3.2. Bug fixes
Understand the bug fixes introduced in RHOSO 18.0.6 before you deploy the release.
- Allocation candidates are spread between hosts in Placement GET /allocation_candidates query
In a deployment with wide and symmetric provider trees (for example, where multiple child providers under the same root have inventory from the same resource class), if the allocation candidate request asks for resources from those child resource providers in multiple request groups, the number of possible allocation candidates grows rapidly. The Placement service generates these candidates fully before applying the limit parameter provided in the allocation candidate query. The Placement service takes excessive time and memory to generate this number of allocation candidates, and the client might time out.
To avoid request timeouts or out-of-memory events, a new [placement]max_allocation_candidates configuration option is applied during the candidate generation process. By default, the [placement]max_allocation_candidates option is set to -1, which means there is no limit; this matches the old behavior. Edit the value of this configuration option in affected deployments based on the memory available for the Placement service and the timeout setting of the clients. A suggested value is 100000.
If the number of generated allocation candidates is limited by the [placement]max_allocation_candidates configuration option, you can get candidates from a limited set of root providers (for example, Compute nodes), because the Placement service uses a depth-first strategy, generating all candidates from the first root before considering the next one. To avoid this, use the [placement]allocation_candidates_generation_strategy configuration option, which has two possible values:

- depth-first: generates all candidates from the first viable root provider before moving to the next. This is the default and triggers the legacy behavior.
- breadth-first: generates candidates from viable roots in a round-robin fashion, creating one candidate from each viable root before creating the second candidate from the first root.

In a deployment where [placement]max_allocation_candidates is configured to a positive number, set [placement]allocation_candidates_generation_strategy to breadth-first.
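Expressed in the control-plane configuration style used elsewhere in this document, the two options might be set as follows. The values are illustrative; confirm the placement of the fragment for your deployment:

```yaml
placement:
  template:
    customServiceConfig: |
      [placement]
      # Cap candidate generation; -1 (the default) means unlimited.
      max_allocation_candidates = 100000
      # Spread candidates across root providers in round-robin order.
      allocation_candidates_generation_strategy = breadth-first
```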
- Instances with ephemeral storage on NFS share continue working after Compute service restart
Before this update, Compute service (nova) instances with ephemeral storage on NFS shares stopped working as soon as the containerized Compute agent service restarted on the hypervisor host.
With this update, Nova Compute service instances with ephemeral storage on NFS shares no longer stop working. The Nova Compute init container is triggered every time an OpenStackDataPlaneDeployment is created with the Nova EDPM service included in the linked OpenStackDataPlaneNodeSet, and it corrects the permissions of the /var/lib/nova/ directory contents on hypervisors.
- Fixed: Instances with ephemeral storage on NFS share stop working after Compute service restart
Before this update, Compute service (nova) instances with ephemeral storage on NFS shares stopped working as soon as the containerized Compute agent service restarted on the hypervisor host. That happened because of changed permissions of /var/lib/nova/instances. This update fixes that bug.
2.12.3.3. Known issues
Understand the known issues introduced in RHOSO 18.0.6 before you deploy the release.
- Compute service power management feature disabled by default
The Compute service (nova) power management feature is disabled by default. You can enable it with the following nova-compute configuration:

[libvirt]
cpu_power_management = true
cpu_power_management_strategy = governor

The default cpu_power_management_strategy, cpu_state, is currently unsupported. Restarting nova-compute causes all dedicated PCPUs on that host to be powered down, including ones used by instances. If the cpu_state strategy is used, those instances' CPUs become unpinned.
- Block Storage service (cinder) known issue
When you use Red Hat Ceph Storage as the back end for the Block Storage service (cinder), you might be unable to extend an attached encrypted volume.
Workaround: Detach the encrypted RBD volume, extend the volume, and then reattach it.
2.12.4. Data plane
Understand the Data plane updates introduced in RHOSO 18.0.6 before you deploy the release.
2.12.4.1. New features
Understand the new features introduced in RHOSO 18.0.6 before you deploy the release.
- Replace faulty nodes without scaling the cloud
With this update, you have the option of replacing faulty nodes without scaling the cloud.
- For pre-provisioned nodes, set a new ansibleHost for the node in the OpenStackDataPlaneNodeSet CR.
- For provisioned nodes, delete the faulty bare metal host (BMH). The OpenStackBaremetalSet CR is reconciled to provision a new available BMH and reset the deployment status of the OpenStackDataPlaneNodeSet, prompting you to create a new OpenStackDataPlaneDeployment CR for deploying on the newly provisioned node.

You must still manually clean up the removed nodes as with scale-in.
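For the pre-provisioned case, a minimal sketch of what pointing a node at a replacement host might look like in the OpenStackDataPlaneNodeSet CR. The node name and address are illustrative:

```yaml
apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneNodeSet
metadata:
  name: openstack-edpm
spec:
  nodes:
    edpm-compute-0:
      hostName: edpm-compute-0
      ansible:
        ansibleHost: 192.0.2.50   # address of the replacement node
```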
2.12.4.2. Bug fixes
Understand the bug fixes introduced in RHOSO 18.0.6 before you deploy the release.
- Documentation: Limitation added detailing maximum length of OpenStackDataPlaneNodeSet CR names
The description of the rules for naming OpenStackDataPlaneNodeSet CRs has been updated to include that the maximum length is 53 characters.
2.12.5. Networking
Understand the Networking updates introduced in RHOSO 18.0.6 before you deploy the release.
2.12.5.1. Bug fixes
Understand the bug fixes introduced in RHOSO 18.0.6 before you deploy the release.
- Fixed default anti-affinity policy in the Load-balancing service
Before this update, the octavia operator did not enable anti-affinity when creating amphorae in active-standby topology. In some cases, the virtual machines were scheduled on the same Compute node.
This release fixes this issue and ensures that anti-affinity is enabled.
- Corrected Load-balancing service provider network visibility
Before this update, end users could see the Load-balancing service provider network. Now the Load-balancing service provider network is only visible to the administrators.
- Deploy Load-balancing service in an offline cluster
Before this update, the container image URL of the octavia-rsyslog pod was hard-coded and could not be overridden, with the result that users could not deploy the Load-balancing service (octavia) in an offline cluster.
With this update, the container image URL can be overridden, and you can deploy the Load-balancing service offline.
- Fixed stability issue with Load-balancing service health manager in DCN mode
Before this update, when you ran Load-balancing service (octavia) health manager pods in DCN mode, pods were randomly restarted by the operator. With this update, the random restarts do not occur.
- Load-balancing service rsyslog endpoints no longer drop logs from remote area zones
Before this update, if you used the rsyslog service with DCN, the rsyslog pods dropped incoming rsyslog packets because the routes to the remote DCNs were missing. Now the packets are not dropped.
2.12.5.2. Technology Previews
Understand the Technology Previews updates introduced in RHOSO 18.0.6 before you deploy the release.
For information on the scope of support for Technology Preview features, see Technology Preview Features - Scope of Support.
- Amphora Vertical Scaling (Threading/CPU pinning) (Technology Preview)
With this Technology Preview you can test improved handling of Load-balancing service vertical scaling support for the amphora driver. This update uses an additional amphora image that is specifically optimized for improved vertical scaling and an additional load balancer flavor that uses multiple vCPUs. This can help improve the latency and throughput of the load balancer.
- TLS client authentication with the Load-balancing service (Technology Preview)
This update includes a Technology Preview of TLS web client communication with a RHOSO Load-balancing service (octavia) TLS-terminated HTTPS load balancer by using certificates to establish two-way TLS authentication.
- Load-balancing service (octavia) support for Distributed zones feature (Technology Preview)
This update introduces a Technology Preview of Load-balancing service (octavia) availability zones (AZs) that enable project users to create load balancers in a Distributed zones environment to increase traffic throughput and reduce latency.
- Multiple Load-balancing service VIP addresses from same network
There are use cases where a load balancer in Octavia with the Amphora provider needs multiple VIP addresses allocated from the same Neutron network. With this Technology Preview, you can test the ability to specify additional subnet_id/ip_address pairs to associate with the VIP port. This enables scenarios such as a load balancer with both IPv4 and IPv6 addresses simultaneously, or one that is exposed to both public and private subnets.
- Improved TLS cipher and protocol support (Technology Preview)
This update introduces a Technology Preview of improved Load-balancing service (octavia) support for TLS cipher and protocol. You can now override the default cipher list with values that are more appropriate for your site, as well as use additional new features such as setting cipher and protocol lists for each listener.
- IPv6 load-balancing network (Technology Preview)
You can now test a technology preview that uses IPv6 CIDRs for the load-balancing management network.
2.12.5.3. Known issues
Understand the known issues introduced in RHOSO 18.0.6 before you deploy the release.
- No logs available for FRR service
No logs are available for the FRR service, which is deployed on the data plane nodes when RHOSO is configured to use Dynamic Routing with BGP.
Workaround: To obtain FRR logs after the OpenstackDataplaneDeployment is complete, perform the following actions on all the networker and Compute nodes that are running FRR:

- Edit the /var/lib/config-data/ansible-generated/frr/etc/frr/frr.conf file and replace `log file` with `log file /var/log/frr/frr.log`.
- Edit the /var/lib/kolla/config_files/frr.json file and replace `sleep infinity` with `tail -f /var/log/frr/frr.log`.
- Restart FRR: `systemctl restart edpm_frr`.
- Legacy tripleo Networking services (neutron) after adoption
After the edpm_tripleo_cleanup task, there are still legacy tripleo Networking service (neutron) services. These services are stopped after adoption, so the RHOSO services are not affected.
Workaround: Perform the following steps to remove the legacy services manually:

- Check the tripleo neutron services list: `systemctl list-unit-files --type service`
- Remove the tripleo services from /etc/systemd/system/
- Packets silently dropped when external MTU is greater than internal MTU
RHOSO does not fragment north-south packets as expected when the external MTU is greater than the internal MTU. Instead, the ingress packets are dropped with no notification.
Also, fragmentation does not work on east/west traffic between tenant networks.
Until these issues are resolved, ensure that the external MTU settings are less than or equal to internal MTU settings, and that all MTU settings on east/west paths are equal.
Procedure:
- Set ovn_emit_need_to_frag to true.
- Set global_physnet_mtu to a size that is at least 58 bytes larger than the external network MTU, to accommodate the geneve tunnel encapsulation overhead.
- Set physical_network_mtus value pairs to describe the MTU of each physical network.
- Ensure that the MTU setting on every device on the external network is less than the internal MTU setting.
- To apply the changes to an existing router, delete the router and re-create it.
- Example
For example, suppose that the external network datacentre MTU is 1500. Enter the following neutron settings in your OpenStackControlPlane CR:

neutron:
  enabled: true
  template:
    customServiceConfig: |
      [DEFAULT]
      global_physnet_mtu=1558
      [ml2]
      physical_network_mtus = ["datacentre:1500"]
      [ovn]
      ovn_emit_need_to_frag = true

Ensure that all tenant networks that use the OVN router have the same MTU. To apply the changes to an existing router, delete the router and re-create it.
- BFD does not work as expected in RHOSO deployments with dynamic routing; workaround required
When you deploy RHOSO with Dynamic Routing with border gateway protocol (BGP), bi-directional forwarding (BFD) does not work as expected.
Workaround: Add NFT rules to the OpenStackDataPlaneNodeSet CRs. There are two ways to do this. Choose one:

- Disable BFD by setting edpm_frr_bfd to false.
- Configure edpm_nftables_user_rules to allow BFD traffic:

edpm_nftables_user_rules: |
  - rule_name: 121 frr bgp port
    rule:
      proto: tcp
      dport:
        - 179
  - rule_name: 122 frr bfd ports
    rule:
      proto: udp
      dport:
        - 3784
        - 3785
        - 4784
        - 49152
        - 49153
      state: ["UNTRACKED"]
- QoS policies not enforced when only maximum bandwidth (egress) rules present, on ports in physical networks, when the physical interface is a bond
A port connected to a physical network (VLAN or flat), with maximum-bandwidth-only QoS rules in the egress direction, uses the physical network interface to enforce the QoS rule via TC commands.
In previous versions, Neutron enforced the bandwidth limit rule using the OVN policer, regardless of the network type and rule direction.
Starting with RHOSO 18.0.6, if the environment uses a bond to connect the physical bridge to the physical network, there is no QoS enforcement. For more information, see https://issues.redhat.com/browse/OSPRH-18010.
2.12.6. Network Functions Virtualization
Understand the Network Functions Virtualization updates introduced in RHOSO 18.0.6 before you deploy the release.
2.12.6.1. Bug fixes
Understand the bug fixes introduced in RHOSO 18.0.6 before you deploy the release.
- Change os-net-config provider to nmstate
In previous RHOSO releases, Red Hat did not support nmstate as the os-net-config provider. It is now supported, but the default configuration sets the os-net-config provider to ifcfg.
The parameter is edpm_network_config_nmstate. The default value is false. Change it to true to use the nmstate provider, unless a specific limitation of the nmstate provider requires you to use the ifcfg provider.
For more information, see "The nmstate provider for os-net-config" in the guide Planning your deployment.
2.12.6.2. Technology Previews
Understand the Technology Previews updates introduced in RHOSO 18.0.6 before you deploy the release.
For information on the scope of support for Technology Preview features, see Technology Preview Features - Scope of Support.
- TSO for OVS-DPDK (Technology Preview)
RHOSO 18.0.6 (Feature Release 2) introduces a technology preview of TCP segmentation offload (TSO) for RHOSO environments with OVS-DPDK.
For more information, see OVS-DPDK with TCP segmentation offload (Technology Preview) in Deploying a network functions virtualization environment (https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/deploying_a_network_functions_virtualization_environment/plan-ovs-dpdk-deploy_rhoso-nfv#ovsdpdk-tso_plndpdk-nfv).
2.12.6.3. Deprecated functionality
Understand the deprecated functionality introduced in RHOSO 18.0.6 before you deploy the release.
Deprecated functionality will likely not be supported in future major releases of this product and is not recommended for new deployments.
- Deprecated the `edpm_ovs_dpdk_lcore_list` variable
Stop using the `edpm_ovs_dpdk_lcore_list` Ansible variable in RHOSO deployments. Previously, it was used in nodeset CR definition files to enable OVS-DPDK in data plane deployments in NFV environments. It is no longer required or supported. Its use now causes deployment errors.
2.12.6.4. Known issues
Understand the known issues introduced in RHOSO 18.0.6 before you deploy the release.
- Adoption fails when physical function is attached to a VM instance
When the physical function (PF) is attached to the instance, if `os-net-config` is re-run, `os-net-config` cannot find the SR-IOV PF in the host, and thus the deployment, update, or adoption fails.
- Cannot set `no_turbo` when using the cpu-partitioning-powersave profile
Due to an issue with setting the `no_turbo` parameter in the kernel, tuned hangs and fails when using the cpu-partitioning-powersave profile.
Workaround: Downgrade tuned to an older version as part of the deployment by adding the following configuration to `edpm_bootstrap_command`:

```yaml
edpm_bootstrap_command: |-
  ...
  dnf downgrade tuned-2.24.0
```
- Requested service cannot be found during minor update
When updating the remaining services on the data plane, a minor update from 18.0.3 to 18.0.6 fails because the `edpm_openstack_network_exporter.service` cannot be found.
Workaround: Add the telemetry service to the `servicesOverride` field in the `openstack-edpm-update-services.yaml` file before you update the `OpenStackDataPlaneService` custom resource. For example:

```yaml
apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneDeployment
metadata:
  name: edpm-deployment-ipam-update-dataplane-services
spec:
  nodeSets:
    - openstack-edpm-ipam
  servicesOverride:
    - telemetry
    - update
```
2.12.7. Control plane
Understand the Control plane updates introduced in RHOSO 18.0.6 before you deploy the release.
2.12.7.1. New features
Understand the new features introduced in RHOSO 18.0.6 before you deploy the release.
- Manage service Operator installation under single OLM bundle
The OpenStack Operator no longer installs the multiple RHOSO service Operators individually. Instead, a new initialization resource manages the installation of the service Operators under a single Operator Lifecycle Manager (OLM) bundle. For more information about the new installation method, see Installing and preparing the Operators.
- Custom environment variables for the `OpenStackClient` pod
You can set custom environment variables for the `OpenStackClient` pod.
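A minimal sketch of what this might look like, assuming the `OpenStackClient` CR exposes an `env` list; the field name and the variable shown are illustrative and not confirmed by this release note:

```yaml
# Illustrative only: the `env` field name and the variable value
# are assumptions, not confirmed by this release note.
apiVersion: client.openstack.org/v1beta1
kind: OpenStackClient
metadata:
  name: openstackclient
  namespace: openstack
spec:
  env:
    - name: OS_CLOUD
      value: default
```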
2.12.7.2. Known issues
Understand the known issues introduced in RHOSO 18.0.6 before you deploy the release.
- Control plane temporarily unavailable during minor update
During the minor update to 18.0 Feature Release 1, the RHOSO control plane temporarily becomes unavailable. API requests might fail with HTTP error codes, such as error 500. Alternatively, the API requests might succeed but the underlying life cycle operation fails. For example, a virtual machine (VM) created with the
openstack server createcommand during the minor update never reaches theACTIVEstate. The control plane outage is temporary and automatically recovers after the minor update is finished. The control plane outage does not affect the already running workload.
2.12.8. Storage
Understand the Storage updates introduced in RHOSO 18.0.6 before you deploy the release.
2.12.8.1. New features
Understand the new features introduced in RHOSO 18.0.6 before you deploy the release.
- Enhanced Block Storage volume restoration on thinly provisioned back ends
This enhancement optimizes the process of restoring Block Storage volume backups on any thinly provisioned back end. Previously, when restoring a backup on a thinly provisioned back end, the full volume size was restored instead of only restoring the portion of the volume that was used. This caused unnecessary network traffic and greatly increased the time taken by the restoration process. This enhancement ensures that when restoring a volume on a thinly provisioned back end, only the portion of the volume that was used is restored.
- Red Hat Ceph Storage 8 support
This enhancement adds support for integration with external Red Hat Ceph Storage 8. Due to known issues, not all Red Hat Ceph Storage 8 functionality is supported. For more information about these issues, see the Known Issues section.
2.12.8.2. Bug fixes
Understand the bug fixes introduced in RHOSO 18.0.6 before you deploy the release.
- ExtraMounts can process a per-instance propagation when the pod is prefixed with an arbitrary name
When `uniquePodNames` is `true`, every Cinder pod (and in general each component and service) is prefixed by a pseudo-random string. With this update, ExtraMounts can process a per-instance propagation when the pod is prefixed with an arbitrary name.
2.12.8.3. Known issues
Understand the known issues introduced in RHOSO 18.0.6 before you deploy the release.
- Multipart image upload does not work with S3 back end
If you upload multipart images with an S3 back end, you must use the import workflow.
- Red Hat Ceph Storage 8 NFS is not supported
In RHOSO 18.0.6, NFS is currently not supported when integrating with Red Hat Ceph Storage 8.
Workaround: There is no current workaround.
- Red Hat Ceph Storage 8 Object Gateway is not supported
In RHOSO 18.0.6, the Red Hat Ceph Storage Object Gateway (RGW) is currently not supported when integrating with Red Hat Ceph Storage 8.
Workaround: There is no current workaround.
2.12.9. Upgrades and updates
Understand the upgrades and updates introduced in RHOSO 18.0.6 before you deploy the release.
2.12.9.1. Known issues
Understand the known issues introduced in RHOSO 18.0.6 before you deploy the release.
- Create an instance of `openstack` during minor update
If you update your Red Hat OpenStack Services on OpenShift environment from any release before 18.0.6, you must create an instance of `openstack` after you update the `openstack-operator` to trigger the deployment of all operators. For example:

```shell
cat > openstack-init.yaml <<'EOF'
---
apiVersion: operator.openstack.org/v1beta1
kind: OpenStack
metadata:
  name: openstack
  namespace: openstack-operators
EOF
oc apply -f ./openstack-init.yaml
```
2.13. Release information RHOSO 18.0.4
Understand the updates introduced in RHOSO 18.0.4 before you deploy the release.
Review the known issues, bug fixes, and other release notes for this release of Red Hat OpenStack Services on OpenShift.
2.13.1. Advisory list
This release of Red Hat OpenStack Services on OpenShift (RHOSO) includes the following advisories:
- RHBA-2025:0435
- Release of components for RHOSO 18.0.4
- RHBA-2025:0436
- Release of containers for RHOSO 18.0.4
- RHBA-2025:0437
- Control plane Operators for RHOSO 18.0.4
- RHBA-2025:0438
- Data plane Operators for RHOSO 18.0.4
- RHSA-2025:0439
- Moderate: Red Hat OpenStack Platform 18.0.4 (openstack-ironic) security update
2.13.2. Compute
Understand the Compute updates introduced in RHOSO 18.0.4 before you deploy the release.
2.13.2.1. Bug fixes
Understand the bug fixes introduced in RHOSO 18.0.4 before you deploy the release.
- Boot instances from images with hardware architecture properties
Before this update, you could not boot an instance from an image that had `hw_architecture` or `hw_emulation_architecture` properties. With this update, you can boot instances from images that have `hw_architecture` and `hw_emulation_architecture` properties.
2.13.2.2. Known issues
Understand the known issues introduced in RHOSO 18.0.4 before you deploy the release.
- Instances with ephemeral storage on NFS share stop working after Compute service restart
Compute service (nova) instances with ephemeral storage on NFS shares stop working as soon as the containerized Compute agent service restarts on the hypervisor host. That happens because of changed permissions of `/var/lib/nova/instances`.
Workaround: Manually restore permissions to the original values and avoid the service restarts.
- Compute service power management feature disabled by default
The Compute service (nova) power management feature is disabled by default. You can enable it with the following `nova-compute` configuration:

```ini
[libvirt]
cpu_power_management = true
cpu_power_management_strategy = governor
```

The default `cpu_power_management_strategy`, `cpu_state`, is currently unsupported. Restarting nova-compute causes all dedicated PCPUs on that host to be powered down, including ones used by instances. If the `cpu_state` strategy is used, those instances' CPUs become unpinned.
- Cold migration fails for server with `swap` in flavor for shared storage like NFS
When the Compute service (nova) is enabled with shared storage, for example, NFS, cold migration fails if the instance uses the `FLAVOR_SWAP` flavor.
2.13.3. Data plane
Understand the Data plane updates introduced in RHOSO 18.0.4 before you deploy the release.
2.13.3.1. Bug fixes
Understand the bug fixes introduced in RHOSO 18.0.4 before you deploy the release.
- Ansible task writes the `registries.conf` file successfully in disconnected deployments
Before this update, writing the `registries.conf` file failed in disconnected deployments because the Ansible task attempted to use the `template` module with a raw string input as the `src`. With this update, the Ansible task writes the `registries.conf` file successfully because it uses the `ansible.builtin.copy` module with the `content` parameter.
- `iscsi-starter.service` disabled on EDPM nodes after a reboot
Before this update, the `iscsi.service` started after a reboot of EDPM nodes that run an instance with an iSCSI-backed volume, even though `iscsi.service` was not enabled in `edpm-ansible`. This issue occurred because the `iscsi-starter.service` was enabled in the EDPM node's image. With this update, the `iscsi-starter.service` is disabled on EDPM nodes to prevent the issue.
- Documentation: Specify a pool to register nodes if you have multiple Red Hat subscriptions
Before this update, an optional command to specify a pool when you registered nodes for your RHOSO deployment was missing from the procedures for creating the `OpenStackDataPlaneNodeSet` CR with pre-provisioned nodes or unprovisioned nodes. The missing command could cause a registration error when deploying the data plane or Compute nodes in RHOSO if you had multiple Red Hat subscriptions. With this update, the optional command to specify a pool if you have multiple Red Hat subscriptions is included in these procedures.
2.13.4. Hardware Provisioning
Understand the Hardware Provisioning updates introduced in RHOSO 18.0.4 before you deploy the release.
2.13.4.1. Bug fixes
Understand the bug fixes introduced in RHOSO 18.0.4 before you deploy the release.
- New `default_network_interface` parameter for Bare Metal service (ironic)
Before this update, if the Bare Metal service `network_interface` was not configured during ControlPlane deployment, RHOSO configured it as a no-op.
With this update, RHOSO sets the `default_network_interface` parameter to a default value under `customServiceConfig` / `[DEFAULT]`.
2.13.5. Networking
Understand the Networking updates introduced in RHOSO 18.0.4 before you deploy the release.
2.13.5.1. Bug fixes
Understand the bug fixes introduced in RHOSO 18.0.4 before you deploy the release.
- Added support for dynamic routing without DVR
Previously, you could not use dynamic routing on the data plane with Free Range Routing (FRR) and Border Gateway Protocol (BGP) unless you also used distributed virtual routing (DVR). Now you can use dynamic routing without enabling DVR.
- Update to latest RHOSP 17.1 version before adopting
When you adopt a source environment that is older than RHOSP 17.1.4, the workloads experience a prolonged network connectivity disruption. Make sure to update the source environment to at least RHOSP 17.1.4 before adopting.
- Corrected default value for `createDefaultLbMgmtNetwork` and `manageLbMgmtNetworks` when availability zones are defined
Before this update, `createDefaultLbMgmtNetwork` and `manageLbMgmtNetworks` were incorrectly set to `false` when availability zones were defined.
With this update, `createDefaultLbMgmtNetwork` and `manageLbMgmtNetworks` are set to `true` when availability zones are defined.
- Documentation: Set `replicas: 3` for ovndbcluster fields
Previously, CR examples throughout the RHOSO documentation did not consistently represent that to support OVN database high availability, you must set `replicas: 3` in the `ovndbcluster-nb` and `ovndbcluster-sb` fields for every control plane CR that includes the OVN spec.
With this update, all CR examples now include the replicas requirement. The following example shows an excerpt from the CR example with the added section:

```yaml
ovn:
  template:
    ovnDBCluster:
      ovndbcluster-nb:
        replicas: 3  # <<----------
        dbType: NB
        storageRequest: 10G
        networkAttachment: internalapi
      ovndbcluster-sb:
        replicas: 3  # <<----------
        dbType: SB
        storageRequest: 10G
        networkAttachment: internalapi
    ovnNorthd: {}
```
2.13.5.2. Known issues
Understand the known issues introduced in RHOSO 18.0.4 before you deploy the release.
- octavia-operator does not enable anti affinity
The Load-balancing service (octavia) does not currently configure anti-affinity settings in the Compute service (nova) to prevent Amphora VMs from being scheduled to the same Compute node.
Workaround: Add the relevant setting to the Load-balancing service by using the `customServiceConfig` parameter, as shown in the following example:

```shell
# names will be dependent on the deployment
oc patch -n openstack openstackcontrolplane openstack-galera-network-isolation --type=merge --patch '
spec:
  octavia:
    template:
      octaviaHousekeeping:
        customServiceConfig: |
          [nova]
          enable_anti_affinity = true
      octaviaWorker:
        customServiceConfig: |
          [nova]
          enable_anti_affinity = true
      octaviaHealthManager:
        customServiceConfig: |
          [nova]
          enable_anti_affinity = true
'
```
- Adoption not supported with Load-balancing service (octavia) agents on networker nodes
Adoption of deployments that have Load-balancing service agents deployed on networker nodes is currently not supported.
- Legacy tripleo Networking services (neutron) after adoption
After the `edpm_tripleo_cleanup` task, legacy tripleo Networking service (neutron) services still remain. These services are stopped after adoption, so the RHOSO services are not affected.
Workaround: Perform the following steps to remove the legacy services manually:

1. Check the tripleo neutron services list:

   ```shell
   systemctl list-unit-files --type service
   ```

2. Remove the tripleo services from `/etc/systemd/system/`.
- Packets silently dropped when external MTU is greater than internal MTU
RHOSO does not fragment north-south packets as expected when the external MTU is greater than the internal MTU. Instead, the ingress packets are dropped with no notification.
Also, fragmentation does not work on east/west traffic between tenant networks.
Until these issues are resolved, ensure that the external MTU settings are less than or equal to internal MTU settings, and that all MTU settings on east/west paths are equal.
Procedure:

- Set `ovn_emit_need_to_frag` to `true`.
- Set `global_physnet_mtu` to a size that is at least 58 bytes larger than the external network MTU, to accommodate the geneve tunnel encapsulation overhead.
- Set `physical_network_mtus` value pairs to describe the MTU of each physical network.
- Ensure that the MTU setting on every device on the external network is less than the internal MTU setting.
- To apply the changes to an existing router, delete the router and re-create it.
- Example
For example, suppose that the external network `datacentre` MTU is 1500. Enter the following neutron settings in your OpenStackControlPlane CR:

```yaml
neutron:
  enabled: true
  template:
    customServiceConfig: |
      [DEFAULT]
      global_physnet_mtu=1558
      [ml2]
      physical_network_mtus = ["datacentre:1500"]
      [ovn]
      ovn_emit_need_to_frag = true
```

- Ensure that all tenant networks that use the OVN router have the same MTU.
2.13.6. Network Functions Virtualization
Understand the Network Functions Virtualization updates introduced in RHOSO 18.0.4 before you deploy the release.
2.13.6.1. Known issues
Understand the known issues introduced in RHOSO 18.0.4 before you deploy the release.
- Do not use virtual functions (VF) for the RHOSO control plane interface
This RHOSO release does not support the use of VFs for the RHOSO control plane interface.
- Verify that the `os-net-config` provider is `ifcfg` for any production deployment
Red Hat does not currently support NMstate as the `os-net-config` provider. Ensure that you have the setting `edpm_network_config_nmstate: false`, which is the default. This ensures that your environment uses the `ifcfg` provider.
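For example, keeping the supported default explicit in a nodeset can be sketched as follows; the nodeset name is a placeholder and the `ansibleVars` nesting assumes the usual `OpenStackDataPlaneNodeSet` layout:

```yaml
# Sketch: pin the default, supported ifcfg provider explicitly.
# The nodeset name is a placeholder.
apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneNodeSet
metadata:
  name: openstack-edpm-ipam   # placeholder name
spec:
  nodeTemplate:
    ansible:
      ansibleVars:
        edpm_network_config_nmstate: false
```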
2.13.7. Control plane
Understand the Control plane updates introduced in RHOSO 18.0.4 before you deploy the release.
2.13.7.1. Known issues
Understand the known issues introduced in RHOSO 18.0.4 before you deploy the release.
- Control plane temporarily unavailable during minor update
During the minor update to 18.0 Feature Release 1, the RHOSO control plane temporarily becomes unavailable. API requests might fail with HTTP error codes, such as error 500. Alternatively, the API requests might succeed but the underlying life cycle operation fails. For example, a virtual machine (VM) created with the
openstack server createcommand during the minor update never reaches theACTIVEstate. The control plane outage is temporary and automatically recovers after the minor update is finished. The control plane outage does not affect the already running workload.
2.13.8. Security and hardening
Understand the Security and hardening updates introduced in RHOSO 18.0.4 before you deploy the release.
2.13.8.1. Bug fixes
Understand the bug fixes introduced in RHOSO 18.0.4 before you deploy the release.
- Custom configuration support for Key Manager (barbican) services
Before this update, an issue in `common_types.go` prevented the `customServiceConfig` field in the custom resource definition (CRD) for the Key Manager service from being applied correctly. With this update, the issue has been resolved, allowing custom configuration to be correctly generated and applied.
2.13.9. Storage
Understand the Storage updates introduced in RHOSO 18.0.4 before you deploy the release.
2.13.9.1. Known issues
Understand the known issues introduced in RHOSO 18.0.4 before you deploy the release.
- `extraMounts` propagation to instance does not work when `uniquePodNames` is `true`
When `uniquePodNames` is `true`, every Cinder pod (and in general each component and service) is prefixed by a pseudo-random string. This affects the per-instance propagation, because the legacy method, based on `strings.TrimPrefix`, is not valid anymore. In a DCN deployment, propagate secrets to pods by matching the instance AZ name.
Example 1 results in pods with names that match az0 getting the secret ceph-conf-az-0, pods with names that match az1 getting the secret ceph-conf-az-1, and so on. Example 1 works for Glance pods but only works for Cinder pods if `uniquePodNames` is `false`.
Workaround: Set `uniquePodNames` to `false` as shown in Example 2, until this issue is resolved. The `uniquePodNames` setting is only required if the storage back end uses NFS.
- Example 1

```yaml
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
spec:
  extraMounts:
    - extraVol:
        - extraVolType: Ceph
          mounts:
            - mountPath: /etc/ceph
              name: ceph0
              readOnly: true
          propagation:
            - az0
          volumes:
            - name: ceph0
              projected:
                sources:
                  - secret:
                      name: ceph-conf-az-0
        - extraVolType: Ceph
          mounts:
            - mountPath: /etc/ceph
              name: ceph1
              readOnly: true
          propagation:
            - az1
          volumes:
            - name: ceph1
              projected:
                sources:
                  - secret:
                      name: ceph-conf-az-1
```

- Example 2

```yaml
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
<...>
spec:
  cinder:
    uniquePodNames: false # workaround https://issues.redhat.com/browse/OSPRH-11240
    enabled: true
    apiOverride:
      <...>
```
2.14. Release information RHOSO 18.0.3
Understand the updates introduced in RHOSO 18.0.3 before you deploy the release.
Review the known issues, bug fixes, and other release notes for this release of Red Hat OpenStack Services on OpenShift.
2.14.1. Advisory list
This release of Red Hat OpenStack Services on OpenShift (RHOSO) includes the following advisories:
- RHBA-2024:9480
- Release of components for RHOSO 18.0.3 (Feature Release 1)
- RHSA-2024:9481
- Moderate: Red Hat OpenStack Platform 18.0.3 (python-django) security update
- RHBA-2024:9482
- Release of containers for RHOSO 18.0.3 (Feature Release 1)
- RHBA-2024:9483
- Data plane Operators for RHOSO 18.0.3 (Feature Release 1)
- RHBA-2024:9484
- Release of operators for RHOSO 18.0.3 (Feature Release 1)
- RHSA-2024:9485
- Important: Control plane Operators for RHOSO 18.0.3 (Feature Release 1) security update
- RHBA-2024:9486
- Control plane Operators for RHOSO 18.0.3 (Feature Release 1)
2.14.2. Observability
Understand the Observability updates introduced in RHOSO 18.0.3 before you deploy the release.
2.14.2.1. New features
Understand the new features introduced in RHOSO 18.0.3 before you deploy the release.
- RabbitMQ metrics now in Prometheus
With this update, RabbitMQ metrics are collected and stored in Prometheus. A new dashboard for displaying these metrics was added.
- Autoscaling improvements
Autoscaling has been updated to use the `server_group` metadata. This improves the stability of the autoscaling feature. For more information, see Autoscaling for instances.
2.14.2.2. Technology Previews
Understand the Technology Previews updates introduced in RHOSO 18.0.3 before you deploy the release.
For information on the scope of support for Technology Preview features, see Technology Preview Features - Scope of Support.
- VM power usage monitoring (Technology Preview)
With the integration of the Kepler component, you can expose the power usage of VM instances in a dashboard.
2.14.3. Compute
Understand the Compute updates introduced in RHOSO 18.0.3 before you deploy the release.
2.14.3.1. New features
Understand the new features introduced in RHOSO 18.0.3 before you deploy the release.
- vGPUs enablement
This update introduces enhancements for mdev and vGPU.
2.14.3.2. Bug fixes
Understand the bug fixes introduced in RHOSO 18.0.3 before you deploy the release.
- NUMA resource tracking works correctly
With this release, a bug that causes NUMA resource tracking issues has been fixed. Previously, Libvirt reported all powered down CPUs on NUMA node 0 instead of on the correct NUMA node. Now, Nova caches the correct CPU topology before powering down any CPUs, fixing the resource tracking issues.
2.14.3.3. Known issues
Understand the known issues introduced in RHOSO 18.0.3 before you deploy the release.
- Setting `hw_architecture` or `architecture` on an Image service (glance) image does not work as expected
In RHOSO 18.0, the image metadata prefilter is enabled by default. RHOSO does not support emulation of non-native architectures. As part of the introduction of emulation support upstream, the image metadata prefilter was enhanced to support the scheduling of instances based on the declared VM architecture, for example, `hw_architecture=x86_64`.
When nova was enhanced to support emulating non-native architectures by using image properties, a bug was introduced, because the native architecture was not reported as a trait by the virt driver.
Therefore, by default, support for setting `hw_architecture` or `architecture` on an image was rendered inoperable.
Workaround: To mitigate this bug, perform one of the following tasks:

- Unset the `architecture`/`hw_architecture` image property. RHOSO supports only one architecture, x86_64. There is no valid use case that requires this to be set for an RHOSO cloud, so all hosts will be x86_64.
- Disable the image metadata prefilter in the `customServiceConfig` section of the nova scheduler:

  ```ini
  [scheduler]
  image_metadata_prefilter=false
  ```
- Instances with ephemeral storage on NFS share stop working after Compute service restart
Compute service (nova) instances with ephemeral storage on NFS shares stop working as soon as the containerized Compute agent service restarts on the hypervisor host. That happens because of changed permissions of /var/lib/nova/instances.
Workaround: Manually restore permissions to the original values and avoid the service restarts.
- Compute service power management feature disabled by default
The Compute service (nova) power management feature is disabled by default. You can enable it with the following `nova-compute` configuration:

```ini
[libvirt]
cpu_power_management = true
cpu_power_management_strategy = governor
```

The default `cpu_power_management_strategy`, `cpu_state`, is currently unsupported. Restarting nova-compute causes all dedicated PCPUs on that host to be powered down, including ones used by instances. If the `cpu_state` strategy is used, those instances' CPUs become unpinned.
2.14.4. Data plane
Understand the Data plane updates introduced in RHOSO 18.0.3 before you deploy the release.
2.14.4.1. New features
Understand the new features introduced in RHOSO 18.0.3 before you deploy the release.
- OpenStackAnsibleEE custom resource replaced with functionality in openstack-operator
Enhancement: The OpenStackAnsibleEE custom resource has been removed along with the openstack-ansibleee-operator. This functionality has been integrated into the openstack-operator to allow the direct creation of Kubernetes jobs without the unnecessary abstraction provided by the additional operator and associated custom resource.
Reason: The additional abstraction was unnecessary. This change reduces the amount of code that we need to maintain, along with reducing the number of CRDs and operators running in the cluster.
Result: Users can expect that there will no longer be any OpenStackAnsibleEE resources created when they deploy dataplane nodes. Instead, they will just see Kubernetes Jobs.
Existing OpenStackAnsibleEE resources will remain in the cluster for posterity, or if users no longer require them for historical reference, they can be deleted. Documentation is provided to clean up unnecessary resources and operators.
2.14.5. Networking
Understand the Networking updates introduced in RHOSO 18.0.3 before you deploy the release.
2.14.5.1. New features
Understand the new features introduced in RHOSO 18.0.3 before you deploy the release.
- Dynamic routing on data plane with FRR and BGP
This update introduces support for Free Range Routing (FRR) and Border Gateway Protocol (BGP) to provide dynamic routing capabilities on the RHOSO data plane.
Limitations:
- If you use dynamic routing, you must also use distributed virtual routing (DVR).
- If you use dynamic routing, you must also use dedicated networker nodes.
You cannot use dynamic routing in an IPv6 deployment or a deployment that uses the Load-balancing service (octavia).
2.14.5.2. Technology Previews
Understand the Technology Previews updates introduced in RHOSO 18.0.3 before you deploy the release.
For information on the scope of support for Technology Preview features, see Technology Preview Features - Scope of Support.
- Custom ML2 mechanism driver and SDN back end support (Technology Preview)
This update introduces a Technology Preview of the ability to integrate the Networking service (neutron) with a custom ML2 mechanism driver and software defined networking (SDN) back end components, instead of the default OVN mechanism driver and back end components.
2.14.5.3. Known issues
Understand the known issues introduced in RHOSO 18.0.3 before you deploy the release.
- Update to latest RHOSP 17.1 version before adopting
When you adopt a source environment that is older than RHOSP 17.1.4, the workloads experience a prolonged network connectivity disruption. Make sure to update the source environment to at least RHOSP 17.1.4 before adopting.
- octavia-operator does not enable anti affinity
The Load-balancing service (octavia) does not currently configure anti-affinity settings in the Compute service (nova) to prevent Amphora VMs from being scheduled to the same Compute node.
Workaround: Add the relevant setting to the Load-balancing service by using the `customServiceConfig` parameter, as shown in the following example:
- Example
```shell
# names will be dependent on the deployment
oc patch -n openstack openstackcontrolplane openstack-galera-network-isolation --type=merge --patch '
spec:
  octavia:
    template:
      octaviaHousekeeping:
        customServiceConfig: |
          [nova]
          enable_anti_affinity = true
      octaviaWorker:
        customServiceConfig: |
          [nova]
          enable_anti_affinity = true
      octaviaHealthManager:
        customServiceConfig: |
          [nova]
          enable_anti_affinity = true
'
```
- Adoption not supported with Load-balancing service (octavia) agents on networker nodes
Adoption of deployments that have Load-balancing service agents deployed on networker nodes is currently not supported.
- `createDefaultLbMgmtNetwork` and `manageLbMgmtNetworks` set to `false` when availability zones are defined
When setting a list of availability zones in the Octavia CR (in `spec.lbMgmtNetwork.availabilityZones`), the default values of the `spec.lbMgmtNetwork.createDefaultLbMgmtNetwork` and `spec.lbMgmtNetwork.manageLbMgmtNetworks` settings are incorrectly reset to `false`.
Workaround: When setting `availabilityZones` to a non-empty list in `spec.lbMgmtNetwork`, explicitly set `createDefaultLbMgmtNetwork` and `manageLbMgmtNetworks` to `true`.
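The workaround can be sketched as the following Octavia CR excerpt. The `apiVersion`, CR name, and AZ names are placeholders; only the `spec.lbMgmtNetwork` paths come from this note:

```yaml
# Sketch: when availabilityZones is non-empty, set both flags
# explicitly to true. Names and AZ values are placeholders.
apiVersion: octavia.openstack.org/v1beta1
kind: Octavia
metadata:
  name: octavia
  namespace: openstack
spec:
  lbMgmtNetwork:
    availabilityZones:
      - az0
      - az1
    createDefaultLbMgmtNetwork: true
    manageLbMgmtNetworks: true
```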
- Adoption of combined Controller/Networker nodes not verified
Red Hat has not verified a process for adoption of a RHOSP 17.1 environment where Controller and Networker roles are composed together on Controller nodes. If your RHOSP 17.1 environment does use combined Controller/Networker roles on the Controller nodes, the documented adoption process will not produce the expected results.
Adoption of RHOSP 17.1 environments that use dedicated Networker nodes has been verified to work as documented.
- Legacy tripleo Networking service (neutron) services remain after adoption
After the edpm_tripleo_cleanup task runs, legacy tripleo Networking service (neutron) services remain. These services are stopped after adoption, so the RHOSO services are not affected.
Workaround: Perform the following steps to remove the legacy services manually:
- Check the list of tripleo neutron services:
systemctl list-unit-files --type service
- Remove the tripleo service unit files from /etc/systemd/system/
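The steps above might look like the following command sketch on a data plane node. The unit name tripleo_neutron_dhcp.service is a hypothetical example; verify the actual unit names on your nodes before removing anything:

```bash
# List remaining tripleo service units
systemctl list-unit-files --type service | grep tripleo

# For each legacy unit found (example name shown), disable it,
# remove its unit file, and reload systemd
systemctl disable tripleo_neutron_dhcp.service
rm /etc/systemd/system/tripleo_neutron_dhcp.service
systemctl daemon-reload
```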
2.14.6. Network Functions Virtualization
Understand the Network Functions Virtualization updates introduced in RHOSO 18.0.3 before you deploy the release.
2.14.6.1. Known issues
Understand the known issues introduced in RHOSO 18.0.3 before you deploy the release.
- Do not use virtual functions (VF) for the RHOSO control plane interface
This RHOSO release does not support use of VFs for the RHOSO control plane interface.
- Verify that the os-net-config provider is ifcfg for any production deployment
Red Hat does not currently support NMstate as the os-net-config provider.
Ensure that you have the setting edpm_network_config_nmstate: false, which is the default. This ensures that your environment uses the ifcfg provider.
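As a minimal sketch (surrounding CR fields abbreviated), the setting lives in the ansibleVars of the OpenStackDataPlaneNodeSet:

```yaml
# Illustrative OpenStackDataPlaneNodeSet fragment
apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneNodeSet
spec:
  nodeTemplate:
    ansibleVars:
      edpm_network_config_nmstate: false   # default; keeps the ifcfg provider
```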
2.14.7. Control plane
Understand the Control plane updates introduced in RHOSO 18.0.3 before you deploy the release.
2.14.7.1. Known issues
Understand the known issues introduced in RHOSO 18.0.3 before you deploy the release.
- Control plane temporarily unavailable during minor update
During the minor update to 18.0 Feature Release 1, the Red Hat OpenStack Platform control plane temporarily becomes unavailable. API requests might fail with HTTP error codes, such as error 500. Alternatively, the API requests might succeed but the underlying life cycle operation fails. For example, a virtual machine (VM) created with the openstack server create command during the minor update never reaches the ACTIVE state. The control plane outage is temporary and automatically recovers after the minor update is finished. The control plane outage does not affect already running workloads.
2.14.8. High availability
Understand the High availability updates introduced in RHOSO 18.0.3 before you deploy the release.
2.14.8.1. Technology Previews
Understand the Technology Previews updates introduced in RHOSO 18.0.3 before you deploy the release.
For information on the scope of support for Technology Preview features, see Technology Preview Features - Scope of Support.
- Instance high availability
RHOSO 18.0.3 (Feature Release 1) introduces a technology preview of instance high availability (instance HA). With instance HA, RHOSO can automatically evacuate and re-create instances on a different Compute node when a Compute node fails.
To use the instance HA technology preview in a test environment, see https://access.redhat.com/articles/7094761.
Do not use this technology preview in a production environment.
2.14.8.2. Known issues
Understand the known issues introduced in RHOSO 18.0.3 before you deploy the release.
- Possible database error before adoption
You might see a database error for the system table mysql.proc if you run mysqlcheck before adopting the RHOSP database:

[...]
mysql.plugin                OK
mysql.proc                  Needs upgrade
mysql.procs_priv            OK
[...]

This error message is harmless and results from a system table's redo log that was not replicated correctly when the Galera cluster was bootstrapped.
Workaround: You can remove the error by repairing the mysql.proc system table:
- Example
oc run mariadb-client ${MARIADB_CLIENT_ANNOTATIONS} -q --image ${MARIADB_IMAGE} -i --rm --restart=Never -- \
  mysql -h $SOURCE_MARIADB_IP -u root -p"$SOURCE_DB_ROOT_PASSWORD" -e "repair table mysql.proc;"
- Example output
+------------+--------+----------+---------------------------------+
| Table      | Op     | Msg_type | Msg_text                        |
+------------+--------+----------+---------------------------------+
| mysql.proc | repair | info     | Running zerofill on moved table |
| mysql.proc | repair | status   | OK                              |
+------------+--------+----------+---------------------------------+
The table and its redo log are fixed and replicated across all Galera nodes. Re-run mysqlcheck and continue the adoption procedure.
2.14.9. Storage
Understand the Storage updates introduced in RHOSO 18.0.3 before you deploy the release.
2.14.9.1. New features
Understand the new features introduced in RHOSO 18.0.3 before you deploy the release.
- Decommission a manilaShare back end
You can now decommission a manilaShare back end from RHOSO. When you delete a manila-share, a clean-up job runs to clean up the service list for the Shared File Systems service (manila). The output of the openstack share pool list command does not reflect storage pool changes. To update and display the latest statistics, you must restart the scheduler service. Perform the restart during scheduled downtime because it causes a minor disruption.
- Rebuilding volume-backed server
This release adds the support to rebuild a volume-backed server with the same or different image.
- Migrate director-deployed Ceph cluster to external Ceph cluster
With this update, after adoption from RHOSP 17.1 to a RHOSO 18.0 data plane, you can migrate a director-deployed Ceph cluster and turn it into an external Ceph cluster. The Ceph daemons deployed on the Controller nodes are migrated to a set of target nodes.
- Shared File Systems service (manila) support for VAST Data Platform
The Shared File Systems service now includes a storage driver to support VAST Data Platform. The driver allows provisioning and management of NFS shares and point-in-time backups through snapshots.
- Block Storage service (cinder) volume deletion
With this release, the Block Storage service RBD driver takes advantage of recent Ceph developments to allow RBD volumes to meet normal volume deletion expectations.
In previous releases, when the Block Storage service used an RBD (Ceph) volume back end, it was not always possible to delete a volume.
2.14.9.2. Known issues
Understand the known issues introduced in RHOSO 18.0.3 before you deploy the release.
- extraMounts propagation to instance does not work when uniquePodNames is true
When uniquePodNames is true, every Cinder pod (and in general each component and service) is prefixed by a pseudo-random string. This affects the per-instance propagation, because the legacy method, based on strings.TrimPrefix, is no longer valid.
In a DCN deployment, Red Hat recommends propagating secrets to pods by matching the instance AZ name.
Example 1 results in pods whose names match az0 getting the secret ceph-conf-az-0, pods whose names match az1 getting the secret ceph-conf-az-1, and so on. Example 1 works for Glance pods but only works for Cinder pods if uniquePodNames is false.
Workaround: Set uniquePodNames to false as shown in Example 2, until this bug is resolved. The uniquePodNames setting is only needed if the storage back end uses NFS.
- Example 1
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
spec:
  extraMounts:
    - extraVol:
        - extraVolType: Ceph
          mounts:
            - mountPath: /etc/ceph
              name: ceph0
              readOnly: true
          propagation:
            - az0
          volumes:
            - name: ceph0
              projected:
                sources:
                  - secret:
                      name: ceph-conf-az-0
        - extraVolType: Ceph
          mounts:
            - mountPath: /etc/ceph
              name: ceph1
              readOnly: true
          propagation:
            - az1
          volumes:
            - name: ceph1
              projected:
                sources:
                  - secret:
                      name: ceph-conf-az-1
- Example 2
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
<...>
spec:
  cinder:
    uniquePodNames: false # workaround https://issues.redhat.com/browse/OSPRH-11240
    enabled: true
    apiOverride:
      <...>
2.14.10. Upgrades and updates
Understand the upgrade and update changes introduced in RHOSO 18.0.3 before you deploy the release.
2.14.10.1. New features
Understand the new features introduced in RHOSO 18.0.3 before you deploy the release.
- os-diff tool identifies differences between source 17.1 and adopted RHOSO environments
RHOSO ships the os-diff tool, which can help the operator find differences between the source RHOSP 17.1 environment configuration and the adopted RHOSO environment configuration.
- Baremetal adoption
You can now adopt baremetal RHOSP 17.1 environments into RHOSO environments.
- Adoption roll-back
You can now roll back a failed adoption of a RHOSP 17.1 control plane.
2.14.10.2. Bug fixes
Understand the bug fixes introduced in RHOSO 18.0.3 before you deploy the release.
- Erroneous error messages no longer generated
Previously, the Keystone services and endpoints cleanup step in the adoption procedure generated false errors when some services were not deployed in the source cloud. These false errors are no longer generated.
2.15. Release information RHOSO 18.0.2
Understand the updates introduced in RHOSO 18.0.2 before you deploy the release.
Review the known issues, bug fixes, and other release notes for this release of Red Hat OpenStack Services on OpenShift.
2.15.1. Advisory list
This release of Red Hat OpenStack Services on OpenShift (RHOSO) includes the following advisories:
- RHBA-2024:8151
- Release of containers for RHOSO 18.0.2
- RHBA-2024:8152
- Release of components for RHOSO 18.0.2
- RHBA-2024:8153
- Control plane Operators for RHOSO 18.0.2
- RHBA-2024:8154
- Data plane Operators for RHOSO 18.0.2
- RHBA-2024:8155
- Release of components for RHOSO 18.0.2
2.15.2. Compute
Understand the Compute updates introduced in RHOSO 18.0.2 before you deploy the release.
2.15.2.1. Bug fixes
Understand the bug fixes introduced in RHOSO 18.0.2 before you deploy the release.
- Fix for instances created before OpenStack Victoria
In OpenStack Victoria, the instance_numa_topology object was extended to enable mixed CPUs (pinned and unpinned CPUs) in the same instance. Object conversion code was added to handle upgrades but did not account for flavors that have either hw:mem_page_size or hw:numa_nodes set with hw:cpu_policy not set to dedicated. As a result, instances created before the Victoria release could not be started after an upgrade to Victoria.
With this update, non-pinned NUMA instances can be managed after an FFU from 16.2.
2.15.2.2. Known issues
Understand the known issues introduced in RHOSO 18.0.2 before you deploy the release.
- Setting hw_architecture or architecture on an Image service (glance) image does not work as expected
In RHOSO 18.0, the image metadata prefilter is enabled by default. RHOSO does not support emulation of non-native architectures. As part of the introduction of emulation support upstream, the image metadata prefilter was enhanced to support the scheduling of instances based on the declared VM architecture, for example, hw_architecture=x86_64.
When nova was enhanced to support emulating non-native architectures by using image properties, a bug was introduced, because the native architecture was not reported as a trait by the virt driver.
Therefore, by default, support for setting hw_architecture or architecture on an image was rendered inoperable.
Workaround: To mitigate this bug, perform one of the following tasks:
- Unset the architecture/hw_architecture image property. RHOSO supports only one architecture, x86_64. There is no valid use case that requires this property to be set for an RHOSO cloud, because all hosts are x86_64.
- Disable the image metadata prefilter in the customServiceConfig section of the nova scheduler:
[scheduler]
image_metadata_prefilter = false
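The prefilter workaround might be applied in the control plane CR as in the following sketch; the schedulerServiceTemplate field name is an assumption about the nova-operator CRD and should be verified against your installed version:

```yaml
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
spec:
  nova:
    template:
      schedulerServiceTemplate:        # field name assumed; verify against your CRD
        customServiceConfig: |
          [scheduler]
          image_metadata_prefilter = false
```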
- Compute service power management feature disabled by default
The Compute service (nova) power management feature is disabled by default. You can enable it with the following nova-compute configuration:
[libvirt]
cpu_power_management = true
cpu_power_management_strategy = governor
The default cpu_power_management_strategy, cpu_state, is not supported at the moment due to a bug that causes NUMA resource tracking issues: all disabled CPUs are reported on NUMA node 0 instead of on the correct NUMA node.
- QEMU process failure
A paused instance that uses local storage cannot be live migrated more than once. The second migration causes the QEMU process to crash, and nova puts the instance into the ERROR state.
Workaround: If feasible, unpause the instance temporarily, then pause it again before the second live migration.
It is not always feasible to unpause an instance. For example, suppose the instance uses a multi-attach cinder volume, and pause is used to limit access to that volume to a single instance while the other is kept in the paused state. In this case, unpausing the instance is not a feasible workaround.
2.15.3. Data plane
Understand the Data plane updates introduced in RHOSO 18.0.2 before you deploy the release.
2.15.3.1. Bug fixes
Understand the bug fixes introduced in RHOSO 18.0.2 before you deploy the release.
- The value for edpm_kernel_hugepages is reliably set on the kernel command line
Before this update, the value for edpm_kernel_hugepages could be missing from the kernel command line due to an error in an Ansible role that configures it. With this update, this problem is resolved, and no workarounds are required.
Jira: OSPRH-10007
2.15.4. Networking
Understand the Networking updates introduced in RHOSO 18.0.2 before you deploy the release.
2.15.4.1. Bug fixes
Understand the bug fixes introduced in RHOSO 18.0.2 before you deploy the release.
- Metadata rate-limiting feature
This update fixes a bug that prevented successful use of metadata rate-limiting. Metadata rate limiting is now available.
Jira: OSPRH-9569
2.15.4.2. Known issues
Understand the known issues introduced in RHOSO 18.0.2 before you deploy the release.
- Router deletion problem and workaround
After an update to RHOSO 18.0.2, you cannot delete a pre-existing router as expected.
The following error is displayed in the CLI:
Internal Server Error: The server has either erred or is incapable of performing the requested operation.
Also, the Neutron API logs include the following exception message:
Could not find a service provider that supports distributed=False and ha=False
Workaround: Manually create a database register. In a SQL CLI:
$ use ovs_neutron;
$ insert into providerresourceassociations (provider_name, resource_id) values ("ovn", "<router_id>");
Jira: OSPRH-10537
2.15.5. Network Functions Virtualization
Understand the Network Functions Virtualization updates introduced in RHOSO 18.0.2 before you deploy the release.
2.15.5.1. Known issues
Understand the known issues introduced in RHOSO 18.0.2 before you deploy the release.
- Do not use virtual functions (VF) for the RHOSO control plane interface
This RHOSO release does not support use of VFs for the RHOSO control plane interface.
2.15.6. Storage
Understand the Storage updates introduced in RHOSO 18.0.2 before you deploy the release.
2.15.6.1. Known issues
Understand the known issues introduced in RHOSO 18.0.2 before you deploy the release.
- OpenStack command output does not account for storage pool changes in the Shared File Systems service (manila)
The openstack share pool list command output does not account for storage pool changes, for example, changes to pool characteristics on back end storage systems, or removal of existing pools from the deployment. Provisioning operations are not affected by this issue.
Workaround: Restart the scheduler service to reflect the latest statistics. Perform the restart during scheduled downtime because it causes a minor disruption.
2.16. Release information RHOSO 18.0.1
Understand the updates introduced in RHOSO 18.0.1 before you deploy the release.
Review the known issues, bug fixes, and other release notes for this release of Red Hat OpenStack Services on OpenShift.
2.16.1. Advisory list
This release of Red Hat OpenStack Services on OpenShift (RHOSO) includes the following advisories:
- RHBA-2024:6773
- Release of containers for RHOSO 18.0.1
- RHBA-2024:6774
- Release of components for RHOSO 18.0.1
- RHBA-2024:6775
- Moderate: Red Hat OpenStack Platform 18.0 (python-webob) security update
- RHBA-2024:6776
- Control plane Operators for RHOSO 18.0.1
- RHBA-2024:6777
- Data plane Operators for RHOSO 18.0.1
- RHBA-2024:6778
- Data plane Operators for RHOSO 18.0.1
2.16.2. Compute
Understand the Compute updates introduced in RHOSO 18.0.1 before you deploy the release.
2.16.2.1. Known issues
Understand the known issues introduced in RHOSO 18.0.1 before you deploy the release.
- Setting hw_architecture or architecture on an Image service (glance) image does not work as expected
In RHOSO 18.0, the image metadata prefilter is enabled by default. RHOSO does not support emulation of non-native architectures. As part of the introduction of emulation support upstream, the image metadata prefilter was enhanced to support the scheduling of instances based on the declared VM architecture, for example, hw_architecture=x86_64.
When nova was enhanced to support emulating non-native architectures by using image properties, a bug was introduced, because the native architecture was not reported as a trait by the virt driver.
Therefore, by default, support for setting hw_architecture or architecture on an image was rendered inoperable.
Workaround: To mitigate this bug, perform one of the following tasks:
- Unset the architecture/hw_architecture image property. RHOSO supports only one architecture, x86_64. There is no valid use case that requires this property to be set for an RHOSO cloud, because all hosts are x86_64.
- Disable the image metadata prefilter in the customServiceConfig section of the nova scheduler:
[scheduler]
image_metadata_prefilter = false
- Compute service power management feature disabled by default
The Compute service (nova) power management feature is disabled by default. You can enable it with the following nova-compute configuration:
[libvirt]
cpu_power_management = true
cpu_power_management_strategy = governor
The default cpu_power_management_strategy, cpu_state, is not supported at the moment due to a bug that causes NUMA resource tracking issues: all disabled CPUs are reported on NUMA node 0 instead of on the correct NUMA node.
- QEMU process failure
A paused instance that uses local storage cannot be live migrated more than once. The second migration causes the QEMU process to crash, and nova puts the instance into the ERROR state.
Workaround: If feasible, unpause the instance temporarily, then pause it again before the second live migration.
It is not always feasible to unpause an instance. For example, suppose the instance uses a multi-attach cinder volume, and pause is used to limit access to that volume to a single instance while the other is kept in the paused state. In this case, unpausing the instance is not a feasible workaround.
2.16.3. Data plane
Understand the Data plane updates introduced in RHOSO 18.0.1 before you deploy the release.
2.16.3.1. Bug fixes
Understand the bug fixes introduced in RHOSO 18.0.1 before you deploy the release.
- Using the download-cache service no longer prevents Podman from pulling images for data plane deployment
Before this bug fix, if you included the download-cache service in spec.services of the OpenStackDataPlaneNodeSet, a bug prevented Podman from pulling container images that are required by the data plane deployment.
With this bug fix, you can include the download-cache service in spec.services of the OpenStackDataPlaneNodeSet, and doing so does not prevent Podman from pulling the required container images.
Jira: OSPRH-9500
2.16.3.2. Known issues
Understand the known issues introduced in RHOSO 18.0.1 before you deploy the release.
- Set the edpm_kernel_args variable if you configure the Ansible variable edpm_kernel_hugepages
To configure the Ansible variable edpm_kernel_hugepages in the ansibleVars section of an OpenStackDataPlaneNodeSet CR, you must also set the edpm_kernel_args variable. If you do not need to configure edpm_kernel_args with a particular value, set it to an empty string: edpm_kernel_args: ""
Jira: OSPRH-10007
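Combining the two variables might look like this minimal sketch; the edpm_kernel_hugepages value shown is a placeholder, not a working configuration:

```yaml
# Illustrative OpenStackDataPlaneNodeSet fragment
apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneNodeSet
spec:
  nodeTemplate:
    ansibleVars:
      edpm_kernel_args: ""       # must be set, even if empty
      edpm_kernel_hugepages: {}  # placeholder; set per the edpm-ansible role documentation
```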
2.16.4. Networking
Understand the Networking updates introduced in RHOSO 18.0.1 before you deploy the release.
2.16.4.1. New features
Understand the new features introduced in RHOSO 18.0.1 before you deploy the release.
- Support for security group logging on Compute nodes
With this update, when security group logging is enabled, RHOSO writes logs to the data plane node that hosts the project instance. In the /var/log/messages file, each log entry contains the string acl_log.
2.16.4.2. Bug fixes
Understand the bug fixes introduced in RHOSO 18.0.1 before you deploy the release.
- Fixed delay between the oc patch command and update of OVN databases
Before this update, custom configuration settings applied with the oc patch command did not affect the Networking service (neutron) OVN databases until 10 minutes passed.
This update eliminates the delay.
- MAC_Binding aging functionality added back in RHOSO 18.0.1
The MAC_Binding aging functionality that was added in RHOSP 17.1.2 was missing from 18.0 GA. This update to RHOSO 18.0.1 adds it back.
2.16.4.3. Known issues
Understand the known issues introduced in RHOSO 18.0.1 before you deploy the release.
- Delayed OVN database update after oc patch command
Any custom configuration settings applied with the oc patch command do not affect the Networking service OVN databases until 10 minutes have passed.
Workaround: After you replace old pods by using the oc patch command, use the oc delete pod command to delete the new neutron pods.
The pod deletion forces a new configuration to be set without the delay issue.
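The workaround might look like the following command sketch; pod names are examples, so list the actual neutron pods in your deployment first:

```bash
# Find the neutron pods that were recreated after the oc patch
oc get pods -n openstack | grep neutron

# Delete them so that they restart with the new configuration immediately
oc delete pod -n openstack <neutron-pod-name>
```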
- Metadata rate-limiting feature
Metadata rate-limiting is not available in RHOSO 18.0.1. A fix is in progress.
Jira: OSPRH-9569
2.16.5. Network Functions Virtualization
Understand the Network Functions Virtualization updates introduced in RHOSO 18.0.1 before you deploy the release.
2.16.5.1. Bug fixes
Understand the bug fixes introduced in RHOSO 18.0.1 before you deploy the release.
- DPDK bonds are now validated in os-net-config
Previously, when OVS or DPDK bonds were configured with a single port, no error was reported despite the OVS bridge not being in the right state. With this update, os-net-config reports an error if the bond has a single interface.
2.16.5.2. Known issues
Understand the known issues introduced in RHOSO 18.0.1 before you deploy the release.
- Do not use virtual functions (VF) for the RHOSO control plane interface
This RHOSO release does not support use of VFs for the RHOSO control plane interface.
2.16.6. Storage
Understand the Storage updates introduced in RHOSO 18.0.1 before you deploy the release.
2.16.6.1. Bug fixes
Understand the bug fixes introduced in RHOSO 18.0.1 before you deploy the release.
- Image import no longer remains in importing state after conversion with ISO image format
Before this update, when you used image conversion with the ISO image format, the image import operation remained in the importing state.
Now the image import operation does not remain in the importing state.
2.17. Release information RHOSO 18.0 GA
Understand the updates introduced in RHOSO 18.0.0 before you deploy the release.
Review the known issues, bug fixes, and other release notes for this release of Red Hat OpenStack Services on OpenShift.
2.17.1. Advisory list
This release of Red Hat OpenStack Services on OpenShift (RHOSO) includes the following advisories:
- RHEA-2024:5245
- Release of components for RHOSO 18.0
- RHEA-2024:5246
- Release of containers for RHOSO 18.0
- RHEA-2024:5247
- Data plane Operators for RHOSO 18.0
- RHEA-2024:5248
- Control plane Operators for RHOSO 18.0
- RHEA-2024:5249
- Release of components for RHOSO 18.0
2.17.2. Observability
Understand the Observability updates introduced in RHOSO 18.0.0 before you deploy the release.
2.17.2.1. New features
Understand the new features introduced in RHOSO 18.0.0 before you deploy the release.
- Deploy metric storage with Telemetry Operator
The Telemetry Operator now supports deploying and operating Prometheus by using the cluster-observability-operator through a MonitoringStack resource.
- Expanded interaction with metrics and alarms
You can now use the openstack metric and openstack alarm commands in the OpenStack CLI to interact with metrics and alarms. These commands are useful for troubleshooting.
- Ceilometer uses TCP publisher to expose data for Prometheus
Ceilometer can now use the TCP publisher to publish metric data to sg-core, which exposes the data for scraping by Prometheus.
- Prometheus replaces Gnocchi for metrics storage and metrics-based autoscaling
In RHOSO 18.0, Prometheus replaces Gnocchi for metrics and metrics-based autoscaling.
- Compute node log collection
RHOSO uses the Cluster Logging Operator (
cluster-logging-operator) to collect and centrally store logs from OpenStack Compute nodes.
- Graphing dashboards for OpenStack metrics
The Red Hat OpenShift Container Platform (RHOCP) console UI now provides graphing dashboards for OpenStack Metrics.
Jira:OSPRH-824
2.17.3. Compute
Understand the Compute updates introduced in RHOSO 18.0.0 before you deploy the release.
2.17.3.1. New features
Understand the new features introduced in RHOSO 18.0.0 before you deploy the release.
- The compute service now supports native Secure RBAC
In RHOSP 17.1, secure role-based access control (SRBAC) was implemented by using custom policy. In RHOSO 18.0.0, it is implemented by using nova's native support for SRBAC. As a result, all OpenStack deployments support the ADMIN, MEMBER, and READER roles by default.
- Setting the hostname of the Compute service (nova) instance by using the Compute service API microversions 2.90 and 2.94
This enhancement enables you to set the hostname of the Compute service (nova) instance by using the Compute service API microversions 2.90 and 2.94 that are now included in the 18.0 release of RHOSO.
API microversion 2.90 enables you to specify an optional hostname when creating, updating, or rebuilding an instance. This is a short name (without periods), and it appears in the metadata available to the guest OS, either through the metadata API or on the configuration drive. If installed and configured in the guest, cloud-init uses this optional hostname to set the guest hostname.
API microversion 2.94 extends microversion 2.90 by enabling you to specify fully qualified domain names (FQDN) wherever you specify the hostname. When using an FQDN as the instance hostname, you must set the [api]dhcp_domain configuration option to the empty string in order for the correct FQDN to appear in the hostname field in the metadata API.
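For example, the dhcp_domain option described above could be cleared in the nova API configuration with a minimal fragment like this:

```ini
[api]
# Empty value so the FQDN set via microversion 2.94 appears unmodified
# in the metadata hostname field
dhcp_domain =
```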
- Manage dedicated CPU power state
You can now configure the nova-compute service to manage dedicated CPU power state by setting [libvirt]cpu_power_management to True. This feature requires the Compute service to be configured with [compute]cpu_dedicated_set. With that setting, all dedicated CPUs are powered down until they are used by an instance, and powered up when an instance that uses them is booted. If power management is configured but [compute]cpu_dedicated_set is not set, the compute service does not start.
By default, the power strategy offlines CPUs when powering down and onlines CPUs when powering up, but another strategy is possible. Set [libvirt]cpu_power_management_strategy=governor to instead use governors, and use [libvirt]cpu_power_governor_low and [libvirt]cpu_power_governor_high to direct which governors to use in online and offline mode (performance and powersave).
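Taken together, the options above might be combined in the nova-compute configuration as in this sketch; the cpu_dedicated_set range is an illustrative example:

```ini
[compute]
# Required when power management is enabled; CPUs 2-15 are an example range
cpu_dedicated_set = 2-15

[libvirt]
cpu_power_management = true
cpu_power_management_strategy = governor
cpu_power_governor_low = powersave     # governor applied while the CPU is unused
cpu_power_governor_high = performance  # governor applied while an instance uses the CPU
```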
- Evacuate to STOPPED with v2.95
Starting with the 2.95 microversion, any evacuated instance is stopped at the destination. Operators can continue using the previous behavior by selecting a microversion below 2.95. Prior to 2.95, if the VM was active before the evacuation, it was restored to the active state following a failed evacuation. If the workload encountered I/O corruption as a result of the hypervisor outage, this could potentially make recovery efforts harder or cause further issues if the workload was a clustered application that tolerated the failure of a single VM. For this reason, it is considered safer to always evacuate to STOPPED and allow the tenant to decide how to recover the VM.
- Compute service hostname change
If the Compute service (nova) detects a change in the hostname of a Compute host when the service starts, you must determine the reason for the hostname change. After you resolve the issue, restart the Compute service.
- Create a neutron port without an IP address if the port requires only L2 network connectivity
You can now create an instance with a non-deferred port that has no fixed IP address if the network back end has L2 connectivity.
In previous releases of RHOSP, all neutron ports were required to have an IP address. The IP address assignment could be immediate (default) or deferred for L3 routed networks. In RHOSO 18.0, that requirement has been removed. You can now create a neutron port without an IP address if the port requires only L2 network connectivity.
To use this feature, set ip_allocation = 'none' on the neutron port before passing it to nova to use when creating a VM instance or attaching the port to an existing instance.
- New enlightenments to the libvirt XML for Windows guests in RHOSO 18.0.0
This update adds the following enlightenments to the libvirt XML for Windows guests:
- vpindex
- runtime
- synic
- reset
- frequencies
- tlbflush
- ipi
This adds to the list of existing enlightenments:
- relaxed
- vapic
- spinlocks retries
- vendor_id spoofing
- New default for managing instances on NUMA nodes
In RHOSP 17.1.4, the default was to pack instances on NUMA nodes.
In RHOSO 18.0, the default has been changed to balance instances across NUMA nodes. To change the default, and pack instances on NUMA nodes, set:
[compute]
packing_host_numa_cells_allocation_strategy = True
in both the scheduler and compute node nova.conf files
- Rebuild a volume-backed instance with a different image
This update adds the ability to rebuild a volume-backed instance from a different image.
Before this update, you could only rebuild a volume-backed instance from the original image in the boot volume.
Now you can rebuild the instance after you have reimaged the boot volume on the cinder side.
This feature requires API microversion 2.93 or later.
- Archive 'task_log' database records
This enhancement adds the --task-log option to the nova-manage db archive_deleted_rows CLI. When you use the --task-log option, the task_log table records are archived while the database is archived. This option is the default in the nova-operator database purge cron job. Previously, there was no method to delete the task_log table records without manual database modification.
You can use the --task-log option with the --before option for records that are older than a specified <date>. The updated_at field is compared to the specified <date> to determine the age of a task_log record for archival.
If you configure nova-compute with [DEFAULT] instance_usage_audit = True, the task_log database table maintains an audit log that you can archive with the --task-log option.
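A sketch of the archival command described above; the date is a placeholder:

```
nova-manage db archive_deleted_rows --task-log --before "2024-01-01"
```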
- Support for virtual IOMMU device
The Libvirt driver can add a virtual IOMMU device to guests. This capability applies to x86 hosts that use the Q35 machine type. To enable the capability, provide the hw:viommu_model flavor extra spec or the equivalent image metadata property hw_viommu_model. The following values are supported: intel, smmuv3, virtio, and auto. The default value is auto, which automatically selects virtio.
Note: Due to the possible overhead introduced with vIOMMU, enable this capability only for workloads that require it.
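For example, to request a virtio vIOMMU through the flavor extra spec (<flavor> is a placeholder):

```
openstack flavor set --property hw:viommu_model=virtio <flavor>
```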
- More options for the server unshelve command
With this update, new options are added to the server unshelve command in RHOSO 18.0.0.
The --host option allows administrators to specify a destination host. The --no-availability-zone option allows administrators to unpin the server from its availability zone. Both options require the server to be in the SHELVED_OFFLOADED state and the Compute API version to be 2.91 or greater.
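A sketch of unshelving to a specific host, assuming a client that supports microversion 2.91; <host> and <server> are placeholders:

```
openstack --os-compute-api-version 2.91 server unshelve --host <host> <server>
```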
- Support for the bochs libvirt video model
This release adds the ability to use the bochs libvirt video model. The bochs libvirt video model is a legacy-free video model that is best suited for UEFI guests. In some cases, it can be usable for BIOS guests, such as when the guest does not depend on direct VGA hardware access.
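To select the bochs model for instances booted from a particular image, you can set the hw_video_model image property (<image> is a placeholder):

```
openstack image set --property hw_video_model=bochs <image>
```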
- Schedule archival and purge of deleted rows from Compute service (nova) cells
The nova-operator now schedules a periodic job for each Compute service (nova) cell to archive and purge the deleted rows from the cell database. The frequency of the job and the age of the database rows to archive and purge can be fine-tuned in the OpenStackControlPlane.spec.nova.template.cellTemplates[].dbPurge structure for each cell in the cellTemplates.
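A hedged YAML sketch of where the dbPurge settings live; the field names under dbPurge (schedule, archiveAge, purgeAge) and the cell name are assumptions for illustration:

```
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
spec:
  nova:
    template:
      cellTemplates:
        cell1:
          dbPurge:
            schedule: "0 4 * * *"  # assumed field: cron schedule for the job
            archiveAge: 30         # assumed field: archive rows older than 30 days
            purgeAge: 90           # assumed field: purge rows older than 90 days
```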
2.17.3.2. Bug fixes Copy linkLink copied to clipboard!
Understand the bug fixes introduced in RHOSO 18.0.0 before you deploy the release.
- Migrating paused instance no longer generates error messages
Before this update, live migration of a paused instance with live_migration_permit_post_copy=True in nova.conf caused the libvirt driver to erroneously generate error messages similar to [1].
Now the error message is not generated when you live migrate a paused instance with live_migration_permit_post_copy=True.
[1] Error message example: "Live Migration failure: argument unsupported: post-copy migration is not supported with non-live or paused migration: libvirt.libvirtError: argument unsupported: post-copy migration is not supported with non-live or paused migration."
- No network block device (NBD) live migration with TLS enabled
In RHOSO 18.0 Beta, a bug prevented you from using a network block device (NBD) to live migrate storage between Compute nodes with TLS enabled. See https://issues.redhat.com/browse/OSPRH-6931.
This issue has now been resolved, and live migration with TLS enabled is supported with local storage.
- Cannot delete instance when cpu_power_management is set to true
In the RHOSO 18.0.0 beta release, a known issue was discovered that prevented the deletion of an instance shortly after it was created if power management was enabled.
This has now been fixed in the RHOSO 18.0.0 GA release.
Jira:OSPRH-7103
2.17.3.3. Technology Previews Copy linkLink copied to clipboard!
Understand the Technology Previews updates introduced in RHOSO 18.0.0 before you deploy the release.
For information on the scope of support for Technology Preview features, see Technology Preview Features - Scope of Support.
- Technology preview of PCI device tracking in Placement service
RHOSO 18.0.0 introduces a technology preview of the ability to track PCI devices in the OpenStack Placement service.
Tracking PCI devices in the Placement service enables you to use granular quotas on PCI devices when combined with the Unified Limits Technology Preview.
PCI tracking in the Placement service is disabled by default and is limited to flavor-based PCI passthrough. Support for the Networking service (neutron) SR-IOV ports is not implemented, but is required before this feature is fully supported.
- Use of Identity service (Keystone) unified limits in the Compute service (nova)
This RHOSO release supports Identity service unified limits in the Compute service. Unified limits centralize management of resource quota limits in the Identity service (Keystone) and enable flexibility for users to manage quota limits for any Compute service resource being tracked in the Placement service.
2.17.3.4. Removed functionality Copy linkLink copied to clipboard!
Understand the removed functionality introduced in RHOSO 18.0.0 before you deploy the release.
Removed functionality is no longer supported in this product and is not recommended for new deployments.
- Keypair generation removed from RHOSO 18
Keypair generation was deprecated in RHOSP 17 and has been removed from RHOSO 18. Now you need to precreate the keypair with the SSH command-line tool ssh-keygen and then pass the public key to the nova API.
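A possible sequence for precreating a keypair; the key file and keypair names are placeholders:

```
# Generate the keypair locally
ssh-keygen -t ed25519 -f ./cloud_key -N ""
# Register the public key with the Compute service
openstack keypair create --public-key ./cloud_key.pub my_keypair
```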
- i440fx PC machine type no longer tested or supported
In RHOSP 17, the i440fx PC machine type, pc-i440fx, was deprecated and Q35 became the default machine type for x86_64.
In RHOSP 18, the i440fx PC machine type is no longer tested or supported.
The i440fx PC machine type is still available for use under a support exception for legacy applications that cannot function with the Q35 machine type. If you have such a workload, contact Red Hat support to request a support exception.
With the removal of support for the i440fx PC machine type from RHOSP, you cannot use pc-i440fx to certify VNFs or third-party integrations. You must use the Q35 machine type.
Jira:OSPRH-7373
- vDPA and hardware offload OVS are unsupported
Hardware offload OVS consists of processing network traffic in hardware with the kernel switchdev and tc flower mechanisms.
vDPA extends hardware offload OVS by providing a vendor-neutral virtio net interface to the guest, decoupling the workload from the specifics of the host hardware instead of presenting a vendor-specific virtual function.
Both hardware offload OVS and vDPA are unsupported in RHOSO 18.0, with no upgrade path available for existing users.
At this time there is no plan to reintroduce this functionality or continue to invest in new features related to vDPA or hardware offload OVS.
If you have a business requirement for these removed features, please reach out to Red Hat support or your partner and Technical Account Manager so that Red Hat can reassess the demand for these features for a future RHOSO release.
Jira:OSPRH-7829
2.17.3.5. Known issues Copy linkLink copied to clipboard!
Understand the known issues introduced in RHOSO 18.0.0 before you deploy the release.
- Setting hw_architecture or architecture on an Image service (glance) image does not work as expected
In RHOSO 18.0, the image metadata prefilter is enabled by default. RHOSO does not support emulation of non-native architectures. As part of the introduction of emulation support upstream, the image metadata prefilter was enhanced to support the scheduling of instances based on the declared VM architecture, for example hw_architecture=x86_64. When nova was enhanced to support emulating non-native architectures via image properties, a bug was introduced, because the native architecture was not reported as a trait by the virt driver.
Therefore, by default, support for setting hw_architecture or architecture on an image was rendered inoperable.
To mitigate this bug, you have two choices:
- Unset the architecture/hw_architecture image property. RHOSO supports only one architecture, x86_64. There is no valid use case that requires this property to be set for an RHOSO cloud, because all hosts are x86_64.
- Disable the image metadata prefilter in the CustomServiceConfig section of the nova scheduler:
[scheduler] image_metadata_prefilter=false
- QEMU process failure
A paused instance that uses local storage cannot be live migrated more than once. The second migration causes the QEMU process to crash, and nova puts the instance into the ERROR state.
Workaround: If feasible, unpause the instance temporarily, then pause it again before the second live migration.
It is not always feasible to unpause an instance. For example, suppose the instance uses a multi-attach cinder volume, and pause is used to limit the access to that volume to a single instance while the other is kept in paused state. In this case, unpausing the instance is not a feasible workaround.
- Compute service power management feature disabled by default
The Compute service (nova) power management feature is disabled by default. You can enable it with the following nova-compute configuration:
[libvirt] cpu_power_management = true cpu_power_management_strategy = governor
The default cpu_power_management_strategy value, cpu_state, is not supported at the moment due to a bug that causes NUMA resource tracking issues, because all disabled CPUs are reported on NUMA node 0 instead of on the correct NUMA node.
2.17.4. Data plane Copy linkLink copied to clipboard!
Understand the Data plane updates introduced in RHOSO 18.0.0 before you deploy the release.
2.17.4.1. Known issues Copy linkLink copied to clipboard!
Understand the known issues introduced in RHOSO 18.0.0 before you deploy the release.
- Using the download-cache service prevents Podman from pulling images for data plane deployment
Do not list the download-cache service in spec.services of the OpenStackDataPlaneNodeSet. If you list download-cache in OpenStackDataPlaneNodeSet, Podman cannot pull the container images required by the data plane deployment.
Workaround: Omit the download-cache service from the default services list in OpenStackDataPlaneNodeSet.
Jira:OSPRH-9500
2.17.5. Hardware Provisioning Copy linkLink copied to clipboard!
Understand the Hardware Provisioning updates introduced in RHOSO 18.0.0 before you deploy the release.
2.17.5.1. Bug fixes Copy linkLink copied to clipboard!
Understand the bug fixes introduced in RHOSO 18.0.0 before you deploy the release.
- Increased EFI partition size
Before RHOSP 17.1.4, the EFI partition size of an overcloud node was 16MB. With this update, the image used for provisioned EDPM nodes now has an EFI partition size of 200MB to align with RHEL and to accommodate firmware upgrades.
2.17.6. Networking Copy linkLink copied to clipboard!
Understand the Networking updates introduced in RHOSO 18.0.0 before you deploy the release.
2.17.6.1. New features Copy linkLink copied to clipboard!
Understand the new features introduced in RHOSO 18.0.0 before you deploy the release.
- Octavia Operator availability zones
The Octavia Management network created and managed by the Octavia operator requires that the OpenStack routers and networks are scheduled on the OVN controller on the OpenShift worker nodes.
If the OpenStack Networking Service (neutron) is configured with non-default availability zones, the OVN controller pod on the OpenShift worker and Octavia must be configured with the same availability zone.
- Example
ovn:
  template:
    ovnController:
      external-ids:
        availability-zones:
        - zone1
octavia:
  template:
    lbMgmtNetwork:
      availabilityZones:
      - zone1
2.17.6.2. Bug fixes Copy linkLink copied to clipboard!
Understand the bug fixes introduced in RHOSO 18.0.0 before you deploy the release.
- OVN pod no longer goes into loop due to NIC Mapping
When using a large number of NIC mappings, OVN could go into a creation loop. This is now fixed.
Jira:OSPRH-7480
2.17.6.3. Technology Previews Copy linkLink copied to clipboard!
Understand the Technology Previews updates introduced in RHOSO 18.0.0 before you deploy the release.
For information on the scope of support for Technology Preview features, see Technology Preview Features - Scope of Support.
- QoS minimum bandwidth policy (technology preview)
In RHOSO 18.0.0, a technology preview is available for the Networking service (neutron) for QoS minimum bandwidth for placement reporting and scheduling.
- Load-balancing service (Octavia) support of multiple VIP addresses
This update adds a technology preview of support for multiple VIP addresses allocated from the same Neutron network for the Load-balancing service.
You can now specify additional subnet_id/ip_address pairs for the same VIP port. This makes it possible to configure the Load-balancing service with both IPv4 and IPv6 exposed to both public and private subnets.
2.17.6.4. Known issues Copy linkLink copied to clipboard!
Understand the known issues introduced in RHOSO 18.0.0 before you deploy the release.
- Delayed OVN database update after oc patch command
Any custom configuration settings applied with the oc patch command do not affect the neutron OVN databases until 10 minutes have passed.
Workaround: After you replace old pods using the oc patch command, delete the new neutron pods manually using the oc delete pod command. The pod deletion forces the new configuration to be set without the delay.
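A sketch of the workaround; the namespace and the label selector used to match the neutron pods are assumptions about your deployment:

```
# Delete the freshly created neutron pods; the operator recreates them
# with the new configuration applied immediately
oc -n openstack delete pod -l service=neutron
```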
- MAC_Binding aging functionality missing in RHOSO 18.0.0
The MAC_Binding aging functionality that was added in RHOSP 17.1.2 is missing from 18.0 GA. A fix is in progress.
- Metadata rate-limiting feature
Metadata rate-limiting is not available in RHOSO 18.0.0. A fix is in progress.
Jira:OSPRH-9569
2.17.7. Network Functions Virtualization Copy linkLink copied to clipboard!
Understand the Network Functions Virtualization updates introduced in RHOSO 18.0.0 before you deploy the release.
2.17.7.1. New features Copy linkLink copied to clipboard!
Understand the new features introduced in RHOSO 18.0.0 before you deploy the release.
- AMD CPU powersave profiles
A power save profile, cpu-partitioning-powersave, was introduced in Red Hat Enterprise Linux 9 (RHEL 9), and made available in Red Hat OpenStack Platform (RHOSP) 17.1.3.
This TuneD profile is the base building block for saving power in NFV environments. RHOSO 18.0 adds cpu-partitioning-powersave support for AMD CPUs.
Jira:OSPRH-2268
2.17.7.2. Bug fixes Copy linkLink copied to clipboard!
Understand the bug fixes introduced in RHOSO 18.0.0 before you deploy the release.
- Physical function (PF) MAC address now matches between VM instances and SR-IOV physical functions (PFs)
This update fixes a bug that caused a PF MAC address mismatch between VM instances and SR-IOV PFs (Networking service ports with vnic-type set to direct-physical).
In the RHOSO 18.0 Beta release, a bug in the Compute service (nova) prevented the MAC address of SR-IOV PFs from being updated correctly when attached to a VM instance.
Now the MAC address of the PF is set on the corresponding neutron port.
2.17.7.3. Technology Previews Copy linkLink copied to clipboard!
Understand the Technology Previews updates introduced in RHOSO 18.0.0 before you deploy the release.
For information on the scope of support for Technology Preview features, see Technology Preview Features - Scope of Support.
- In RHOSO 18.0, a technology preview is available for the nmstate provider back end in os-net-config
This technology preview of nmstate and NIC hardware offload has known issues that make it unsuitable for production use. For production, use the openstack-network-scripts package rather than nmstate and NetworkManager.
There is a production-ready native nmstate mode that you can select during installation, but its network configuration, which must be provided in nmstate format, is not backwards compatible with templates from TripleO. It also lacks certain features that os-net-config provides, such as NIC name mapping and DSCP configuration.
- Data Center Bridge (DCB)-based QoS settings technology preview
Port- and interface-specific DCB-based QoS settings are now available as a technology preview as part of the os-net-config tool's network configuration template. For more information, see this knowledgebase article: https://access.redhat.com/articles/7062865
Jira:OSPRH-2889
2.17.7.4. Deprecated functionality Copy linkLink copied to clipboard!
Understand the deprecated functionality introduced in RHOSO 18.0.0 before you deploy the release.
Deprecated functionality will likely not be supported in future major releases of this product and is not recommended for new deployments.
- TimeMaster service is deprecated in RHOSO 18.0
In RHOSO 18.0, support for the TimeMaster service is deprecated. Bug fixes and support are provided through the end of the RHOSO 18.0 lifecycle, but no new feature enhancements will be made.
Jira:OSPRH-8244
2.17.7.5. Known issues Copy linkLink copied to clipboard!
Understand the known issues introduced in RHOSO 18.0.0 before you deploy the release.
- Do not use virtual functions (VF) for the RHOSO control plane interface
This RHOSO release does not support use of VFs for the RHOSO control plane interface.
- Bonds require minimum of two interfaces
If you configure an OVS or DPDK bond, always configure at least two interfaces. Bonds with only a single interface do not function as expected.
2.17.8. High availability Copy linkLink copied to clipboard!
Understand the High availability updates introduced in RHOSO 18.0.0 before you deploy the release.
2.17.8.1. New features Copy linkLink copied to clipboard!
Understand the new features introduced in RHOSO 18.0.0 before you deploy the release.
- Password rotation
This update introduces the ability to generate and rotate OpenStack database passwords.
2.17.9. Storage Copy linkLink copied to clipboard!
Understand the Storage updates introduced in RHOSO 18.0.0 before you deploy the release.
2.17.9.1. New features Copy linkLink copied to clipboard!
Understand the new features introduced in RHOSO 18.0.0 before you deploy the release.
- Shared File Systems support for scalable CephFS-NFS
The Shared File Systems service (manila) now supports a scalable CephFS-NFS service. In earlier releases of Red Hat OpenStack Platform, only active/passive high-availability that was orchestrated with Director, using Pacemaker/Corosync, was supported. With this release, deployers can create active/active clusters of CephFS-NFS and integrate these clusters with the Shared File Systems service for improved scalability and high availability for NFS workloads.
- Block Storage service (cinder) volume deletion
With this release, the Block Storage service RBD driver takes advantage of recent Ceph developments to allow RBD volumes to meet normal volume deletion expectations.
In previous releases, when the Block Storage service used an RBD (Ceph) volume back end, it was not always possible to delete a volume.
- project_id in API URLs now optional
You are no longer required to include project_id in Block Storage service (cinder) API URLs.
- Dell PowerStore storage systems driver
A new share driver has been added to support Dell PowerStore storage systems with the Shared File Systems service (manila).
Jira:OSPRH-4425
- Dell PowerFlex storage systems driver
A new share driver has been added to support Dell PowerFlex storage systems with the Shared File Systems service (manila).
Jira:OSPRH-4426
- openstack-must-gather SOS report support
You can now collect diagnostic information about your RHOSO deployment using the openstack-must-gather tool.
You can retrieve SOS reports for both the RHOCP control plane and RHOSO data plane nodes using a single command, and options are available to dump specific information related to a particular deployed service.
2.17.9.2. Bug fixes Copy linkLink copied to clipboard!
Understand the bug fixes introduced in RHOSO 18.0.0 before you deploy the release.
- Key Manager service configuration fix enables Image service image signing and verification
With this fix, the Image service (glance) is automatically configured to interact with the Key Manager service (barbican), and you can now perform encrypted image signing and verification.
- Fixed faulty share creation in the NetApp ONTAP driver when using SVM scoped accounts
Due to a faulty Kerberos enablement check during share creation, the NetApp ONTAP driver failed to create shares when configured with SVM scoped accounts. A fix has been committed to openstack-manila, and share creation now works as expected.
Jira:OSPRH-8044
2.17.9.3. Technology Previews Copy linkLink copied to clipboard!
Understand the Technology Previews updates introduced in RHOSO 18.0.0 before you deploy the release.
For information on the scope of support for Technology Preview features, see Technology Preview Features - Scope of Support.
- Deployment and scale of Object Storage service
This feature allows for the deployment and scale of Object Storage service (swift) data on data plane nodes. This release of the feature is a technology preview.
Jira:OSPRH-1307
2.17.9.4. Known issues Copy linkLink copied to clipboard!
Understand the known issues introduced in RHOSO 18.0.0 before you deploy the release.
- RGW does not pass certain Tempest object storage metadata tests
Red Hat OpenStack Services on OpenShift 18.0 supports Red Hat Ceph Storage 7. Red Hat Ceph Storage 7 RGW does not pass certain Tempest object storage metadata tests as tracked by the following Jiras:
- https://issues.redhat.com/browse/RHCEPH-6708
- https://issues.redhat.com/browse/RHCEPH-9119
- https://issues.redhat.com/browse/RHCEPH-9122
- https://issues.redhat.com/browse/RHCEPH-4654
Jira:OSPRH-7464
- Image import remains in importing state after conversion with ISO image format
When you use image conversion with the ISO image format, the image import operation remains in an importing state.
Workaround: If your deployment supports uploading images in ISO format, you can use the image-create command to upload ISO images, as shown in the following example, instead of using image conversion with the image-create-via-import command.
- Example
glance image-create \
  --name <iso_image> \
  --disk-format iso \
  --container-format bare \
  --file <my_file.iso>
- Replace <iso_image> with the name of your image.
- Replace <my_file.iso> with the file name for your image.
2.17.10. Dashboard Copy linkLink copied to clipboard!
Understand the Dashboard updates introduced in RHOSO 18.0.0 before you deploy the release.
2.17.10.1. New features Copy linkLink copied to clipboard!
Understand the new features introduced in RHOSO 18.0.0 before you deploy the release.
- Hypervisor status now includes vCPU and pCPU information
Before this update, pCPU usage was excluded from the hypervisor status in the Dashboard service (horizon) even if the cpu_dedicated_set configuration option was set in the nova.conf file. This enhancement uses the Placement API to display information about vCPUs and pCPUs. You can view vCPU and pCPU usage diagrams under the Resource Providers Summary and find more information on vCPUs and pCPUs on the new Resource provider tab on the Hypervisors panel.
- With this update, you can now customize the OpenStack Dashboard (horizon) container
The customization can be performed by using the extra mounts feature to add or change files inside of the Dashboard container.
- TLS everywhere in RHOSO Dashboard Operator
With this update, the RHOSO Dashboard (horizon) Operator automatically configures TLS-related configuration settings.
These settings include certificates and response headers when appropriate, including the secure cookies and HSTS headers for serving over HTTPS.
2.17.10.2. Bug fixes Copy linkLink copied to clipboard!
Understand the bug fixes introduced in RHOSO 18.0.0 before you deploy the release.
- Host spoofing protective measure
Before this update, the hosts configuration option was not populated with the minimum hosts necessary to protect against host spoofing.
With this update, the hosts configuration option is now correctly populated.
- Dashboard service operators now include HSTS header
Before this update, HSTS was only enabled in Django through the Dashboard service (horizon) application. However, user HTTPS sessions were going through the OpenShift route, where HSTS was disabled. With this update, HSTS is enabled on the OpenShift route.
2.18. Release information RHOSO 18.0 Beta Copy linkLink copied to clipboard!
Understand the Release information RHOSO 18.0 Beta updates introduced in RHOSO beta before you deploy the release.
2.18.1. Advisory list Copy linkLink copied to clipboard!
This release of Red Hat OpenStack Services on OpenShift (RHOSO) includes the following advisories:
- RHEA-2024:3646
- RHOSO 18.0 Beta container images, data plane 1.0 Beta
- RHEA-2024:3647
- RHOSO 18.0 Beta container images, control plane 1.0 Beta
- RHEA-2024:3648
- RHOSO 18.0 Beta service container images
- RHEA-2024:3649
- RHOSO 18.0 Beta packages
2.18.2. Compute Copy linkLink copied to clipboard!
Understand the Compute updates introduced in RHOSO beta before you deploy the release.
2.18.2.1. New features Copy linkLink copied to clipboard!
Understand the new features introduced in RHOSO beta before you deploy the release.
- You can schedule archival and purge of deleted rows from Compute service (nova) cells
The nova-operator now schedules a periodic job for each Compute service (nova) cell to archive and purge the deleted rows from the cell database. The frequency of the job and the age of the database rows to archive and purge can be fine-tuned in the OpenStackControlPlane.spec.nova.template.cellTemplates[].dbPurge structure for each cell in the cellTemplates.
2.18.2.2. Deprecated functionality Copy linkLink copied to clipboard!
Understand the deprecated functionality introduced in RHOSO beta before you deploy the release.
Deprecated functionality will likely not be supported in future major releases of this product and is not recommended for new deployments.
- i440fx PC machine type no longer tested or supported
In RHOSP 17, the i440fx PC machine type, pc-i440fx, was deprecated and Q35 became the default machine type for x86_64.
In RHOSP 18, the i440fx PC machine type is no longer tested or supported.
The i440fx PC machine type is still available for use under a support exception for legacy applications that cannot function with the Q35 machine type. If you have such a workload, contact Red Hat support to request a support exception.
With the removal of support for the i440fx PC machine type from RHOSP, you cannot use pc-i440fx to certify VNFs or third-party integrations. You must use the Q35 machine type.
Jira:OSPRH-7373
2.18.2.3. Known issues Copy linkLink copied to clipboard!
Understand the known issues introduced in RHOSO beta before you deploy the release.
- No network block device (NBD) live migration with TLS enabled
In RHOSO 18.0 Beta, a bug prevents you from using a network block device (NBD) to live migrate storage between Compute nodes with TLS enabled. See https://issues.redhat.com/browse/OSPRH-6931.
This issue only affects storage migration when TLS is enabled. You can live migrate storage with TLS not enabled.
- Do not mix NUMA and non-NUMA instances on same Compute host
Instances without a NUMA topology should not coexist with NUMA instances on the same host.
- Cannot delete instance when cpu_power_management is set to true
When an instance is first started and the host core state is changed, there is a short period during which the state cannot be updated again. During this period, instance deletion can fail. If this happens, a second delete attempt should succeed after a short delay of a few seconds.
Jira:OSPRH-7103
2.18.3. Networking Copy linkLink copied to clipboard!
Understand the Networking updates introduced in RHOSO beta before you deploy the release.
2.18.3.1. Known issues Copy linkLink copied to clipboard!
Understand the known issues introduced in RHOSO beta before you deploy the release.
- OVN pod goes into loop due to NIC Mapping
When using a large number of NIC mappings, OVN might go into a creation loop.
Jira:OSPRH-7480
2.18.4. Network Functions Virtualization Copy linkLink copied to clipboard!
Understand the Network Functions Virtualization updates introduced in RHOSO beta before you deploy the release.
2.18.4.1. Known issues Copy linkLink copied to clipboard!
Understand the known issues introduced in RHOSO beta before you deploy the release.
- Listing physical function (PF) ports using neutron might show the wrong MAC
Lists of PF ports might show the wrong MAC.
2.18.5. Storage Copy linkLink copied to clipboard!
Understand the Storage updates introduced in RHOSO beta before you deploy the release.
2.18.5.1. Known issues Copy linkLink copied to clipboard!
Understand the known issues introduced in RHOSO beta before you deploy the release.
- Image uploads might fail if a multipathing path for Block Storage service (cinder) volumes is offline
If you use multipath for Block storage service volumes, and you have configured the Block Storage service as the back end for the Image service (glance), image uploads might fail if one of the paths goes offline.
- RGW does not pass certain Tempest object storage metadata tests
Red Hat OpenStack Services on OpenShift 18.0 supports Red Hat Ceph Storage 7. Red Hat Ceph Storage 7 RGW does not pass certain Tempest object storage metadata tests as tracked by the following Jiras:
- https://issues.redhat.com/browse/RHCEPH-6708
- https://issues.redhat.com/browse/RHCEPH-9119
- https://issues.redhat.com/browse/RHCEPH-9122
- https://issues.redhat.com/browse/RHCEPH-4654
Jira:OSPRH-7464
- Missing Barbican configuration in the Image service (glance)
The Image service is not automatically configured to interact with Key Manager (barbican), and encrypted image signing and verification fails due to the missing configuration.
Jira:OSPRH-7155
2.18.6. Release delivery Copy linkLink copied to clipboard!
Understand the Release delivery updates introduced in RHOSO beta before you deploy the release.
2.18.6.1. Removed functionality Copy linkLink copied to clipboard!
Understand the removed functionality introduced in RHOSO beta before you deploy the release.
Removed functionality is no longer supported in this product and is not recommended for new deployments.
- Removal of snmp and snmpd
The snmp service and snmpd daemon are removed in RHOSO 18.0.
2.18.7. Integration test suite Copy linkLink copied to clipboard!
Understand the Integration test suite updates introduced in RHOSO beta before you deploy the release.
2.18.7.1. Known issues Copy linkLink copied to clipboard!
Understand the known issues introduced in RHOSO beta before you deploy the release.
- Tempest test-operator does not work with LVMS storage class
When the test-operator is used to run Tempest, it requests a ReadWriteMany PersistentVolumeClaim (PVC), which the LVMS storage class does not support. This causes the tempest-test pod to become stuck in the pending state.
Workaround: Use the test-operator with a storage class that supports ReadWriteMany PVCs. The test-operator should work with a ReadWriteOnce PVC, so the fixed version will no longer request a ReadWriteMany PVC.