Chapter 3. Release information RHOSO 18.0
These release notes highlight selected updates in some or all of the Red Hat OpenStack Services on OpenShift (RHOSO) components. Consider these updates when you deploy this release of RHOSO. Each of the notes in this section refers to the Jira issue used to track the update. If the Jira issue security level is public, you can click the link to see the Jira issue. If the security level is restricted, the Jira issue ID does not have a link to the Jira issue.
3.1. Release information RHOSO 18.0.2
3.1.1. Advisory list
This release of Red Hat OpenStack Services on OpenShift (RHOSO) includes the following advisories:
- RHBA-2024:8151
- Release of containers for RHOSO 18.0.2
- RHBA-2024:8152
- Release of components for RHOSO 18.0.2
- RHBA-2024:8153
- Control plane Operators for RHOSO 18.0.2
- RHBA-2024:8154
- Data plane Operators for RHOSO 18.0.2
- RHBA-2024:8155
- Release of components for RHOSO 18.0.2
3.1.2. Compute
3.1.2.1. Bug fixes
This part describes bugs fixed in Red Hat OpenStack Services on OpenShift 18.0 that have a significant impact on users.
Fix for instances created before OpenStack Victoria
In OpenStack Victoria, the instance_numa_topology object was extended to enable mixed CPUs (pinned and unpinned CPUs) in the same instance. Object conversion code was added to handle upgrades, but it did not account for flavors that have either hw:mem_page_size or hw:numa_nodes set with hw:cpu_policy not set to dedicated. As a result, instances created before the Victoria release could not be started after an upgrade to Victoria.
With this update, non-pinned NUMA instances can be managed after a fast forward upgrade (FFU) from RHOSP 16.2.
3.1.2.2. Known issues
This part describes known issues in Red Hat OpenStack Services on OpenShift 18.0.
Setting hw_architecture or architecture on an Image service (glance) image does not work as expected
In RHOSO 18.0, the image metadata prefilter is enabled by default. RHOSO does not support emulation of non-native architectures. As part of the introduction of emulation support upstream, the image metadata prefilter was enhanced to support the scheduling of instances based on the declared VM architecture, for example, hw_architecture=x86_64.
When nova was enhanced to support emulating non-native architectures by using image properties, a bug was introduced because the native architecture was not reported as a trait by the virt driver.
Therefore, by default, support for setting hw_architecture or architecture on an image was rendered inoperable.
Workaround: To mitigate this bug, perform one of the following tasks:
- Unset the architecture/hw_architecture image property. RHOSO supports only one architecture, x86_64. There is no valid use case that requires this property to be set for an RHOSO cloud, because all hosts are x86_64.
- Disable the image metadata prefilter in the CustomServiceConfig section of the nova scheduler:

[scheduler]
image_metadata_prefilter=false
Compute service power management feature disabled by default
The Compute service (nova) power management feature is disabled by default. You can enable it with the following nova-compute configuration:

[libvirt]
cpu_power_management = true
cpu_power_management_strategy = governor

The default cpu_power_management_strategy value, cpu_state, is not supported at the moment due to a bug that causes NUMA resource tracking issues: all disabled CPUs are reported on NUMA node 0 instead of on the correct NUMA node.
QEMU process failure
A paused instance that uses local storage cannot be live migrated more than once. The second migration causes the QEMU process to crash, and nova puts the instance into ERROR state.
Workaround: If feasible, unpause the instance temporarily, then pause it again before the second live migration.
It is not always feasible to unpause an instance. For example, suppose the instance uses a multi-attach cinder volume, and pause is used to limit access to that volume to a single instance while the other is kept in the paused state. In this case, unpausing the instance is not a feasible workaround.
3.1.3. Data plane
3.1.3.1. Bug fixes
This part describes bugs fixed in Red Hat OpenStack Services on OpenShift 18.0 that have a significant impact on users.
The value for edpm_kernel_hugepages is reliably set on the kernel command line
Before this update, the value for edpm_kernel_hugepages could be missing from the kernel command line due to an error in the Ansible role that configures it. With this update, the problem is resolved, and no workarounds are required.
Jira:OSPRH-10007
3.1.4. Networking
3.1.4.1. Bug fixes
This part describes bugs fixed in Red Hat OpenStack Services on OpenShift 18.0 that have a significant impact on users.
Metadata rate-limiting feature
This update fixes a bug that prevented successful use of metadata rate-limiting. Metadata rate limiting is now available.
Jira:OSPRH-9569
3.1.4.2. Known issues
This part describes known issues in Red Hat OpenStack Services on OpenShift 18.0.
Router deletion problem and workaround
After an update to RHOSO 18.0.2, attempts to delete a pre-existing router fail.
The following error is displayed in the CLI:
Internal Server Error: The server has either erred or is incapable of performing the requested operation.
Also, the Neutron API logs include the following exception message:
Could not find a service provider that supports distributed=False and ha=False
Workaround: Manually create a database record. In a SQL CLI:

use ovs_neutron;
insert into providerresourceassociations (provider_name, resource_id) values ("ovn", "<router_id>");
Jira:OSPRH-10537
3.1.5. Network Functions Virtualization
3.1.5.1. Known issues
This part describes known issues in Red Hat OpenStack Services on OpenShift 18.0.
Do not use virtual functions (VF) for the RHOSO control plane interface
This RHOSO release does not support use of VFs for the RHOSO control plane interface.
3.1.6. Storage
3.1.6.1. Known issues
This part describes known issues in Red Hat OpenStack Services on OpenShift 18.0.
OpenStack command output does not account for storage pool changes in the Shared File Systems service (manila)
The openstack share pool list command output does not account for storage pool changes, for example, changes to pool characteristics on back-end storage systems, or removal of existing pools from the deployment. Provisioning operations are not affected by this issue.
Workaround: Restart the scheduler service to reflect the latest statistics. Perform the restart during scheduled downtime because it causes a minor disruption.
3.2. Release information RHOSO 18.0.1
3.2.1. Advisory list
This release of Red Hat OpenStack Services on OpenShift (RHOSO) includes the following advisories:
- RHBA-2024:6773
- Release of components for RHOSO 18.0.1
- RHBA-2024:6774
- Release of containers for RHOSO 18.0.1
- RHBA-2024:6775
- Moderate: Red Hat OpenStack Platform 18.0 (python-webob) security update
- RHBA-2024:6776
- Control plane Operators for RHOSO 18.0.1
- RHBA-2024:6777
- Data plane Operators for RHOSO 18.0.1
- RHBA-2024:6778
- Data plane Operators for RHOSO 18.0.1
3.2.2. Compute
3.2.2.1. Known issues
This part describes known issues in Red Hat OpenStack Services on OpenShift 18.0.
Setting hw_architecture or architecture on an Image service (glance) image does not work as expected
In RHOSO 18.0, the image metadata prefilter is enabled by default. RHOSO does not support emulation of non-native architectures. As part of the introduction of emulation support upstream, the image metadata prefilter was enhanced to support the scheduling of instances based on the declared VM architecture, for example, hw_architecture=x86_64.
When nova was enhanced to support emulating non-native architectures by using image properties, a bug was introduced because the native architecture was not reported as a trait by the virt driver.
Therefore, by default, support for setting hw_architecture or architecture on an image was rendered inoperable.
Workaround: To mitigate this bug, perform one of the following tasks:
- Unset the architecture/hw_architecture image property. RHOSO supports only one architecture, x86_64. There is no valid use case that requires this property to be set for an RHOSO cloud, because all hosts are x86_64.
- Disable the image metadata prefilter in the CustomServiceConfig section of the nova scheduler:

[scheduler]
image_metadata_prefilter=false
Compute service power management feature disabled by default
The Compute service (nova) power management feature is disabled by default. You can enable it with the following nova-compute configuration:

[libvirt]
cpu_power_management = true
cpu_power_management_strategy = governor

The default cpu_power_management_strategy value, cpu_state, is not supported at the moment due to a bug that causes NUMA resource tracking issues: all disabled CPUs are reported on NUMA node 0 instead of on the correct NUMA node.
QEMU process failure
A paused instance that uses local storage cannot be live migrated more than once. The second migration causes the QEMU process to crash, and nova puts the instance into ERROR state.
Workaround: If feasible, unpause the instance temporarily, then pause it again before the second live migration.
It is not always feasible to unpause an instance. For example, suppose the instance uses a multi-attach cinder volume, and pause is used to limit access to that volume to a single instance while the other is kept in the paused state. In this case, unpausing the instance is not a feasible workaround.
3.2.3. Data plane
3.2.3.1. Bug fixes
This part describes bugs fixed in Red Hat OpenStack Services on OpenShift 18.0 that have a significant impact on users.
Using the download-cache service no longer prevents Podman from pulling images for data plane deployment
Before this bug fix, if you included the download-cache service in spec.services of the OpenStackDataPlaneNodeSet, a bug prevented Podman from pulling the container images that are required by the data plane deployment.
With this bug fix, you can include the download-cache service in spec.services of the OpenStackDataPlaneNodeSet without preventing Podman from pulling the required container images.
Jira:OSPRH-9500
3.2.3.2. Known issues
This part describes known issues in Red Hat OpenStack Services on OpenShift 18.0.
Set the edpm_kernel_args variable if you configure the Ansible variable edpm_kernel_hugepages
To configure the Ansible variable edpm_kernel_hugepages in the ansibleVars section of an OpenStackDataPlaneNodeSet CR, you must also set the edpm_kernel_args variable. If you do not need to configure edpm_kernel_args with a particular value, set it to an empty string:

edpm_kernel_args: ""
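For reference, the two variables might appear together in an OpenStackDataPlaneNodeSet CR as in the following sketch. The node set name and the hugepages layout are hypothetical; verify the exact edpm_kernel_hugepages schema against the edpm-ansible role documentation for your release.

```yaml
apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneNodeSet
metadata:
  name: openstack-edpm        # hypothetical node set name
spec:
  nodeTemplate:
    ansibleVars:
      # Hypothetical hugepages layout; check the edpm-ansible role docs
      # for the schema supported in your release.
      edpm_kernel_hugepages:
        2048:
          count: 1024
      # Must be set whenever edpm_kernel_hugepages is configured;
      # an empty string is valid if no extra kernel arguments are needed.
      edpm_kernel_args: ""
```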
Jira:OSPRH-10007
3.2.4. Networking
3.2.4.1. New features
This part describes new features and major enhancements introduced in Red Hat OpenStack Services on OpenShift 18.0.
Support for security group logging on Compute nodes
With this update, when security group logging is enabled, RHOSO writes logs to the data plane node that hosts the project instance. In the /var/log/messages file, each log entry contains the string acl_log.
3.2.4.2. Bug fixes
This part describes bugs fixed in Red Hat OpenStack Services on OpenShift 18.0 that have a significant impact on users.
Fixed delay between the oc patch command and update of OVN databases
Before this update, custom configuration settings applied with the oc patch command did not affect the Networking service (neutron) OVN databases until 10 minutes had passed.
This update eliminates the delay.
MAC_Binding aging functionality added back in RHOSO 18.0.1
The MAC_Binding aging functionality that was added in OSP 17.1.2 was missing from 18.0 GA. This update to RHOSO 18.0.1 adds it back.
3.2.4.3. Known issues
This part describes known issues in Red Hat OpenStack Services on OpenShift 18.0.
Delayed OVN database update after oc patch command
Any custom configuration settings applied with the oc patch command do not affect the Networking service OVN databases until 10 minutes have passed.
Workaround: After you replace old pods by using the oc patch command, use the oc delete pod command to delete the new neutron pods.
The pod deletion forces the new configuration to be set without the delay.
Metadata rate-limiting feature
Metadata rate-limiting is not available in RHOSO 18.0.1. A fix is in progress.
Jira:OSPRH-9569
3.2.5. Network Functions Virtualization
3.2.5.1. Bug fixes
This part describes bugs fixed in Red Hat OpenStack Services on OpenShift 18.0 that have a significant impact on users.
DPDK bonds are now validated in os-net-config
Previously, when OVS or DPDK bonds were configured with a single port, no error was reported even though the OVS bridge was not in the right state. With this update, os-net-config reports an error if the bond has a single interface.
3.2.5.2. Known issues
This part describes known issues in Red Hat OpenStack Services on OpenShift 18.0.
Do not use virtual functions (VF) for the RHOSO control plane interface
This RHOSO release does not support use of VFs for the RHOSO control plane interface.
3.2.6. Storage
3.2.6.1. Bug fixes
This part describes bugs fixed in Red Hat OpenStack Services on OpenShift 18.0 that have a significant impact on users.
Image import no longer remains in importing state after conversion with the ISO image format
Before this update, when you used image conversion with the ISO image format, the image import operation remained in an "importing" state.
With this update, the image import operation no longer remains in an "importing" state.
3.3. Release information RHOSO 18.0 GA
3.3.1. Advisory list
This release of Red Hat OpenStack Services on OpenShift (RHOSO) includes the following advisories:
- RHEA-2024:5245
- Release of components for RHOSO 18.0
- RHEA-2024:5246
- Release of containers for RHOSO 18.0
- RHEA-2024:5247
- Data plane Operators for RHOSO 18.0
- RHEA-2024:5248
- Control plane Operators for RHOSO 18.0
- RHEA-2024:5249
- Release of components for RHOSO 18.0
3.3.2. Observability
3.3.2.1. New features
This part describes new features and major enhancements introduced in Red Hat OpenStack Services on OpenShift 18.0.
Deploy metric storage with Telemetry Operator
The Telemetry Operator now supports deploying and operating Prometheus by using the cluster-observability-operator through a MonitoringStack resource.
Expanded interaction with metrics and alarms
You can now use the openstack metric and openstack alarm commands in the OpenStack CLI to interact with metrics and alarms. These commands are useful for troubleshooting.
Ceilometer uses TCP publisher to expose data for Prometheus
Ceilometer can now use the TCP publisher to publish metric data to sg-core, which exposes them for scraping by Prometheus.
Prometheus replaces Gnocchi for metrics storage and metrics-based autoscaling
In RHOSO 18.0, Prometheus replaces Gnocchi for metrics and metrics-based autoscaling.
Compute node log collection
RHOSO uses the Cluster Logging Operator (cluster-logging-operator) to collect and centrally store logs from OpenStack Compute nodes.
Graphing dashboards for OpenStack metrics
The Red Hat OpenShift Container Platform (RHOCP) console UI now provides graphing dashboards for OpenStack Metrics.
Jira:OSPRH-824
3.3.3. Compute
3.3.3.1. New features
This part describes new features and major enhancements introduced in Red Hat OpenStack Services on OpenShift 18.0.
The Compute service now supports native secure RBAC
In RHOSP 17.1, secure role-based access control (SRBAC) was implemented by using custom policy. In RHOSO 18.0, it is implemented by using native nova support for SRBAC. As a result, all OpenStack deployments support the admin, member, and reader roles by default.
Setting the hostname of the Compute service (nova) instance by using the Compute service API microversions 2.90 and 2.94
This enhancement enables you to set the hostname of the Compute service (nova) instance by using the Compute service API microversions 2.90 and 2.94 that are now included in the 18.0 release of RHOSO.
API microversion 2.90 enables you to specify an optional hostname when creating, updating, or rebuilding an instance. This is a short name (without periods), and it appears in the metadata available to the guest OS, either through the metadata API or on the configuration drive. If installed and configured in the guest, cloud-init uses this optional hostname to set the guest hostname.
API microversion 2.94 extends microversion 2.90 by enabling you to specify fully qualified domain names (FQDN) wherever you specify the hostname. When using an FQDN as the instance hostname, you must set the [api]dhcp_domain configuration option to the empty string in order for the correct FQDN to appear in the hostname field in the metadata API.
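For reference, the nova API configuration described above is a one-line setting; this is a minimal sketch of the option in a nova configuration file:

```ini
[api]
# Set to the empty string so that the FQDN supplied at microversion 2.94
# appears unmodified in the hostname field of the metadata API.
dhcp_domain =
```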
Manage dedicated CPU power state
You can now configure the nova-compute service to manage dedicated CPU power state by setting [libvirt]cpu_power_management to True.
This feature requires the Compute service to be configured with [compute]cpu_dedicated_set. With that setting, all dedicated CPUs are powered down until they are used by an instance, and are powered up when an instance that uses them is booted. If power management is configured but [compute]cpu_dedicated_set is not set, the Compute service does not start.
By default, the power strategy offlines CPUs when powering down and onlines CPUs when powering up, but another strategy is possible. Set [libvirt]cpu_power_management_strategy=governor to use governors instead, and use [libvirt]cpu_power_governor_low and [libvirt]cpu_power_governor_high to select which governors to use (powersave and performance).
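Taken together, the options above might look like the following nova-compute configuration sketch. The cpu_dedicated_set range is a hypothetical example for a host that dedicates CPUs 4-15 to instances:

```ini
[compute]
# Required when power management is enabled; hypothetical CPU range.
cpu_dedicated_set = 4-15

[libvirt]
cpu_power_management = True
cpu_power_management_strategy = governor
# Governors applied when a dedicated CPU is idle or in use.
cpu_power_governor_low = powersave
cpu_power_governor_high = performance
```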
Evacuate to STOPPED with v2.95
Starting with microversion 2.95, any evacuated instance is stopped at the destination. Operators can continue to use the previous behavior by selecting a microversion below 2.95. Prior to 2.95, if the VM was active before the evacuation, it was restored to the active state following the evacuation. If the workload encountered I/O corruption as a result of the hypervisor outage, this could make recovery harder or cause further issues if the workload was a clustered application that tolerated the failure of a single VM. For this reason, it is considered safer to always evacuate to STOPPED and allow the tenant to decide how to recover the VM.
Compute service hostname change
If the Compute service (nova) detects on startup that the hostname of a Compute host has changed, you must identify the reason for the hostname change. When you resolve the issue, you must restart the Compute service.
Create a neutron port without an IP address if the port requires only L2 network connectivity
You can now create an instance with a non-deferred port that has no fixed IP address if the network back end has L2 connectivity.
In previous releases of RHOSP, all neutron ports were required to have an IP address. The IP address assignment could be immediate (default) or deferred for L3 routed networks. In RHOSO 18.0, that requirement has been removed. You can now create a neutron port without an IP address if the port requires only L2 network connectivity.
To use this feature, set ip_allocation = 'none' on the neutron port before passing it to nova when creating a VM instance or attaching the port to an existing instance.
New enlightenments to the libvirt XML for Windows guests in RHOSO 18.0.0
This update adds the following enlightenments to the libvirt XML for Windows guests:
- vpindex
- runtime
- synic
- reset
- frequencies
- tlbflush
- ipi
This adds to the list of existing enlightenments:
- relaxed
- vapic
- spinlocks retries
- vendor_id spoofing
New default for managing instances on NUMA nodes
In RHOSP 17.1.4, the default was to pack instances on NUMA nodes. In RHOSO 18.0, the default has been changed to balance instances across NUMA nodes. To change the default and pack instances on NUMA nodes, set the following option in nova.conf on both the scheduler and the Compute nodes:

[compute]
packing_host_numa_cells_allocation_strategy = True
Rebuild a volume-backed instance with a different image
This update adds the ability to rebuild a volume-backed instance from a different image.
Before this update, you could only rebuild a volume-backed instance from the original image in the boot volume.
Now you can rebuild the instance after you have reimaged the boot volume on the cinder side.
This feature requires API microversion 2.93 or later.
Archive task_log database records
This enhancement adds the --task-log option to the nova-manage db archive_deleted_rows CLI. When you use the --task-log option, the task_log table records are archived while archiving the database. This option is the default in the nova-operator database purge cron job. Previously, there was no method to delete task_log table records without manual database modification.
You can use the --task-log option with the --before option to archive records that are older than a specified <date>. The updated_at field is compared to the specified <date> to determine the age of a task_log record for archival.
If you configure nova-compute with [DEFAULT]instance_usage_audit = True, the task_log database table maintains an audit log of instance usage.
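Combining the options described above, an invocation might look like the following sketch; the date is a hypothetical example:

```console
$ nova-manage db archive_deleted_rows --task-log --before "2024-01-01"
```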
Support for virtual IOMMU device
The libvirt driver can add a virtual IOMMU device to guests. This capability applies to x86 hosts that use the Q35 machine type. To enable the capability, provide the hw:viommu_model flavor extra spec or the equivalent image metadata property hw_viommu_model. The following values are supported: intel, smmuv3, virtio, auto. The default value is auto, which automatically selects virtio.
Due to the possible overhead introduced with vIOMMU, enable this capability only for required workloads.
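For example, the extra spec described above can be set on a flavor with the OpenStack CLI; the flavor name is a hypothetical example:

```console
$ openstack flavor set --property hw:viommu_model=virtio my-q35-flavor
```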
More options for the server unshelve command
With this update, new options are added to the server unshelve command in RHOSO 18.0.0.
The --host option allows administrators to specify a destination host. The --no-availability-zone option allows administrators to unpin the server from its availability zone. Both options require the server to be in the SHELVED_OFFLOADED state and the Compute API version to be 2.91 or greater.
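For example, to unshelve a server to a specific destination host (the host and server names are hypothetical examples):

```console
$ openstack --os-compute-api-version 2.91 server unshelve --host compute-1 my-server
```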
Support for the bochs libvirt video model
This release adds the ability to use the bochs libvirt video model. The bochs libvirt video model is a legacy-free video model that is best suited for UEFI guests. In some cases, it can be usable for BIOS guests, such as when the guest does not depend on direct VGA hardware access.
Schedule archival and purge of deleted rows from Compute service (nova) cells
The nova-operator now schedules a periodic job for each Compute service (nova) cell to archive and purge the deleted rows from the cell database. You can fine-tune the frequency of the job and the age of the database rows to archive and purge in the OpenStackControlPlane.spec.nova.template.cellTemplates[].dbPurge structure for each cell in cellTemplates.
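In an OpenStackControlPlane CR, the dbPurge structure might look like the following sketch. The field names and values under dbPurge are assumptions made to illustrate the shape; verify them against the nova-operator API reference for your release.

```yaml
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
spec:
  nova:
    template:
      cellTemplates:
        cell1:
          dbPurge:
            # Hypothetical values: run daily, archive rows older than
            # 30 days, and purge archived rows older than 60 days.
            schedule: "0 1 * * *"
            archiveAge: 30
            purgeAge: 60
```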
3.3.3.2. Bug fixes
This part describes bugs fixed in Red Hat OpenStack Services on OpenShift 18.0 that have a significant impact on users.
Migrating a paused instance no longer generates error messages
Before this update, live migration of a paused instance with live_migration_permit_post_copy=True in nova.conf caused the libvirt driver to erroneously generate error messages similar to the following example:
"Live Migration failure: argument unsupported: post-copy migration is not supported with non-live or paused migration: libvirt.libvirtError: argument unsupported: post-copy migration is not supported with non-live or paused migration."
With this update, the error message is not generated when you live migrate a paused instance with live_migration_permit_post_copy=True.
No network block device (NBD) live migration with TLS enabled
In RHOSO 18.0 Beta, a bug prevented you from using a network block device (NBD) to live migrate storage between Compute nodes with TLS enabled. See https://issues.redhat.com/browse/OSPRH-6931.
This issue has been resolved, and live migration with TLS enabled is supported with local storage.
Cannot delete instance when cpu_power_management is set to true
In the RHOSO 18.0.0 Beta release, a known issue was discovered that prevented the deletion of an instance shortly after it was created if power management was enabled.
This issue has been fixed in the RHOSO 18.0.0 release.
Jira:OSPRH-7103
3.3.3.3. Technology Previews
This part provides a list of all Technology Previews available in Red Hat OpenStack Services on OpenShift 18.0.
For information on the scope of support for Technology Preview features, see Example.
Technology preview of PCI device tracking in Placement service
RHOSO 18.0.0 introduces a technology preview of the ability to track PCI devices in the OpenStack Placement service.
Tracking PCI devices in the Placement service enables you to use granular quotas on PCI devices when combined with the Unified Limits Technology Preview.
PCI tracking in the Placement service is disabled by default and is limited to flavor-based PCI passthrough. Support for the Networking service (neutron) SRIOV ports is not implemented, but is required before this feature is fully supported.
Use of Identity service (Keystone) unified limits in the Compute service (nova)
This RHOSO release supports Identity service unified limits in the Compute service. Unified limits centralize management of resource quota limits in the Identity service (Keystone) and enable flexibility for users to manage quota limits for any Compute service resource being tracked in the Placement service.
3.3.3.4. Removed functionality
This part provides an overview of functionality that has been removed in Red Hat OpenStack Services on OpenShift 18.0.
Removed functionality is no longer supported in this product and is not recommended for new deployments.
Keypair generation removed from RHOSO 18
Keypair generation was deprecated in RHOSP 17 and has been removed from RHOSO 18. You must now create the keypair in advance by using the SSH command-line tool ssh-keygen, and then pass the public key to the nova API.
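For example, the two steps described above might look like the following; the key path and keypair name are hypothetical examples:

```console
$ ssh-keygen -t ed25519 -f ~/.ssh/rhoso-key -N ""
$ openstack keypair create --public-key ~/.ssh/rhoso-key.pub my-keypair
```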
i440fx PC machine type no longer tested or supported
In RHOSP 17, the i440fx PC machine type, pc-i440fx, was deprecated and Q35 became the default machine type for x86_64.
In RHOSO 18, the i440fx PC machine type is no longer tested or supported.
The i440fx PC machine type is still available for use under a support exception for legacy applications that cannot function with the Q35 machine type. If you have such a workload, contact Red Hat support to request a support exception.
With the removal of support for the i440fx PC machine type, you cannot use pc-i440fx to certify VNFs or third-party integrations. You must use the Q35 machine type.
Jira:OSPRH-7373
Unsupported: vDPA and hardware offload OVS are unsupported
Hardware offload OVS consists of processing network traffic in hardware with the kernel switchdev and TC flower mechanisms.
vDPA extends hardware offload OVS by providing a vendor-neutral virtio-net interface to the guest, decoupling the workload from the specifics of the host hardware instead of presenting a vendor-specific virtual function.
Both hardware offload OVS and vDPA are unsupported in RHOSO 18.0, with no upgrade path available for existing users.
At this time, there is no plan to reintroduce this functionality or to continue to invest in new features related to vDPA or hardware offload OVS.
If you have a business requirement for these removed features, contact Red Hat support or your partner and Technical Account Manager so that Red Hat can reassess the demand for these features for a future RHOSO release.
Jira:OSPRH-7829
3.3.3.5. Known issues
This part describes known issues in Red Hat OpenStack Services on OpenShift 18.0.
Setting hw_architecture or architecture on an Image service (glance) image does not work as expected
In RHOSO 18.0, the image metadata prefilter is enabled by default. RHOSO does not support emulation of non-native architectures. As part of the introduction of emulation support upstream, the image metadata prefilter was enhanced to support the scheduling of instances based on the declared VM architecture, for example, hw_architecture=x86_64.
When nova was enhanced to support emulating non-native architectures by using image properties, a bug was introduced because the native architecture was not reported as a trait by the virt driver.
Therefore, by default, support for setting hw_architecture or architecture on an image was rendered inoperable.
To mitigate this bug, perform one of the following tasks:
- Unset the architecture/hw_architecture image property. RHOSO supports only one architecture, x86_64. There is no valid use case that requires this property to be set for an RHOSO cloud, because all hosts are x86_64.
- Disable the image metadata prefilter in the CustomServiceConfig section of the nova scheduler:

[scheduler]
image_metadata_prefilter=false
QEMU process failure
A paused instance that uses local storage cannot be live migrated more than once. The second migration causes the QEMU process to crash, and nova puts the instance into ERROR state.
Workaround: If feasible, unpause the instance temporarily, then pause it again before the second live migration.
It is not always feasible to unpause an instance. For example, suppose the instance uses a multi-attach cinder volume, and pause is used to limit access to that volume to a single instance while the other is kept in the paused state. In this case, unpausing the instance is not a feasible workaround.
Compute service power management feature disabled by default
The Compute service (nova) power management feature is disabled by default. You can enable it with the following nova-compute configuration:

[libvirt]
cpu_power_management = true
cpu_power_management_strategy = governor

The default cpu_power_management_strategy value, cpu_state, is not supported at the moment due to a bug that causes NUMA resource tracking issues: all disabled CPUs are reported on NUMA node 0 instead of on the correct NUMA node.
3.3.4. Data plane
3.3.4.1. Known issues
This part describes known issues in Red Hat OpenStack Services on OpenShift 18.0.
Using the download-cache service prevents Podman from pulling images for data plane deployment
Do not list the download-cache service in spec.services of the OpenStackDataPlaneNodeSet. If you list download-cache in the OpenStackDataPlaneNodeSet, Podman cannot pull the container images required by the data plane deployment.
Workaround: Omit the download-cache service from the default services list in the OpenStackDataPlaneNodeSet.
Jira:OSPRH-9500
3.3.5. Hardware Provisioning
3.3.5.1. Bug fixes
This part describes bugs fixed in Red Hat OpenStack Services on OpenShift 18.0 that have a significant impact on users.
Increased EFI partition size
Before RHOSP 17.1.4, the EFI partition size of an overcloud node was 16 MB. With this update, the image used for provisioned EDPM nodes has an EFI partition size of 200 MB, to align with RHEL and to accommodate firmware upgrades.
3.3.6. Networking
3.3.6.1. New features
This part describes new features and major enhancements introduced in Red Hat OpenStack Services on OpenShift 18.0.
Octavia Operator availability zones
The Octavia Management network created and managed by the Octavia operator requires that the OpenStack routers and networks are scheduled on the OVN controller on the OpenShift worker nodes.
If the OpenStack Networking Service (neutron) is configured with non-default availability zones, the OVN controller pod on the OpenShift worker and Octavia must be configured with the same availability zone.
Example:

ovn:
  template:
    ovnController:
      external-ids:
        availability-zones:
          - zone1
octavia:
  template:
    lbMgmtNetwork:
      availabilityZones: zone1
3.3.6.2. Bug fixes
This part describes bugs fixed in Red Hat OpenStack Services on OpenShift 18.0 that have a significant impact on users.
OVN pod no longer goes into a loop due to NIC mapping
Before this update, when you used a large number of NIC mappings, the OVN pod could go into a creation loop. This issue is now fixed.
Jira:OSPRH-7480
3.3.6.3. Technology Previews
This part provides a list of all Technology Previews available in Red Hat OpenStack Services on OpenShift 18.0.
For information on the scope of support for Technology Preview features, see Example.
QoS minimum bandwidth policy (technology preview)
In RHOSO 18.0.0, a technology preview is available for the Networking service (neutron) for QoS minimum bandwidth for placement reporting and scheduling.
Load-balancing service (Octavia) support of multiple VIP addresses
This update adds a technology preview of support for multiple VIP addresses allocated from the same Neutron network for the Load-balancing service.
You can now specify additional subnet_id/ip_address pairs for the same VIP port. This makes it possible to configure the Load-balancing service with both IPv4 and IPv6 exposed to both public and private subnets.
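For illustration, an additional VIP on a second subnet of the same network can be requested at creation time; the sketch below assumes the --additional-vip option of the OpenStack client, and all IDs are placeholders:

```shell
# Create a load balancer with a primary IPv4 VIP and an additional
# IPv6 VIP allocated from another subnet of the same network.
openstack loadbalancer create \
  --name lb1 \
  --vip-subnet-id <ipv4_subnet_id> \
  --additional-vip subnet-id=<ipv6_subnet_id>,ip-address=<ipv6_address>
```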
3.3.6.4. Known issues
This part describes known issues in Red Hat OpenStack Services on OpenShift 18.0.
Delayed OVN database update after oc patch command
Custom configuration settings applied with the oc patch command do not affect the Networking service (neutron) OVN databases until 10 minutes have passed.
Workaround: After the old pods are replaced following an oc patch command, delete the new neutron pods manually with the oc delete pod command.
The pod deletion forces the new configuration to be applied without the delay.
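As a hedged sketch of this workaround, assuming the Networking service pods carry a service=neutron label in the openstack namespace (both the label and the namespace are assumptions; check your deployment):

```shell
# After applying custom configuration with oc patch, delete the new
# neutron pods so they are recreated with the updated configuration
# immediately, instead of after the ~10 minute delay.
oc delete pod -n openstack -l service=neutron
```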
MAC_Binding aging functionality missing in RHOSO 18.0.0
The MAC_Binding aging functionality that was added in RHOSP 17.1.2 is missing from RHOSO 18.0 GA. A fix is in progress.
Metadata rate-limiting feature
Metadata rate-limiting is not available in RHOSO 18.0.0. A fix is in progress.
Jira:OSPRH-9569
3.3.7. Network Functions Virtualization
3.3.7.1. New features
This part describes new features and major enhancements introduced in Red Hat OpenStack Services on OpenShift 18.0.
AMD CPU powersave profiles
A power save profile, cpu-partitioning-powersave, was introduced in Red Hat Enterprise Linux 9 (RHEL 9), and made available in Red Hat OpenStack Platform (RHOSP) 17.1.3.
This TuneD profile is the base building block for saving power in NFV environments. RHOSO 18.0 adds cpu-partitioning-powersave support for AMD CPUs.
Jira:OSPRH-2268
3.3.7.2. Bug fixes
This part describes bugs fixed in Red Hat OpenStack Services on OpenShift 18.0 that have a significant impact on users.
Physical function (PF) MAC address now matches between VM instances and SR-IOV physical functions (PFs)
This update fixes a bug that caused a PF MAC address mismatch between VM instances and SR-IOV PFs (Networking service ports with vnic-type set to direct-physical).
In the RHOSO 18.0 Beta release, a bug in the Compute service (nova) prevented the MAC address of SR-IOV PFs from being updated correctly when attached to a VM instance.
Now the MAC address of the PF is set on the corresponding neutron port.
3.3.7.3. Technology Previews
This part provides a list of all Technology Previews available in Red Hat OpenStack Services on OpenShift 18.0.
For information on the scope of support for Technology Preview features, see Example.
nmstate provider back end for os-net-config (technology preview)
In RHOSO 18.0, a technology preview is available for the nmstate provider back end in os-net-config.
This technology preview of nmstate and NIC hardware offload has known issues that make it unsuitable for production use. For production, use the openstack-network-scripts package rather than nmstate and NetworkManager.
There is a production-ready native nmstate mode that you can select during installation, but its network configuration, which must be provided in nmstate format, is not backward compatible with templates from TripleO. It also lacks certain features that os-net-config provides, such as NIC name mapping and DSCP configuration.
Data Center Bridge (DCB)-based QoS settings technology preview
Port- and interface-specific DCB-based QoS settings are now available as a technology preview as part of the os-net-config tool’s network configuration template. For more information, see the knowledge base article: https://access.redhat.com/articles/7062865
Jira:OSPRH-2889
3.3.7.4. Deprecated functionality
This part provides an overview of functionality that has been deprecated in Red Hat OpenStack Services on OpenShift 18.0.
Deprecated functionality will likely not be supported in future major releases of this product and is not recommended for new deployments.
TimeMaster service is deprecated in RHOSO 18.0
In RHOSO 18.0, support for the TimeMaster service is deprecated. Bug fixes and support are provided through the end of the RHOSO 18.0 lifecycle, but no new feature enhancements will be made.
Jira:OSPRH-8244
3.3.7.5. Known issues
This part describes known issues in Red Hat OpenStack Services on OpenShift 18.0.
Do not use virtual functions (VF) for the RHOSO control plane interface
This RHOSO release does not support use of VFs for the RHOSO control plane interface.
Bonds require a minimum of two interfaces
If you configure an OVS or DPDK bond, always configure at least two interfaces. Bonds with only a single interface do not function as expected.
3.3.8. High availability
3.3.8.1. New features
This part describes new features and major enhancements introduced in Red Hat OpenStack Services on OpenShift 18.0.
Password rotation
This update introduces the ability to generate and rotate OpenStack database passwords.
3.3.9. Storage
3.3.9.1. New features
This part describes new features and major enhancements introduced in Red Hat OpenStack Services on OpenShift 18.0.
Shared File Systems support for scaleable CephFS-NFS
The Shared File Systems service (manila) now supports a scaleable CephFS-NFS service. In earlier releases of Red Hat OpenStack Platform, only active/passive high-availability that was orchestrated with Director, using Pacemaker/Corosync, was supported. With this release, deployers can create active/active clusters of CephFS-NFS and integrate these clusters with the Shared File Systems service for improved scalability and high availability for NFS workloads.
Block Storage service (cinder) volume deletion
In previous releases, when the Block Storage service used an RBD (Ceph) volume back end, it was not always possible to delete a volume. With this release, the Block Storage service RBD driver takes advantage of recent Ceph developments to allow RBD volumes to meet normal volume deletion expectations.
project_id in API URLs now optional
You are no longer required to include project_id in Block Storage service (cinder) API URLs.
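For illustration, both URL forms below reach the same volume listing; the endpoint host and IDs are placeholders:

```shell
# Legacy form, with the project ID embedded in the path:
curl -H "X-Auth-Token: $TOKEN" https://cinder.example.com/v3/<project_id>/volumes
# New optional form, without the project ID:
curl -H "X-Auth-Token: $TOKEN" https://cinder.example.com/v3/volumes
```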
Dell PowerStore storage systems driver
A new share driver has been added to support Dell PowerStore storage systems with the Shared File Systems service (manila).
Jira:OSPRH-4425
Dell PowerFlex storage systems driver
A new share driver has been added to support Dell PowerFlex storage systems with the Shared File Systems service (manila).
Jira:OSPRH-4426
openstack-must-gather SOS report support
You can now collect diagnostic information about your RHOSO deployment by using the openstack-must-gather tool.
You can retrieve SOS reports for both the RHOCP control plane and RHOSO data plane nodes using a single command, and options are available to dump specific information related to a particular deployed service.
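For example, a single invocation such as the following collects the data; the image reference is the upstream default and may differ in your environment:

```shell
# Gather RHOSO diagnostics, including SOS reports from control plane
# and data plane nodes, into a local must-gather.* directory.
oc adm must-gather --image=quay.io/openstack-k8s-operators/openstack-must-gather
```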
3.3.9.2. Bug fixes
This part describes bugs fixed in Red Hat OpenStack Services on OpenShift 18.0 that have a significant impact on users.
Key Manager service configuration fix enables Image service image signing and verification
With this fix, the Image service (glance) is automatically configured to interact with the Key Manager service (barbican), and you can now perform encrypted image signing and verification.
Fixed faulty share creation in the NetApp ONTAP driver when using SVM scoped accounts
Due to a faulty Kerberos enablement check during share creation, the NetApp ONTAP driver failed to create shares when configured with SVM-scoped accounts. With this fix in openstack-manila, share creation works as expected.
Jira:OSPRH-8044
3.3.9.3. Technology Previews
This part provides a list of all Technology Previews available in Red Hat OpenStack Services on OpenShift 18.0.
For information on the scope of support for Technology Preview features, see Example.
Deployment and scale of Object Storage service
This feature allows for the deployment and scale of Object Storage service (swift) data on data plane nodes. This release of the feature is a technology preview.
Jira:OSPRH-1307
3.3.9.4. Known issues
This part describes known issues in Red Hat OpenStack Services on OpenShift 18.0.
RGW does not pass certain Tempest object storage metadata tests
Red Hat OpenStack Services on OpenShift 18.0 supports Red Hat Ceph Storage 7. Red Hat Ceph Storage 7 RGW does not pass certain Tempest object storage metadata tests as tracked by the following Jiras:
- https://issues.redhat.com/browse/RHCEPH-6708
- https://issues.redhat.com/browse/RHCEPH-9119
- https://issues.redhat.com/browse/RHCEPH-9122
- https://issues.redhat.com/browse/RHCEPH-4654
Jira:OSPRH-7464
Image import remains in importing state after conversion with ISO image format
When you use image conversion with the ISO image format, the image import operation remains in an "importing" state.
Workaround: If your deployment supports uploading images in ISO format, you can use the image-create command to upload ISO images, as shown in the following example, instead of using image conversion with the image-create-via-import command.
Example:
glance image-create \
  --name <iso_image> \
  --disk-format iso \
  --container-format bare \
  --file <my_file.iso>
- Replace <iso_image> with the name of your image.
- Replace <my_file.iso> with the file name of your image.
3.3.10. Dashboard
3.3.10.1. New features
This part describes new features and major enhancements introduced in Red Hat OpenStack Services on OpenShift 18.0.
Hypervisor status now includes vCPU and pCPU information
Before this update, pCPU usage was excluded from the hypervisor status in the Dashboard service (horizon) even if the cpu_dedicated_set configuration option was set in the nova.conf file. This enhancement uses the Placement API to display information about vCPUs and pCPUs. You can view vCPU and pCPU usage diagrams under the Resource Providers Summary, and find more information about vCPUs and pCPUs on the new Resource provider tab in the Hypervisors panel.
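For reference, the related Compute service options are set in the [compute] section of nova.conf; the CPU ranges below are illustrative:

```ini
[compute]
# Host CPUs reserved for pinned (dedicated) instance vCPUs.
cpu_dedicated_set = 2-7
# Host CPUs shared among unpinned instance vCPUs.
cpu_shared_set = 0-1
```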
Dashboard container customization
With this update, you can customize the OpenStack Dashboard (horizon) container. You can perform the customization by using the extra mounts feature to add or change files inside the Dashboard container.
TLS everywhere in RHOSO Dashboard Operator
With this update, the RHOSO Dashboard (horizon) Operator automatically configures TLS-related configuration settings.
These settings include certificates and response headers when appropriate, including the secure cookies and HSTS headers for serving over HTTPS.
3.3.10.2. Bug fixes
This part describes bugs fixed in Red Hat OpenStack Services on OpenShift 18.0 that have a significant impact on users.
Host spoofing protective measure
Before this update, the hosts configuration option was not populated with the minimum hosts necessary to protect against host spoofing.
With this update, the hosts configuration option is now correctly populated.
Dashboard service operators now include HSTS header
Before this update, HSTS was only enabled in Django through the Dashboard service (horizon) application. However, user HTTPS sessions were going through the OpenShift route, where HSTS was disabled. With this update, HSTS is enabled on the OpenShift route.
3.4. Release information RHOSO 18.0 Beta
3.4.1. Advisory list
This release of Red Hat OpenStack Services on OpenShift (RHOSO) includes the following advisories:
- RHEA-2024:3646
- RHOSO 18.0 Beta container images, data plane 1.0 Beta
- RHEA-2024:3647
- RHOSO 18.0 Beta container images, control plane 1.0 Beta
- RHEA-2024:3648
- RHOSO 18.0 Beta service container images
- RHEA-2024:3649
- RHOSO 18.0 Beta packages
3.4.2. Compute
3.4.2.1. New features
This part describes new features and major enhancements introduced in Red Hat OpenStack Services on OpenShift 18.0.
You can schedule archival and purge of deleted rows from Compute service (nova) cells
The nova-operator now schedules a periodic job for each Compute service (nova) cell to archive and purge deleted rows from the cell database. You can fine-tune the frequency of the job and the age of the database rows to archive and purge in the OpenStackControlPlane.spec.nova.template.cellTemplates[].dbPurge structure for each cell.
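A sketch of the structure follows; the cell name and the field names (schedule, archiveAge, purgeAge) reflect the nova-operator API as understood here and should be verified against your operator version:

```yaml
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
spec:
  nova:
    template:
      cellTemplates:
        cell1:
          dbPurge:
            # Cron-style schedule for the archive/purge job (assumed field name).
            schedule: "0 1 * * *"
            # Archive rows deleted more than this many days ago (assumed field name).
            archiveAge: 30
            # Purge archived rows older than this many days (assumed field name).
            purgeAge: 60
```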
3.4.2.2. Deprecated functionality
This part provides an overview of functionality that has been deprecated in Red Hat OpenStack Services on OpenShift 18.0.
Deprecated functionality will likely not be supported in future major releases of this product and is not recommended for new deployments.
i440fx PC machine type no longer tested or supported
In RHOSP 17, the i440fx PC machine type, pc-i440fx, was deprecated and Q35 became the default machine type for x86_64.
In RHOSP 18, the i440fx PC machine type is no longer tested or supported.
The i440fx PC machine type is still available for use under a support exception for legacy applications that cannot function with the Q35 machine type. If you have such a workload, contact Red Hat support to request a support exception.
With the removal of support for the i440fx PC machine type from RHOSP, you cannot use pc-i440fx to certify VNFs or third-party integrations. You must use the Q35 machine type.
Jira:OSPRH-7373
3.4.2.3. Known issues
This part describes known issues in Red Hat OpenStack Services on OpenShift 18.0.
No network block device (NBD) live migration with TLS enabled
In RHOSO 18.0 Beta, a bug prevents you from using network block device (NBD) to live migrate storage between Compute nodes with TLS enabled. See https://issues.redhat.com/browse/OSPRH-6931.
This issue only affects storage migration when TLS is enabled. You can live migrate storage with TLS not enabled.
Do not mix NUMA and non-NUMA instances on same Compute host
Instances without a NUMA topology should not coexist with NUMA instances on the same host.
Cannot delete instance when cpu_power_management is set to true
When an instance is first started and the host core state is changed, there is a short period during which the core state cannot be updated again. During this period, instance deletion can fail. If this happens, a second delete attempt should succeed after a delay of a few seconds.
Jira:OSPRH-7103
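Operationally, the recovery amounts to retrying the deletion after a short delay, for example (the server name is a placeholder):

```shell
# If the first delete fails while the host core state is settling,
# wait a few seconds and retry.
openstack server delete <server> || { sleep 10; openstack server delete <server>; }
```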
3.4.3. Networking
3.4.3.1. Known issues
This part describes known issues in Red Hat OpenStack Services on OpenShift 18.0.
OVN pod goes into loop due to NIC Mapping
When using a large number of NIC mappings, OVN might go into a creation loop.
Jira:OSPRH-7480
3.4.4. Network Functions Virtualization
3.4.4.1. Known issues
This part describes known issues in Red Hat OpenStack Services on OpenShift 18.0.
Listing physical function (PF) ports using neutron might show the wrong MAC
Lists of PF ports might show the wrong MAC address.
3.4.5. Storage
3.4.5.1. Known issues
This part describes known issues in Red Hat OpenStack Services on OpenShift 18.0.
Image uploads might fail if a multipathing path for Block Storage service (cinder) volumes is offline
If you use multipath for Block storage service volumes, and you have configured the Block Storage service as the back end for the Image service (glance), image uploads might fail if one of the paths goes offline.
RGW does not pass certain Tempest object storage metadata tests
Red Hat OpenStack Services on OpenShift 18.0 supports Red Hat Ceph Storage 7. Red Hat Ceph Storage 7 RGW does not pass certain Tempest object storage metadata tests as tracked by the following Jiras:
- https://issues.redhat.com/browse/RHCEPH-6708
- https://issues.redhat.com/browse/RHCEPH-9119
- https://issues.redhat.com/browse/RHCEPH-9122
- https://issues.redhat.com/browse/RHCEPH-4654
Jira:OSPRH-7464
Missing Barbican configuration in the Image service (glance)
The Image service is not automatically configured to interact with Key Manager (barbican), and encrypted image signing and verification fails due to the missing configuration.
Jira:OSPRH-7155
3.4.6. Release delivery
3.4.6.1. Removed functionality
This part provides an overview of functionality that has been removed in Red Hat OpenStack Services on OpenShift 18.0.
Removed functionality is no longer supported in this product and is not recommended for new deployments.
Removal of snmp and snmpd
The snmp service and snmpd daemon are removed in RHOSO 18.0.
3.4.7. Integration test suite
3.4.7.1. Known issues
This part describes known issues in Red Hat OpenStack Services on OpenShift 18.0.
Tempest test-operator does not work with LVMS storage class
When the test-operator is used to run Tempest, it requests a ReadWriteMany PersistentVolumeClaim (PVC), which the LVMS storage class does not support. This causes the tempest-test pod to become stuck in the Pending state.
Workaround: Use the test-operator with a storage class that supports ReadWriteMany PVCs. The test-operator should work with a ReadWriteOnce PVC, so the fixed version will no longer request a ReadWriteMany PVC.