Chapter 7. Bug fixes


This section describes the notable bug fixes introduced in Red Hat OpenShift Data Foundation 4.20.

7.1. Multicloud Object Gateway

  • Noobaa certificate verification for NamespaceStore endpoints

    Previously, the CA bundle was not validated when mounting NamespaceStore endpoints, which caused failures in loading and consuming the provided CA bundles. Validation for CA bundles has now been added to ensure proper certificate verification.

    (DFBUGS-2712)
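
    The following is a minimal sketch, in Python, of the kind of validation that catches an unusable CA bundle before it is consumed. The bundle path in the usage lines is a placeholder; this illustrates the idea of the check, not the operator's actual implementation.

    ```python
    import ssl

    def validate_ca_bundle(pem_text: str) -> bool:
        """Return True if the PEM text loads cleanly as a CA trust store."""
        ctx = ssl.create_default_context()
        try:
            # load_verify_locations raises ssl.SSLError on malformed or empty PEM data
            ctx.load_verify_locations(cadata=pem_text)
        except ssl.SSLError:
            return False
        return len(ctx.get_ca_certs()) > 0

    # Placeholder path for the CA bundle that would be provided to the NamespaceStore.
    with open("/tmp/ca-bundle.pem") as f:
        if not validate_ca_bundle(f.read()):
            raise SystemExit("CA bundle is missing or malformed; refusing to use it")
    ```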

  • Support for AWS region ap-east-2 in Noobaa operator

    Previously, the ap-east-2 region was missing from the MCG operator-supported regions list, preventing creation of a default BackingStore when deployed in this region. The missing region has now been added to the supported list.

    (DFBUGS-2802)

  • Noobaa no longer fails to issue deletes to RGW

    A configuration change caused delays in deleting large numbers of small objects from the underlying RGW storage. This impacted performance during high-volume delete operations. The issue was resolved by reverting the configuration change, eliminating the delay in deletion from the underlying storage.

    (DFBUGS-2916)

7.2. Disaster recovery

  • ACM console view persistence on hard refresh

    Previously, a hard refresh from the ACM console caused the view to revert to the OCP (local-cluster) console. This was because Multicluster Orchestrator console routes were not registered properly for ACM (all clusters) view, which disrupted the expected navigation behavior. The routing logic has now been corrected, and refreshing the browser no longer changes the active view. Users remain in the ACM console as intended.

    (DFBUGS-4061)

  • DR status now visible for VMs

    Previously, the DR Status was missing on the VM list page, and the Remove disaster recovery option was not available when managing VMs protected using label selectors. This happened because the UI could not correctly identify the VM’s cluster and its DRPC.

    The issue was fixed by reading the VM cluster from the correct field and improving how DRPCs are parsed when label selectors are used. Now, both the DR Status and the Remove disaster recovery options work as expected.

    (DFBUGS-4286)

  • Disabling DR for a CephFS application with consistency groups enabled no longer leaves some resources behind

    Previously, disabling DR for a CephFS application with consistency groups enabled left some resources behind, requiring manual cleanup. This issue has been fixed so that no resources are left behind and manual cleanup is no longer required.

    (DFBUGS-2950)

  • s3StoreProfile in ramen-hub-operator-config after upgrade from 4.18 to 4.19

    Previously, after upgrading from 4.18 to 4.19, the ramen-hub-operator-config ConfigMap was overwritten with default values from the Ramen-hub CSV. This caused loss of custom S3Profiles and other configurations added by the Multicluster Orchestrator (MCO) operator. The issue has been fixed to preserve custom entries during upgrade, preventing disruption in S3 profile configurations.

    (DFBUGS-3634)
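
    A minimal sketch, assuming the kubernetes Python client, of a read-merge-write update that preserves existing custom keys in a ConfigMap instead of overwriting them with defaults. The namespace and default payload are placeholders; this is not Ramen's actual reconciliation code.

    ```python
    from kubernetes import client, config

    def merge_configmap_defaults(name: str, namespace: str, defaults: dict) -> None:
        """Add default keys only where no value exists, preserving custom entries."""
        config.load_kube_config()  # or config.load_incluster_config() inside a pod
        core = client.CoreV1Api()

        cm = core.read_namespaced_config_map(name, namespace)
        current = cm.data or {}

        # Existing keys (for example, custom S3 profile entries) win over defaults.
        merged = {**defaults, **current}
        if merged != current:
            core.patch_namespaced_config_map(name, namespace, {"data": merged})

    # Placeholder namespace and default data for illustration only.
    merge_configmap_defaults(
        "ramen-hub-operator-config",
        "openshift-operators",
        defaults={"ramen_manager_config.yaml": "# default manager configuration"},
    )
    ```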

  • virtualmachines.kubevirt.io resource no longer fails restore due to MAC allocation failure on relocate

    Previously, when a virtual machine was relocated back to the preferred cluster, the relocation could fail because its MAC address was unavailable. This occurred if the virtual machine was not fully cleaned up on the preferred cluster after being failed over to the failover cluster. This cleanup process has been corrected, ensuring successful relocation to the preferred cluster.

    (BZ#2295404)

  • Failover process no longer fails when the ReplicationDestination resource has not been created yet

    Previously, if the user initiated a failover before the LastGroupSyncTime was updated, the failover process would fail. This failure was accompanied by an error message indicating that the ReplicationDestination does not exist.

    This issue has been resolved, and failover works as expected.

    (DFBUGS-632)

  • After relocation of a consistency group-based workload, synchronization no longer stops

    Previously, when applications using CephRBD volumes with volume consistency groups were running and the secondary managed cluster went offline, replication for these volumes could stop indefinitely, even after the secondary cluster came back online. During this condition, the VolumeSynchronizationDelay alert was triggered, starting with a Warning status and later escalating to Critical, indicating that replication had ceased for the affected volumes. This issue has been resolved to ensure replication resumes automatically when the secondary cluster is restored.

    (DFBUGS-3812)

7.3. Rook

  • Ceph monitor endpoints fully visible

    Previously, only one of the three Ceph monitor endpoints appeared because entries were missing from the CSI ConfigMap. As a result, CSI communication relied on a single mon and had no fault tolerance.

    The issue was fixed by adding all monitor endpoints to the ConfigMap. Now, all mons are visible, and CSI communication is fault tolerant.

    (DFBUGS-4344)

7.4. OpenShift Data Foundation console

  • Fixed StorageSystem creation wizard issues

    Previously, the Network Type field for Host was missing, resulting in empty network details and a misleading tooltip that described Multus instead of the actual host configuration. This caused confusion in the summary view, where users saw no network information and an inaccurate tooltip.

    With this update, the tooltips were removed and replaced with radio buttons featuring correct labels and descriptions.

    (DFBUGS-2582)

  • Force delete option restored for stuck StorageConsumer

    Previously, users were unable to forcefully delete a StorageConsumer resource if it was stuck in a deletion state due to the presence of a deletionTimestamp.

    This issue has been resolved by updating the Actions menu to enable Delete StorageConsumer even when a deletionTimestamp is present. As a result, you can force delete StorageConsumer resources when required.

    (DFBUGS-2819)

  • Fix for Disaster Recovery misconfiguration after upgrade from v4.17.z to v4.18

    Previously, the upgrade process resulted in incorrect DR resource configurations, impacting workloads that rely on ocs-storagecluster-ceph-rbd and ocs-storagecluster-ceph-rbd-virtualization storage classes.

    With this fix, the DR resources are correctly configured after the upgrade.

    (DFBUGS-1804)

  • Warning message in the UI right after creation of StorageCluster no longer appears

    Previously, a warning popup appeared in the UI during the creation of a StorageSystem or StorageCluster. This was caused by the Virtualization StorageClass not being annotated with storageclass.kubevirt.io/is-default-virt-class: "true", by default, after deployment.

    With this fix, the required annotation is applied automatically, preventing unnecessary warnings.

    (DFBUGS-2921)
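
    A minimal sketch, assuming the kubernetes Python client, of how the annotation could be checked and applied to a StorageClass. The class name in the usage line is taken from the storage classes mentioned elsewhere in this chapter and may differ in your deployment; this is not the operator's actual code.

    ```python
    from kubernetes import client, config

    VIRT_DEFAULT_ANNOTATION = "storageclass.kubevirt.io/is-default-virt-class"

    def ensure_default_virt_class(storage_class_name: str) -> None:
        """Annotate the given StorageClass as the default virtualization class
        unless some StorageClass already carries the annotation."""
        config.load_kube_config()
        storage = client.StorageV1Api()

        for sc in storage.list_storage_class().items:
            annotations = sc.metadata.annotations or {}
            if annotations.get(VIRT_DEFAULT_ANNOTATION) == "true":
                return  # a default virtualization class already exists

        patch = {"metadata": {"annotations": {VIRT_DEFAULT_ANNOTATION: "true"}}}
        storage.patch_storage_class(storage_class_name, patch)

    ensure_default_virt_class("ocs-storagecluster-ceph-rbd-virtualization")
    ```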

  • PVC type misclassification resolved in UI

    Previously, the UI incorrectly displayed block PVCs as filesystem PVCs because of an outdated filtering method that relied on assumptions based on VRG naming conventions. This led to confusion, as the PVC type was inaccurately reported.

    To address this, the filter distinguishing block and filesystem PVCs has been removed, acknowledging that a group can contain both types. This change eliminates the misclassification and ensures accurate representation of PVCs in the UI.

    (DFBUGS-4219)

  • Bucket Lifecycle rule deletion now supported

    Previously, it was not possible to delete the last remaining bucket lifecycle rule because of a backend error: attempting to update the LifecycleConfiguration with empty rules triggered a 500 response.

    This has been fixed by switching to deleteBucketLifecycle for cases where the entire lifecycle configuration needs to be cleaned up. As a result, you can delete all bucket lifecycle rules without encountering errors.

    (DFBUGS-2960)
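
    A minimal sketch of the same pattern using the AWS SDK for Python (boto3) against an S3-compatible endpoint. The endpoint URL, credentials, bucket name, and rule ID are placeholders.

    ```python
    import boto3

    # Placeholder endpoint and credentials for an S3-compatible service such as MCG.
    s3 = boto3.client(
        "s3",
        endpoint_url="https://s3.example.com",
        aws_access_key_id="ACCESS_KEY",
        aws_secret_access_key="SECRET_KEY",
    )

    bucket = "example-bucket"
    current = s3.get_bucket_lifecycle_configuration(Bucket=bucket)
    remaining = [r for r in current["Rules"] if r["ID"] != "rule-to-remove"]

    if remaining:
        # Some rules remain: update the configuration with the reduced rule set.
        s3.put_bucket_lifecycle_configuration(
            Bucket=bucket, LifecycleConfiguration={"Rules": remaining}
        )
    else:
        # No rules remain: delete the whole lifecycle configuration rather than
        # submitting an empty rule list.
        s3.delete_bucket_lifecycle(Bucket=bucket)
    ```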

  • CephFS volume filtering corrected in the UI

    Previously, the UI filtering for CephFS volumes did not function correctly and also mistakenly excluded CephFS PVCs when the "block" option was selected. This was due to an outdated filtering method based on VRG naming assumptions that no longer apply.

    To resolve this, the block/filesystem filter has been removed, recognizing that a group might contain both types of PVCs. This fix eliminates the misclassification and ensures accurate display of CephFS volumes in the UI.

    (DFBUGS-4065)

  • Resource distribution disabled for internal client

    Previously, the UI allowed users to distribute resources, such as StorageClasses, to the local or internal client, including adding or removing them. While backend logic would automatically restore removed resources, this behavior was misleading from a user experience perspective.

    To improve clarity, the internal client row has been disabled in the client table, and the Distribute resources option has been removed from the Action menu for internal StorageConsumer entries. You can no longer perform resource distribution actions on the internal client using the UI.

    (DFBUGS-2567)

  • Alert for essential OpenShift Data Foundation pods down during capacity addition

    Previously, there was no check to verify whether the essential OpenShift Data Foundation pods were running, which led to errors when adding capacity.

    To address this issue, if essential pods are down when capacity is being added, the user is alerted and not allowed to proceed.

    (DFBUGS-1755)
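
    A minimal sketch, assuming the kubernetes Python client, of the kind of readiness check that can guard an add-capacity action. Checking every pod in the openshift-storage namespace is a simplification for illustration; the actual check in the console may differ.

    ```python
    from kubernetes import client, config

    def essential_pods_healthy(namespace: str = "openshift-storage") -> bool:
        """Return True only if every pod in the namespace is Running or Succeeded."""
        config.load_kube_config()
        core = client.CoreV1Api()
        for pod in core.list_namespaced_pod(namespace).items:
            if pod.status.phase not in ("Running", "Succeeded"):
                return False
        return True

    if not essential_pods_healthy():
        raise SystemExit("Essential OpenShift Data Foundation pods are down; "
                         "resolve this before adding capacity")
    ```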

  • Support external Red Hat Ceph Storage deployment on KubeVirt nodes

    Previously, on OpenShift Container Platform deployed on KubeVirt nodes, there was no option to deploy OpenShift Data Foundation with external Red Hat Ceph Storage (RHCS) due to the Infrastructure CR reporting oVirt and KubeVirt as separate platforms.

    With this fix, KubeVirt is added to the allowed list of platforms. As a result, you can create or link external RHCS storage systems from the UI.

    (DFBUGS-4018)

7.5. OCS operator

  • Missing Toleration for Prometheus Operator in ROSA HCP Deployments

    Previously, the prometheus-operator pod in Red Hat OpenShift Service on AWS (ROSA) with hosted control planes (HCP) was missing the required tolerations, so the pod had to be patched manually after creation.

    With this fix, the tolerations are correctly applied during deployment, eliminating the need for manual intervention.

    (DFBUGS-1272)
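
    A minimal sketch, assuming the kubernetes Python client, of how a toleration can be appended to a deployment's pod template. The deployment name, namespace, and taint key/value are placeholders, not necessarily what ROSA HCP requires.

    ```python
    from kubernetes import client, config

    def add_toleration(deployment: str, namespace: str, key: str, value: str) -> None:
        """Append a toleration to a deployment's pod template (read-modify-write)."""
        config.load_kube_config()
        apps = client.AppsV1Api()

        dep = apps.read_namespaced_deployment(deployment, namespace)
        tolerations = dep.spec.template.spec.tolerations or []
        tolerations.append(client.V1Toleration(
            key=key, operator="Equal", value=value, effect="NoSchedule"))
        dep.spec.template.spec.tolerations = tolerations

        # Sending the full object back lets the client serialize field names correctly.
        apps.patch_namespaced_deployment(deployment, namespace, dep)

    # Placeholder names for illustration only.
    add_toleration("prometheus-operator", "openshift-storage",
                   key="node-role.kubernetes.io/infra", value="true")
    ```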

  • Service "ocs-provider-server" is invalid: spec.ports[0].nodePort: Invalid value: 31659: provided port is already allocated error no longer appears while reconciling

    Previously, the ocs-operator deployed a service using port 31659, which could conflict with an existing nodePort service that is already using the same port. This conflict caused the ocs-operator deployment to fail, resulting in upgrade reconciliation getting stuck.

    With this fix, the port allocation is handled more safely to avoid clashes with existing services.

    (DFBUGS-1831)
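
    A minimal sketch, assuming the kubernetes Python client, of how a fixed nodePort (31659 in this report) can be checked against ports already allocated by other services. This only illustrates the idea of avoiding the clash; it is not the operator's actual allocation logic.

    ```python
    from kubernetes import client, config

    def node_port_in_use(port: int) -> bool:
        """Return True if any existing Service already allocates the given nodePort."""
        config.load_kube_config()
        core = client.CoreV1Api()
        for svc in core.list_service_for_all_namespaces().items:
            for p in (svc.spec.ports or []):
                if p.node_port == port:
                    return True
        return False

    if node_port_in_use(31659):
        print("nodePort 31659 is already allocated; choose a free port or let "
              "Kubernetes assign one automatically")
    ```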

  • ocs-metrics-exporter inherits node selector

    Previously, the ocs-metrics-exporter did not inherit the node selector configuration, causing scheduling issues. This has been resolved by ensuring that the node selector is properly applied to the ocs-metrics-exporter.

    (DFBUGS-3728)

7.6. Ceph monitoring

  • Clone count alert now fires promptly when 200+ clones are created

    The clone count alert was previously stuck in a Pending state and failed to fire in a timely manner when over 200 clones were created. This was caused by the alert’s firing threshold being set to 30 minutes, resulting in a long delay. To resolve this, the firing time was reduced from 30 minutes to 30 seconds. As a result, the alert now fires as expected, providing timely notifications when the clone count exceeds the threshold.

    (DFBUGS-3869)

  • Correct runbook URL for HighRBDCloneSnapshotCount alert

    The runbook URL linked to the 'HighRBDCloneSnapshotCount' alert was previously incorrect, leading users to a non-existent help page. This issue has been fixed by updating the alert configuration with the correct URL.

    (DFBUGS-3949)
