Chapter 7. Bug fixes


This section describes the notable bug fixes introduced in Red Hat OpenShift Data Foundation 4.18.

7.1. Disaster recovery

  • Volsync in DR dashboard reports operator degraded

    Previously, Red Hat Advanced Cluster Management for Kubernetes (RHACM) 2.13 deployed the Volsync operator on a managed cluster without creating the ClusterServiceVersion (CSV) custom resource (CR). As a result, OpenShift did not generate the csv_succeeded metric for Volsync, and the ODF-DR dashboard could not display the health status of the Volsync operator.

    With this fix, the csv_succeeded metric is replaced with kube_running_pod_ready for Volsync, and the RHACM metrics allowlist ConfigMap is updated accordingly. As a result, the ODF-DR dashboard can monitor the health of the Volsync operator effectively; a sketch of a health check against the new metric follows this item.

    (DFBUGS-1293)
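
    For illustration only, a minimal sketch of checking the replacement metric through the Prometheus HTTP API. The route and the pod label selector are assumptions, not values taken from the ODF-DR implementation:

      import requests

      # Hypothetical Prometheus route; the pod label selector below is an
      # assumption for illustration, not the exact selector used by the
      # ODF-DR dashboard.
      PROM_URL = "https://prometheus.example.com"
      QUERY = 'kube_running_pod_ready{pod=~"volsync.*"}'

      resp = requests.get(f"{PROM_URL}/api/v1/query", params={"query": QUERY}, timeout=10)
      resp.raise_for_status()

      # Each result is one Volsync pod; a sample value of "1" means Ready.
      for result in resp.json()["data"]["result"]:
          pod = result["metric"].get("pod", "<unknown>")
          ready = result["value"][1] == "1"
          print(f"{pod}: {'healthy' if ready else 'degraded'}")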

  • Replication using Volsync requires the PVC to be mounted before it is synchronized

    Previously, a PVC that was not mounted would not be synced to the secondary cluster.

    With this fix, ODF-DR syncs the PVC even when it is not part of the PVCLabelSelector.

    (DFBUGS-580)

7.2. Multicloud Object Gateway

  • Attempting to delete a bucketclass or OBC that does not exist does not result in an error in the MCG CLI

    Previously, an attempt to delete a bucketclass or object bucket claim (OBC) that does not exist using the MCG CLI did not result in an error.

    With this fix, error messages on CLI deletion of bucketclasses and OBCs are improved.

    (DFBUGS-201)

  • 502 Bad Gateway observed on S3 GET operation: noobaa throws 'MapClient.read_chunks: chunk ERROR Error: had chunk errors' error

    Previously, objects could become corrupted due to a race condition within MCG between the cancellation of an upload part and the deduplication flow finding a match. The part would be flagged as a duplicate and then canceled and reclaimed, leaving the second, deduplicated part pointing to reclaimed data that was no longer valid.

    With this fix, deduplication against chunks that are not yet marked as finished uploads is avoided, and a time buffer is added after completion to ensure that chunks are still alive and can be deduplicated into.

    (DFBUGS-216)

  • Namespace store stuck in rejected state

    Previously, during monitoring of a namespace store, when MCG tried to verify access to and the existence of the target bucket, certain errors were not ignored even though they should have been.

    With this fix, an issue is no longer reported on read_object_md when the object does not exist.

    (DFBUGS-700)

  • Updating bucket quota always results in a 1 PB quota limit

    Previously, updating an MCG bucket quota always resulted in a 1 PB quota limit, regardless of the desired value.

    With this fix, the desired value is correctly set as the bucket quota limit.

    (DFBUGS-1173)

  • Using PutObject via boto3 >= 1.36.0 results in InvalidDigest error

    Previously, PUT requests from clients that used an upgraded AWS SDK or CLI resulted in an InvalidDigest error, because the AWS SDK and CLI changed the default S3 client behavior to always calculate a checksum for operations that support it.

    With this fix, PUT requests from S3 clients that use the changed default behavior are accepted; see the sketch after this item.

    (DFBUGS-1513)
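
    For context, a minimal sketch of a client-side workaround on MCG versions without this fix, assuming boto3 >= 1.36.0 and a hypothetical MCG S3 endpoint; the request_checksum_calculation option tells the SDK to compute checksums only when the service requires them:

      import boto3
      from botocore.config import Config

      # Hypothetical endpoint and credentials; in practice these come from
      # the ObjectBucketClaim secret and route.
      s3 = boto3.client(
          "s3",
          endpoint_url="https://s3-openshift-storage.apps.example.com",
          aws_access_key_id="ACCESS_KEY",
          aws_secret_access_key="SECRET_KEY",
          # boto3 >= 1.36.0 calculates a CRC32 checksum for PutObject by
          # default. Opting out avoids the InvalidDigest error on MCG
          # versions without this fix; with the fix, the default works.
          config=Config(request_checksum_calculation="when_required"),
      )

      s3.put_object(Bucket="my-bucket", Key="hello.txt", Body=b"hello")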

7.3. Ceph

  • With panic_on_warn set, the kernel CephFS module panicked in ceph_fill_file_size

    Previously, a kernel panic with the message Kernel panic - not syncing: panic_on_warn set occurred due to a specific, hard-to-reproduce CephFS scenario.

    With this fix, the RHEL kernel was updated, and as a result, the panic in this specific CephFS scenario no longer occurs.

    (DFBUGS-551)

7.4. Ceph container storage interface (CSI) operator

  • ceph-csi-controller-manager pods OOMKilled

    Previously, ceph-csi-controller-manager pods were OOMKilled because they tried to cache all ConfigMaps in the cluster when OpenShift Data Foundation was installed.

    With this fix, the cache is scoped only to the namespace where the ceph-csi-controller-manager pod is running. As a result, memory usage by the pods is stable and the pods are no longer OOMKilled. A sketch of the scoping difference follows this item.

    (DFBUGS-938)
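
    A minimal sketch of the difference, using the Python Kubernetes client for illustration (the actual operator is written in Go and uses a controller-runtime cache; the namespace below is an assumption):

      from kubernetes import client, config, watch

      config.load_incluster_config()
      v1 = client.CoreV1Api()
      w = watch.Watch()

      # Pre-fix behavior (illustrative): watching and caching ConfigMaps
      # across the whole cluster grows memory with cluster size.
      # for event in w.stream(v1.list_config_map_for_all_namespaces):
      #     handle(event)

      # Post-fix behavior (illustrative): the cache is scoped to the
      # operator's own namespace, so memory usage stays bounded.
      # "openshift-storage" is assumed here; use the operator's namespace.
      for event in w.stream(v1.list_namespaced_config_map, namespace="openshift-storage"):
          print(event["type"], event["object"].metadata.name)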

7.5. OCS Operator

  • rook-ceph-mds pods scheduled on the same node because placement anti-affinity is preferred, not required

    Previously, the MDS pods of an active MDS daemon could be scheduled in the same failure domain, because MDS pods had only preferred pod anti-affinity.

    With this fix, when activeMDS = 1, required anti-affinity is applied, so the two MDS pods of the active daemon are never scheduled in the same failure domain. When activeMDS > 1, preferred anti-affinity is retained, so active and standby MDS pairs can still be scheduled on the same nodes. The sketch after this item illustrates the two policies.

    (DFBUGS-1509)
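
    A minimal sketch of the two policies, assuming the Python Kubernetes client; the label selector and topology key are assumptions for illustration, not the exact values the OCS operator uses:

      from kubernetes import client

      def mds_anti_affinity(active_mds: int) -> client.V1PodAntiAffinity:
          # Hypothetical selector and topology key, for illustration only.
          term = client.V1PodAffinityTerm(
              label_selector=client.V1LabelSelector(
                  match_labels={"app": "rook-ceph-mds"}
              ),
              topology_key="topology.kubernetes.io/zone",
          )
          if active_mds == 1:
              # Required: the scheduler must place the active/standby MDS
              # pair in different failure domains.
              return client.V1PodAntiAffinity(
                  required_during_scheduling_ignored_during_execution=[term]
              )
          # Preferred: spreading is best effort, so MDS pods from different
          # daemons may share a node.
          return client.V1PodAntiAffinity(
              preferred_during_scheduling_ignored_during_execution=[
                  client.V1WeightedPodAffinityTerm(
                      weight=100, pod_affinity_term=term
                  )
              ]
          )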

7.6. OpenShift Data Foundation console

  • Tooltip rendered behind other components

    Previously, when you hovered over graphs or charts on the dashboards, tooltips were rendered behind them and their values were not visible. This was due to an issue in the PatternFly v5 library.

    With this fix, PatternFly is updated to a newer minor version, and as a result, tooltips are clearly visible.

    (DFBUGS-156)

  • BackingStore details shows incorrect provider

    Previously, the BackingStore details page showed an incorrect provider because the provider name was mapped incorrectly.

    With this fix, the UI logic is updated to display the provider name correctly.

    (DFBUGS-353)

  • Error message popup fails to alert on duplicate OBC names

    Previously, object bucket claims (OBCs) with the same name could be created in different namespaces without any notification, which led to potential conflicts or unintended behavior. This was because the user interface did not track OBCs across namespaces, which allowed duplicate OBC names without a proper warning.

    With this fix, the validation logic is updated to properly check and notify when you attempt to create an OBC with a duplicate name. A clear warning is displayed if an OBC with the same name exists, preventing confusion and ensuring correct behavior.

    (DFBUGS-410)

  • A 404: Not Found message is briefly displayed for a few seconds when clicking on the ‘Enable Encryption’ checkbox during StorageClass creation

    Previously, "404: Not Found" message was briefly displayed for a few seconds while enabling encryption by using the ‘Enable Encryption’ checkbox during new StorageClass creation.

    With this fix, the conditions that caused the issue were resolved. As a result, the "404: Not Found" message is no longer displayed, and the configuration form appears directly after a brief loading state.

    (DFBUGS-489)

  • Existing warning alert "Inconsistent data on target cluster" does not go away

    Previously, when an incorrect target cluster was selected for failover or relocate operations, the existing warning alert "Inconsistent data on target cluster" did not disappear.

    With this fix, the warning alert is refreshed correctly when the target cluster is changed for subscription apps. As a result, the alert no longer persists unnecessarily when failover or relocation is triggered for discovered applications.

    (DFBUGS-866)

7.7. Rook

  • rook-ceph-osd-prepare-ocs-deviceset pods produce duplicate metrics

    Previously, alerts were raised from kube-state-metrics because of the duplicate tolerations in the OSD prepare pods.

    With this fix, the completed OSD prepare pods that had duplicate tolerations are removed. As a result, the duplicate-metric alerts are no longer raised during upgrades.

    (DFBUGS-839)

7.8. Ceph monitoring

  • Prometheus rule evaluation errors

    Previously, many PrometheusRuleFailures errors were logged and the affected alerts were not triggered, because many alert and rule queries that included the metric ceph_disk_occupation used a wrong or invalid label.

    With this fix, the erroneous label was corrected and the queries of the affected alerts were updated. As a result, Prometheus rule evaluation succeeds and all alerts are deployed successfully.

    (DFBUGS-789)

  • Alert "CephMdsCPUUsageHighNeedsVerticalScaling" not triggered when MDS usage is high

    Previously, ocs-operator was unable to read or deploy the malformed rule file, and the alerts associated with the file were not visible. This was due to incorrect indentation in the PrometheusRule file, prometheus-ocs-rule.yaml.

    With this fix, the indentation is corrected and as a result, the PrometheusRule file is deployed successfully.

    (DFBUGS-951)
