Chapter 6. Bug fixes


This section describes the notable bug fixes introduced in Red Hat OpenShift Data Foundation 4.19.

6.1. Multicloud Object Gateway

  • Using PostgreSQL through environment variables

    Previously, PostgreSQL connection details were passed as environment variables, which risked exposing the connection details.

    With this fix, the PostgreSQL secret is passed as a volume mount instead of an environment variable.

    (DFBUGS-1466)
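
A minimal pod spec sketch illustrates the change; the resource and key names here are hypothetical, not the actual NooBaa manifests:

```yaml
# Hypothetical sketch -- not the actual NooBaa deployment manifest.
apiVersion: v1
kind: Pod
metadata:
  name: example-db-client
spec:
  containers:
    - name: app
      image: example/app:latest
      # Before the fix (risky): credentials land in the process environment.
      # env:
      #   - name: POSTGRES_PASSWORD
      #     valueFrom:
      #       secretKeyRef:
      #         name: example-postgres-secret
      #         key: password
      # After the fix: the secret is mounted as files instead.
      volumeMounts:
        - name: postgres-secret
          mountPath: /etc/postgres-secret
          readOnly: true
  volumes:
    - name: postgres-secret
      secret:
        secretName: example-postgres-secret
```

Secret files mounted this way do not appear in the container's environment, so tooling that dumps environment variables cannot leak the connection details.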

  • Backingstore is stuck in Rejected phase due to IO Errors

    Previously, when Multicloud Object Gateway (MCG) detected errors while accessing data on a backing store, MCG disconnected the backing store to force a reload and clear the issue. Because of false positives, this caused the backing store to enter a Rejected state and stop serving.

    With this fix, the disconnection behavior of the backing store is fine-tuned to avoid the false positives.

    (DFBUGS-1511)

  • "ap-southeast-7" region is missing from noobaa-operator code

    Previously, the default backing store was not created when deployed in the new ap-southeast-7 and mx-central-1 AWS regions because these regions were missing from the MCG operator's list of supported regions.

    With this fix, the two regions are added to the list of supported regions.

    (DFBUGS-1550)

  • Multicloud Object Gateway Prometheus tags not updated after bucket creation

    Previously, updated bucket tagging was not reflected in the Prometheus metrics exported by MCG.

    With this fix, the updated tags are read while collecting the metrics and are exposed to Prometheus.

    (DFBUGS-1615)
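
The behavior can be sketched in a few lines of Python; the metric name, label names, and in-memory "bucket database" below are hypothetical stand-ins, not MCG's actual exporter code:

```python
# Hypothetical sketch: re-read bucket tags on every scrape so that tags
# updated after bucket creation are reflected in the exported metrics.

# Stand-in for the bucket database; tags can change at any time.
buckets = {"photos": {"used_bytes": 1024, "tags": {"env": "prod"}}}

def scrape():
    """Render Prometheus exposition lines from the current bucket state."""
    lines = []
    for name, info in buckets.items():
        labels = [f'bucket="{name}"']
        # The fix in spirit: read the tags during collection instead of
        # caching the values captured at bucket-creation time.
        labels += [f'tag_{k}="{v}"' for k, v in sorted(info["tags"].items())]
        lines.append(f'bucket_used_bytes{{{",".join(labels)}}} {info["used_bytes"]}')
    return "\n".join(lines)

print(scrape())
```

Because the tags are read inside the collection function, a tag edited after bucket creation shows up on the very next scrape.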

  • Multicloud Object Gateway backing store PV-pool Rejected - setting permissions of /noobaa_storage

    Previously, when there were many blocks under the noobaa_storage directory, the pod took a long time to start after every restart. This was because the MCG PV pool pod recursively changed the permissions of the noobaa_storage directory under the PV before starting.

    With this fix, the permission change is removed as it is no longer needed.

    (DFBUGS-1661)

  • Postgres queries on object metadata and data blocks take too long to complete

    Previously, when the MCG DB was large, the entire system experienced slowness and operations failed because the Agent Blocks Reclaimer in MCG searched the MCG DB for deleted, unreclaimed blocks with a query that was not indexed.

    With this fix, a new index is added to the MCG DB to optimize the query.

    (DFBUGS-1765)
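
The effect of such an index can be sketched with an in-memory SQLite database; the table and column names are invented stand-ins for NooBaa's actual PostgreSQL schema:

```python
# Hypothetical sketch: an index covering the reclaimer's predicate lets the
# lookup for deleted, unreclaimed blocks avoid a full table scan.
# SQLite stands in here for the real PostgreSQL database.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute(
    "CREATE TABLE datablocks (id INTEGER PRIMARY KEY, deleted INTEGER, reclaimed INTEGER)"
)
# Every tenth block is deleted but not yet reclaimed.
db.executemany(
    "INSERT INTO datablocks (deleted, reclaimed) VALUES (?, ?)",
    [(1 if i % 10 == 0 else 0, 0) for i in range(1000)],
)

# The fix in spirit: add an index matching the reclaimer's WHERE clause.
db.execute("CREATE INDEX idx_deleted_unreclaimed ON datablocks (deleted, reclaimed)")

plan = db.execute(
    "EXPLAIN QUERY PLAN SELECT id FROM datablocks WHERE deleted = 1 AND reclaimed = 0"
).fetchall()
print(plan)  # expect the plan to reference idx_deleted_unreclaimed, not a full scan
```

With the index in place, the query cost grows with the number of matching blocks rather than with the total table size, which is what matters when the database is large.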

  • MCG long query causing timeouts on endpoints

    Previously, slowness was seen in all flows that used the MCG DB because the object reclaimer ran with short delays between runs and its queries had no optimized indexes, which put extra load on the MCG DB.

    With this fix, the timeout interval between object reclaimer runs and the indexes for its queries are changed. As a result, slowness is no longer seen in the flows that use the MCG DB.

    (DFBUGS-2058)

6.2. Ceph container storage interface (CSI) Driver

  • kubelet_volume metrics not reported for some CephFS PVC - NodeGetVolumeStats : health-check has not responded

    Previously, PV health metrics were not reported for certain CephFS pods even though the volumes were mounted, because an issue in the Ceph CSI driver caused the PV health check to return an error for CephFS pods in certain scenarios.

    With this fix, the issue in the Ceph CSI driver is fixed, and as a result, all health metrics for CephFS PVs are successfully reported in all scenarios.

    (DFBUGS-2091)

6.3. Ceph container storage interface (CSI) addons

  • ceph-csi-controller-manager pods OOMKilled

    Previously, when the ReclaimSpace operation was run on PVCs provisioned by a driver other than RADOS block device (RBD), the csi-addons controller crashed with a panic because of an incorrect logging format.

    With this fix, the logging format that caused the panic is corrected, and as a result, the csi-addons controller handles the scenario gracefully.

    (DFBUGS-2142)

6.4. Ceph monitoring

  • Prometheus rule evaluation errors

    Previously, Prometheus query evaluation failed with the error 'many-to-many matching not allowed: matching labels must be unique on one side' because a unique label was missing from the alert query.

    With this fix, the unique 'managedBy' label is added to the query, which makes the query result unique and resolves the issue.

    (DFBUGS-2571)
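
The class of failure can be illustrated with a generic PromQL sketch; the metric names are hypothetical and this is not the actual OpenShift Data Foundation alert rule:

```
# With hypothetical metrics that share only the 'namespace' label, this
# fails when 'namespace' is not unique on either side:
#   many-to-many matching not allowed: matching labels must be unique on one side
pool_used_bytes * on(namespace) pool_capacity_bytes

# Including a distinguishing label such as 'managedBy' in the match makes
# each series combination unique again:
pool_used_bytes * on(namespace, managedBy) pool_capacity_bytes
```

Binary operations in PromQL require a one-to-one (or explicitly declared grouped) match, so adding a label that differs between otherwise identical series restores uniqueness.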
