Chapter 8. Known issues


This section describes the known issues in Red Hat OpenShift Data Foundation 4.18.

8.1. Disaster recovery

  • Regional-DR upgrade with multipath devices or partitioned disks from v4.17 to v4.18 fails

    Regional-DR environments with multipath devices or partitioned disks should not be upgraded from v4.17 to v4.18 due to known issues with Ceph. The issue will be fixed in a 4.18 z-stream or a future release.

    (DFBUGS-1801)

  • Disaster Recovery is misconfigured after upgrade from v4.17.z to v4.18

    When ODF Multicluster Orchestrator and OpenShift DR Hub Operator are upgraded from 4.17.z to 4.18, some of the Disaster Recovery resources are misconfigured in internal mode deployments. This impacts Disaster Recovery of workloads that use the ocs-storagecluster-ceph-rbd and ocs-storagecluster-ceph-rbd-virtualization StorageClasses.

    To work around this issue, follow the instructions in this knowledgebase article.

    (DFBUGS-1804)

  • ceph df reports an invalid MAX AVAIL value when the cluster is in stretch mode

    When a crush rule for a Red Hat Ceph Storage cluster has multiple "take" steps, the ceph df report shows the wrong maximum available size for the map. The issue will be fixed in an upcoming release.

    (DFBUGS-1748)

  • Both the DRPCs protect all the persistent volume claims created on the same namespace

    In namespaces that host multiple disaster recovery (DR) protected workloads, each DRPlacementControl resource in the same namespace on the hub cluster that does not isolate PVCs by workload using its spec.pvcSelector field protects all the persistent volume claims (PVCs) within the namespace.

    As a result, PVCs can match the spec.pvcSelector of multiple DRPlacementControl resources, or, if the selector is missing across all workloads, match every DRPlacementControl. Replication management can then manage each PVC multiple times, causing data corruption or invalid operations based on the individual DRPlacementControl actions.

    Workaround: Label PVCs that belong to a workload uniquely, and use the selected label as the DRPlacementControl spec.pvcSelector to disambiguate which DRPlacementControl protects and manages which subset of PVCs within a namespace. It is not possible to specify the spec.pvcSelector field for the DRPlacementControl using the user interface, hence the DRPlacementControl for such applications must be deleted and created using the command line.
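
    For example, a minimal sketch of the command-line approach, where the PVC name, namespace, and label key/value pair are hypothetical:

    # Label the workload's PVCs with a unique key/value pair
    $ oc label pvc my-app-pvc -n my-workload-ns appname=my-app

    # In the recreated DRPlacementControl, select only those PVCs
    # (fragment of the DRPlacementControl spec; other fields unchanged)
    #   spec:
    #     pvcSelector:
    #       matchLabels:
    #         appname: my-app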

    Result: PVCs are no longer managed by multiple DRPlacementControl resources and do not cause any operation and data inconsistencies.

    (DFBUGS-1749)

  • MongoDB pod is in CrashLoopBackoff because of permission errors reading data in cephrbd volume

    The OpenShift projects across different managed clusters have different security context constraints (SCC), which specifically differ in the specified UID range and/or FSGroups. This leads to certain workload pods and containers failing to start post failover or relocate operations within these projects, due to filesystem access errors in their logs.

    Workaround: Ensure workload projects are created on all managed clusters with the same project-level SCC labels, allowing them to use the same filesystem context when failed over or relocated. Pods will no longer fail post-DR actions on filesystem-related access errors.
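
    For example, one way to compare and align the project-level SCC annotations on each managed cluster; the namespace name and the annotation values are hypothetical:

    # Compare the SCC-related annotations of the workload project on each managed cluster
    $ oc get namespace my-workload-ns -o yaml | grep 'sa.scc'

    # If they differ, align them so that pods get the same filesystem context everywhere
    $ oc annotate namespace my-workload-ns --overwrite \
        openshift.io/sa.scc.mcs='s0:c26,c5' \
        openshift.io/sa.scc.uid-range='1000650000/10000' \
        openshift.io/sa.scc.supplemental-groups='1000650000/10000'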

    (DFBUGS-1750)

  • Disaster recovery workloads remain stuck when deleted

    When deleting a workload from a cluster, the corresponding pods might not terminate with events such as FailedKillPod. This might cause delay or failure in garbage collecting dependent DR resources such as the PVC, VolumeReplication, and VolumeReplicationGroup. It would also prevent a future deployment of the same workload to the cluster as the stale resources are not yet garbage collected.

    Workaround: Reboot the worker node on which the pod is currently running and stuck in a terminating state. This results in successful pod termination and subsequently related DR API resources are also garbage collected.
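
    For example, assuming a pod named stuck-pod in the my-workload-ns namespace that is stuck on node worker-1 (all names are hypothetical):

    # Find the node that hosts the pod stuck in the Terminating state
    $ oc get pod stuck-pod -n my-workload-ns -o wide

    # Reboot that node; drain it first if your cluster policy requires it
    $ oc debug node/worker-1 -- chroot /host systemctl reboot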

    (DFBUGS-325)

  • Regional DR CephFS based application failover show warning about subscription

    After the application is failed over or relocated, the hub subscriptions show errors stating, "Some resources failed to deploy. Use View status YAML link to view the details." This happens because the application persistent volume claims (PVCs) that use CephFS as the backing storage provisioner, are deployed using Red Hat Advanced Cluster Management for Kubernetes (RHACM) subscriptions, and are DR protected, are owned by the respective DR controllers.

    Workaround: There are no workarounds to rectify the errors in the subscription status. However, the subscription resources that failed to deploy can be checked to make sure they are PVCs. This ensures that the other resources do not have problems. If the only resources in the subscription that fail to deploy are the ones that are DR protected, the error can be ignored.
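
    For example, to confirm from the command line that the only failed resources are the DR protected PVCs, inspect the subscription status on the hub cluster (the namespace name is hypothetical):

    $ oc get subscriptions.apps.open-cluster-management.io -n my-app-ns -o yaml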

    (DFBUGS-253)

  • Disabled PeerReady flag prevents changing the action to Failover

    The DR controller executes full reconciliation as and when needed. When a cluster becomes inaccessible, the DR controller performs a sanity check. If the workload is already relocated, this sanity check causes the PeerReady flag associated with the workload to be disabled, and the sanity check does not complete due to the cluster being offline. As a result, the disabled PeerReady flag prevents you from changing the action to Failover.

    Workaround: Use the command-line interface to change the DR action to Failover despite the disabled PeerReady flag.
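
    For example, a sketch of setting the action directly on the DRPlacementControl from the hub cluster; the DRPC name, namespace, and cluster name are hypothetical:

    $ oc patch drpc my-app-drpc -n my-app-ns --type merge \
        -p '{"spec": {"action": "Failover", "failoverCluster": "cluster-east"}}'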

    (DFBUGS-665)

  • Ceph becomes inaccessible and IO is paused when connection is lost between the two data centers in stretch cluster

    When two data centers lose connection with each other but are still connected to the Arbiter node, there is a flaw in the election logic that causes an infinite election between the monitors. As a result, the monitors are unable to elect a leader and the Ceph cluster becomes unavailable. Also, IO is paused during the connection loss.

    Workaround: Shut down the monitors of any one of the data zones by bringing down the zone nodes. Additionally, you can reset the connection scores of the surviving mon pods.
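
    For example, a sketch of resetting the connection scores on a surviving mon after the nodes of one data zone are shut down; the mon deployment name and mon ID are hypothetical:

    # Repeat for each surviving mon pod
    $ oc exec -n openshift-storage deploy/rook-ceph-mon-a -- \
        ceph daemon mon.a connection scores reset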

    As a result, the monitors can form a quorum, Ceph becomes available again, and IO resumes.

    (DFBUGS-425)

  • RBD applications fail to Relocate when using stale Ceph pool IDs from replacement cluster

    For applications created before the new peer cluster is created, it is not possible to mount the RBD PVC because, when a peer cluster is replaced, the CephBlockPoolID mapping in the CSI configmap cannot be updated.

    Workaround: Update the rook-ceph-csi-mapping-config configmap with the CephBlockPoolID mapping on the peer cluster that is not replaced. This enables mounting the RBD PVC for the application.
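
    For example, a sketch of editing the configmap on the surviving peer cluster; the exact mapping entries to add depend on the pool IDs of the old and new clusters:

    $ oc edit configmap rook-ceph-csi-mapping-config -n openshift-storage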

    (DFBUGS-527)

  • Information about lastGroupSyncTime is lost after hub recovery for the workloads which are primary on the unavailable managed cluster

    Applications that were previously failed over to a managed cluster do not report a lastGroupSyncTime, which triggers the VolumeSynchronizationDelay alert. This happens because, when the ACM hub and a managed cluster that are part of the DRPolicy are unavailable, a new ACM hub cluster is reconstructed from the backup.

    Workaround: If the managed cluster to which the workload was failed over is unavailable, you can still fail over to a surviving managed cluster.

    (DFBUGS-376)

  • MCO operator reconciles the veleroNamespaceSecretKeyRef and CACertificates fields

    When the OpenShift Data Foundation operator is upgraded, the CACertificates and veleroNamespaceSecretKeyRef fields under s3StoreProfiles in the Ramen config are lost.

    Workaround: If the Ramen config has the custom values for the CACertificates and veleroNamespaceSecretKeyRef fields, then set those custom values after the upgrade is performed.
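
    For example, a sketch assuming the hub-side Ramen configuration is stored in the ramen-hub-operator-config configmap in the openshift-operators namespace:

    # Re-add the custom CACertificates and veleroNamespaceSecretKeyRef values
    # under s3StoreProfiles after the upgrade
    $ oc edit configmap ramen-hub-operator-config -n openshift-operators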

    (DFBUGS-440)

  • Instability of the token-exchange-agent pod after upgrade

    The token-exchange-agent pod on the managed cluster is unstable as the old deployment resources are not cleaned up properly. This might cause application failover action to fail.

    Workaround: Refer to the knowledgebase article, "token-exchange-agent" pod on managed cluster is unstable after upgrade to ODF 4.17.0.

    Result: If the workaround is followed, "token-exchange-agent" pod is stabilized and failover action works as expected.

    (DFBUGS-561)

  • virtualmachines.kubevirt.io resource fails restore due to mac allocation failure on relocate

    When a virtual machine is relocated to the preferred cluster, it might fail to complete the relocation because the MAC address is unavailable. This happens if the virtual machine is not fully cleaned up on the preferred cluster when it is failed over to the failover cluster.

    Ensure that the workload is completely removed from the preferred cluster before relocating the workload.
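
    For example, on the preferred cluster, verify that no leftover virtual machine resources remain before triggering the relocate (the namespace name is hypothetical):

    $ oc get vm,vmi,pvc -n my-vm-namespace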

    (BZ#2295404)

  • Failover process fails when the ReplicationDestination resource has not been created yet

    If the user initiates a failover before the LastGroupSyncTime is updated, the failover process might fail. This failure is accompanied by an error message indicating that the ReplicationDestination does not exist.

    Workaround:

    Edit the ManifestWork for the VRG on the hub cluster.

    Delete the following section from the manifest:

    /spec/workload/manifests/0/spec/volsync

    Save the changes.
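
    For example, a sketch of the same change applied as a JSON patch; the ManifestWork name and the managed cluster namespace on the hub are hypothetical:

    $ oc patch manifestwork my-app-vrg-mw -n managed-cluster-1 --type json \
        -p '[{"op": "remove", "path": "/spec/workload/manifests/0/spec/volsync"}]'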

    Applying this workaround correctly ensures that the VRG skips attempting to restore the PVC using the ReplicationDestination resource. If the PVC already exists, the application uses it as is. If the PVC does not exist, a new PVC is created.

    (DFBUGS-632)

  • Ceph in warning state after adding capacity to cluster

    After a device replacement or add capacity procedure, Ceph is observed to be in HEALTH_WARN state with a mon reporting slow ops. However, there is no impact to the usability of the cluster.

    (DFBUGS-1273)

  • OSD pods restart during add capacity

    OSD pods restart after cluster expansion is performed by adding capacity to the cluster. However, no impact to the cluster is observed apart from the pod restarts.

    (DFBUGS-1426)

8.2. Multicloud Object Gateway

  • NooBaa Core cannot assume role with web identity due to a missing entry in the role’s trust policy

    For OpenShift Data Foundation deployments on AWS using AWS Security Token Service (STS), you need to add another entry in the trust policy for the noobaa-core account. This is because, with the release of OpenShift Data Foundation 4.17, the service account changed from noobaa to noobaa-core.

    For instructions to add an entry in the trust policy for noobaa-core account, see the final bullet in the prerequisites section of Updating Red Hat OpenShift Data Foundation 4.16 to 4.17.
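
    For example, the relevant Condition fragment of the role's trust policy might look like the following, where the OIDC provider is a placeholder for your environment:

    "Condition": {
        "StringEquals": {
            "<oidc-provider>:sub": [
                "system:serviceaccount:openshift-storage:noobaa",
                "system:serviceaccount:openshift-storage:noobaa-core"
            ]
        }
    }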

    (DFBUGS-172)

  • Upgrade to OpenShift Data Foundation 4.17 results in noobaa-db pod CrashLoopBackOff state

    Upgrading from OpenShift Data Foundation 4.15 to OpenShift Data Foundation 4.17 fails when the PostgreSQL upgrade in Multicloud Object Gateway fails, because Multicloud Object Gateway in 4.17 always starts with PostgreSQL version 15. If the PostgreSQL upgrade fails, the noobaa-db-pg-0 pod fails to start.

    Workaround: Refer to the knowledgebase article Recover NooBaa’s PostgreSQL upgrade failure in OpenShift Data Foundation 4.17.

    (DFBUGS-1751)

8.3. Ceph

  • Poor performance of the stretch clusters on CephFS

    Workloads with many small metadata operations might exhibit poor performance because of the arbitrary placement of metadata server (MDS) on multi-site Data Foundation clusters.

    (DFBUGS-1753)

  • SELinux relabelling issue with a very high number of files

    When attaching volumes to pods in Red Hat OpenShift Container Platform, the pods sometimes do not start or take an excessive amount of time to start. This behavior is generic and is tied to how SELinux relabelling is handled by the Kubelet. It is observed with any filesystem-based volumes that have very high file counts. In OpenShift Data Foundation, the issue is seen when using CephFS based volumes with a very high number of files. There are different ways to work around this issue. Depending on your business needs, you can choose one of the workarounds from the knowledgebase solution https://access.redhat.com/solutions/6221251.

    (Jira#3327)

8.4. CSI Driver

  • Automatic flattening of snapshots is not working

    When there is a single common parent RBD PVC, if volume snapshot, restore, and delete snapshot are performed in a sequence more than 450 times, it is no longer possible to take a volume snapshot or clone of the common parent RBD PVC.

    To work around this issue, use PVC to PVC clone instead of performing volume snapshot, restore, and delete snapshot in a sequence, as shown in the example below.
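
    For example, a sketch of a PVC to PVC clone manifest; the names, namespace, size, and storage class are hypothetical:

    # Save as pvc-clone.yaml and apply it with: oc create -f pvc-clone.yaml
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: parent-pvc-clone
      namespace: my-app-ns
    spec:
      storageClassName: ocs-storagecluster-ceph-rbd
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi
      dataSource:
        kind: PersistentVolumeClaim
        name: parent-pvc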

    If you hit this issue, contact customer support to perform manual flattening of the final restored PVCs to continue to take volume snapshot or clone of the common parent PVC again.

    (DFBUGS-1752)

8.5. OpenShift Data Foundation console

  • Optimize DRPC creation when multiple workloads are deployed in a single namespace

    When multiple applications refer to the same placement, then enabling DR for any of the applications enables it for all the applications that refer to the placement.

    If the applications are created after the creation of the DRPC, the PVC label selector in the DRPC might not match the labels of the newer applications.

    Workaround: In such cases, disabling DR and enabling it again with the right label selector is recommended.

    (DFBUGS-120)

8.6. OCS operator

  • Increasing MDS memory is erasing CPU values when pods are in CLBO state

    When the metadata server (MDS) memory is increased while the MDS pods are in a crash loop backoff (CLBO) state, the CPU request or limit for the MDS pods is removed. As a result, the CPU request or limit that is set for the MDS changes.

    Workaround: Run the oc patch command to adjust the CPU limits.

    For example:

    $ oc patch -n openshift-storage storagecluster ocs-storagecluster \
        --type merge \
        --patch '{"spec": {"resources": {"mds": {"limits": {"cpu": "3"},
        "requests": {"cpu": "3"}}}}}'

    (DFBUGS-426)

  • Error while reconciling: Service "ocs-provider-server" is invalid: spec.ports[0].nodePort: Invalid value: 31659: provided port is already allocated

    From OpenShift Data Foundation 4.18, the ocs-operator deploys a service with the nodePort 31659, which might conflict with an existing service nodePort. Because of this, no other service can use this port if it is already in use. As a result, ocs-operator always errors out while deploying the service, which causes the upgrade reconciliation to be stuck.

    Workaround: Change the service type from NodePort to ClusterIP to avoid the collision:

    $ oc patch -n openshift-storage storagecluster ocs-storagecluster --type merge \
        -p '{"spec": {"providerAPIServerServiceType": "ClusterIP"}}'

    (DFBUGS-1831)

  • prometheus-operator pod is missing toleration in Red Hat OpenShift Service on AWS (ROSA) with hosted control planes (HCP) deployments

    Due to a known issue during Red Hat OpenShift Data Foundation deployment on ROSA HCP, the toleration needs to be manually applied to the prometheus-operator pod after pod creation. To apply the toleration, run the following patch command:

    $ oc patch csv odf-prometheus-operator.v4.18.0-rhodf -n odf-storage --type=json \
        -p='[{"op": "add", "path": "/spec/install/spec/deployments/0/spec/template/spec/tolerations", "value": [{"key": "node.ocs.openshift.io/storage", "operator": "Equal", "value": "true", "effect": "NoSchedule"}]}]'

    (DFBUGS-1272)

8.7. ODF-CLI

  • ODF-CLI tools misidentify stale volumes

    The stale subvolume CLI tool misidentifies valid CephFS persistent volume claims (PVCs) as stale due to an issue in the stale subvolume identification tool. As a result, the stale subvolume identification functionality is not available until the issue is fixed.

    (DFBUGS-3778)
