
Chapter 7. Known issues


This section describes the known issues in Red Hat OpenShift Data Foundation 4.18.

7.1. Disaster recovery

  • Regional-DR is not supported in environments deployed on IBM Power

    Regional-DR is not supported in OpenShift Data Foundation environments deployed on IBM Power because ACM 2.15 is not supported on this platform for this release. This impacts both new and upgraded deployments on IBM Power.

    (DFBUGS-5369)

  • CIDR range does not persist in csiaddonsnode object when the respective node is down

    When a node is down, the Classless Inter-Domain Routing (CIDR) information disappears from the csiaddonsnode object. This affects the fencing mechanism when the impacted nodes need to be fenced.

    Workaround: Collect the CIDR information immediately after the NetworkFenceClass object is created.

    (DFBUGS-2948)
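
    For example, a minimal sketch of capturing that information right after the NetworkFenceClass object is created (the openshift-storage namespace is an assumption; adjust to your deployment):

    $ oc get csiaddonsnode -n openshift-storage -o yaml > csiaddonsnode-cidrs.yaml

    Keep the saved output so that the CIDR values are available if fencing is later required for a node that is down.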

  • DRPCs protect all persistent volume claims created on the same namespace

    In namespaces that host multiple disaster recovery (DR) protected workloads, each DRPlacementControl resource on the hub cluster that does not isolate PVCs by workload using its spec.pvcSelector field protects all persistent volume claims (PVCs) in that namespace.

    As a result, PVCs across multiple workloads match the DRPlacementControl spec.pvcSelector. Or, if the selector is missing on all workloads, replication management might manage each PVC multiple times, causing data corruption or invalid operations based on the individual DRPlacementControl actions.

    Workaround: Label PVCs that belong to a workload uniquely, and use the selected label as the DRPlacementControl spec.pvcSelector to disambiguate which DRPlacementControl protects and manages which subset of PVCs within a namespace. It is not possible to specify the spec.pvcSelector field for the DRPlacementControl using the user interface, hence the DRPlacementControl for such applications must be deleted and created using the command line.

    Result: PVCs are no longer managed by multiple DRPlacementControl resources and do not cause any operation and data inconsistencies.

    (DFBUGS-1749)
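
    For example, a fragment of such a DRPlacementControl created through the command line (a sketch; the label key and value are placeholders, and the remaining required spec fields are omitted here):

    spec:
      pvcSelector:
        matchLabels:
          appname: <workload-label>

    The same label must be applied to each PVC that belongs to that workload.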

  • Disabled PeerReady flag prevents changing the action to Failover

    The DR controller executes full reconciliation as and when needed. When a cluster becomes inaccessible, the DR controller performs a sanity check. If the workload is already relocated, this sanity check causes the PeerReady flag associated with the workload to be disabled, and the sanity check does not complete due to the cluster being offline. As a result, the disabled PeerReady flag prevents you from changing the action to Failover.

    Workaround: Use the command-line interface to change the DR action to Failover despite the disabled PeerReady flag.

    (DFBUGS-665)
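
    For example, a sketch of changing the action through the command line (the spec.action and spec.failoverCluster fields follow the DRPlacementControl API; names in angle brackets are placeholders):

    $ oc patch drpc <drpc-name> -n <drpc-namespace> --type='merge' \
      -p '{"spec":{"action":"Failover","failoverCluster":"<surviving-cluster>"}}'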

  • Information about lastGroupSyncTime is lost after hub recovery for the workloads which are primary on the unavailable managed cluster

    Applications that were previously failed over to a managed cluster do not report a lastGroupSyncTime, thereby triggering the VolumeSynchronizationDelay alert. This is because when the ACM hub and a managed cluster that are part of the DRPolicy become unavailable, a new ACM hub cluster is reconstructed from the backup.

    Workaround: If the managed cluster to which the workload was failed over is unavailable, you can still fail over to a surviving managed cluster.

    (DFBUGS-376)

  • MCO operator reconciles the veleroNamespaceSecretKeyRef and CACertificates fields

    When the OpenShift Data Foundation operator is upgraded, the CACertificates and veleroNamespaceSecretKeyRef fields under s3StoreProfiles in the Ramen config are lost.

    Workaround: If the Ramen config has the custom values for the CACertificates and veleroNamespaceSecretKeyRef fields, then set those custom values after the upgrade is performed.

    (DFBUGS-440)
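
    For example, a sketch of restoring the values after the upgrade (the ConfigMap name and namespace are the usual defaults and may differ in your deployment):

    $ oc edit configmap ramen-hub-operator-config -n openshift-operators

    Re-add the CACertificates and veleroNamespaceSecretKeyRef entries under s3StoreProfiles.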

  • For discovered apps with CephFS, sync stops after failover

    For CephFS-based workloads, synchronization of discovered applications may stop at some point after a failover or relocation. This can occur with a Permission Denied error reported in the ReplicationSource status.
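
    Before applying the workaround, the error can be confirmed by inspecting the ReplicationSource status, for example:

    $ oc describe replicationsource -n <namespace> <replicationsource-name>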

    Workaround:

    • For Non-Discovered Applications

      • Delete the VolumeSnapshot:

        $ oc delete volumesnapshot -n <vrg-namespace> <volumesnapshot-name>

        The snapshot name usually starts with the PVC name followed by a timestamp.

      • Delete the VolSync Job:

        $ oc delete job -n <vrg-namespace> <pvc-name>

        The job name matches the PVC name.

    • For Discovered Applications

      Use the same steps as above, except that the namespace in the commands refers to the application workload namespace, not the VRG namespace.

    • For Workloads Using Consistency Groups

      • Delete the ReplicationGroupSource:

        $ oc delete replicationgroupsource -n <namespace> <name>
      • Delete All VolSync Jobs in that Namespace:

        $ oc delete jobs --all -n <namespace>

        In this case, <namespace> refers to the namespace of the workload (either discovered or not), and <name> refers to the name of the ReplicationGroupSource resource.

        (DFBUGS-2883)

  • Remove DR option is not available for discovered apps on the Virtual machines page

    The Remove DR option is not available for discovered applications listed on the Virtual machines page.

    Workaround:

    1. Add the missing label to the DRPlacementControl:

      $ oc label drplacementcontrol <drpcname> \
      odf.console.selector/resourcetype=virtualmachine \
      -n openshift-dr-ops
    2. Add the PROTECTED_VMS recipe parameter with the virtual machine name as its value:

      $ oc patch drplacementcontrol <drpcname> \
      -n openshift-dr-ops \
      --type='merge' \
      -p '{"spec":{"kubeObjectProtection":{"recipeParameters":{"PROTECTED_VMS":["<vm-name>"]}}}}'

      (DFBUGS-2823)

  • DR Status is not displayed for discovered apps on the Virtual machines page

    DR Status is not displayed for discovered applications listed on the Virtual machines page.

    Workaround:

    1. Add the missing label to the DRPlacementControl:

      $ oc label drplacementcontrol <drpcname> \
      odf.console.selector/resourcetype=virtualmachine \
      -n openshift-dr-ops
    2. Add the PROTECTED_VMS recipe parameter with the virtual machine name as its value:

      $ oc patch drplacementcontrol <drpcname> \
      -n openshift-dr-ops \
      --type='merge' \
      -p '{"spec":{"kubeObjectProtection":{"recipeParameters":{"PROTECTED_VMS":["<vm-name>"]}}}}'

      (DFBUGS-2822)

  • Secondary PVCs are not removed when DR protection is removed for discovered apps

    On the secondary cluster, CephFS PVCs linked to a workload are usually managed by the VolumeReplicationGroup (VRG). However, when a workload is discovered using the Discovered Applications feature, the associated CephFS PVCs are not marked as VRG-owned. As a result, when the workload is disabled, these PVCs are not automatically cleaned up and become orphaned.

    Workaround: To clean up the orphaned CephFS PVCs after disabling DR protection for a discovered workload, manually delete them using the following command:

    $ oc delete pvc <pvc-name> -n <pvc-namespace>

    (DFBUGS-2827)
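
    To identify the orphaned claims first, list the PVCs in the workload namespace on the secondary cluster, for example:

    $ oc get pvc -n <pvc-namespace>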

7.2. Multicloud Object Gateway

  • Unable to create new OBCs using Multicloud Object Gateway

    When provisioning an NSFS bucket via ObjectBucketClaim (OBC), the default filesystem path is expected to use the bucket name. However, if path is set in OBC.Spec.AdditionalConfig, it should take precedence. This behavior is currently inconsistent, resulting in failures when creating new OBCs.

    (DFBUGS-3817)
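
    For reference, the path in question is set in the OBC spec, for example (a sketch; the storage class and all names in angle brackets are placeholders):

    apiVersion: objectbucket.io/v1alpha1
    kind: ObjectBucketClaim
    metadata:
      name: <obc-name>
    spec:
      generateBucketName: <bucket-prefix>
      storageClassName: <nsfs-storage-class>
      additionalConfig:
        path: <filesystem-path>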

7.3. Ceph

  • Poor CephFS performance on stretch clusters

    Workloads with many small metadata operations might exhibit poor performance because of the arbitrary placement of metadata server pods (MDS) on multi-site Data Foundation clusters.

    (DFBUGS-1753)

  • OSD pods restart during add capacity

    OSD pods restart after cluster expansion is performed by adding capacity to the cluster. However, no impact to the cluster is observed apart from the pods restarting.

    (DFBUGS-1426)

  • Ceph becomes inaccessible and IO is paused when connection is lost between the two data centers in stretch cluster

    When two data centers lose connection with each other but are still connected to the Arbiter node, there is a flaw in the election logic that causes an infinite election among Ceph Monitors. As a result, the Monitors are unable to elect a leader and the Ceph cluster becomes unavailable. Also, IO is paused during the connection loss.

    Workaround: Shut down the Monitors of any one data zone by bringing down the zone nodes. Additionally, you can reset the connection scores of the surviving Monitor pods.

    As a result, the Monitors can form a quorum, Ceph becomes available again, and I/O resumes.

    (DFBUGS-425)
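
    For example, a sketch of resetting the connection scores from inside a surviving Monitor pod (the openshift-storage namespace is an assumption; <mon-pod> and <mon-id> are placeholders):

    $ oc exec -n openshift-storage <mon-pod> -- ceph daemon mon.<mon-id> connection scores reset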

  • SELinux relabelling issue with a very high number of files

    When attaching volumes to pods in Red Hat OpenShift Container Platform, the pods sometimes do not start or take an excessive amount of time to start. This behavior is generic and is tied to how SELinux relabelling is handled by the Kubelet. The issue is observed with any filesystem-based volume that has a very high file count. In OpenShift Data Foundation, the issue is seen when using CephFS-based volumes with a very high number of files. There are multiple ways to work around this issue. Depending on your business needs, you can choose one of the workarounds from the knowledgebase solution https://access.redhat.com/solutions/6221251.

    (RFE-3327)

7.4. CSI driver

  • Sync stops after PVC deselection

    When a PersistentVolumeClaim (PVC) is added to or removed from a group by modifying its label to match or unmatch the group criteria, sync operations may unexpectedly stop. This occurs due to stale protected PVC entries remaining in the VolumeReplicationGroup (VRG) status.

    Workaround: Manually edit the VRG’s status field to remove the stale protected PVC:

    $ oc edit vrg <vrg-name> -n <vrg-namespace> --subresource=status

    (DFBUGS-4012)

7.5. OpenShift Data Foundation console

  • UI shows WaitOnUserCleanUp even when automatic cleanup is enabled

    The UI incorrectly displays the WaitOnUserCleanUp status even when automatic cleanup is enabled for VMs. This occurs because the UI relies only on the phase and progression fields of the DRPlacementControl to determine cleanup behavior and does not evaluate the more granular AutoCleanup condition that explicitly indicates automatic cleanup.

    Workaround: No manual intervention is required. This state is transient and clears automatically once the progression field advances to Completed. Avoid manual cleanup unless the AutoCleanup condition and its corresponding reason in the DRPlacementControl or VRG status indicate otherwise.
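
    For example, the conditions can be inspected directly (a sketch; names in angle brackets are placeholders):

    $ oc get drpc <drpc-name> -n <drpc-namespace> -o jsonpath='{.status.conditions}'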

    During automatic cleanup, the UI may briefly present a misleading status, which can cause temporary confusion until the cleanup completes.

    (DFBUGS-5824)

  • DRPlacementControl shows ProtectionError even after successful relocation

    When a relocation completes, the DRPlacementControl may continue to display a ProtectionError status. This occurs because the Protected condition in the DRPlacementControl status incorrectly reports an Error state, even though the relocation has finished (phase: Relocated, progression: Completed).

    Workaround: No direct workaround is available. Wait until the NoClusterDataConflict condition is met.

    The DR status in the UI remains in the ProtectionError state until the data conflict is resolved.

    (DFBUGS-5823)

  • UI temporarily shows an "Unauthorized" error and a blank loading screen during OpenShift Data Foundation operator installation

    During OpenShift Data Foundation operator installation, the InstallPlan sometimes goes missing transiently, which causes the page to show an unknown status. This does not happen regularly. As a result, the messages and the title go missing for a few seconds.

    (DFBUGS-3574)
