Chapter 6. Known issues


This section describes the known issues in Red Hat OpenShift Data Foundation 4.17.

6.1. Disaster recovery

  • ceph df reports an invalid MAX AVAIL value when the cluster is in stretch mode

    When a crush rule for a Red Hat Ceph Storage cluster has multiple "take" steps, the ceph df report shows the wrong maximum available size for the map. The issue will be fixed in an upcoming release.

    (BZ#2100920)

  • Both the DRPCs protect all the persistent volume claims created on the same namespace

    When a namespace hosts multiple disaster recovery (DR) protected workloads, each DRPlacementControl resource on the hub cluster that does not isolate PVCs by workload through its spec.pvcSelector field protects all of the persistent volume claims (PVCs) in that namespace.

    As a result, PVCs that match the spec.pvcSelector of more than one DRPlacementControl, or all PVCs when the selector is missing from every workload, can be managed by multiple DRPlacementControl resources, potentially causing data corruption or invalid operations based on the individual DRPlacementControl actions.

    Workaround: Label the PVCs that belong to each workload uniquely, and use that label as the DRPlacementControl spec.pvcSelector to disambiguate which DRPlacementControl protects and manages which subset of PVCs within the namespace. The spec.pvcSelector field cannot be specified from the user interface, so the DRPlacementControl for such applications must be deleted and created using the command line.

    Result: PVCs are no longer managed by multiple DRPlacementControl resources and do not cause any operational or data inconsistencies.
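    The labeling scheme can be sketched as follows, assuming a hypothetical workload label app: busybox-sample and hypothetical resource names; spec.pvcSelector is the field that disambiguates:

    ```yaml
    # Hedged sketch: a DRPlacementControl scoped to one workload's PVCs.
    # All names and the label below are hypothetical.
    apiVersion: ramendr.openshift.io/v1alpha1
    kind: DRPlacementControl
    metadata:
      name: busybox-drpc
      namespace: busybox-sample
    spec:
      drPolicyRef:
        name: my-drpolicy
      placementRef:
        kind: PlacementRule
        name: busybox-placement
      pvcSelector:
        matchLabels:
          app: busybox-sample   # must match the labels applied only to this workload's PVCs
    ```

    Each workload in the namespace would carry its own unique label, so no PVC matches more than one selector.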

    (BZ#2128860)

  • MongoDB pod is in CrashLoopBackoff because of permission errors reading data in cephrbd volume

    The OpenShift projects across different managed clusters have different security context constraints (SCC), which specifically differ in the specified UID range and/or FSGroups. This leads to certain workload pods and containers failing to start post failover or relocate operations within these projects, due to filesystem access errors in their logs.

    Workaround: Ensure workload projects are created on all managed clusters with the same project-level SCC labels, allowing them to use the same filesystem context when failed over or relocated. Pods will no longer fail post-DR actions on filesystem-related access errors.
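    One way to keep the filesystem context identical across clusters is to create the project with the same SCC-related annotations everywhere; a hedged sketch with hypothetical values (the actual ranges depend on your clusters):

    ```yaml
    # Hypothetical example: keep these annotations identical on all managed clusters.
    apiVersion: v1
    kind: Namespace
    metadata:
      name: my-workload-ns
      annotations:
        openshift.io/sa.scc.mcs: s0:c26,c5
        openshift.io/sa.scc.uid-range: 1000650000/10000
        openshift.io/sa.scc.supplemental-groups: 1000650000/10000
    ```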

    (BZ#2081855)

  • Disaster recovery workloads remain stuck when deleted

    When deleting a workload from a cluster, the corresponding pods might not terminate with events such as FailedKillPod. This might cause delay or failure in garbage collecting dependent DR resources such as the PVC, VolumeReplication, and VolumeReplicationGroup. It would also prevent a future deployment of the same workload to the cluster as the stale resources are not yet garbage collected.

    Workaround: Reboot the worker node on which the pod is currently running and stuck in a terminating state. This results in successful pod termination and subsequently related DR API resources are also garbage collected.
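    A hedged sketch of locating and rebooting the node, with hypothetical pod and namespace names:

    ```
    # Find the node hosting the pod stuck in Terminating
    $ oc get pod busybox-0 -n busybox-sample -o wide

    # Reboot that worker node from a debug shell (requires cluster-admin)
    $ oc debug node/<node-name> -- chroot /host systemctl reboot
    ```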

    (BZ#2159791)

  • Regional DR CephFS based application failover shows a warning about subscription

    After the application is failed over or relocated, the hub subscriptions show errors stating, "Some resources failed to deploy. Use View status YAML link to view the details." This is because the application persistent volume claims (PVCs) that use CephFS as the backing storage provisioner and are deployed using Red Hat Advanced Cluster Management for Kubernetes (RHACM) subscriptions are, once DR protected, owned by the respective DR controllers.

    Workaround: There are no workarounds to rectify the errors in the subscription status. However, the subscription resources that failed to deploy can be checked to make sure they are PVCs. This ensures that the other resources do not have problems. If the only resources in the subscription that fail to deploy are the ones that are DR protected, the error can be ignored.

    (BZ-2264445)

  • Disabled PeerReady flag prevents changing the action to Failover

    The DR controller executes full reconciliation as and when needed. When a cluster becomes inaccessible, the DR controller performs a sanity check. If the workload is already relocated, this sanity check causes the PeerReady flag associated with the workload to be disabled, and the sanity check does not complete due to the cluster being offline. As a result, the disabled PeerReady flag prevents you from changing the action to Failover.

    Workaround: Use the command-line interface to change the DR action to Failover despite the disabled PeerReady flag.
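    A hedged CLI sketch of that change, assuming a hypothetical DRPlacementControl name, namespace, and target cluster:

    ```
    $ oc patch drpc busybox-drpc -n busybox-sample --type merge \
        -p '{"spec":{"action":"Failover","failoverCluster":"cluster2"}}'
    ```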

    (BZ-2264765)

  • Ceph becomes inaccessible and IO is paused when connection is lost between the two data centers in stretch cluster

    When two data centers lose connection with each other but are still connected to the Arbiter node, there is a flaw in the election logic that causes an infinite election between the monitors. As a result, the monitors are unable to elect a leader and the Ceph cluster becomes unavailable. Also, IO is paused during the connection loss.

    Workaround: Shut down the monitors of one of the data zones by bringing down the zone's nodes. Additionally, you can reset the connection scores of the surviving mon pods.

    As a result, monitors can form a quorum and Ceph becomes available again and IOs resume.
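    Resetting the connection scores can be done from inside each surviving mon pod; a hedged sketch with a hypothetical mon ID:

    ```
    # Repeat for each surviving mon; the mon ID "a" is hypothetical
    $ oc exec -n openshift-storage deploy/rook-ceph-mon-a -- \
        ceph daemon mon.a connection scores reset
    ```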

    (Partner BZ#2265992)

  • RBD applications fail to Relocate when using stale Ceph pool IDs from replacement cluster

    For the applications created before the new peer cluster is created, it is not possible to mount the RBD PVC because when a peer cluster is replaced, it is not possible to update the CephBlockPoolID’s mapping in the CSI configmap.

    Workaround: Update the rook-ceph-csi-mapping-config configmap with cephBlockPoolID’s mapping on the peer cluster that is not replaced. This enables mounting the RBD PVC for the application.
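    A hedged sketch of the mapping (cluster names and pool IDs are hypothetical; confirm the exact key and schema against your Rook-Ceph version):

    ```yaml
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: rook-ceph-csi-mapping-config
      namespace: openshift-storage
    data:
      # Maps the stale pool ID from the replaced cluster ("1") to the
      # pool ID on the new peer cluster ("2").
      csi-mapping-config-json: |
        [
          {
            "ClusterIDMapping": {"openshift-storage": "openshift-storage"},
            "RBDPoolIDMapping": [
              {"1": "2"}
            ]
          }
        ]
    ```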

    (BZ#2267731)

  • Information about lastGroupSyncTime is lost after hub recovery for the workloads which are primary on the unavailable managed cluster

    Applications that were previously failed over to a managed cluster do not report a lastGroupSyncTime, which triggers the VolumeSynchronizationDelay alert. This happens because, when the ACM hub and a managed cluster that are part of the DRPolicy are unavailable, a new ACM hub cluster is reconstructed from the backup.

    Workaround: If the managed cluster to which the workload was failed over is unavailable, you can still failover to a surviving managed cluster.

    (BZ#2275320)

  • MCO operator reconciles the veleroNamespaceSecretKeyRef and CACertificates fields

    When the OpenShift Data Foundation operator is upgraded, the CACertificates and veleroNamespaceSecretKeyRef fields under s3StoreProfiles in the Ramen config are lost.

    Workaround: If the Ramen config has the custom values for the CACertificates and veleroNamespaceSecretKeyRef fields, then set those custom values after the upgrade is performed.

    (BZ#2277941)

  • Instability of the token-exchange-agent pod after upgrade

    The token-exchange-agent pod on the managed cluster is unstable as the old deployment resources are not cleaned up properly. This might cause application failover action to fail.

    Workaround: Refer to the knowledgebase article, "token-exchange-agent" pod on managed cluster is unstable after upgrade to ODF 4.17.0.

    Result: If the workaround is followed, "token-exchange-agent" pod is stabilized and failover action works as expected.

    (BZ#2293611)

  • virtualmachines.kubevirt.io resource fails restore due to mac allocation failure on relocate

    When a virtual machine is relocated to the preferred cluster, it might fail to complete relocation due to unavailability of the mac address. This happens if the virtual machine is not fully cleaned up on the preferred cluster when it is failed over to the failover cluster.

    Workaround: Ensure that the workload is completely removed from the preferred cluster before relocating the workload.
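    A hedged check, with a hypothetical namespace, to confirm that no virtual machine objects remain on the preferred cluster before relocating:

    ```
    $ oc get virtualmachines.kubevirt.io,virtualmachineinstances.kubevirt.io -n my-vm-namespace
    ```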

    (BZ#2295404)

  • Relocating of CephFS gets stuck in WaitForReadiness

    There is a scenario where the DRPC progression gets stuck in WaitForReadiness. If it remains in this state for an extended period, it’s possible that a known issue has occurred, preventing Ramen from updating the PlacementDecision with the new Primary.

    As a result, the relocation process will not complete, leaving the workload undeployed on the new primary cluster. This can cause delays in recovery until the user intervenes.

    Workaround: Manually update the PlacementDecision to point to the new Primary.

  • For workload using PlacementRule:

    1. Edit the PlacementRule:

      $ oc edit placementrule --subresource=status -n [namespace] [name of the placementrule]

      For example:

      $ oc edit placementrule --subresource=status -n busybox-workloads-cephfs-2  busybox-placement
    2. Add the following to the placementrule status:

      status:
        decisions:
        - clusterName: [primary cluster name]
          reason: [primary cluster name]
  • For workload using Placement:

    1. Edit the PlacementDecision:

      $ oc edit placementdecision --subresource=status -n [namespace] [name of the placementdecision]

      For example:

      $ oc edit placementdecision --subresource=status -n openshift-gitops busybox-3-placement-cephfs-decision-1
    2. Add the following to the placementdecision status:

      status:
        decisions:
        - clusterName: [primary cluster name]
          reason: [primary cluster name]

      As a result, the PlacementDecision is updated and the workload is deployed on the Primary cluster.

      (BZ#2319334)

  • Failover process fails when the ReplicationDestination resource has not been created yet

    If the user initiates a failover before the LastGroupSyncTime is updated, the failover process might fail. This failure is accompanied by an error message indicating that the ReplicationDestination does not exist.

    Workaround:

    1. Edit the ManifestWork for the VRG on the hub cluster.
    2. Delete the following section from the manifest:

       /spec/workload/manifests/0/spec/volsync
    3. Save the changes.

    Applying this workaround correctly ensures that the VRG skips attempting to restore the PVC using the ReplicationDestination resource. If the PVC already exists, the application uses it as is. If the PVC does not exist, a new PVC is created.
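    An equivalent hedged sketch using oc patch instead of interactive editing; the ManifestWork name is hypothetical, and the namespace is the managed cluster's namespace on the hub:

    ```
    $ oc patch manifestwork busybox-drpc-vrg-mw -n cluster2 --type json \
        -p '[{"op":"remove","path":"/spec/workload/manifests/0/spec/volsync"}]'
    ```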

    (BZ#2283038)

6.2. Multicloud Object Gateway

  • NooBaa Core cannot assume role with web identity due to a missing entry in the role’s trust policy

    For OpenShift Data Foundation deployments on AWS using AWS Security Token Service (STS), you need to add another entry in the trust policy for noobaa-core account. This is because with the release of OpenShift Data Foundation 4.17, the service account has changed from noobaa to noobaa-core.

    For instructions to add an entry in the trust policy for noobaa-core account, see the final bullet in the prerequisites section of Updating Red Hat OpenShift Data Foundation 4.16 to 4.17.
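    The additional statement follows the usual STS web-identity trust-policy shape; a hedged sketch in which the account ID and OIDC endpoint are hypothetical placeholders, and the service account subject changes from noobaa to noobaa-core:

    ```json
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::123456789012:oidc-provider/<oidc-endpoint>"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "<oidc-endpoint>:sub": "system:serviceaccount:openshift-storage:noobaa-core"
        }
      }
    }
    ```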

    (BZ#2322124)

  • Multicloud Object Gateway instance fails to finish initialization

    Due to a race in timing between the pod code run and OpenShift loading the Certificate Authority (CA) bundle into the pod, the pod is unable to communicate with the cloud storage service. As a result, the default backing store cannot be created.

    Workaround: Restart the Multicloud Object Gateway (MCG) operator pod:

    $ oc delete pod noobaa-operator-<ID>

    With the workaround the backing store is reconciled and works.

    (BZ#2271580)

  • Upgrade to OpenShift Data Foundation 4.17 results in noobaa-db pod CrashLoopBackOff state

    Upgrading from OpenShift Data Foundation 4.15 to OpenShift Data Foundation 4.17 fails when the PostgreSQL upgrade fails in Multicloud Object Gateway, which always starts with PostgreSQL version 15. If the PostgreSQL upgrade fails, the noobaa-db-pg-0 pod fails to start.

    Workaround: Refer to the knowledgebase article Recover NooBaa’s PostgreSQL upgrade failure in OpenShift Data Foundation 4.17.

    (BZ#2298152)

6.3. Ceph

  • Poor performance of the stretch clusters on CephFS

    Workloads with many small metadata operations might exhibit poor performance because of the arbitrary placement of metadata server (MDS) on multi-site Data Foundation clusters.

    (BZ#1982116)

  • SELinux relabelling issue with a very high number of files

    When attaching volumes to pods in Red Hat OpenShift Container Platform, the pods sometimes do not start or take an excessive amount of time to start. This behavior is generic and is tied to how SELinux relabelling is handled by the Kubelet. It is observed with any filesystem-based volumes that have very high file counts. In OpenShift Data Foundation, the issue is seen when using CephFS based volumes with a very high number of files. There are different ways to work around this issue. Depending on your business needs, you can choose one of the workarounds from the knowledgebase solution https://access.redhat.com/solutions/6221251.

    (Jira#3327)

6.4. CSI Driver

  • Automatic flattening of snapshots is not working

    When there is a single common parent RBD PVC, if volume snapshot, restore, and delete snapshot are performed in a sequence more than 450 times, it is further not possible to take volume snapshot or clone of the common parent RBD PVC.

    To work around this issue, use PVC-to-PVC clone instead of performing volume snapshot, restore, and delete snapshot in a sequence.
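    A minimal sketch of a PVC-to-PVC clone, with hypothetical names; the dataSource references the parent PVC directly:

    ```yaml
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: rbd-pvc-clone        # hypothetical
      namespace: my-app          # hypothetical
    spec:
      storageClassName: ocs-storagecluster-ceph-rbd
      dataSource:
        kind: PersistentVolumeClaim
        name: rbd-pvc-parent     # hypothetical parent PVC
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi          # must be at least the parent PVC's size
    ```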

    If you hit this issue, contact customer support to perform manual flattening of the final restored PVCs to continue to take volume snapshot or clone of the common parent PVC again.

    (BZ#2232163)

6.5. OpenShift Data Foundation console

  • Optimize DRPC creation when multiple workloads are deployed in a single namespace

    When multiple applications refer to the same placement, then enabling DR for any of the applications enables it for all the applications that refer to the placement.

    If the applications are created after the creation of the DRPC, the PVC label selector in the DRPC might not match the labels of the newer applications.

    Workaround: In such cases, disabling DR and enabling it again with the right label selector is recommended.

    (BZ#2294704)

6.6. OCS operator

  • Incorrect unit for the ceph_mds_mem_rss metric in the graph

    When you search for the ceph_mds_mem_rss metrics in the OpenShift user interface (UI), the graphs show the y-axis in Megabytes (MB), as Ceph returns ceph_mds_mem_rss metric in Kilobytes (KB). This can cause confusion while comparing the results for the MDSCacheUsageHigh alert.

    Workaround: Use ceph_mds_mem_rss * 1000 while searching this metric in the OpenShift UI to see the y-axis of the graph in GB. This makes it easier to compare the results shown in the MDSCacheUsageHigh alert.

    (BZ#2261881)

  • Increasing MDS memory is erasing CPU values when pods are in CLBO state

    When the metadata server (MDS) memory is increased while the MDS pods are in a crash loop back off (CLBO) state, CPU request or limit for the MDS pods is removed. As a result, the CPU request or the limit that is set for the MDS changes.

    Workaround: Run the oc patch command to adjust the CPU limits.

    For example:

    $ oc patch -n openshift-storage storagecluster ocs-storagecluster \
        --type merge \
        --patch '{"spec": {"resources": {"mds": {"limits": {"cpu": "3"},
        "requests": {"cpu": "3"}}}}}'

    (BZ#2265563)
