Chapter 8. Known issues


This section describes known issues in Red Hat OpenShift Data Foundation 4.10.

8.1. ODF-DR

Creating application namespace for managed clusters

The application namespace needs to exist on the managed clusters for disaster recovery (DR) related pre-deployment actions, and hence it is pre-created when an application is deployed at the ACM hub cluster. However, if an application is deleted at the ACM hub cluster and its corresponding namespace is deleted on the managed clusters, the namespace reappears on the managed clusters.

Workaround: openshift-dr maintains a namespace manifestwork resource in the managed cluster namespace at the ACM hub. These resources need to be deleted after the application is deleted. For example, as cluster administrator, execute the following command on the ACM hub cluster:

$ oc delete manifestwork -n <managedCluster namespace> <drPlacementControl name>-<namespace>-ns-mw
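
As an illustration only, the following sketch first lists the manifestwork resources and then deletes the namespace manifestwork. The managed cluster namespace cluster-east, the DRPlacementControl name busybox-drpc, and the application namespace busybox-sample are hypothetical placeholders, not values from this release:

$ oc get manifestwork -n cluster-east
$ oc delete manifestwork -n cluster-east busybox-drpc-busybox-sample-ns-mw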

(BZ#2059669)

8.2. Management Console

OpenShift console disables the plugin and all its extensions when the network connection is lost

When a user accesses the Data Foundation dashboard for the first time and network connectivity is lost during the session, the plugin and the extensions of the OpenShift Container Platform console are deactivated for that instance. This happens because the network disruption between the browser and the cluster causes an error while resolving any of the required modules.

Workaround: Ensure stable network connectivity between the browser and the cluster, then refresh the page and verify that the console is working as expected.

(BZ#2072965)

Standalone Multicloud Object Gateway deployment with external Key Management Service fails

The standalone Multicloud Object Gateway (MCG) deployment using an external Key Management Service (KMS) fails due to a crash in the user interface.

Workaround: There is currently no workaround for this issue, and a fix is expected in one of the upcoming releases.

(BZ#2074810)

8.3. Rook

IBM FlashSystem is not supported with ODF 4.10 due to a failure of Rook-Ceph to run OSDs

The Rook-Ceph OSD prepare job fails when an environment variable starting with "IBM_" is present, which results in no OSDs running.
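
As an illustration only, one way to confirm that the OSD prepare job is failing is to inspect its pod logs. The label selector app=rook-ceph-osd-prepare is an assumption based on common Rook-Ceph deployments, not a value documented for this release:

$ oc logs -n openshift-storage -l app=rook-ceph-osd-prepare --tail=50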

Workaround: Currently, there is no workaround for this issue, and a fix is expected in one of the upcoming releases of Red Hat OpenShift Data Foundation.

(BZ#2073920)

8.4. ODF Operator

StorageCluster and StorageSystem ocs-storagecluster are in an error state for a few minutes when installing StorageSystem

During StorageCluster creation, there is a small window of time during which it appears in an error state before moving to a successful or ready state. This is intermittent but expected behavior, and it usually resolves itself.

Workaround: Wait and watch status messages or logs for more information.
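
For example, a minimal sketch for watching the resources, assuming the default openshift-storage namespace and the resource name ocs-storagecluster from the heading:

$ oc get storagecluster,storagesystem -n openshift-storage -w
$ oc describe storagecluster ocs-storagecluster -n openshift-storage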

(BZ#2004027)

8.5. Ceph

Poor performance of stretch clusters on CephFS

Workloads with many small metadata operations might exhibit poor performance because of the arbitrary placement of the metadata server (MDS) on multi-site OpenShift Data Foundation clusters.

(BZ#1982116)

SELinux relabelling issue with a very high number of files

When attaching volumes to pods in Red Hat OpenShift Container Platform, the pods sometimes do not start or take an excessive amount of time to start. This behavior is generic, and it is tied to how SELinux relabelling is handled by the kubelet. The issue is observed with any file system-based volume that has a very high file count. In OpenShift Data Foundation, the issue is seen when using CephFS-based volumes with a very high number of files. There are different ways to work around this issue. Depending on your business needs, you can choose one of the workarounds from the knowledgebase solution https://access.redhat.com/solutions/6221251.
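
As a rough illustration (the pod and namespace names are placeholders), a pod blocked on relabelling typically remains in the ContainerCreating state, and the related events can be checked as follows; look for FailedMount or timeout events in the describe output:

$ oc get pod <pod-name> -n <namespace>
$ oc describe pod <pod-name> -n <namespace>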

(Jira#3327)

Failover action reports RADOS block device image mount failed on the pod with RPC error still in use

Failing over a disaster recovery (DR) protected workload can result in pods that use the volume on the failover cluster becoming stuck, reporting that the RADOS block device (RBD) image is still in use. This prevents the pods from starting up for a long duration (up to several hours).

(BZ#2007376)

Failover action reports RADOS block device image mount failed on the pod with RPC error fsck

Failing over a disaster recovery (DR) protected workload can result in pods failing to start, with volume mount errors stating that the volume has file system consistency check (fsck) errors. This prevents the workload from failing over to the failover cluster.

(BZ#2021460)
