Chapter 4. Known issues

This section describes issues you may encounter while installing, upgrading, or using Red Hat OpenShift Container Storage. Instructions for working around these issues are provided where possible.

Table 4.1. List of known issues

BZ#1769322

In an AWS environment, the *-mon-* pods can remain stuck in the init state for an extended period after a node reboot. If this occurs, contact Red Hat support.
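To check whether mon pods are affected, you can list them and inspect their state. This is a minimal sketch, assuming the default openshift-storage namespace and the standard rook-ceph-mon app label:

    # List the mon pods; pods stuck in an Init status indicate this issue
    $ oc get pods -n openshift-storage -l app=rook-ceph-mon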

BZ#1760426

It is not possible to uninstall Red Hat OpenShift Container Storage from the user interface.

See Uninstalling OpenShift Container Storage for uninstallation instructions.

BZ#1743643

Persistent Volume Claim (PVC) expansion is not functional.

BZ#1783961

The noobaa-db pod does not migrate to another node when its node goes down. Because migration of the noobaa-db pod is blocked, NooBaa does not work while that node is down.
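To see where the noobaa-db pod is scheduled and whether it is stuck after a node failure, you can inspect it with wide output. This is a minimal sketch, assuming the default openshift-storage namespace:

    # Show the noobaa-db pod, its status, and the node it is scheduled on
    $ oc get pods -n openshift-storage -o wide | grep noobaa-db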

BZ#1788126

The PodDisruptionBudget alert, which is an OpenShift Container Platform alert, is continuously shown for object storage devices (OSDs).

You can ignore this alert, or silence it by following the instructions in the Managing cluster alerts section of the Red Hat OpenShift Container Platform documentation.

For more information, refer to the Red Hat Knowledgebase article.

BZ#1836299

The pod autoscaling feature is not available in Red Hat OpenShift Container Storage; therefore, the MAX HPA value cannot be greater than 1.

You can ignore these alerts. Red Hat OpenShift Container Platform allows you to silence alerts to remove them from the list of active alerts. For instructions, see the Managing cluster alerts section of the Red Hat OpenShift Container Platform documentation.
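As an alternative to the web console, a silence can also be created with the amtool utility bundled in the Alertmanager container. This is a minimal sketch; the openshift-monitoring namespace and the alertmanager-main-0 pod name reflect a default deployment, and <ALERT_NAME> and the author string are placeholders:

    # Create a 24-hour silence for one alert by name (replace <ALERT_NAME>)
    $ oc exec -n openshift-monitoring alertmanager-main-0 -c alertmanager -- \
        amtool silence add alertname=<ALERT_NAME> \
        --alertmanager.url=http://localhost:9093 \
        --author="cluster-admin" \
        --comment="Known issue, see the OCS 4.4 release notes" \
        --duration=24h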

BZ#1842456

After node replacement, the Ceph CRUSH map tree still contains a stale hostname entry for the removed node in its original rack. If a node with the same old hostname is later added back to the cluster while replacing a node in a different rack, it receives a new rack label from the ocs-operator but is inserted into its old place in the CRUSH map, resulting in an indefinite Ceph HEALTH_WARN state.

As a workaround, use a new hostname when adding the replaced node back into the cluster.
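To confirm whether a stale host entry is present, you can print the CRUSH tree from the Rook toolbox pod. This is a minimal sketch, assuming a toolbox pod labeled app=rook-ceph-tools in the openshift-storage namespace; a stale host appears as an empty host bucket under the old rack:

    # Locate the toolbox pod and print the CRUSH tree
    $ TOOLS_POD=$(oc get pods -n openshift-storage -l app=rook-ceph-tools -o name)
    $ oc exec -n openshift-storage $TOOLS_POD -- ceph osd tree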

 

If your cluster was deployed using the Local Storage Operator in OpenShift Container Storage version 4.3, you must reinstall the cluster rather than upgrade it to version 4.4.

For details on installation, see Deploying OpenShift Container Storage using local storage devices.
