Chapter 6. Bug fixes
This section describes the notable bug fixes introduced in Red Hat OpenShift Data Foundation 4.12.
6.1. Disaster recovery
async replication can no longer be set to 0
Previously, you could enter any value for Sync schedule. This meant you could set async replication to 0, which caused an error. With this update, a number input has been introduced that does not allow a value lower than 1. As a result, async replication now works correctly.
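For reference, the Sync schedule value set in the console corresponds to the replication interval stored on the DRPolicy resource. The following is a minimal check, assuming the interval is exposed in spec.schedulingInterval and using a hypothetical policy name of my-drpolicy:

    oc get drpolicy my-drpolicy -o jsonpath='{.spec.schedulingInterval}'

A value such as 5m indicates asynchronous replication every five minutes; the console input no longer accepts a value lower than 1.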
Deletion of Application now deletes pods and PVCs correctly
Previously, when deleting an application from the RHACM console, the DRPC did not get deleted. Because the DRPC was not deleted, the VRG and the VR were not deleted either. If the VRG and VR are not deleted, the PVC finalizer list is not cleaned up, causing the PVC to stay in a Terminating state. With this update, deleting an application from the RHACM console deletes the dependent DRPC and related resources on the managed clusters, freeing up the PVCs for the required garbage collection.
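If a PVC from a deleted application appears stuck in a Terminating state, inspecting the dependent DR resources can help confirm whether cleanup has completed. The resource kinds below are the ones named in this entry; the namespace and PVC name are placeholders:

    oc get drplacementcontrol -n <app-namespace>                          # on the hub cluster
    oc get volumereplicationgroup,volumereplication -n <app-namespace>    # on the managed cluster
    oc get pvc <pvc-name> -n <app-namespace> -o jsonpath='{.metadata.finalizers}'

An empty finalizer list on the PVC indicates that garbage collection can proceed.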
Deleting the internal VolumeReplicationGroup resource on the cluster that a workload failed over or relocated from no longer causes errors
Due to a bug in the disaster recovery (DR) reconciler, when the internal VolumeReplicationGroup resource was deleted on the managed cluster that a workload had failed over or relocated from, an attempt was made to protect a persistent volume claim (PVC) again. The resulting cleanup operation did not complete and reported the PeerReady condition on the DRPlacementControl resource for the application as False. This meant that the application that was failed over or relocated could not be relocated or failed over again, because its DRPlacementControl resource was reporting the PeerReady condition as False. With this update, a PVC is not protected again during deletion of the internal VolumeReplicationGroup resource, which avoids the stalled cleanup. As a result, DRPlacementControl reports PeerReady as True after the cleanup completes automatically.
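To confirm that the cleanup completed, you can query the PeerReady condition directly. This sketch assumes the condition is published under status.conditions of the DRPlacementControl resource and uses placeholder names:

    oc get drplacementcontrol <drpc-name> -n <app-namespace> \
      -o jsonpath='{.status.conditions[?(@.type=="PeerReady")].status}'

A result of True means the application can be failed over or relocated again.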
6.2. Multicloud Object Gateway
StorageCluster no longer goes into Error state while waiting for StorageClass creation
When a Red Hat OpenShift Data Foundation StorageCluster is created, it waits for the underlying pools to be created before the StorageClass is created. During this time, the cluster returns an error for the reconcile request until the pools are ready. Because of this error, the Phase of the StorageCluster was set to Error. With this update, this error is caught during pool creation, and the Phase of the StorageCluster is Progressing.
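To observe the reported phase, you can inspect the StorageCluster status. The name ocs-storagecluster is the typical default and is assumed here:

    oc get storagecluster -n openshift-storage
    oc get storagecluster ocs-storagecluster -n openshift-storage -o jsonpath='{.status.phase}'

While the pools are being created, the phase now reads Progressing rather than Error.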
6.3. CephFS
There is no longer an issue with bucket metadata when updating from RHCS 5.1 to a later version
RADOS Gateway (RGW), as shipped with Red Hat Ceph Storage (RHCS) version 5.1, inadvertently contained logic related to not-yet-GA support for dynamic bucket-index resharding in multisite replication setups. This logic was intentionally removed from RHCS 5.2. A side effect was that sites that had upgraded to RHCS 5.1 could not upgrade to RHCS 5.2, because the bucket metadata handling in version 5.2 is not compatible with that of RHCS 5.1. This situation is now resolved with the upgrade to RHCS 5.3. As a result, RHCS 5.3 is able to operate on buckets created in all prior versions, including 5.1.
6.4. OpenShift Data Foundation operator
There is no longer a Pod Security Violation Alert when the ODF operator is installed
OpenShift Data Foundation version 4.11 introduced new Pod Security Admission standards, which warn when privileged pods are run. The ODF operator deployment uses a few pods that need privileged access. Because of this, after the ODF operator was deployed, a Pod Security Violation alert started firing.
With this release, Operator Lifecycle Manager (OLM) automatically labels namespaces prefixed with openshift-* for the relevant Pod Security Admission standards, so the alert no longer fires.
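To verify the labels that OLM applies, you can inspect the operator namespace. The openshift-storage namespace is used here as an example, and the exact label set may vary by release:

    oc get namespace openshift-storage -o jsonpath='{.metadata.labels}'

Labels with the pod-security.kubernetes.io/ prefix indicate the Pod Security Admission level applied to the namespace.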