Chapter 6. Bug fixes
This section describes the notable bug fixes introduced in Red Hat OpenShift Data Foundation 4.12.
6.1. Disaster recovery
async replication can no longer be set to 0
Previously, you could enter any value for Sync schedule. This meant you could set async replication to 0, which caused an error. With this update, a number input has been introduced that does not allow a value lower than 1, so async replication now works correctly.
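For reference, the interval entered as the Sync schedule is carried in the DRPolicy resource that the console creates. The following is a minimal sketch of such a resource, assuming the Ramen API group and field names current at the time of writing; the policy and cluster names are placeholders.

apiVersion: ramendr.openshift.io/v1alpha1
kind: DRPolicy
metadata:
  name: odr-policy-5m              # placeholder name
spec:
  # The Sync schedule value from the console maps to this interval;
  # with this fix the console no longer accepts a value lower than 1.
  schedulingInterval: 5m
  drClusters:
    - primary-cluster              # placeholder managed cluster name
    - secondary-cluster            # placeholder managed cluster name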
Deleting an application now deletes pods and PVCs correctly
Previously, when an application was deleted from the RHACM console, the DRPlacementControl (DRPC) resource was not deleted. Because the DRPC was not deleted, the VolumeReplicationGroup (VRG) and VolumeReplication (VR) resources were not deleted either. If the VRG and VR are not deleted, the PVC finalizer list is not cleaned up, causing the PVC to stay in a Terminating state. With this update, deleting an application from the RHACM console also deletes the dependent DRPC and related resources on the managed clusters, freeing up the PVCs for garbage collection.
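The symptom described above looks roughly like the following on the managed cluster: a PVC with a set deletionTimestamp but a remaining DR protection finalizer, which oc get pvc reports as Terminating. This is an illustrative sketch only; the PVC, namespace, and finalizer names are placeholders, not the exact strings used by the DR reconciler.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: busybox-pvc                # placeholder PVC name
  namespace: busybox-sample        # placeholder application namespace
  deletionTimestamp: "2024-01-01T00:00:00Z"
  finalizers:
    # Placeholder finalizer; the real DR protection finalizer name differs.
    - example.ramendr.openshift.io/pvc-protection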
Deleting the internal VolumeReplicationGroup resource on the cluster from which a workload failed over or relocated no longer causes errors
Due to a bug in the disaster recovery (DR) reconciler, during deletion of the internal VolumeReplicationGroup resource on a managed cluster from which a workload had failed over or relocated, an attempt was made to protect a persistent volume claim (PVC) again. The resulting cleanup operation did not complete and reported the PeerReady condition on the DRPlacementControl resource for the application as False. As a result, the application that had failed over or relocated could not be relocated or failed over again, because the DRPlacementControl resource was reporting its PeerReady condition as False. With this update, during deletion of the internal VolumeReplicationGroup resource, no further attempt is made to protect a PVC, which avoids the stalled cleanup. As a result, DRPlacementControl reports PeerReady as True once the cleanup completes automatically.
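To illustrate the condition mentioned above, the PeerReady state is surfaced in the DRPlacementControl status. The fragment below is a sketch with placeholder resource and namespace names; only the condition type and status are taken from the description above.

apiVersion: ramendr.openshift.io/v1alpha1
kind: DRPlacementControl
metadata:
  name: busybox-drpc               # placeholder name
  namespace: busybox-sample        # placeholder namespace
status:
  conditions:
    # With this fix, PeerReady returns to True once cleanup completes.
    - type: PeerReady
      status: "True"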
6.2. Multicloud Object Gateway
StorageCluster no longer goes into Error state while waiting for StorageClass creation
When a Red Hat OpenShift Data Foundation StorageCluster is created, it waits for the underlying pools to be created before the StorageClass is created. During this time, the cluster returns an error for the reconcile request until the pools are ready. Because of this error, the Phase of the StorageCluster was set to Error. With this update, this error is caught during pool creation, and the Phase of the StorageCluster is Progressing.
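For illustration, the phase is reported in the StorageCluster status. The sketch below uses the default resource name and namespace created by an OpenShift Data Foundation deployment; other fields are omitted.

apiVersion: ocs.openshift.io/v1
kind: StorageCluster
metadata:
  name: ocs-storagecluster
  namespace: openshift-storage
status:
  # While the underlying pools are still being created, the phase now
  # reports Progressing instead of Error.
  phase: Progressing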
6.3. CephFS
There is no longer an issue with bucket metadata when updating from RHCS 5.1 to a later version
RADOS Gateway (RGW), as shipped with Red Hat Ceph Storage (RHCS) version 5.1, inadvertently contained logic related to not-yet-GA support for dynamic bucket-index resharding in multisite replication setups. This logic was intentionally removed from RHCS 5.2. A side effect of this history is that sites that had upgraded to RHCS 5.1 could not upgrade to RHCS 5.2, because the bucket metadata handling in version 5.2 is not compatible with that of RHCS 5.1. This situation is now resolved with the upgrade to RHCS 5.3. As a result, RHCS 5.3 is able to operate on buckets created in all prior versions, including 5.1.
6.4. OpenShift Data Foundation operator
There is no longer a Pod Security Violation Alert when the ODF operator is installed
OpenShift Data Foundation version 4.11 introduced new Pod Security Admission standards, which produce warnings when privileged pods are run. The ODF operator deployment uses a few pods that need privileged access. Because of this, after the ODF operator was deployed, a Pod Security Violation alert started firing.
With this release, OLM now automatically labels namespaces prefixed with openshift- for the relevant Pod Security Admission standards, so the alert no longer fires.
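For illustration, the standard upstream Pod Security Admission label keys applied to such a namespace look like the following. The namespace name and the privileged level shown here are assumptions for the sketch; the actual values are set by OLM.

apiVersion: v1
kind: Namespace
metadata:
  name: openshift-storage          # example openshift-* namespace
  labels:
    # Standard Kubernetes Pod Security Admission label keys; the
    # "privileged" level is illustrative.
    pod-security.kubernetes.io/enforce: privileged
    pod-security.kubernetes.io/audit: privileged
    pod-security.kubernetes.io/warn: privileged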