Chapter 2. Notable Bug Fixes
This chapter describes bugs fixed in this release of Container-Native Storage for OpenShift Container Platform that have significant impact on users.
This release addresses many customer-reported issues related to the Heketi and Gluster databases going out of synchronization, leaving the system with stale metadata: stale entries for a number of PVs, old deleted PVs, and failed transaction metadata. Container-Native Storage 3.9 handles Heketi and the underlying Gluster subsystem more robustly.
This release also addresses some of the issues reported by customers on block-backed PVs, specifically under failure scenarios and after recovery from such failures.
heketi
- BZ#1415750
- Previously, deleting a heketi pod while a heketi operation was in progress would result in incomplete entries in the database. With this fix, such entries are marked "pending" until the operation is completed, thus maintaining a consistent database view.
- BZ#1434668
- Earlier, the 'device info' output displayed the state of the device as 'failed' after a device remove operation was completed. With this fix, the state of the device is changed to 'removed', which matches the operation performed.
- BZ#1437798
- Earlier, it was possible to run multiple device remove operations in parallel on the same device. This led to race conditions and database inconsistencies. With this fix, an error is returned if another device remove operation on the same device is already in progress.
kubernetes
- BZ#1505290
- Previously, the gluster-block provisioner did not identify the storage units in the PVC correctly. For example, it would interpret 1 as 1 GiB by default, and the provisioner would fail on 1Gi. With this fix, the gluster-block provisioner identifies the storage units correctly: 1 is treated as 1 byte, 1Gi is treated as 1 gibibyte, and 1Ki is treated as 1 kibibyte.
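The unit semantics described above follow the standard Kubernetes resource-quantity convention: a bare number is a byte count, and binary suffixes such as Ki and Gi denote powers of 1024. The following is a minimal illustrative sketch of that mapping, not the actual gluster-block provisioner code; the function name `quantity_to_bytes` is hypothetical.

```python
# Illustrative sketch of Kubernetes-style binary-suffix quantities
# (not the actual gluster-block provisioner implementation).
# A bare number means bytes; "Ki" is 2**10, "Gi" is 2**30, and so on.
BINARY_SUFFIXES = {"Ki": 2**10, "Mi": 2**20, "Gi": 2**30, "Ti": 2**40}

def quantity_to_bytes(quantity: str) -> int:
    """Convert a quantity such as '1', '1Ki', or '1Gi' to a byte count."""
    for suffix, factor in BINARY_SUFFIXES.items():
        if quantity.endswith(suffix):
            return int(quantity[: -len(suffix)]) * factor
    return int(quantity)  # no suffix: plain bytes

print(quantity_to_bytes("1"))    # 1 byte
print(quantity_to_bytes("1Ki"))  # 1024 bytes
print(quantity_to_bytes("1Gi"))  # 1073741824 bytes
```

Under this convention, a PVC requesting storage of 1Gi provisions a gibibyte-sized block device, while a request of 1 would be a single byte.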