Chapter 3. Enhancements
This section describes the major enhancements introduced in Red Hat OpenShift Data Foundation 4.17.
3.1. Capacity consumption trend card
The Consumption trend card in the OpenShift Data Foundation dashboard shows the estimated number of days until the storage is full, which helps plan new hardware procurement. The storage consumption rate is calculated from the actual capacity utilization, historical usage, and the current consumption rate, and is displayed in GiB per day. The card also shows the number of days left before the storage reaches the threshold capacity.
For more information, see Metrics in the Block and File dashboard.
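The arithmetic behind the estimate is straightforward. The following Python sketch illustrates one plausible calculation; the function names and the linear-rate model are illustrative assumptions, not the actual ODF dashboard implementation:

```python
# Illustrative sketch of a days-to-full estimate, assuming a simple
# linear consumption model (not the actual ODF dashboard code).

def consumption_rate_gib_per_day(samples):
    """Estimate GiB consumed per day from (day, used_gib) samples."""
    (d0, u0), (d1, u1) = samples[0], samples[-1]
    return (u1 - u0) / (d1 - d0)

def days_until_threshold(used_gib, threshold_gib, rate):
    """Days left before usage reaches the threshold capacity."""
    if rate <= 0:
        # Usage is flat or shrinking; the threshold is never reached.
        return float("inf")
    return (threshold_gib - used_gib) / rate

# 35 GiB consumed over 7 days -> 5.0 GiB/day
samples = [(0, 100.0), (7, 135.0)]
rate = consumption_rate_gib_per_day(samples)
# 300 GiB of headroom at 5.0 GiB/day -> 60 days left
print(days_until_threshold(135.0, 435.0, rate))  # 60.0
```

In practice the dashboard derives these values from Prometheus metrics rather than static samples, but the displayed rate (GiB per day) and days-remaining figures follow the same idea.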
3.2. Bucket logging and log based replication optimization for Multicloud Object Gateway buckets
This release supports replication of large amounts of data between Multicloud Object Gateway (MCG) and Amazon Web Services (AWS), or between two MCG deployments. The log-based replication optimization for AWS S3 that uses bucket logging is extended to MCG buckets. Log-based replication optimization also supports filtering by object prefix.
For more information, see Bucket logging for Multicloud Object Gateway.
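A replication policy with a prefix filter might look like the following sketch. The exact field names (rules, destination_bucket, filter.prefix, log_replication_info) are assumptions based on MCG's log-based replication for AWS and may differ for MCG bucket logging; consult the product documentation for the authoritative schema:

```json
{
  "rules": [
    {
      "rule_id": "rule-1",
      "destination_bucket": "target-bucket",
      "filter": { "prefix": "logs/" }
    }
  ],
  "log_replication_info": {
    "logs_location": { "logs_bucket": "replication-logs-bucket" }
  }
}
```

With a prefix filter, only objects whose keys begin with the given prefix are considered for replication, which reduces the number of log entries that must be scanned.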
3.3. Capacity alert thresholds increased
The thresholds of the PersistentVolumeUsageCritical and PersistentVolumeUsageNearFull alerts are increased so that they are triggered only when the space is limited.
Previously, the PersistentVolumeUsageCritical and PersistentVolumeUsageNearFull alerts were triggered even when plenty of space was still available, causing unnecessary concern about the state of the cluster.
3.4. Ceph full thresholds configurations
The Ceph OSD full thresholds can be set using the ODF CLI tool or by updating the StorageCluster CR.
For more information, see Setting Ceph OSD full thresholds by updating the StorageCluster CR.
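A sketch of the StorageCluster CR approach follows. The fullRatio, nearFullRatio, and backfillFullRatio field names and the example values are assumptions based on the linked procedure; verify them against the documentation before applying:

```yaml
apiVersion: ocs.openshift.io/v1
kind: StorageCluster
metadata:
  name: ocs-storagecluster
  namespace: openshift-storage
spec:
  managedResources:
    cephCluster:
      # Assumed threshold fields; ratios are fractions of OSD capacity.
      nearFullRatio: 0.75      # warn when an OSD is 75% full
      backfillFullRatio: 0.80  # stop backfill at 80%
      fullRatio: 0.85          # mark the cluster full at 85%
```

The thresholds must remain ordered (nearFullRatio < backfillFullRatio < fullRatio) for Ceph to behave sensibly.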
3.5. Preserve the CLI flags that were passed during creation of external clusters
The command-line interface (CLI) flags that were passed during creation of an external cluster are now preserved automatically during upgrade. Passing new flags during upgrade enables the use of additional features.
For more information, see Creating an OpenShift Data Foundation Cluster for external Ceph storage system.
3.6. MDS scalability
Multiple active metadata servers (MDS) can be run when MDS CPU usage becomes too high.
For more information, see the Troubleshooting guide.
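One way to scale the active MDS count is through the StorageCluster CR. The activeMetadataServers field shown here is an assumption based on how ODF exposes Ceph filesystem settings; check the Troubleshooting guide for the exact procedure:

```yaml
apiVersion: ocs.openshift.io/v1
kind: StorageCluster
metadata:
  name: ocs-storagecluster
  namespace: openshift-storage
spec:
  managedResources:
    cephFilesystems:
      # Assumed field: number of active MDS daemons for the filesystem.
      activeMetadataServers: 2
```

Each additional active MDS takes over a share of the metadata workload, which relieves a single MDS that is saturating its CPU.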
3.7. Traditional user experience for must gather utility
In previous versions, running must-gather required an <-arg> flag to specify which logs to collect. With this update, must-gather no longer requires an <-arg> flag and collects all logs by default.
3.8. CephCSI pod log rotation
Log rotation of CephCSI pods is enabled through csi-operator so that the client operator can use it.
3.9. Multiple filesystems
Creating multiple filesystems on the same cluster node for hybrid cluster or any other use case is supported.
3.10. topologySpreadConstraints added to PV backingstore pods so they get scheduled on a spread basis
Previously, when a PV backingstore was created, the backingstore pod was scheduled on an arbitrary node. This was not ideal because if that node went down, all the backingstore pods on it went down as well. To increase high availability, a topology spread constraint is now added to the backingstore pods so that they are scheduled across nodes.
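The constraint added to the pod spec is a standard Kubernetes topologySpreadConstraints entry. The following sketch shows the general shape; the label selector values are illustrative assumptions, not the exact labels the operator applies:

```yaml
# Standard Kubernetes pod-spec fragment; the labels are illustrative.
topologySpreadConstraints:
- maxSkew: 1
  topologyKey: kubernetes.io/hostname
  whenUnsatisfiable: ScheduleAnyway
  labelSelector:
    matchLabels:
      pool: example-pv-backingstore  # assumed backingstore pod label
```

With maxSkew set to 1 and the hostname topology key, the scheduler keeps the number of matching backingstore pods per node within one of each other, so losing a single node no longer takes down all of them.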