Chapter 2. New Features
This section describes new features introduced in Red Hat OpenShift Data Foundation 4.15.
2.1. Support for multiple storage clusters
Red Hat OpenShift Data Foundation provides the ability to deploy two storage clusters, one in internal mode and the other in external mode. The first cluster must be installed in internal mode in the openshift-storage namespace, and the second cluster in external mode in the openshift-storage-extended namespace. The reverse order is not currently supported.
For more information, see Deploying multiple OpenShift Data Foundation storage clusters.
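A minimal sketch of what the second, external-mode cluster resource might look like. The resource name and the spec.externalStorage.enable field reflect the ocs.openshift.io/v1 StorageCluster API as commonly used; verify both against the linked deployment guide:

    apiVersion: ocs.openshift.io/v1
    kind: StorageCluster
    metadata:
      name: ocs-external-storagecluster
      namespace: openshift-storage-extended   # the second cluster must use this namespace
    spec:
      externalStorage:
        enable: true   # external mode: consumes a pre-existing external Ceph cluster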
2.2. Non-resilient storage class
OpenShift Data Foundation allows the addition and use of a new non-resilient replica-1 storage class. This helps to avoid redundant data copies and enables resiliency to be managed at the application level.
For more information, see Storage class with single replica in the deployment guide for your platform.
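As a sketch, assuming the cephNonResilientPools field of the StorageCluster resource is the switch for this feature (verify the field name against the linked documentation), the replica-1 pools can be enabled with a patch such as:

    oc patch storagecluster ocs-storagecluster -n openshift-storage \
      --type merge \
      -p '{"spec": {"managedResources": {"cephNonResilientPools": {"enable": true}}}}'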
2.3. Recovering to a replacement cluster for Metro-DR
When a primary or secondary cluster of Metro-DR fails, the cluster can either be repaired, left to recover on its own, or replaced entirely if it is irredeemable. OpenShift Data Foundation provides the ability to replace a failed primary or secondary cluster with a new cluster and to enable failover (relocate) to the new cluster.
For more information, see Recovering to a replacement cluster.
2.4. OpenShift virtualization workloads for Metro-DR
The metropolitan disaster recovery (Metro-DR) solution can be set up for OpenShift Virtualization workloads using OpenShift Data Foundation.
For more information, see the knowledgebase article, Use ODF Metro DR to protect ACM applications containing Virtual Machines in OpenShift.
2.5. Support for setting the RBD storage class as the default
The Ceph RADOS block device (RBD) storage class can be set as the default storage class during the deployment of OpenShift Data Foundation on bare metal and IBM Power platforms. This avoids having to manually annotate the storage class when Ceph RBD is required as the default, and removes the ambiguity of selecting the correct storage class.
For more information, see Creating OpenShift Data Foundation cluster on bare metal and Creating OpenShift Data Foundation cluster on IBM Power.
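For context, the manual step that this option removes is the standard Kubernetes default-class annotation. Assuming the usual ODF storage class name ocs-storagecluster-ceph-rbd, it looks like this:

    oc patch storageclass ocs-storagecluster-ceph-rbd \
      -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'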
2.6. Performance profiles
OpenShift Data Foundation provides an option to choose a performance profile based on the availability of resources during deployment. Each profile tunes how much CPU and memory is allocated to the cluster components. The following performance profiles can be configured both during deployment and post-deployment; a configuration sketch follows the list:
- Lean - To be used in a resource-constrained environment with minimal resources that are lower than the recommended requirements. This profile minimizes resource consumption by allocating fewer CPUs and less memory.
- Balanced - To be used when recommended resources are available. This profile provides a balance between resource consumption and performance for diverse workloads.
- Performance - To be used in an environment with sufficient resources to get the best performance. This profile is tailored for high performance by allocating ample memory and CPUs to ensure optimal execution of demanding workloads.
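As a post-deployment sketch, assuming the profile is exposed through a resourceProfile field on the StorageCluster resource (verify the field name in the deployment guide for your platform), switching profiles could look like:

    oc patch storagecluster ocs-storagecluster -n openshift-storage \
      --type merge \
      -p '{"spec": {"resourceProfile": "lean"}}'   # or "balanced" / "performance"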
For more information, see the deployment guide for your platform in the OpenShift Data Foundation documentation.
2.7. Ability to create backing stores in an OpenShift cluster that uses AWS Security Token Service
OpenShift Data Foundation can be deployed on an OpenShift cluster that has the Amazon Web Services Security Token Service (AWS STS) enabled. Backing stores of type aws-sts-s3 can then be created using the Multicloud Object Gateway command-line interface.
For more information, see Creating an AWS-STS-backed backingstore.
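As an illustrative sketch, the CLI call might look like the following; the flag names (--aws-sts-arn in particular) and the role ARN are assumptions to be checked against the linked procedure:

    noobaa backingstore create aws-sts-s3 my-sts-backingstore \
      --aws-sts-arn arn:aws:iam::123456789012:role/my-odf-role \
      --target-bucket my-target-bucket \
      --region us-east-1 \
      -n openshift-storage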
2.8. Runbooks for OpenShift Data Foundation alerts
OpenShift Data Foundation alerts include runbooks that provide guidance for fixing the cluster problems that the alerts surface. Alerts displayed in OpenShift Data Foundation link to the corresponding runbooks.
2.9. Allow expansion of encrypted RBD volumes
With this release, the encrypted RADOS block device (RBD) volume expansion feature is generally available. This feature provides resize capability for encrypted RBD persistent volume claims (PVCs).
For more information, see the knowledgebase article Enabling resize for encrypted RBD PVC.
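Expansion itself uses the standard PVC resize mechanism. Assuming a hypothetical encrypted PVC named encrypted-rbd-pvc, the resize is a plain spec update:

    oc patch pvc encrypted-rbd-pvc -n my-namespace \
      -p '{"spec": {"resources": {"requests": {"storage": "20Gi"}}}}'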
2.10. Improved cluster availability with additional monitor daemon components
OpenShift Data Foundation provides the ability to configure up to five Ceph monitor daemon components in an internal mode deployment, based on the number of racks or zones, when three, five, or more failure domains are present. The Ceph monitor count can be increased to improve cluster availability.
For more information, see Resolving low Ceph monitor count alert.
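A sketch of the change, assuming the monitor count is exposed through the managedResources.cephCluster.monCount field of the StorageCluster resource (confirm the field in the linked procedure):

    oc patch storagecluster ocs-storagecluster -n openshift-storage \
      --type merge \
      -p '{"spec": {"managedResources": {"cephCluster": {"monCount": 5}}}}'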
2.11. Alerts for monitoring system overload
OpenShift Data Foundation 4.15 introduces three new alerts that monitor when the system is getting overloaded: OSDCPULoadHigh, MDSCPUUsageHigh, and MDSCacheUsageHigh. These alerts improve visibility into current system performance and suggest tuning when it is needed.
For more information, see Resolving cluster alerts.
2.12. Shallow volumes support for snapshot or clone
With this release, the PVC-creation-from-snapshot functionality in OpenShift Data Foundation supports shallow volumes. A shallow volume acts as a reference to the source subvolume snapshot; no new subvolume is created in CephFS. The supported access mode for shallow volumes is ReadOnlyMany (ROX). When such a PVC is mounted, the corresponding CephFS subvolume snapshot is exposed to the workloads. Shallow volumes help reduce the time and resources required to create clones.
It is not possible to take a snapshot of a ROX PVC, and creating a ROX clone from a ROX PVC results in a pending state. This is expected behavior.
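The shallow volume path is exercised through an ordinary snapshot-restore PVC whose access mode is ReadOnlyMany; all names below are hypothetical:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: shallow-rox-pvc
    spec:
      storageClassName: ocs-storagecluster-cephfs
      accessModes:
        - ReadOnlyMany          # ROX is what makes CephFS back this with a shallow volume
      resources:
        requests:
          storage: 10Gi
      dataSource:
        name: cephfs-snapshot   # an existing VolumeSnapshot of the source PVC
        kind: VolumeSnapshot
        apiGroup: snapshot.storage.k8s.io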
2.13. Support for Logical Partition (LPAR) deployment
OpenShift Data Foundation on IBM Z supports Logical Partition (LPAR) as an additional deployment method.