Chapter 2. New features
This section describes new features introduced in Red Hat OpenShift Data Foundation 4.19.
2.1. Disaster recovery solution
2.1.1. Multi-volume consistency for disaster recovery
Red Hat OpenShift Data Foundation Disaster Recovery (DR) provides crash-consistent multi-volume consistency groups for Regional-DR, for use by applications that are deployed over multiple volumes. This is especially important for virtual machines, which can have multiple disks attached. For more information, see Multi-volume consistency for disaster recovery.
2.1.2. Replication delay for RHACM applications
The health status of Red Hat Advanced Cluster Management (RHACM) managed applications is displayed on the application list page, which helps you monitor the disaster recovery health of those applications. For more information, see Viewing health status of ApplicationSet-based and Subscription-based applications.
2.1.3. Additional disaster recovery recipe capabilities for CephFS-based applications
The capabilities of DR recipes are enhanced to support more applications, providing automated disaster recovery for CephFS-based applications that are deployed with imperative models.
2.1.4. Multiple storage classes in RHACM managed clusters for Regional Disaster Recovery operations
Red Hat Advanced Cluster Management (RHACM) managed clusters can replicate data that uses non-default storage classes managed by OpenShift Data Foundation. This is important for customers leveraging replica-2 storage classes. This capability is available for Regional-DR using Ceph RBD block volumes.
2.2. Multicloud Object Gateway
2.2.1. High availability option for Multicloud Object Gateway metadata database
Starting with this release, Multicloud Object Gateway (MCG) runs with high availability for its metadata database (DB). This avoids a single point of failure for the MCG DB, which would otherwise put data at risk in the case of a node failure.
2.2.2. Cross-origin resource sharing support for Multicloud Object Gateway buckets
Cross-origin resource sharing (CORS) is supported for Multicloud Object Gateway buckets for increased coverage and compatibility with the AWS S3 API. CORS defines a way for client web applications that are loaded in one domain to interact with resources in a different domain.
For more information, see Creating Cross Origin Resource Sharing (CORS) rule and Editing Cross Origin Resource Sharing (CORS) rule.
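As a sketch of what such a rule looks like, the following builds an S3-style CORS configuration. The bucket name, origin, and endpoint are hypothetical, and the commented boto3 call only illustrates how a rule like this is typically applied through the standard S3 API:

```python
# S3-style CORS configuration for an MCG bucket. The rule lets a web
# application served from app.example.com (hypothetical origin) issue
# browser-side read requests against objects in the bucket.
cors_configuration = {
    "CORSRules": [
        {
            "AllowedOrigins": ["https://app.example.com"],
            "AllowedMethods": ["GET", "HEAD"],
            "AllowedHeaders": ["*"],
            "ExposeHeaders": ["ETag"],
            "MaxAgeSeconds": 3000,
        }
    ]
}

# Applying the rule uses the standard S3 API against the MCG S3
# endpoint; endpoint URL, credentials, and bucket name are
# deployment-specific:
#   s3 = boto3.client("s3", endpoint_url="https://<mcg-s3-endpoint>")
#   s3.put_bucket_cors(Bucket="my-bucket",
#                      CORSConfiguration=cors_configuration)
```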
2.2.3. PublicAccessBlock policy option for Multicloud Object Gateway
An overriding policy option can be created to block public access to buckets, increasing compatibility with Amazon S3. This option enables administrators and bucket owners to limit public access to their resources, and the limits are enforced regardless of how the resources are created.
For more information, see Configuring or modifying the PublicAccessBlock configuration for S3 bucket.
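A minimal sketch of the S3-style configuration involved, assuming the AWS S3 PublicAccessBlock structure; the bucket name in the commented call is hypothetical:

```python
# S3-compatible PublicAccessBlock configuration that blocks all four
# categories of public access, regardless of how the bucket or its
# objects were created.
public_access_block = {
    "BlockPublicAcls": True,        # reject new public ACLs
    "IgnorePublicAcls": True,       # ignore existing public ACLs
    "BlockPublicPolicy": True,      # reject public bucket policies
    "RestrictPublicBuckets": True,  # restrict access to public buckets
}

# It would be applied through the standard S3 API, e.g.:
#   s3.put_public_access_block(
#       Bucket="my-bucket",
#       PublicAccessBlockConfiguration=public_access_block,
#   )
```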
2.2.4. Additional expiration rules in Multicloud Object Gateway lifecycle configuration
Additional expiration rules are supported through the S3 API, avoiding size inflation and the manual operations otherwise needed to identify and delete undesired objects. These rules help bucket owners better control storage usage and fine-tune which objects to keep or expire. The following rules are supported and are available in the user interface in the object browser:
- NoncurrentVersionExpiration rule
- AbortIncompleteMultipartUpload rule
- ExpiredObjectDeleteMarker rule
For more information, see Lifecycle bucket configuration in Multicloud Object Gateway.
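The three rules can appear together in one S3-style lifecycle configuration, sketched below; the rule IDs and day counts are illustrative, and the commented call shows how such a configuration is typically applied:

```python
# S3 lifecycle configuration exercising the three newly supported
# expiration rules.
lifecycle_configuration = {
    "Rules": [
        {   # delete noncurrent object versions 30 days after they
            # become noncurrent
            "ID": "expire-noncurrent",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},
            "NoncurrentVersionExpiration": {"NoncurrentDays": 30},
        },
        {   # abort multipart uploads that never completed
            "ID": "abort-stale-multipart",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},
            "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7},
        },
        {   # remove delete markers that have no noncurrent versions
            # left behind them
            "ID": "drop-expired-delete-markers",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},
            "Expiration": {"ExpiredObjectDeleteMarker": True},
        },
    ]
}

# Applied through the standard S3 API, e.g.:
#   s3.put_bucket_lifecycle_configuration(
#       Bucket="my-bucket",
#       LifecycleConfiguration=lifecycle_configuration)
```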
2.3. Automatic scaling of storage for dynamic storage devices
Automatic capacity scaling can be enabled on clusters deployed using dynamic storage devices. When automatic scaling is enabled, additional raw capacity equivalent to the configured deployment size is automatically added to the cluster after the used capacity reaches 70%.
For more information, see the Creating OpenShift Data Foundation cluster section in your respective deployment guides.
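The trigger described above can be sketched as a toy model. The 70% threshold is from the text; the 12 TiB deployment size and the `should_scale` helper are illustrative only, not part of the product:

```python
def should_scale(used_tib: float, raw_capacity_tib: float,
                 threshold: float = 0.70) -> bool:
    """Model of the documented trigger: scale once used capacity
    reaches 70% of the cluster's raw capacity."""
    return used_tib / raw_capacity_tib >= threshold

# With a hypothetical 12 TiB deployment size, crossing the 70%
# threshold adds raw capacity equal to that deployment size:
deployment_size_tib = 12.0
capacity = deployment_size_tib
used = 8.5                           # 8.5 / 12 is about 71%
if should_scale(used, capacity):
    capacity += deployment_size_tib  # raw capacity grows to 24 TiB
```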
2.4. Prevention of unauthorized volume mode conversion
Volume mode conversion is prevented during restore when the volume mode of the original persistent volume claim (PVC), from which the snapshot was taken, does not match the volume mode of the new PVC created from that volume snapshot. This also helps to verify that authorized conversions work properly.
2.5. Easy configuration of Ceph target size ratio
The target size ratio parameter can be set depending on cluster usage and how cluster capacity is expected to be distributed among the three types of storage: block, shared filesystem, and object storage. The target size ratio is a relative value that influences how Ceph placement groups are allocated across storage pools.
For more information, see Configuring Ceph target size ratios.
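Because the ratios are relative, each pool's expected share of capacity is its ratio divided by the sum of all ratios. A short calculation illustrates this; the pool names and ratio values below are hypothetical examples, not defaults:

```python
# target size ratios are relative weights: a pool's expected share of
# cluster capacity is its ratio divided by the sum of all ratios.
target_size_ratios = {
    "block": 0.6,        # Ceph RBD pool (hypothetical value)
    "filesystem": 0.3,   # CephFS pool (hypothetical value)
    "object": 0.1,       # object storage pool (hypothetical value)
}

total = sum(target_size_ratios.values())
shares = {pool: ratio / total for pool, ratio in target_size_ratios.items()}
# shares["block"] is about 0.6, i.e. roughly 60% of expected capacity
```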
2.6. Reduce data transfer and improve performance using read affinity for RGW
In Local Storage deployments, the rados_replica_read_policy is set to localize for the RADOS Gateway (RGW) daemons. This helps reduce data transfer costs and improve performance by routing RGW read requests to the nearest OSD. For more information, see Performing localized reads.