Chapter 2. New features


This section describes new features introduced in Red Hat OpenShift Data Foundation 4.16.

2.1. Disaster recovery solution

2.1.1. User interface support for discovered applications in Disaster Recovery

The OpenShift Data Foundation Disaster Recovery solution now extends protection to applications that are not deployed using RHACM (discovered applications), with a new user interface for failover and failback operations that are managed using RHACM.

For more information, see Metro-DR protection for discovered applications and Regional-DR protection for discovered applications.

2.1.2. Disaster recovery solution for applications that require Kubernetes resource protection with labels

The OpenShift Data Foundation Disaster Recovery solution supports applications that are developed or deployed using an imperative model. The cluster resources for these discovered applications are protected and restored at the secondary cluster using OpenShift APIs for Data Protection (OADP).

For instructions on how to enroll discovered applications, see Enrolling discovered applications for Metro-DR and Enrolling discovered applications for Regional-DR.

2.1.3. Expand discovered application DR support to multi-namespace applications

The OpenShift Data Foundation Disaster Recovery solution now extends protection to discovered applications that span multiple namespaces.

2.1.4. OpenShift virtualization workloads for Regional-DR

The Regional disaster recovery (Regional-DR) solution can now be easily set up for OpenShift Virtualization workloads using OpenShift Data Foundation.

For more information, see the knowledgebase article, Use OpenShift Data Foundation Disaster Recovery to Protect Virtual Machines.

2.1.5. OpenShift virtualization in a stretch cluster

Disaster recovery with stretch clusters can now be easily set up for OpenShift Virtualization workloads using OpenShift Data Foundation.

For more information, see the OpenShift Virtualization in OpenShift Container Platform guide.

2.1.6. Recovering to a replacement cluster for Regional-DR

When a primary or a secondary cluster of Regional-DR fails, you can either repair the cluster, wait for it to recover, or replace it entirely if it is beyond recovery. OpenShift Data Foundation provides the ability to replace a failed primary or secondary cluster with a new cluster and to enable failover (relocate) to the new cluster.

For more information, see Recovering to a replacement cluster.

2.1.7. Enable monitoring support for ACM Subscription application type

The disaster recovery dashboard on the Red Hat Advanced Cluster Management (RHACM) console is extended to display monitoring data for Subscription type applications in addition to ApplicationSet type applications.

Data such as the following can be monitored:

  • Volume replication delays
  • Count of protected Subscription type applications with or without replication issues
  • Number of persistent volumes with healthy and unhealthy replication
  • Per-application data, such as:

    • Recovery Point Objective (RPO)
    • Last sync time
    • Current DR activity status (Relocating, Failing over, Deployed, Relocated, Failed Over)
  • Per-application count of persistent volumes with healthy and unhealthy replication

2.1.8. Hub recovery support for co-situated and neutral site Regional-DR deployments

The Regional disaster recovery solution of OpenShift Data Foundation now supports neutral site deployments and hub recovery of co-situated managed clusters using Red Hat Advanced Cluster Management. Configuring hub recovery requires a fourth cluster, which acts as the passive hub. The passive hub cluster can be set up in either of the following ways:

  • The primary managed cluster (Site-1) can be co-situated with the active RHACM hub cluster while the passive hub cluster is situated along with the secondary managed cluster (Site-2).
  • The active RHACM hub cluster can be placed in a neutral site (Site-3) that is not impacted by failures of either the primary managed cluster at Site-1 or the secondary managed cluster at Site-2. In this case, if a passive hub cluster is used, it can be placed with the secondary cluster at Site-2.

For more information, see Regional-DR chapter on Hub recovery using Red Hat Advanced Cluster Management.

2.2. Weekly cluster-wide encryption key rotation

Common security practices require periodic rotation of encryption keys. OpenShift Data Foundation automatically rotates the encryption keys stored in Kubernetes secrets (non-KMS) on a weekly basis.

For more information, see Cluster-wide encryption.

2.3. Support custom taints

Custom taints can now be configured using the StorageCluster custom resource (CR) by adding tolerations directly under the placement section of the CR. This simplifies the process of adding custom taints.
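
The following is a minimal sketch of how such a toleration might look in the StorageCluster CR, assuming a hypothetical custom taint with the key xyz applied to the storage nodes; the key, value, and effect must match the taint actually in use:

  apiVersion: ocs.openshift.io/v1
  kind: StorageCluster
  metadata:
    name: ocs-storagecluster
    namespace: openshift-storage
  spec:
    # ...existing storage cluster configuration...
    placement:
      all:                          # applies the toleration to the OpenShift Data Foundation pods
        tolerations:
        - key: xyz                  # hypothetical custom taint key
          operator: Equal
          value: "true"
          effect: NoSchedule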

For more information, see the knowledgebase article, How to add toleration for the "non-ocs" taints to the OpenShift Data Foundation pods?

2.4. Support for SELinux mount feature with ReadWriteOncePod access mode

OpenShift Data Foundation now supports the SELinux mount feature with the ReadWriteOncePod access mode. This feature helps to reduce the time taken to change the SELinux labels of the files and folders in a volume, especially when the volume has many files and is on a remote filesystem such as CephFS.
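
As a rough illustration, the SELinux context can be supplied in the pod security context so that a ReadWriteOncePod volume is mounted with the correct label instead of having every file relabeled; the pod name, namespace, and level below are hypothetical, and the referenced claim is shown in the next section:

  apiVersion: v1
  kind: Pod
  metadata:
    name: selinux-mount-demo        # hypothetical pod name
    namespace: my-app               # hypothetical namespace
  spec:
    securityContext:
      seLinuxOptions:
        level: "s0:c123,c456"       # example MCS level used to mount the volume
    containers:
    - name: app
      image: registry.access.redhat.com/ubi9/ubi-minimal
      command: ["sleep", "infinity"]
      volumeMounts:
      - name: data
        mountPath: /data
    volumes:
    - name: data
      persistentVolumeClaim:
        claimName: rwop-pvc         # a ReadWriteOncePod claim, see the example in the next section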

2.5. Support for ReadWriteOncePod access mode

OpenShift Data Foundation provides the ReadWriteOncePod (RWOP) access mode to ensure that only one pod across the whole cluster can read from or write to the persistent volume claim (PVC).
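
A minimal sketch of a claim that requests this access mode, assuming the default CephFS storage class name created by OpenShift Data Foundation; the claim name, namespace, and size are illustrative:

  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: rwop-pvc                  # hypothetical claim name
    namespace: my-app               # hypothetical namespace
  spec:
    accessModes:
    - ReadWriteOncePod              # only a single pod in the whole cluster may use this claim
    resources:
      requests:
        storage: 10Gi
    storageClassName: ocs-storagecluster-cephfs

If a second pod attempts to use the same claim, that pod remains in the Pending state.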

2.6. Faster client IO or recovery IO during OSD backfill

Client IO or recovery IO can be set to be favored during a maintenance window. Favoring recovery IO over client IO significantly reduces OSD recovery time.

For more information on setting the recovery profile, see Enabling faster client IO or recovery IO during OSD backfill.

2.7. Support for generic ephemeral storage for pods

OpenShift Data Foundation provides support for generic ephemeral volumes. This support enables a user to specify generic ephemeral volumes in the pod specification and tie the lifecycle of the PVC to the pod.
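
A minimal sketch of a pod that requests a generic ephemeral volume backed by an OpenShift Data Foundation storage class; the pod name and size are illustrative, and the PVC created from the template is owned by the pod and deleted together with it:

  apiVersion: v1
  kind: Pod
  metadata:
    name: ephemeral-demo            # hypothetical pod name
  spec:
    containers:
    - name: app
      image: registry.access.redhat.com/ubi9/ubi-minimal
      command: ["sleep", "infinity"]
      volumeMounts:
      - name: scratch
        mountPath: /scratch
    volumes:
    - name: scratch
      ephemeral:
        volumeClaimTemplate:
          spec:
            accessModes: ["ReadWriteOnce"]
            storageClassName: ocs-storagecluster-ceph-rbd
            resources:
              requests:
                storage: 5Gi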

2.8. Cross storage class clone

OpenShift Data Foundation provides the ability to move from a storage class with replica 3 to a storage class with replica 2 or replica 1 while cloning. This helps to reduce the storage footprint.
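
The following sketch assumes a hypothetical source PVC named app-data on a replica 3 storage class and a pre-created custom storage class backed by a replica 2 pool; all names and sizes are illustrative:

  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: app-data-clone                  # hypothetical clone name
  spec:
    storageClassName: ceph-rbd-replica2   # hypothetical storage class backed by a replica 2 pool
    dataSource:
      kind: PersistentVolumeClaim
      name: app-data                      # hypothetical source PVC on a replica 3 storage class
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 10Gi                     # must be at least the size of the source PVC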

For more information, see Creating a clone.

2.9. Overprovision Level Policy Control

The overprovision control mechanism enables defining a quota on the amount of persistent volume claims (PVCs) consumed from a storage cluster, based on the specific application namespace.

When this overprovision control mechanism is enabled, overprovisioning of the PVCs consumed from the storage cluster is prevented.
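
A minimal sketch of how such a policy might be expressed through the overprovisionControl section of the StorageCluster CR; the capacity, quota name, and the namespace label used by the selector are illustrative:

  apiVersion: ocs.openshift.io/v1
  kind: StorageCluster
  metadata:
    name: ocs-storagecluster
    namespace: openshift-storage
  spec:
    # ...existing storage cluster configuration...
    overprovisionControl:
    - capacity: 100Gi                                 # total PVC capacity allowed under this quota
      storageClassName: ocs-storagecluster-ceph-rbd   # storage class the quota applies to
      quotaName: app-quota                            # hypothetical quota name
      selector:
        labels:
          matchLabels:
            storagequota: app-quota                   # namespaces carrying this label count against the quota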

For more information, see Overprovision level policy control.

2.10. Scaling up an OpenShift Data Foundation cluster by resizing existing OSDs

Scaling up an OpenShift Data Foundation cluster can be done by resizing the existing OSDs instead of adding new capacity. This enables expanding the storage without allocating additional CPU and RAM, thereby helping to save resources. For more information, see Scaling up storage capacity on a cluster by resizing existing OSDs.

Note

Scaling up storage capacity by resizing existing OSDs is not supported with Local Storage Operator deployment mode. Resizing OSDs is supported only for dynamic Persistent Volume Claim (PVC) based OSDs.
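
A minimal sketch of the relevant portion of the StorageCluster CR, assuming the resize is driven by increasing the storage request in the device set PVC template; the device set name, storage class, and sizes are illustrative:

  apiVersion: ocs.openshift.io/v1
  kind: StorageCluster
  metadata:
    name: ocs-storagecluster
    namespace: openshift-storage
  spec:
    storageDeviceSets:
    - name: ocs-deviceset
      count: 1
      replica: 3
      dataPVCTemplate:
        spec:
          accessModes: ["ReadWriteOnce"]
          volumeMode: Block
          storageClassName: gp3-csi     # hypothetical dynamic storage class backing the OSDs
          resources:
            requests:
              storage: 4Ti              # increased from the previous size to resize each existing OSD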
