Release notes
Highlights of what is new and what has changed with this OpenShift Container Platform release
Chapter 1. OpenShift Container Platform 4.21 release notes
Red Hat OpenShift Container Platform provides developers and IT organizations with a hybrid cloud application platform for deploying both new and existing applications on secure, scalable resources with minimal configuration and management. OpenShift Container Platform supports a wide selection of programming languages and frameworks, such as Java, JavaScript, Python, Ruby, and PHP.
Built on Red Hat Enterprise Linux (RHEL) and Kubernetes, OpenShift Container Platform provides a more secure and scalable multitenant operating system for today’s enterprise-class applications, while delivering integrated application runtimes and libraries. OpenShift Container Platform enables organizations to meet security, privacy, compliance, and governance requirements.
1.1. About this release
OpenShift Container Platform 4.21 (RHBA-2026:1481) is now available. This release uses Kubernetes 1.34 with the CRI-O runtime. New features, changes, and known issues that pertain to OpenShift Container Platform 4.21 are included in this topic.
OpenShift Container Platform 4.21 clusters are available at https://console.redhat.com/openshift. From the Red Hat Hybrid Cloud Console, you can deploy OpenShift Container Platform clusters to either on-premises or cloud environments.
You must use RHCOS machines for the control plane and for the compute machines.
Starting from OpenShift Container Platform 4.14, the Extended Update Support (EUS) phase for even-numbered releases increases the total available lifecycle to 24 months on all supported architectures, including x86_64, 64-bit ARM (aarch64), IBM Power® (ppc64le), and IBM Z® (s390x) architectures. Beyond this, Red Hat also offers a 12-month additional EUS add-on, denoted as Additional EUS Term 2, that extends the total available lifecycle from 24 months to 36 months. The Additional EUS Term 2 is available on all architecture variants of OpenShift Container Platform. For more information about support for all versions, see the Red Hat OpenShift Container Platform Life Cycle Policy.
OpenShift Container Platform is designed for FIPS. When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures.
For more information about the NIST validation program, see Cryptographic Module Validation Program. For the latest NIST status for the individual versions of RHEL cryptographic libraries that have been submitted for validation, see Compliance Activities and Government Standards.
1.2. OpenShift Container Platform layered and dependent component support and compatibility
The scope of support for layered and dependent components of OpenShift Container Platform changes independently of the OpenShift Container Platform version. To determine the current support status and compatibility for an add-on, refer to its release notes. For more information, see the Red Hat OpenShift Container Platform Life Cycle Policy.
1.3. New features and enhancements
This release adds improvements related to the following components and concepts:
1.3.1. API server
- Dynamic updates to storage performance parameters by using VolumeAttributesClass
- Before this release, updating storage performance parameters such as IOPS or throughput often required manual volume reprovisioning, complex snapshot migrations, or application downtime. With this release, OpenShift Container Platform supports the VolumeAttributesClass (VAC) API, enabling you to modify and dynamically scale storage parameters by updating the VAC assigned to a PersistentVolumeClaim (PVC). This support allows on-demand performance tuning without service interruption.
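The following minimal sketch illustrates the pattern. It assumes the AWS EBS CSI driver and the upstream storage.k8s.io/v1 API; the exact API version, driver name, and parameter keys depend on your cluster and CSI driver.

```yaml
# Hedged example: a VolumeAttributesClass describing a higher-performance tier.
apiVersion: storage.k8s.io/v1        # assumed API version
kind: VolumeAttributesClass
metadata:
  name: fast-io                      # hypothetical class name
driverName: ebs.csi.aws.com          # assumed CSI driver
parameters:
  iops: "5000"                       # driver-specific key, illustrative value
  throughput: "500"                  # driver-specific key, illustrative value
```

The matching PVC-side usage appears in the Storage section later in these notes.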
1.3.2. Authentication and authorization
- Using the Azure kubelogin plugin for direct authentication with Microsoft Entra ID
- Red Hat tested authenticating to OpenShift Container Platform by using the Azure kubelogin plugin. This validation covers environments where Microsoft Entra ID is configured as the external OIDC provider for direct authentication. The following login modes for kubelogin were tested:
  - Device code grant
  - Service principal authentication
  - Interactive web browser flow
  For more information, see Enabling direct authentication with an external OIDC identity provider.
- ConsoleLink support for email links
- The ConsoleLink Custom Resource Definition supports mailto: links. You can create email links in the OpenShift web console that open the default email client.
- Impersonating a user with multiple group memberships in the console
- Cluster administrators can impersonate a user with multiple group memberships at the same time in the OpenShift web console. This supports reproducing effective permissions for RBAC troubleshooting.
- Customizing the Code Editor theme and font size in the console
- With this update, users can customize the Code Editor theme and adjust the font size. By default, the Code Editor theme follows the active OpenShift Console theme, but users can now set it independently. These options improve productivity and reduce the need for frequent changes.
- Quick starts for Trusted Profile Analyzer and Trusted Artifact Signer
- With this update, the OpenShift Console adds two quick starts for Trusted Profile Analyzer and Trusted Artifact Signer to the Overview page, making them easier to find and use. This change simplifies the user journey, improves the user experience, and strengthens product integration by highlighting Red Hat’s security ecosystem within the platform.
- Grouping Helm charts under the Ecosystem header
- The OpenShift Console introduced the unified view in 4.19, but the Helm UI was displayed in the Admin view outside the Ecosystem menu. With this update, the OpenShift Console displays Helm under Ecosystem, centralizing software management navigation in one location within the unified view.
1.3.3. Autoscaling
- Network policy support for Autoscaling Operators
The following Operators now have multiple network policies that control network traffic to and from the Operator and operand pods. These policies permit only traffic that is explicitly allowed or required.
- Cluster Resource Override Operator
- Cluster Autoscaler
- Vertical Pod Autoscaler
- Horizontal Pod Autoscaler
- Applying VPA recommendations without pod re-creation
- You can now configure the Vertical Pod Autoscaler Operator (VPA) in the InPlaceOrRecreate mode. In this mode, the VPA attempts to apply the recommended updates without re-creating pods. If the VPA is unable to update the pods in place, the VPA falls back to re-creating the pods. A configuration sketch follows this list. For more information, see About the Vertical Pod Autoscaler Operator modes.
- Cluster Autoscaler Operator can now cordon nodes before removing the node
- By default, when the Cluster Autoscaler Operator removes a node, it does not cordon the node before draining the pods from it. You can configure the Operator to cordon the node before draining and moving the pods. For more information, see About the cluster autoscaler.
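As a sketch of the InPlaceOrRecreate mode described above, the following VerticalPodAutoscaler follows the upstream VPA API; the target workload name is hypothetical.

```yaml
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: my-app-vpa                    # hypothetical
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app                      # hypothetical workload
  updatePolicy:
    updateMode: "InPlaceOrRecreate"   # update in place; re-create pods only as a fallback
```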
1.3.4. Edge computing
- ClusterInstance CR replaces SiteConfig CR for GitOps ZTP deployments
- In earlier releases, the SiteConfig custom resource (CR) was deprecated. This release removes support for the SiteConfig CR. You must now use the ClusterInstance CR to deploy managed clusters with GitOps ZTP. For more information, see Deploying a managed cluster with ClusterInstance and GitOps ZTP.
1.3.5. etcd
- Manage etcd size by limiting the time-to-live (TTL) duration for Kubernetes events (Technology Preview)
- With this release, you can manage etcd size by setting the eventTTLMinutes property. Having too many stale Kubernetes events in an etcd database can degrade performance. By setting the eventTTLMinutes property, you can specify how long an event can stay in the database before it is purged. For more information, see Managing etcd size by limiting the duration of Kubernetes events.
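A minimal sketch follows; it assumes the property is exposed on the cluster KubeAPIServer operator resource, so confirm the exact location in the linked documentation.

```yaml
apiVersion: operator.openshift.io/v1
kind: KubeAPIServer
metadata:
  name: cluster
spec:
  eventTTLMinutes: 90   # assumed placement; purge events older than 90 minutes (illustrative)
```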
1.3.6. Extensions (OLM v1)
- Cluster extension support for webhooks in bundles
- With this update, OLM v1 supports Operators that use webhooks for validation, mutation, or conversion. For more information, see Webhook support.
- Support for SingleNamespace and OwnNamespace install modes by using the configuration API (Technology Preview)
- If an Operator supports the SingleNamespace or OwnNamespace install modes, you can configure the Operator to watch a specified namespace. For more information, see Extension configuration.
- OLM v1 software catalog in the web console (Technology Preview)
- With this update, you can preview the OLM v1 software catalog in the web console. Select Ecosystem → Software Catalog → Operators to preview this feature. To see the OLM (Classic) software catalog, click the Enable OLM v1 (Tech Preview) toggle.
1.3.7. IBM Power
- The IBM Power® release on OpenShift Container Platform 4.21 adds improvements and new capabilities to OpenShift Container Platform components
This release introduces support for the following features on IBM Power:
- Enable Installer-Provisioned Infrastructure (IPI) support for PowerVC (Technology Preview)
- Enable Spyre Accelerator on IBM Power®
- The IBM Power® release for OpenShift Container Platform 4.21 adds support for the following Operators:
- CIFS/SMB CSI Driver Operator
- Kernel Module Management Operator (KMMO)
- Red Hat build of Kueue
When using kdump on IBM Power®, the following limitations apply:
- Firmware-assisted dump (fadump) is not supported.
- Persistent memory dump is not supported.
1.3.8. IBM Z and IBM LinuxONE
- The IBM Z® and IBM® LinuxONE release on OpenShift Container Platform 4.21 adds improvements and new capabilities to OpenShift Container Platform components
This release introduces support for the following features on IBM Z® and IBM® LinuxONE:
- Enable Spyre Accelerator on IBM Z®
- The IBM Z® release for OpenShift Container Platform 4.21 adds support for the following Operators:
- Kernel Module Management Operator (KMMO)
- Red Hat build of Kueue
1.3.9. Installation and update
- Restricting service account impersonation to the compute nodes service account
When you install a cluster on Google Cloud and configure it to use Google Cloud Workload Identity, you can now restrict the Google Cloud iam.serviceAccounts.actAs permission that the Cloud Credential Operator utility grants the Machine API controller service account at the project level to only the compute nodes service account. For more information, see Restricting service account impersonation to the compute nodes service account.
- Configuring image mode for OpenShift during installation is now supported
- You can now apply a custom layered image to your nodes during OpenShift Container Platform installation. For more information, see Applying a custom layered image during OpenShift Container Platform installation.
- Installing a cluster on Google Cloud with a user-provisioned DNS is generally available
You can enable a user-provisioned domain name server (DNS) instead of the default cluster-provisioned DNS solution. For example, your organization's security policies might not allow the use of public DNS services such as Google Cloud DNS. You can manage your DNS only for the IP addresses of the API and Ingress servers. If you use this feature, you must provide your own DNS solution that includes records for api.<cluster_name>.<base_domain>. and *.apps.<cluster_name>.<base_domain>.. Installing a cluster on Google Cloud with a user-provisioned DNS was introduced in OpenShift Container Platform 4.19 with Technology Preview status. In OpenShift Container Platform 4.21, it is now generally available.
For more information, see Enabling a user-managed DNS and Provisioning your own DNS records.
- Installing a cluster on Microsoft Azure uses Marketplace images by default
- As of this update, the OpenShift Container Platform installation program uses Marketplace images by default when installing a cluster on Azure. This speeds up the installation by removing the need to upload a virtual hard disk to Azure and create an image during installation. This feature is not supported on Azure Stack Hub, or for Azure installations that use Confidential VMs.
- The ccoctl utility supports preserving custom Microsoft Azure role assignments
- The Cloud Credential Operator utility (ccoctl) can now preserve custom role assignments by using the --preserve-existing-roles flag. Previously, the tool removed role assignments that were not defined in the CredentialsRequest, including those manually added by administrators.
- Managing your own firewall rules when installing a cluster on Google Cloud into an existing VPC
As of this update, you can manage your own firewall rules when installing a cluster on Google Cloud into an existing VPC by enabling the firewallRulesManagement parameter in the install-config.yaml file. You can limit the permissions that you grant to the installation program by managing your own firewall rules. For more information, see Managing your own firewall rules.
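A hedged install-config.yaml fragment follows; only the firewallRulesManagement parameter name comes from this note, and the surrounding fields and the value shown are assumptions.

```yaml
platform:
  gcp:
    projectID: my-project                # hypothetical
    region: us-central1                  # hypothetical
    network: existing-vpc                # hypothetical existing VPC network
    firewallRulesManagement: Unmanaged   # assumed value indicating that you manage the rules
```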
- The ccoctl utility supports Amazon Web Services permissions boundaries
- The Cloud Credential Operator utility (ccoctl) now supports attaching an AWS permissions boundary to the IAM roles that it creates. You can use this feature to meet organizational security requirements that restrict the maximum permissions of created roles.
- Throughput customization for Amazon Web Services gp3 drives
With this update, you can now customize the maximum throughput for gp3 rootVolume drives when installing a cluster on Amazon Web Services. This customization is set by modifying the compute.platform.aws.rootVolume.throughput or controlPlane.platform.aws.rootVolume.throughput parameters in the install-config.yaml file. For more information, see Optional AWS configuration parameters.
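The parameter paths named above translate to an install-config.yaml fragment like the following sketch; the type, size, and throughput values are illustrative.

```yaml
compute:
- name: worker
  platform:
    aws:
      rootVolume:
        type: gp3
        size: 200          # GiB, illustrative
        throughput: 500    # MiB/s, illustrative
controlPlane:
  name: master
  platform:
    aws:
      rootVolume:
        type: gp3
        throughput: 500    # MiB/s, illustrative
```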
- Support for VMware vSphere Foundation 9 and VMware Cloud Foundation 9
You can now install OpenShift Container Platform on VMware vSphere Foundation (VVF) 9 and VMware Cloud Foundation (VCF) 9.
Note: The following additional VCF and VVF components are outside the scope of Red Hat support:
- Management: VCF Operations, VCF Automation, VCF Fleet Management, and VCF Identity Broker.
- Networking: VMware NSX Container Plugin (NCP).
- Migration: VMware HCX.
- Support for installing OpenShift Container Platform on Oracle Database Appliance (ODA)
With this update, you can install a cluster on Oracle Database Appliance using the Assisted Installer.
For more information, see Installing a cluster on Oracle Database Appliance by using the Assisted Installer.
- Installing a cluster on AWS with a user-provisioned DNS (Technology Preview)
You can enable a user-provisioned domain name server (DNS) instead of the default cluster-provisioned DNS solution. For example, your organization's security policies might not allow the use of public DNS services such as Amazon Web Services (AWS) DNS. As a result, you can manage the API and Ingress DNS records in your own system rather than adding the records to the DNS of the cloud. If you use this feature, you must provide your own DNS solution that includes records for api.<cluster_name>.<base_domain>. and *.apps.<cluster_name>.<base_domain>.. Enabling a user-provisioned DNS is available as a Technology Preview feature. For more information, see Enabling a user-managed DNS and Provisioning your own DNS records.
- Installing a cluster on Microsoft Azure using NAT Gateways
With this update, you can install a cluster on Azure using NAT Gateways as your outbound routing strategy. NAT Gateways can minimize the risk of SNAT port exhaustion that can occur with other outbound routing strategies. You can configure NAT Gateways using the platform.azure.outboundType parameter in the install-config.yaml file. For more information, see Additional Azure configuration parameters.
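As a hedged sketch, the fragment below shows where the parameter lives; the exact outboundType value string is an assumption, so confirm it in the Azure configuration parameters documentation.

```yaml
platform:
  azure:
    region: centralus                                # hypothetical
    baseDomainResourceGroupName: my-resource-group   # hypothetical
    outboundType: NatGateway                         # assumed value for the NAT Gateway strategy
```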
- Installing a cluster using Google Cloud private and restricted API endpoints
With this release, you can use Google Cloud Private Service Connect (PSC) endpoints when installing your OpenShift Container Platform cluster so that your installation meets your organization’s strict regulatory policies.
For more information, see Optional configuration parameters.
- Dell iDRAC10 supported for bare metal installation using Redfish virtual media
- Dell iDRAC10 versions 1.20.25.00, 1.20.60.50, and 1.20.70.50 have been tested and verified to work for installer-provisioned OpenShift Container Platform clusters deployed by using Redfish virtual media. iDRAC10 has not been tested with installations that use a provisioning network. For more information, see Firmware requirements for installing with virtual media.
- Providing a local or self-signed CA certificate for Baseboard Management Controllers (BMCs) when installing a cluster on bare metal
With this update, you can provide your own local or self-signed CA certificate to secure communication with BMCs when installing a cluster on bare metal. You can configure this certificate by using the platform.baremetal.bmcCACert parameter in the install-config.yaml file. You can also configure a local or self-signed CA certificate after installation, whether the cluster was installed with a different BMC CA certificate or with no BMC CA certificate. For more information, see Additional installation configuration parameters and Configuring a local or self-signed Baseboard Management Controller CA certificate.
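A sketch of the fragment follows; the parameter name comes from this note, while the inline PEM block is a placeholder under the assumption that the certificate is supplied inline.

```yaml
platform:
  baremetal:
    bmcCACert: |
      -----BEGIN CERTIFICATE-----
      <your_CA_certificate_PEM_data>
      -----END CERTIFICATE-----
```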
- Running firmware upgrades for hosts in deployed bare metal clusters (Generally Available)
For hosts in deployed bare metal clusters, you can update firmware attributes and the firmware image. As a result, you can run firmware upgrades and update BIOS settings for hosts that are already provisioned without fully deprovisioning them. Performing a live update to the HostFirmwareComponents, HostFirmwareSettings, or HostUpdatePolicy resource can be a destructive and destabilizing action. Perform these updates only after careful consideration. This feature was introduced in OpenShift Container Platform 4.18 with Technology Preview status. This feature is now supported as generally available in OpenShift Container Platform 4.21.
For more information, see Performing a live update to the HostFirmwareSettings resource, Performing a live update to the HostFirmwareComponents resource, and Setting the HostUpdatePolicy resource.
- Testing for Amazon Web Services m7 instance type
- As of OpenShift Container Platform 4.21, m7 instance types have been tested for installations on Amazon Web Services. For more information about tested instance types, see Tested instance types for AWS.
- Installing a cluster on Microsoft Azure with a user-provisioned DNS (Technology Preview)
You can enable a user-provisioned domain name server (DNS) instead of the default cluster-provisioned DNS solution. For example, your organization's security policies might not allow the use of public DNS services such as Microsoft Azure DNS. You can manage your DNS only for the IP addresses of the API and Ingress servers. If you use this feature, you must provide your own DNS solution that includes records for api.<cluster_name>.<base_domain>. and *.apps.<cluster_name>.<base_domain>.. Enabling a user-provisioned DNS is available as a Technology Preview feature. For more information, see Enabling a user-managed DNS and Provisioning your own DNS records.
1.3.10. Machine Config Operator
- Boot image management for Azure and vSphere clusters promoted to GA
- Updating boot images has been promoted to GA for Microsoft Azure and VMware vSphere clusters. For more information, see Boot image management.
- Configuring image mode for OpenShift during installation is now supported
- You can now apply a custom layered image to your nodes during OpenShift Container Platform installation. For more information, see Applying a custom layered image during OpenShift Container Platform installation.
- Image mode for OpenShift status reporting improvements
- The output of the oc describe machineconfignodes <mcp_name> command now contains an ImageBuildDegraded error that indicates whether an image mode for OpenShift build has failed. For more information, see About node status during updates.
- Image mode for OpenShift status reporting improvements (Technology Preview)
- The oc describe machineconfigpool <mcp_name> output, as a Technology Preview feature, now includes the following fields that report the status of machine config updates when image mode for OpenShift is enabled:
  - Spec.ConfigImage.DesiredImage: the desired image for that node.
  - Status.ConfigImage.CurrentImage: the current image on that node.
  - Status.Conditions.ImagePulledFromRegistry: reports whether an image can be pulled correctly during an image mode update.
  For more information, see About node status during updates.
- Boot image management for control plane nodes is now supported (Technology Preview)
- Updating boot images is now supported as a Technology Preview feature for VMware vSphere clusters. This feature allows you to configure your cluster to update the node boot image whenever you update your cluster. Previously, updating boot images was supported for worker nodes. For more information, see Boot image management.
- Overriding storage or partition setup (Technology Preview)
- You can now use a MachineConfig object to change the installed disk partition schema, file systems, and RAID configurations for new nodes. Previously, for security reasons, you were blocked from changing these configurations from what was established during the cluster installation. For more information, see "Overriding storage and partition setup".
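As a heavily hedged sketch, the following MachineConfig embeds an Ignition storage section to repartition a secondary disk on new worker nodes; the device name, partition layout, and Ignition version are hypothetical, so follow the linked documentation for the supported configuration.

```yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 98-worker-storage-override     # hypothetical name
  labels:
    machineconfiguration.openshift.io/role: worker
spec:
  config:
    ignition:
      version: 3.4.0
    storage:
      disks:
      - device: /dev/nvme1n1           # hypothetical secondary disk
        partitions:
        - label: containers
          sizeMiB: 0                   # 0 = use all remaining space
      filesystems:
      - device: /dev/disk/by-partlabel/containers
        format: xfs
        path: /var/lib/containers      # hypothetical mount target
```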
1.3.11. Machine management
- Creating Google Cloud Spot VMs by using compute machine sets
With this release, OpenShift Container Platform supports deploying Machine API compute machines on Spot VMs in Google Cloud clusters. Google Cloud recommends using Spot VMs over their predecessor, preemptible VMs, because they include new features that preemptible VMs do not support.
For more information, see Machine sets that deploy machines as Spot VMs.
- Additional control plane machine set failure domain options for Azure
This release includes additional configuration options for control plane machine set failure domains on Microsoft Azure.
For more information, see Sample Azure failure domain configuration.
- Configuration of throughput for Amazon Web Services gp3 volumes on EBS devices
This release includes support for configuring the maximum throughput for gp3 drives on EBS devices for clusters installed on Amazon Web Services (AWS).
For more information, see Configuring storage throughput for gp3 drives (Machine API) and Configuring storage throughput for gp3 drives (Cluster API).
- Bare-metal nodes on VMware vSphere clusters (Technology Preview)
- You can now add bare-metal compute machines to an existing OpenShift Container Platform cluster on vSphere. This capability enables you to migrate workloads to physical hardware without reinstalling the cluster. For instructions on adding these machines, see Adding bare-metal compute machines to a vSphere cluster.
1.3.12. Monitoring
The monitoring stack documentation is now available as a separate documentation set. The 4.21 monitoring release notes are available at Release notes for OpenShift monitoring.
1.3.13. Networking
- MetalLB Operator status reporting
You can now use enhanced MetalLB Operator reporting features to view real-time operational data for IP address allocation and Border Gateway Protocol (BGP) connectivity. Previously, viewing this information required manual log inspection across multiple controllers. With this release, you can monitor your network health and resolve connectivity issues directly through the following custom resources:
- IPAddressPool: Monitor cluster-wide IP address allocation through the status field to track usage and prevent address exhaustion.
- ServiceBGPStatus: Verify which service IP addresses are announced to specific BGP peers to ensure correct route advertisements.
- BGPSessionStatus: Check the real-time state of BGP and Bidirectional Forwarding Detection sessions to quickly identify connectivity drops.
For more information, see Monitoring MetalLB configuration status.
- Applying unassisted holdover for boundary clocks and time synchronous clocks
OpenShift Container Platform 4.20 introduced unassisted holdover for boundary clocks and time synchronous clocks as a Technology Preview feature. This feature is now Generally Available (GA).
For more information, see Applying unassisted holdover for boundary clocks and time slave clocks.
- SR-IOV Operator supports ARM architecture
- The Single Root I/O Virtualization (SR-IOV) Operator can now communicate with ARM hardware. You can now complete tasks such as configuring network cards that are already plugged into an ARM server and using these cards in your applications. For instructions on how to search for ARM hardware that the SR-IOV Operator supports, see About Single Root I/O Virtualization (SR-IOV) hardware networks.
- Support for Red Hat OpenShift Service Mesh version 3.2
- OpenShift Container Platform 4.21 updates Service Mesh to version 3.2. This version update incorporates essential CVE fixes and ensures that your OpenShift Container Platform instances receive the latest fixes, features, and enhancements. See the Service Mesh 3.2 release notes for more information.
- PTP Operator introduces GNSS-to-NTP failover for high-precision timing
With this release, the PTP Operator introduces an active GNSS-to-NTP failover configuration to ensure time synchronization continuity in environments requiring extremely high time accuracy.
When the primary Global Navigation Satellite System (GNSS) signal is lost or compromised, for example because of satellite jamming, the system automatically fails over to Network Time Protocol (NTP) to maintain time accuracy. When the GNSS signal is restored, the system automatically recovers back to using GNSS as the primary time source.
This feature is particularly important in telco environments that require high-precision time synchronization with built-in redundancy. To enable GNSS-to-NTP failover, you configure the PtpConfig resource with the ntpfailover plugin enabled and configure both chronyd and ts2phc settings. For more information, see Configuring GNSS failover to NTP for time synchronization continuity.
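Only the PtpConfig resource, the ntpfailover plugin name, and the involvement of chronyd and ts2phc settings are stated in this note; the field layout in the sketch below is an illustrative assumption.

```yaml
apiVersion: ptp.openshift.io/v1
kind: PtpConfig
metadata:
  name: gnss-ntp-failover        # hypothetical
  namespace: openshift-ptp
spec:
  profile:
  - name: grandmaster            # hypothetical profile
    plugins:
      ntpfailover: {}            # enable GNSS-to-NTP failover; options omitted
    ts2phcConf: |
      # ts2phc settings for the GNSS time source (elided)
    # chronyd settings are configured alongside this profile (not shown)
```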
- Network policies for additional namespaces
- With this release, OpenShift Container Platform continues to deploy Kubernetes network policies to additional system namespaces to control ingress and egress traffic. It is anticipated that future releases might include network policies for additional system namespaces and Red Hat Operators.
- Ingress network flow analysis with the commatrix plugin
- With this release, you can use the commatrix plugin to generate ingress network flow data from your cluster. You can also use the plugin to identify any differences between open ports on the host and expected ingress flows for your environment. For more information, see Ingress network flow analysis with the commatrix plugin.
- Configure the dnsRecordsType parameter (Technology Preview)
- During cluster installation, you can specify the dnsRecordsType parameter in the install-config.yaml file to set whether the internal DNS service or an external source provides the necessary api, api-int, and ingress DNS records. For more information about DNS requirements, see User-provisioned DNS requirements.
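An illustrative fragment follows; the parameter name and its purpose come from this note, but its exact placement in install-config.yaml and its allowed values are assumptions.

```yaml
apiVersion: v1
baseDomain: example.com      # hypothetical
metadata:
  name: my-cluster           # hypothetical
dnsRecordsType: External     # assumed value: an external source provides api, api-int, and ingress records
```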
1.3.14. Nodes
- Allocating specific GPUs to pods (DRA) is now generally available
- Attribute-Based GPU Allocation, which allows pods to request GPUs based on specific device attributes by using a Dynamic Resource Allocation (DRA) driver, is now generally available. For more information, see Allocating GPUs to Pods.
- The default openshift cluster image policy is now generally available
- The default openshift cluster image policy is now generally available and active by default. For more information, see Manage secure signatures with sigstore.
  If your OpenShift Container Platform 4.20 or earlier cluster has a cluster image policy named openshift, the upgrade to OpenShift Container Platform 4.21 marks the cluster as not updatable (Upgradeable=False) because of this default openshift cluster image policy. You must remove your openshift cluster image policy to clear the Upgradeable=False condition and proceed with the update. You can optionally create your own cluster image policy with a different name before removing your openshift cluster image policy.
- Support for sigstore BYOPKI is now generally available
- Support for using a certificate from your own public key infrastructure as a Sigstore root of trust is now generally available. For more information, see Manage secure signatures with sigstore.
- Automatically calculate and apply CPU and memory resources for system components
- OpenShift Container Platform now automatically calculates and reserves a portion of the CPU and memory resources for use by the underlying node and system components. Previously, you needed to enable the feature by creating a KubeletConfig custom resource (CR) with the autoSizingReserved: true parameter. For clusters updated to OpenShift Container Platform 4.21, you can enable the feature by deleting the 50-worker-auto-sizing-disabled machine config. After you delete the machine config, the nodes reboot with the new resource settings. If you manually configured system-reserved CPU or memory resources, these settings remain upon update and do not change. For more information about this new feature, see Automatically allocating resources for nodes.
- Linux PSI monitoring can now be enabled
- You can now enable Linux Pressure Stall Information (PSI) monitoring, which makes PSI metrics for CPU, memory, and I/O available for your cluster, by using a MachineConfig object. For more information, see Enabling Pressure Stall Information (PSI) monitoring.
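A minimal sketch follows, assuming PSI is switched on through the standard Linux psi=1 kernel argument; confirm the procedure in the linked documentation.

```yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 99-worker-enable-psi     # hypothetical name
  labels:
    machineconfiguration.openshift.io/role: worker
spec:
  kernelArguments:
  - psi=1                        # enable Pressure Stall Information accounting
```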
1.3.15. OpenShift CLI (oc)
- Signature mirroring enabled by default for oc-mirror v2
- With this update, the oc-mirror v2 plugin mirrors image signatures by default. This enhancement ensures that image integrity is automatically preserved during the mirroring process without requiring additional configuration. If your environment does not require signature validation, you can manually disable this feature by using the --remove-signatures command-line flag. For more information, see Disabling signature mirroring for oc-mirror plugin v2.
1.3.16. Operator development
- Supported Operator base images
With this release, the following base images for Operator projects are updated for compatibility with OpenShift Container Platform 4.21. The runtime functionality and configuration APIs for these base images are supported for bug fixes and for addressing CVEs.
- The base image for Ansible-based Operator projects
- The base image for Helm-based Operator projects
For more information, see Updating the base image for existing Ansible- or Helm-based Operator projects for OpenShift Container Platform 4.19 and later (Red Hat Knowledgebase).
1.3.17. Postinstallation configuration
- Enabling hardware metrics monitoring on bare-metal clusters (Technology Preview)
With this update, you can enable your cluster to collect hardware metrics from the Redfish-compatible baseboard management controllers of your bare-metal nodes. Metrics include temperature, power consumption, fan status, and drive health. To use this Technology Preview feature, enable the Ironic Prometheus Exporter in your cluster as a postinstallation task.
For more information, see Hardware metrics in the Monitoring stack.
1.3.18. Scalability and performance
- Pod-level IRQ affinity introduces housekeeping mode
For latency-sensitive workloads, you can now configure the irq-load-balancing.crio.io pod annotation to use housekeeping mode. This mode enables a subset of pinned CPUs to handle system interrupts while isolating the remaining pinned CPUs for latency-sensitive workloads. This reduces the overall CPU footprint by eliminating the need for dedicated housekeeping CPUs for IRQ handling. When you configure housekeeping mode, the first pinned CPU and its thread siblings handle interrupts for the system. For more information, see Configuring interrupt processing for individual pods.
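A hedged pod sketch follows; the annotation key comes from this note, while the value string, the resource values, and the runtime class are assumptions.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: latency-sensitive-app                      # hypothetical
  annotations:
    irq-load-balancing.crio.io: "housekeeping"     # assumed value string
spec:
  runtimeClassName: performance-example-profile   # hypothetical performance runtime class
  containers:
  - name: app
    image: registry.example.com/app:latest        # hypothetical
    resources:
      requests:
        cpu: "4"
        memory: 2Gi
      limits:               # equal requests and limits so the CPUs are pinned (guaranteed QoS)
        cpu: "4"
        memory: 2Gi
```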
1.3.19. Storage
- Volume Attributes Classes is generally available
Volume Attributes Classes provide a way for administrators to describe "classes" of storage they offer. Different classes might correspond to different quality-of-service levels. Volume Attributes Classes was introduced in OpenShift Container Platform 4.19, and is now generally available in 4.21.
Volume Attributes Classes is available only with AWS Elastic Block Storage (EBS) and Google Cloud Platform (GCP) persistent disk (PD) Container Storage Interface (CSI).
You can apply a Volume Attributes Class to a persistent volume claim (PVC). If a new Volume Attributes Class becomes available in the cluster, you can update the PVC with the new Volume Attributes Class if needed.
Volume Attributes Classes have parameters that describe volumes belonging to them. If a parameter is omitted, the default is used at volume provisioning. If a user applies a different Volume Attributes Class with omitted parameters to the PVC, the default value of the parameters might be used, depending on the CSI driver implementation. For more information, see the related CSI driver documentation.
For more information, see Volume Attributes Classes.
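The PVC side of the workflow looks like the following sketch; the claim, class, and StorageClass names are hypothetical, and changing volumeAttributesClassName on an existing PVC is what triggers the dynamic update.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data                       # hypothetical
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
  storageClassName: gp3-csi            # hypothetical StorageClass
  volumeAttributesClassName: fast-io   # hypothetical Volume Attributes Class
```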
- Azure File CSI supporting snapshots feature is generally available
A snapshot represents the state of the storage volume in a cluster at a particular point in time. Volume snapshots can be used to provision a new volume.
OpenShift Container Platform 4.17 introduced volume snapshot support for the Microsoft Azure File Container Storage Interface (CSI) Driver Operator as a Technology Preview feature. In 4.21, this feature is generally available. Also, Azure File snapshots now support Network File System (NFS) in addition to Server Message Block (SMB).
For more information, see CSI drivers supported by OpenShift Container Platform and CSI volume snapshots.
- Azure File CSI supporting volume cloning feature is generally available
Volume cloning duplicates an existing persistent volume (PV) to help protect against data loss in OpenShift Container Platform. You can also use a volume clone just as you would use any standard volume.
OpenShift Container Platform 4.16 introduced volume cloning for the Microsoft Azure File Container Storage Interface (CSI) Driver Operator as a Technology Preview feature. In 4.21, this feature is generally available. Also, Azure File cloning now supports Network File System (NFS) in addition to Server Message Block (SMB).
For more information, see Azure File CSI Driver Operator and CSI volume cloning.
- oVirt CSI Driver Operator is removed from OpenShift Container Platform 4.21
- Red Hat Virtualization (RHV) as a host platform for OpenShift Container Platform was deprecated in version 4.14 and is no longer supported. In OpenShift Container Platform 4.21, the oVirt CSI Driver Operator is removed.
- CIFS/SMB CSI Driver Operator supports IBM Power
In OpenShift Container Platform 4.21, the CIFS/SMB CSI Driver Operator supports IBM Power (ppc64le).
For more information, see CIFS/SMB CSI Driver Operator.
- Introduction of new field to track the status of volume resize attempts
OpenShift Container Platform 4.19 introduced resizing recovery that stops the expansion controller from indefinitely attempting to expand a volume to an unsupported size request. This feature allows you to recover and provide another smaller resize value for the persistent volume claim (PVC). The new value must be larger than the original volume size.
OpenShift Container Platform 4.21 introduces the pvc.Status.AllocatedResourceStatus field, which shows the status of volume resize attempts. If a user changes the size of their PVCs, this new field allows resource quota to be tracked accurately. For more information about resizing volumes, see Expanding persistent volumes.
For more information about recovering when resizing volumes, see Recovering from failure when expanding volumes.
- Mutable CSI node allocatable property (Technology Preview)
This feature allows for dynamically updating the maximum number of storage volumes a node can handle. Without this feature, volume limits are essentially immutable after a node first joins the cluster. If the environment changes, for example if you attach a new network interface (ENI) that shares a hardware slot with your storage, OpenShift Container Platform does not recognize that it has fewer slots available for disks, which can lead to pods becoming stuck.
This feature is only supported on AWS Elastic Block Storage (EBS).
Mutable CSI node allocatable property is supported in OpenShift Container Platform 4.21 as a Technology Preview feature. To enable this feature, you must use feature gates.
For more information about enabling Technology Preview features, see Feature Gates.
- Reducing permissions while using the GCP PD CSI Driver Operator is generally available
The default installation allows the Google Cloud Platform (GCP) persistent disk (PD) Container Storage Interface (CSI) Driver to impersonate any service account in the Google Cloud project. You can reduce the scope of permissions granted to the GCP PD CSI Driver service account in your Google Cloud project to only the required node service accounts.
For more information about this feature, see Reducing permissions while using the GCP PD CSI Driver Operator.
- Volume group snapshots API updated (Technology Preview)
The API for the Container Storage Interface (CSI) volume group snapshot feature is updated from v1beta1 to v1beta2. This feature is supported at the Technology Preview level.
For more information, see CSI volume group snapshots.
- Updated release of the Secrets Store CSI Driver Operator
- The Secrets Store CSI Driver Operator version v4.21 is now based on the upstream version v1.5.4 release of secrets-store-csi-driver.
1.3.20. Web console
- OLM v1 software catalog in the web console (Technology Preview)
- With this update, you can preview the Operator Lifecycle Manager (OLM) v1 software catalog in the web console. Select Ecosystem → Software Catalog → Operators to preview this feature. To see the OLM (Classic) software catalog, click the Enable OLM v1 (Tech Preview) toggle.
1.4. Notable technical changes
This section includes several technical changes for OpenShift Container Platform 4.21.
- VMware vSphere 7 and VMware Cloud Foundation 4 end of general support
- Broadcom has ended general support for VMware vSphere 7 and VMware Cloud Foundation (VCF) 4. If your existing OpenShift Container Platform cluster is running on either of these platforms, you must plan to migrate or upgrade your VMware infrastructure to a supported version. OpenShift Container Platform supports installation on vSphere 8 Update 1 or later, or VCF 5 or later.
- Bare Metal Operator has read-only root filesystem by default
- As of this update, the Bare Metal Operator has the readOnlyRootFilesystem security context setting enabled to meet common hardening recommendations.
- MachineSets advertise node architecture for autoscaling
- With this update, MachineSets advertise the architecture of their nodes, which allows the autoscaler to intelligently scale MachineSets in multi-architecture environments.
- Dynamic Resource Allocation plugin is disabled in the NUMA-aware scheduler
- With this update, the Dynamic Resource Allocation (DRA) plugin is disabled so that the secondary scheduler, managed by the numaresources operator, does not need to handle DRA resources. The Kubernetes project has recently enabled DRA by default, setting specific expectations on the scheduler-plugins project to support watching DRA resources. As a result, the OpenShift topology-aware scheduler (TAS) disables the DRA plugin in its profile, and therefore ignores DRA-related custom resources (CRs) when making scheduling decisions.
1.5. Deprecated and removed features
1.5.1. Images deprecated and removed features
| Feature | 4.19 | 4.20 | 4.21 |
|---|---|---|---|
| Cluster Samples Operator | Deprecated | Deprecated | Deprecated |
1.5.2. Installation deprecated and removed features
| Feature | 4.19 | 4.20 | 4.21 |
|---|---|---|---|
| | Deprecated | Deprecated | Deprecated |
| CoreDNS wildcard queries for the cluster.local domain | Deprecated | Deprecated | Deprecated |
| | Deprecated | Deprecated | Deprecated |
| | Deprecated | Deprecated | Deprecated |
| | Deprecated | Deprecated | Deprecated |
| Package-based RHEL compute machines | Removed | Removed | Removed |
| | Deprecated | Deprecated | Deprecated |
| Installing a cluster on AWS with compute nodes in AWS Outposts | Deprecated | Deprecated | Deprecated |
| Deploying managed clusters using the SiteConfig CR | Deprecated | Deprecated | Removed |
| Installing a cluster using Fujitsu iRMC drivers on bare-metal machines | General Availability | General Availability | Deprecated |
1.5.3. Machine Management deprecated and removed features
| Feature | 4.19 | 4.20 | 4.21 |
|---|---|---|---|
| Confidential Computing with AMD Secure Encrypted Virtualization for Google Cloud | General Availability | Deprecated | Deprecated |
| Managing bare-metal machines using Fujitsu iRMC drivers | General Availability | General Availability | Deprecated |
1.5.4. Networking deprecated and removed features
| Feature | 4.19 | 4.20 | 4.21 |
|---|---|---|---|
| iptables | Deprecated | Deprecated | Deprecated |
1.5.5. Node deprecated and removed features
| Feature | 4.19 | 4.20 | 4.21 |
|---|---|---|---|
| | Deprecated | Deprecated | Deprecated |
| Kubernetes topology label failure-domain.beta.kubernetes.io/zone | Deprecated | Deprecated | Deprecated |
| Kubernetes topology label failure-domain.beta.kubernetes.io/region | Deprecated | Deprecated | Deprecated |
| cgroup v1 | Removed | Removed | Removed |
1.5.6. OpenShift CLI (oc) deprecated and removed features
| Feature | 4.19 | 4.20 | 4.21 |
|---|---|---|---|
| oc-mirror plugin v1 | Deprecated | Deprecated | Deprecated |
| Docker v2 registries | General Availability | Deprecated | Deprecated |
1.5.7. Operator lifecycle and development deprecated and removed features
| Feature | 4.19 | 4.20 | 4.21 |
|---|---|---|---|
| Operator SDK | Removed | Removed | Removed |
| Scaffolding tools for Ansible-based Operator projects | Removed | Removed | Removed |
| Scaffolding tools for Helm-based Operator projects | Removed | Removed | Removed |
| Scaffolding tools for Go-based Operator projects | Removed | Removed | Removed |
| Scaffolding tools for Hybrid Helm-based Operator projects | Removed | Removed | Removed |
| Scaffolding tools for Java-based Operator projects | Removed | Removed | Removed |
| SQLite database format for Operator catalogs | Deprecated | Deprecated | Deprecated |
1.5.8. Storage deprecated and removed features
| Feature | 4.19 | 4.20 | 4.21 |
|---|---|---|---|
| Shared Resources CSI Driver Operator | Removed | Removed | Removed |
1.5.9. Web console deprecated and removed features
| Feature | 4.19 | 4.20 | 4.21 |
|---|---|---|---|
| | Deprecated | Deprecated | Deprecated |
| PatternFly 4 | Removed | Removed | Removed |
1.5.10. Workloads deprecated and removed features
| Feature | 4.19 | 4.20 | 4.21 |
|---|---|---|---|
| | Deprecated | Deprecated | Deprecated |
1.6. Deprecated features
- Deprecation of Fujitsu Integrated Remote Management Controller (iRMC) driver for bare-metal machines
As of OpenShift Container Platform 4.21, support for the Fujitsu iRMC baseboard management controller (BMC) driver has been deprecated and will be removed in a future release. If a BareMetalHost resource contains a BMC address with irmc:// as its URI scheme, the resource must be updated to use another BMC scheme, such as redfish:// or ipmi://. Once support for this driver is removed, hosts that use irmc:// URI schemes will become unmanageable. For information about updating the BareMetalHost resource, see Editing a BareMetalHost resource.
1.7. Removed features
This section includes removed features for OpenShift Container Platform 4.21.
- Swap memory support has been removed
- The ability to configure swap memory is no longer available in OpenShift Container Platform because using swap memory can prevent a node from properly safeguarding itself from failures.
1.8. Fixed issues
The following issues are fixed for this release:
1.8.1. Installer
- Before this update, the vSphere platform configuration lacked a validation check to prevent the simultaneous definition of both a custom virtual machine template and a clusterOSImage parameter. As a consequence, users could provide both parameters in the installation configuration, leading to ambiguity and potential deployment failures. With this release, the vSphere validation logic has been updated to ensure that the template and clusterOSImage parameters are treated as mutually exclusive, returning a specific error message if both fields are populated. (OCPBUGS-63584)
- Before this update, a race condition occurred when multiple reconciliation loops or concurrent processes attempted to add a virtual machine (VM) to a vSphere Host Group simultaneously, because the provider lacked a check to see if the VM was already a member. Consequently, the vSphere API could return errors during the cluster reconfiguration task, leading to reconciliation failures and preventing the VM from being correctly associated with its designated zone or host group. With this release, the zonal logic has been updated to verify the VM's membership within the target host group before initiating a reconfiguration task, ensuring the operation is only performed if the VM is not already present. (OCPBUGS-60765)
1.8.2. Kube Controller Manager
- Before this update, relatedObjects in the ClusterOperator custom resource omitted the ClusterRoleBinding object, resulting in incomplete debugging information. With this release, the related objects for ClusterOperator are expanded to include ClusterRoleBinding. As a result, the kube-controller-manager ClusterRoleBinding is included in the ClusterOperator output. (OCPBUGS-65502)
1.8.3. Kube Scheduler
- Before this update, the ClusterOperator object failed to reference ClusterRoleBinding in the relatedObjects array, which caused the kube-scheduler ClusterRoleBinding to be omitted from the ClusterOperator output. As a consequence, debugging difficulties occurred. With this release, the ClusterRoleBinding for the kube-scheduler Operator is added to the ClusterOperator relatedObjects. (OCPBUGS-65503)
1.8.4. Networking
- Before this update, an incorrect private key containing certificate data caused HAProxy reload failures in OpenShift Container Platform 4.14. As a consequence, incorrect certificate configuration caused HAProxy router pods to fail reloads, which led to a partial outage. With this release, haproxy now validates certificates. As a result, router reload failures with invalid certificates are prevented. (OCPBUGS-49769)
haproxynow validates certificates. As a result, router reload failures with invalid certificates are prevented. (OCPBUGS-49769) - To maintain compatibility with Kubernetes 1.34, CoreDNS has been updated to version 1.13.1. This update resolves intermittent DNS resolution issues reported in previous versions and includes upstream performance fixes and security patches.
1.8.5. Clock state metrics degrade correctly after upstream clock loss
- Previously, when the upstream clock connection was lost and the clock entered the unlocked state, the ptp4l and ts2phc clock state metrics did not degrade as expected. This behavior caused inconsistent time synchronization state reporting. This issue has been fixed. When the upstream clock connection is lost, the ptp4l and ts2phc clock state metrics now degrade correctly, providing consistent time synchronization state reporting.
1.8.6. Node Tuning Operator
- Before this update, the Performance Profile Creator (PPC) tool failed to analyze a must-gather archive if the archive contained a custom namespace directory ending with the suffix nodes. With this release, the PPC now correctly excludes the namespaces directory when processing the must-gather data to create a suggested PerformanceProfile. (OCPBUGS-60218)
1.8.7. OpenShift API Server
- Before this update, the error field path for hostUsers was incorrect when validating a security context constraint. It was shown as .spec.securityContext.hostUsers, but hostUsers is not in the security context. With this release, the error field path is now .spec.hostUsers. (OCPBUGS-65727)
1.9. Technology Preview features status
Some features in this release are currently in Technology Preview. These experimental features are not intended for production use. Note the following scope of support on the Red Hat Customer Portal for these features:
Technology Preview Features Support Scope
In the following tables, features are marked with the following statuses:
- Not Available
- Technology Preview
- General Availability
- Deprecated
- Removed
1.9.1. Authentication and authorization Technology Preview features
| Feature | 4.19 | 4.20 | 4.21 |
|---|---|---|---|
| Pod security admission restricted enforcement | Technology Preview | Technology Preview | Technology Preview |
| Direct authentication with an external OIDC identity provider | Technology Preview | General Availability | General Availability |
1.9.2. Edge computing Technology Preview features
| Feature | 4.19 | 4.20 | 4.21 |
|---|---|---|---|
| Accelerated provisioning of GitOps ZTP | Technology Preview | Technology Preview | Technology Preview |
| Enabling disk encryption with TPM and PCR protection | Technology Preview | Technology Preview | Technology Preview |
| Configuring a local arbiter node | Technology Preview | General Availability | General Availability |
| Configuring a two-node OpenShift Container Platform cluster with fencing | Not Available | Technology Preview | Technology Preview |
1.9.3. Extensions Technology Preview features
| Feature | 4.19 | 4.20 | 4.21 |
|---|---|---|---|
| Operator Lifecycle Manager (OLM) v1 | General Availability | General Availability | General Availability |
| OLM v1 runtime validation of container images using sigstore signatures | Technology Preview | Technology Preview | Technology Preview |
| OLM v1 permissions preflight check for cluster extensions | Technology Preview | Technology Preview | Technology Preview |
| OLM v1 deploying a cluster extension in a specified namespace | Technology Preview | Technology Preview | Technology Preview |
| OLM v1 deploying a cluster extension that uses webhooks | Not Available | Technology Preview | General Availability |
| OLM v1 software catalog | Not Available | Not Available | Technology Preview |
1.9.4. Installation Technology Preview features
| Feature | 4.19 | 4.20 | 4.21 |
|---|---|---|---|
| Adding kernel modules to nodes with kvc | Technology Preview | Technology Preview | Technology Preview |
| Enabling NIC partitioning for SR-IOV devices | General Availability | General Availability | General Availability |
| User-defined labels and tags for Google Cloud | General Availability | General Availability | General Availability |
| Installing a cluster on Alibaba Cloud by using Assisted Installer | Technology Preview | Technology Preview | Technology Preview |
| Installing a cluster on Microsoft Azure with confidential VMs | General Availability | General Availability | General Availability |
| Dedicated disk for etcd on Microsoft Azure | Not Available | Technology Preview | Technology Preview |
| Mount shared entitlements in BuildConfigs in RHEL | Technology Preview | Technology Preview | Technology Preview |
| OpenShift zones support for vSphere host groups | Technology Preview | Technology Preview | Technology Preview |
| Selectable Cluster Inventory | Technology Preview | Technology Preview | Technology Preview |
| Installing a cluster on Google Cloud using the Cluster API implementation | General Availability | General Availability | General Availability |
| Enabling a user-provisioned DNS on Google Cloud | Technology Preview | Technology Preview | General Availability |
| Enabling a user-provisioned DNS on Microsoft Azure | Not Available | Not Available | Technology Preview |
| Enabling a user-provisioned DNS on Amazon Web Services (AWS) | Not Available | Not Available | Technology Preview |
| Installing a cluster using Google Cloud private and restricted API endpoints | Not Available | Not Available | General Availability |
| Installing a cluster on VMware vSphere with multiple network interface controllers | Technology Preview | General Availability | General Availability |
| Using bare metal as a service | Technology Preview | Technology Preview | Technology Preview |
| Running firmware upgrades for hosts in deployed bare metal clusters | Technology Preview | Technology Preview | General Availability |
| Changing the CVO log level | Not Available | Technology Preview | Technology Preview |
1.9.5. Machine Config Operator Technology Preview features
| Feature | 4.19 | 4.20 | 4.21 |
|---|---|---|---|
| Boot image management for Azure and vSphere | Not Available | Technology Preview | General Availability |
| Boot image management for control plane nodes | Not Available | Not Available | Technology Preview |
| Image mode for OpenShift status reporting improvements | Not Available | Not Available | Technology Preview |
| Overriding storage or partition setup | Not Available | Not Available | Technology Preview |
1.9.6. Machine management Technology Preview features
| Feature | 4.19 | 4.20 | 4.21 |
|---|---|---|---|
| Managing machines with the Cluster API for Amazon Web Services | Technology Preview | Technology Preview | Technology Preview |
| Managing machines with the Cluster API for Google Cloud | Technology Preview | Technology Preview | Technology Preview |
| Managing machines with the Cluster API for IBM Power® Virtual Server | Technology Preview | Technology Preview | Technology Preview |
| Managing machines with the Cluster API for Microsoft Azure | Technology Preview | Technology Preview | Technology Preview |
| Managing machines with the Cluster API for RHOSP | Technology Preview | Technology Preview | Technology Preview |
| Managing machines with the Cluster API for VMware vSphere | Technology Preview | Technology Preview | Technology Preview |
| Managing machines with the Cluster API for bare metal | Technology Preview | Technology Preview | Technology Preview |
| Cloud controller manager for IBM Power® Virtual Server | Technology Preview | Technology Preview | Technology Preview |
| Adding multiple subnets to an existing VMware vSphere cluster by using compute machine sets | Technology Preview | Technology Preview | Technology Preview |
| Configuring Trusted Launch for Microsoft Azure virtual machines by using machine sets | General Availability | General Availability | General Availability |
| Configuring Azure confidential virtual machines by using machine sets | General Availability | General Availability | General Availability |
| Bare-metal nodes on VMware vSphere clusters | Not Available | Not Available | Technology Preview |
1.9.7. Multi-Architecture Technology Preview features
| Feature | 4.19 | 4.20 | 4.21 |
|---|---|---|---|
| `kdump` support for the 64-bit ARM architecture | Technology Preview | General Availability | General Availability |
| `kdump` support for the ppc64le architecture | Technology Preview | General Availability | General Availability |
| `kdump` support for the s390x architecture | Technology Preview | General Availability | General Availability |
| Support for configuring the image stream import mode behavior | Technology Preview | Technology Preview | Technology Preview |
1.9.8. Networking Technology Preview features
| Feature | 4.19 | 4.20 | 4.21 |
|---|---|---|---|
| eBPF manager Operator | Technology Preview | Technology Preview | Technology Preview |
| Advertise using L2 mode the MetalLB service from a subset of nodes, using a specific pool of IP addresses | Technology Preview | Technology Preview | Technology Preview |
| Updating the interface-specific safe sysctls list | Technology Preview | Technology Preview | Technology Preview |
| Egress service custom resource | Technology Preview | Technology Preview | Technology Preview |
| VRF specification in `NetworkAttachmentDefinition` custom resources | Technology Preview | Technology Preview | Technology Preview |
| VRF specification in `BGPPeer` custom resources | General Availability | General Availability | General Availability |
| Host network settings for SR-IOV VFs | General Availability | General Availability | General Availability |
| Integration of MetalLB and FRR-K8s | General Availability | General Availability | General Availability |
| Automatic leap seconds handling for PTP grandmaster clocks | General Availability | General Availability | General Availability |
| PTP events REST API v2 | General Availability | General Availability | General Availability |
| OVN-Kubernetes customized `br-ex` bridge | General Availability | General Availability | General Availability |
| OVN-Kubernetes customized `br-ex` bridge as a day-2 operation | Technology Preview | Technology Preview | Technology Preview |
| Live migration to OVN-Kubernetes from OpenShift Container Platform SDN | Not Available | Not Available | Not Available |
| User-defined network segmentation | General Availability | General Availability | General Availability |
| Dynamic configuration manager | Technology Preview | Technology Preview | Technology Preview |
| SR-IOV Network Operator support for Intel C741 Emmitsburg Chipset | Technology Preview | Technology Preview | Technology Preview |
| SR-IOV Network Operator support on ARM architecture | General Availability | General Availability | General Availability |
| Gateway API and Istio for Ingress management | General Availability | General Availability | General Availability |
| Dual-port NIC for PTP ordinary clock | Technology Preview | Technology Preview | Technology Preview |
| DPU Operator | Technology Preview | Technology Preview | Technology Preview |
| Fast IPAM for the Whereabouts IPAM CNI plugin | Technology Preview | Technology Preview | Technology Preview |
| Unnumbered BGP peering | Technology Preview | General Availability | General Availability |
| Load balancing across the aggregated bonded interface with xmitHashPolicy | Not Available | Technology Preview | Technology Preview |
| PF Status Relay Operator for high availability with SR-IOV networks | Not Available | Technology Preview | Technology Preview |
| Preconfigured user-defined network end points using MTV | Not Available | Technology Preview | Technology Preview |
| Unassisted holdover for PTP devices | Not Available | Technology Preview | General Availability |
1.9.9. Node Technology Preview features
| Feature | 4.19 | 4.20 | 4.21 |
|---|---|---|---|
|  | Technology Preview | Technology Preview | Technology Preview |
| sigstore support | Technology Preview | General Availability | General Availability |
| Default sigstore `openshift` cluster image policy | Technology Preview | Technology Preview | General Availability |
| Linux user namespace support | Technology Preview | General Availability | General Availability |
| Attribute-Based GPU Allocation | Not Available | Technology Preview | General Availability |
1.9.10. OpenShift CLI (oc) Technology Preview features
| Feature | 4.19 | 4.20 | 4.21 |
|---|---|---|---|
| oc-mirror plugin v2 | General Availability | General Availability | General Availability |
| oc-mirror plugin v2 enclave support | General Availability | General Availability | General Availability |
| oc-mirror plugin v2 delete functionality | General Availability | General Availability | General Availability |
1.9.11. Operator lifecycle and development Technology Preview features
| Feature | 4.19 | 4.20 | 4.21 |
|---|---|---|---|
| Operator Lifecycle Manager (OLM) v1 | General Availability | General Availability | General Availability |
| Scaffolding tools for Hybrid Helm-based Operator projects | Removed | Removed | Removed |
| Scaffolding tools for Java-based Operator projects | Removed | Removed | Removed |
1.9.12. Red Hat OpenStack Platform (RHOSP) Technology Preview features
| Feature | 4.19 | 4.20 | 4.21 |
|---|---|---|---|
| RHOSP integration into the Cluster CAPI Operator | Technology Preview | Technology Preview | Technology Preview |
| Hosted control planes on RHOSP 17.1 | Technology Preview | Technology Preview | Technology Preview |
1.9.13. Scalability and performance Technology Preview features
| Feature | 4.19 | 4.20 | 4.21 |
|---|---|---|---|
| factory-precaching-cli tool | Technology Preview | Technology Preview | Technology Preview |
| Hyperthreading-aware CPU manager policy | Technology Preview | Technology Preview | Technology Preview |
| Mount namespace encapsulation | Technology Preview | Technology Preview | Technology Preview |
| Node Observability Operator | Technology Preview | Technology Preview | Technology Preview |
| Increasing the etcd database size | Technology Preview | Technology Preview | Technology Preview |
| Managing etcd size by setting the | Not available | Not available | Technology Preview |
| Using RHACM `PolicyGenerator` resources to manage GitOps ZTP cluster policies | General Availability | General Availability | General Availability |
| Pinned Image Sets | Technology Preview | Technology Preview | Technology Preview |
| Configuring NUMA-aware scheduler replicas and high availability | Not available | Technology Preview | Technology Preview |
1.9.14. Storage Technology Preview features
| Feature | 4.19 | 4.20 | 4.21 |
|---|---|---|---|
| AWS EFS One Zone volume | Not Available | General Availability | General Availability |
| Automatic device discovery and provisioning with Local Storage Operator | Technology Preview | Technology Preview | Technology Preview |
| Azure File CSI cloning support | Technology Preview | Technology Preview | General Availability |
| Azure File CSI snapshot support | Technology Preview | Technology Preview | General Availability |
| Azure File cross-subscription support | General Availability | General Availability | General Availability |
| Azure Disk performance plus | Not Available | General Availability | General Availability |
| Configuring fsGroupChangePolicy per namespace | Not Available | General Availability | General Availability |
| Shared Resources CSI Driver in OpenShift Builds | Technology Preview | Technology Preview | Technology Preview |
| Secrets Store CSI Driver Operator | General Availability | General Availability | General Availability |
| CIFS/SMB CSI Driver Operator | General Availability | General Availability | General Availability |
| VMware vSphere multiple vCenter support | General Availability | General Availability | General Availability |
| Disabling/enabling storage on vSphere | General Availability | General Availability | General Availability |
| Increasing max number of volumes per node for vSphere | Technology Preview | Technology Preview | Technology Preview |
| RWX/RWO SELinux mount option | Developer Preview | Technology Preview | Technology Preview |
| Migrating CNS Volumes Between Datastores | General Availability | General Availability | General Availability |
| CSI volume group snapshots | Technology Preview | Technology Preview | Technology Preview |
| GCP PD supports C3/N4 instance types and hyperdisk-balanced disks | General Availability | General Availability | General Availability |
| OpenStack Manila support for CSI resize | General Availability | General Availability | General Availability |
| Volume Attribute Classes | Technology Preview | Technology Preview | General Availability |
| Volume populators | Technology Preview | General Availability | General Availability |
1.9.15. Web console Technology Preview features
| Feature | 4.19 | 4.20 | 4.21 |
|---|---|---|---|
| Red Hat OpenShift Lightspeed in the OpenShift Container Platform web console | Technology Preview | Technology Preview | Technology Preview |
1.10. Known issues
This section includes several known issues for OpenShift Container Platform 4.21.
- Currently, due to a known issue, the OpenShift Container Platform 4.21 versions of the Cluster Resource Override Operator and the DPU Operator are not available; they will be released in an upcoming 4.21 maintenance release. (OCPBUGS-74224)
- If you mirrored the OpenShift Container Platform release images to the registry of a disconnected environment by using the `oc adm release mirror` command, the release image Sigstore signature is not mirrored with the image. This has become an issue in OpenShift Container Platform 4.21 because the `openshift` cluster image policy is now deployed to the cluster by default. This policy causes CRI-O to automatically verify the Sigstore signature when pulling images into a cluster. (OCPBUGS-70297)

  Because the Sigstore signature is absent, after you update to OpenShift Container Platform 4.21 in a disconnected environment, future Cluster Version Operator pods might fail to run. You can avoid this problem by installing the oc-mirror plugin v2 and using the `oc mirror` command to mirror the OpenShift Container Platform release image. The oc-mirror plugin v2 mirrors both the release image and its Sigstore signature into the mirror registry of your disconnected environment.

  If you cannot use the oc-mirror plugin v2, you can use the `oc image mirror` command to mirror the Sigstore signature into your mirror registry by running a command similar to the following:

  $ oc image mirror "quay.io/openshift-release-dev/ocp-release:${RELEASE_DIGEST}.sig" "${LOCAL_REGISTRY}/${LOCAL_RELEASE_IMAGES_REPOSITORY}:${RELEASE_DIGEST}.sig"

  where:

  `RELEASE_DIGEST`
  : Specifies your release image digest with the `:` character replaced by a `-` character. For example, `sha256:884e1ff5effeaa04467fab9725900e7f0ed1daa89a7734644f14783014cebdee` becomes `sha256-884e1ff5effeaa04467fab9725900e7f0ed1daa89a7734644f14783014cebdee`, and the corresponding signature tag is `sha256-884e1ff5effeaa04467fab9725900e7f0ed1daa89a7734644f14783014cebdee.sig`.
For information on the oc-mirror v2 plugin, see Mirroring images for a disconnected installation by using the oc-mirror plugin v2.
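  For illustration only, the following is a minimal sketch of how the variables in the preceding command might be set before running it; the registry host and repository path are hypothetical placeholders, not recommended values:

  $ DIGEST="sha256:884e1ff5effeaa04467fab9725900e7f0ed1daa89a7734644f14783014cebdee"  # example release image digest
  $ RELEASE_DIGEST="${DIGEST/:/-}"                        # replace ':' with '-' so the digest is usable as an image tag
  $ LOCAL_REGISTRY="mirror.example.com:5000"              # hypothetical mirror registry host
  $ LOCAL_RELEASE_IMAGES_REPOSITORY="ocp/release-images"  # hypothetical repository path
  $ oc image mirror \
      "quay.io/openshift-release-dev/ocp-release:${RELEASE_DIGEST}.sig" \
      "${LOCAL_REGISTRY}/${LOCAL_RELEASE_IMAGES_REPOSITORY}:${RELEASE_DIGEST}.sig"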
- Starting with OpenShift Container Platform 4.21, the default maximum open files soft limit for containers has decreased. As a consequence, end users might experience application failures. To work around this problem, increase the container runtime (CRI-O) ulimit configuration by using a method of your choice, such as the `ulimit` command; a hedged configuration sketch is shown below. (OCPBUGS-62095)
- Currently, on clusters with SR-IOV network virtual functions configured, a race condition might occur between the system services responsible for network device renaming and the TuneD service managed by the Node Tuning Operator. As a consequence, the TuneD profile might become degraded after the node restarts, leading to performance degradation. As a workaround, restart the TuneD pod to restore the profile state. (OCPBUGS-41934)
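For the decreased open-files soft limit described in OCPBUGS-62095 above, one possible mitigation is a MachineConfig that writes a CRI-O drop-in raising the default `nofile` ulimit. This is a minimal sketch, not a tested recommendation: the file names, the `worker` role, and the `65536` value are illustrative assumptions, and applying a MachineConfig triggers a rolling reboot of the affected nodes:

$ cat <<'EOF' > 99-worker-crio-default-ulimit.yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 99-worker-crio-default-ulimit
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
        - path: /etc/crio/crio.conf.d/99-default-ulimit.conf
          mode: 420
          contents:
            # base64 of:
            #   [crio.runtime]
            #   default_ulimits = ["nofile=65536:65536"]
            source: data:text/plain;charset=utf-8;base64,W2NyaW8ucnVudGltZV0KZGVmYXVsdF91bGltaXRzID0gWyJub2ZpbGU9NjU1MzY6NjU1MzYiXQo=
EOF
$ oc apply -f 99-worker-crio-default-ulimit.yaml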
- Currently, pods that use a `guaranteed` QoS class and request whole CPUs might not restart automatically after a node reboot or kubelet restart. The issue might occur on nodes that are configured with a static CPU Manager policy and the `full-pcpus-only` specification, and when most or all CPUs on the node are already allocated to such workloads. As a workaround, manually delete and re-create the affected pods, as shown in the sketch below. (OCPBUGS-43280)
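  For the guaranteed-QoS restart issue above (OCPBUGS-43280), a minimal sketch of the workaround; the pod and namespace names are placeholders:

  $ oc get pod <pod-name> -n <namespace> -o jsonpath='{.status.qosClass}'  # confirm the pod is Guaranteed
  $ oc delete pod <pod-name> -n <namespace>                                # a controller such as a Deployment re-creates the pod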
- On systems that use specific AMD EPYC processors, some low-level system interrupts, for example `AMD-Vi`, might include CPUs in their CPU mask that overlap with CPU-pinned workloads. This behavior is a result of the hardware design. These specific error-reporting interrupts are generally inactive, and there is currently no known performance impact. (OCPBUGS-57787)
- While Day 2 firmware updates and BIOS attribute reconfiguration for bare-metal hosts are generally available with this release, the Bare Metal Operator (BMO) does not provide a native mechanism to cancel a firmware update request after it is initiated. If a firmware update or setting change for `HostFirmwareComponents` or `HostFirmwareSettings` resources fails, returns an error, or becomes indefinitely stuck, you can try to recover by using the following steps:
  - Removing the changes to the `HostFirmwareComponents` and `HostFirmwareSettings` resources.
  - Setting the node to `online: false` to trigger a reboot.
  - If the issue persists, deleting the Ironic pod.

  A native abort capability for servicing operations might be planned for a future release. A sketch of the commands involved appears at the end of this section.
- There is a known issue with the ability to configure the maximum throughput of gp3 storage volumes in an AWS cluster. This feature does not work with control plane machine sets. There is no workaround for this issue, but it is planned to be fixed in a later release. (OCPBUGS-74478)
- When installing a private cluster on Google Cloud behind a proxy with user-provisioned DNS, you might encounter installation errors indicating the bootstrap failed to complete or the cluster initialization failed. In both cases, the installation can succeed, resulting in a healthy cluster. As a workaround, install the private cluster on a bastion host that is within the same virtual private cloud (VPC) as the cluster to be deployed. (OCPBUGS-54901)
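For the bare-metal firmware recovery steps earlier in this section, the following hedged sketch shows the kinds of commands involved; the host name is a placeholder, and the exact name of the Ironic (metal3) pod varies by cluster:

$ oc patch baremetalhost <host-name> -n openshift-machine-api --type merge -p '{"spec":{"online":false}}'  # power the host off; set back to true to boot it again
$ oc get pods -n openshift-machine-api | grep metal3          # locate the Ironic (metal3) pod
$ oc delete pod <metal3-pod-name> -n openshift-machine-api    # only if the issue persists; the pod is re-created automatically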
1.11. Asynchronous errata updates
Security, bug fix, and enhancement updates for OpenShift Container Platform 4.21 are released as asynchronous errata through the Red Hat Network. All OpenShift Container Platform 4.21 errata is available on the Red Hat Customer Portal. See the OpenShift Container Platform Life Cycle for more information about asynchronous errata. Red Hat Customer Portal users can enable errata notifications in the account settings for Red Hat Subscription Management (RHSM). When errata notifications are enabled, users are notified through email whenever new errata relevant to their registered systems are released.
Red Hat Customer Portal user accounts must have systems registered and consuming OpenShift Container Platform entitlements for OpenShift Container Platform errata notification emails to be generated.
This section will continue to be updated over time to provide notes on enhancements and bug fixes for future asynchronous errata releases of OpenShift Container Platform 4.21. Versioned asynchronous releases, for example with the form OpenShift Container Platform 4.21.z, will be detailed in subsections. In addition, releases in which the errata text cannot fit in the space provided by the advisory will be detailed in subsections that follow.
For any OpenShift Container Platform release, always review the instructions on updating your cluster properly.
Chapter 2. Additional release notes
Release notes for additional related components and products not included in the core OpenShift Container Platform 4.21 release notes are available in the following documentation.
The following release notes are for downstream Red Hat products only; upstream or community release notes for related products are not included.
- A
- AWS Load Balancer Operator
- B
- Builds for Red Hat OpenShift
- C
- cert-manager Operator for Red Hat OpenShift
- D
- Red Hat Developer Hub Operator
- E
- F
- File Integrity Operator
- K
- L
- M
- N
- O
- OpenShift API for Data Protection (OADP)
- Red Hat OpenShift Distributed Tracing Platform
- Red Hat OpenShift Local (Upstream CRC documentation)
- OpenShift sandboxed containers
- Red Hat OpenShift Service Mesh 2.x
- Red Hat OpenShift Service Mesh 3.x
- P
- Power monitoring for Red Hat OpenShift
- R
- Run Once Duration Override Operator
- S
- W
- Red Hat OpenShift support for Windows Containers
- Z
- Zero Trust Workload Identity Manager
Legal Notice
Copyright © 2025 Red Hat
OpenShift documentation is licensed under the Apache License 2.0 (https://www.apache.org/licenses/LICENSE-2.0).
Modified versions must remove all Red Hat trademarks.
Portions adapted from https://github.com/kubernetes-incubator/service-catalog/ with modifications by Red Hat.
Red Hat, Red Hat Enterprise Linux, the Red Hat logo, the Shadowman logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat Software Collections is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation’s permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.