Release notes
Highlights of what is new and what has changed with this OpenShift Container Platform release
Chapter 1. OpenShift Container Platform 4.21 release notes
Red Hat OpenShift Container Platform provides developers and IT organizations with a hybrid cloud application platform for deploying both new and existing applications on secure, scalable resources with minimal configuration and management. OpenShift Container Platform supports a wide selection of programming languages and frameworks, such as Java, JavaScript, Python, Ruby, and PHP.
Built on Red Hat Enterprise Linux (RHEL) and Kubernetes, OpenShift Container Platform provides a more secure and scalable multitenant operating system for today’s enterprise-class applications, while delivering integrated application runtimes and libraries. OpenShift Container Platform enables organizations to meet security, privacy, compliance, and governance requirements.
1.1. About this release
OpenShift Container Platform 4.21 (RHBA-2026:1481) is now available. This release uses Kubernetes 1.34 with the CRI-O runtime. New features, changes, and known issues that pertain to OpenShift Container Platform 4.21 are included in this topic.
OpenShift Container Platform 4.21 clusters are available at https://console.redhat.com/openshift. From the Red Hat Hybrid Cloud Console, you can deploy OpenShift Container Platform clusters to either on-premises or cloud environments.
You must use RHCOS machines for the control plane and for the compute machines.
Starting from OpenShift Container Platform 4.14, the Extended Update Support (EUS) phase for even-numbered releases increases the total available lifecycle to 24 months on all supported architectures, including x86_64, 64-bit ARM (aarch64), IBM Power® (ppc64le), and IBM Z® (s390x) architectures. Beyond this, Red Hat also offers a 12-month additional EUS add-on, denoted as Additional EUS Term 2, that extends the total available lifecycle from 24 months to 36 months. The Additional EUS Term 2 is available on all architecture variants of OpenShift Container Platform. For more information about support for all versions, see the Red Hat OpenShift Container Platform Life Cycle Policy.
OpenShift Container Platform is designed for FIPS. When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures.
For more information about the NIST validation program, see Cryptographic Module Validation Program. For the latest NIST status for the individual versions of RHEL cryptographic libraries that have been submitted for validation, see Compliance Activities and Government Standards.
1.2. OpenShift Container Platform layered and dependent component support and compatibility
The scope of support for layered and dependent components of OpenShift Container Platform changes independently of the OpenShift Container Platform version. To determine the current support status and compatibility for an add-on, refer to its release notes. For more information, see the Red Hat OpenShift Container Platform Life Cycle Policy.
1.3. New features and enhancements
This release adds improvements related to the following components and concepts:
1.3.1. API server
- Dynamic updates to storage performance parameters by using VolumeAttributesClass
- Before this release, updating storage performance parameters such as IOPS or throughput often required manual volume reprovisioning, complex snapshot migrations, or application downtime. With this release, OpenShift Container Platform supports the VolumeAttributesClass (VAC) API, enabling you to modify and dynamically scale storage parameters by updating the VAC assigned to a PersistentVolumeClaim (PVC). This support allows on-demand performance tuning without service interruption.
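The following is a minimal sketch of the pattern, assuming the AWS EBS CSI driver and illustrative parameter names and values; consult your CSI driver documentation for the parameters that it supports.

```yaml
# Hypothetical class that describes a higher-performance tier (illustrative values)
apiVersion: storage.k8s.io/v1          # use the VolumeAttributesClass API version served by your cluster
kind: VolumeAttributesClass
metadata:
  name: gold
driverName: ebs.csi.aws.com
parameters:
  iops: "6000"
  throughput: "300"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
  storageClassName: gp3-csi            # illustrative storage class
  # Point the claim at a different VolumeAttributesClass to retune the volume in place
  volumeAttributesClassName: gold
```

Changing volumeAttributesClassName on an existing PVC asks the CSI driver to modify the underlying volume without re-creating it.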
1.3.2. Authentication and authorization
- Using the Azure kubelogin plugin for direct authentication with Microsoft Entra ID
- Red Hat tested authenticating to OpenShift Container Platform by using the Azure kubelogin plugin. This validation covers environments where Microsoft Entra ID is configured as the external OIDC provider for direct authentication. The following login modes for kubelogin were tested:
- Device code grant
- Service principal authentication
- Interactive web browser flow
For more information, see Enabling direct authentication with an external OIDC identity provider.
- ConsoleLink support for email links
- The ConsoleLink Custom Resource Definition supports mailto: links. You can create email links in the OpenShift web console that open the default email client. A minimal sketch of such a link follows this list.
- Impersonating a user with multiple group memberships in the console
- Cluster administrators can impersonate a user with multiple group memberships at the same time in the OpenShift web console. This supports reproducing effective permissions for RBAC troubleshooting.
- Customizing the Code Editor theme and font size in the console
- With this update, users can customize the Code Editor theme and adjust the font size. By default, the Code Editor theme follows the active OpenShift Console theme, but users can now set it independently. These options improve productivity and reduce the need for frequent changes.
- Quick starts for Trusted Profile Analyzer and Trusted Artifact Signer
- With this update, the OpenShift Console adds two quick starts for Trusted Profile Analyzer and Trusted Artifact Signer to the Overview page, making them easier to find and use. This change simplifies the user journey, improves the user experience, and strengthens product integration by highlighting Red Hat’s security ecosystem within the platform.
- Grouping Helm charts under the Ecosystem header
- The OpenShift Console introduced the unified view in 4.19, but the Helm UI was displayed in the Admin view outside the Ecosystem menu. With this update, the OpenShift Console displays Helm under Ecosystem, centralizing software management navigation in one location within the unified view.
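Relating to the ConsoleLink enhancement above, the following is a minimal sketch of an email link added to the console help menu; the link name, text, and address are illustrative assumptions.

```yaml
apiVersion: console.openshift.io/v1
kind: ConsoleLink
metadata:
  name: contact-cluster-support        # illustrative name
spec:
  # A mailto: target opens the user's default email client
  href: "mailto:ocp-support@example.com"
  text: Email cluster support
  location: HelpMenu
```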
1.3.3. Autoscaling
- Network policy support for Autoscaling Operators
The following Operators now have multiple network policies that control network traffic to and from the Operator and operand pods. These policies restrict traffic to only what is explicitly allowed or required.
- Cluster Resource Override Operator
- Cluster Autoscaler
- Vertical Pod Autoscaler
- Horizontal Pod Autoscaler
- Applying VPA recommendations without pod re-creation
- You can now configure a Vertical Pod Autoscaler Operator (VPA) in the InPlaceOrRecreate mode. In this mode, the VPA attempts to apply the recommended updates without re-creating pods. If the VPA is unable to update the pods in place, the VPA falls back to re-creating the pods. For more information, see About the Vertical Pod Autoscaler Operator modes. A minimal sketch follows this list.
- Cluster Autoscaler Operator can now cordon nodes before removing the node
- By default, when the Cluster Autoscaler Operator removes a node, it does not cordon the node when draining the pods from the node. You can configure the Operator to cordon the node before draining and moving the pods. For more information, see About the cluster autoscaler.
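Relating to the InPlaceOrRecreate mode described above, the following is a minimal sketch of a VerticalPodAutoscaler resource that prefers in-place updates; the target workload name is an illustrative assumption.

```yaml
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: frontend-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: frontend                     # illustrative workload
  updatePolicy:
    # Apply recommendations in place when possible; fall back to re-creating pods
    updateMode: InPlaceOrRecreate
```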
1.3.4. Edge computing
- ClusterInstance CR replaces SiteConfig CR for GitOps ZTP deployments
- In earlier releases, the SiteConfig custom resource (CR) was deprecated. This release removes support for the SiteConfig CR. You must now use the ClusterInstance CR to deploy managed clusters with GitOps ZTP. For more information, see Deploying a managed cluster with ClusterInstance and GitOps ZTP.
- Support for removing devices and device classes from a volume group
- You can remove the device paths in the deviceSelector.paths field and the deviceClass object from the LVMCluster resource. For more information, see About removing devices and device classes from a volume group.
1.3.5. etcd
- Manage etcd size by limiting the time-to-live (TTL) duration for Kubernetes events (Technology Preview)
- With this release, you can manage etcd size by setting the eventTTLMinutes property. Having too many stale Kubernetes events in an etcd database can degrade performance. By setting the eventTTLMinutes property, you can specify how long an event can stay in the database before it is purged. For more information, see Managing etcd size by limiting the duration of Kubernetes events.
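As a minimal sketch, assuming the eventTTLMinutes property is exposed on the KubeAPIServer operator resource, the following sets event retention to 15 minutes:

```yaml
apiVersion: operator.openshift.io/v1
kind: KubeAPIServer
metadata:
  name: cluster
spec:
  # Purge Kubernetes events from etcd after 15 minutes (illustrative value)
  eventTTLMinutes: 15
```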
1.3.6. Extensions (OLM v1)
- Cluster extension support for webhooks in bundles
- With this update, OLM v1 supports Operators that use webhooks for validation, mutation, or conversion. For more information, see Webhook support.
- Support for SingleNamespace and OwnNamespace install modes by using the configuration API (Technology Preview)
- If an Operator supports the SingleNamespace or OwnNamespace install modes, you can configure the Operator to watch a specified namespace. For more information, see Extension configuration.
- OLM v1 software catalog in the web console (Technology Preview)
- With this update, you can preview the OLM v1 software catalog in the web console. Select Ecosystem → Software Catalog → Operators to preview this feature. To see the OLM (Classic) software catalog, click the Enable OLM v1 (Tech Preview) toggle.
1.3.7. IBM Power
- The IBM Power® release on OpenShift Container Platform 4.21 adds improvements and new capabilities to OpenShift Container Platform components
This release introduces support for the following features on IBM Power:
- Enable Installer-Provisioned Infrastructure (IPI) support for PowerVC (Technology Preview)
- Enable Spyre Accelerator on IBM Power®
- The IBM Power® release for OpenShift Container Platform 4.21 adds support for the following operators:
- CIFS/SMB CSI Driver Operator
- Kernel Module Management Operator (KMMO)
- Red Hat build of Kueue
When using kdump on IBM Power®, the following limitations apply:
- Firmware-assisted dump (fadump) is not supported.
- Persistent memory dump is not supported.
1.3.8. IBM Z and IBM LinuxONE
- The IBM Z® and IBM® LinuxONE release on OpenShift Container Platform 4.21 adds improvements and new capabilities to OpenShift Container Platform components
This release introduces support for the following features on IBM Z® and IBM® LinuxONE:
- Enable Spyre Accelerator on IBM Z®
- The IBM Z® release for OpenShift Container Platform 4.21 adds support for the following operators:
- Kernel Module Management Operator (KMMO)
- Red Hat build of Kueue
1.3.9. Installation and update
- Restricting service account impersonation to the compute nodes service account
When you install a cluster on Google Cloud and configure it to use Google Cloud Workload Identity, you can now restrict the Google Cloud iam.serviceAccounts.actAs permission that the Cloud Credential Operator utility grants to the Machine API controller service account at the project level so that it applies only to the compute nodes service account.
For more information, see Restricting service account impersonation to the compute nodes service account.
- Configuring image mode for OpenShift during installation is now supported
- You can now apply a custom layered image to your nodes during OpenShift Container Platform installation. For more information, see Applying a custom layered image during OpenShift Container Platform installation.
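As a minimal sketch, assuming the custom layered image is applied through a MachineConfig manifest that you add to the installation directory, the configuration might look like the following; the image reference is a placeholder.

```yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 99-worker-custom-layered-image
  labels:
    machineconfiguration.openshift.io/role: worker
spec:
  # Placeholder image reference; replace with your custom layered image
  osImageURL: registry.example.com/ocp-custom/rhcos-layered:4.21
```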
- Installing a cluster on Google Cloud with a user-provisioned DNS is generally available
You can enable a user-provisioned domain name server (DNS) instead of the default cluster-provisioned DNS solution. For example, your organization’s security policies might not allow the use of public DNS services such as Google Cloud DNS. You can manage your DNS only for the IP addresses of the API and Ingress servers. If you use this feature, you must provide your own DNS solution that includes records for api.<cluster_name>.<base_domain>. and *.apps.<cluster_name>.<base_domain>..
Installing a cluster on Google Cloud with a user-provisioned DNS was introduced in OpenShift Container Platform 4.19 with Technology Preview status. In OpenShift Container Platform 4.21, it is now generally available.
For more information, see Enabling a user-managed DNS and Provisioning your own DNS records.
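As a minimal sketch, assuming the feature is enabled through the userProvisionedDNS field in install-config.yaml, a Google Cloud configuration might include the following; the project, domain, and cluster names are illustrative.

```yaml
apiVersion: v1
baseDomain: example.com                # illustrative base domain
metadata:
  name: my-cluster                     # illustrative cluster name
platform:
  gcp:
    projectID: my-project              # illustrative project ID
    region: us-central1
    # Provide your own DNS records for api.<cluster_name>.<base_domain>.
    # and *.apps.<cluster_name>.<base_domain>.
    userProvisionedDNS: Enabled
```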
- Installing a cluster on Microsoft Azure uses Marketplace images by default
- As of this update, the OpenShift Container Platform installation program uses Marketplace images by default when installing a cluster on Azure. This speeds up the installation by removing the need to upload a virtual hard disk to Azure and create an image during installation. This feature is not supported on Azure Stack Hub, or for Azure installations that use Confidential VMs.
- The ccoctl utility supports preserving custom Microsoft Azure role assignments
- The Cloud Credential Operator utility (ccoctl) can now preserve custom role assignments by using the --preserve-existing-roles flag. Previously, the tool removed role assignments that were not defined in the CredentialsRequest, including those manually added by administrators.
- Managing your own firewall rules when installing a cluster on Google Cloud into an existing VPC
As of this update, you can manage your own firewall rules when installing a cluster on Google Cloud into an existing VPC by enabling the firewallRulesManagement parameter in the install-config.yaml file. You can limit the permissions that you grant to the installation program by managing your own firewall rules.
For more information, see Managing your own firewall rules.
- The ccoctl utility supports Amazon Web Services permissions boundaries
- The Cloud Credential Operator utility (ccoctl) now supports attaching an AWS permissions boundary to the IAM roles that it creates. You can use this feature to meet organizational security requirements that restrict the maximum permissions of created roles.
- Throughput customization for Amazon Web Services gp3 drives
With this update, you can now customize the maximum throughput for gp3 rootVolume drives when installing a cluster on Amazon Web Services. This customization is set by modifying the compute.platform.aws.rootVolume.throughput or controlPlane.platform.aws.rootVolume.throughput parameters in the install-config.yaml file.
For more information, see Optional AWS configuration parameters.
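For example, a minimal sketch of the relevant install-config.yaml stanzas, with illustrative sizing values:

```yaml
controlPlane:
  name: master
  platform:
    aws:
      rootVolume:
        type: gp3
        size: 200                      # GiB, illustrative
        throughput: 500                # MiB/s, illustrative
compute:
- name: worker
  platform:
    aws:
      rootVolume:
        type: gp3
        size: 120                      # GiB, illustrative
        throughput: 250                # MiB/s, illustrative
```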
- Support for VMware vSphere Foundation 9 and VMware Cloud Foundation 9
You can now install OpenShift Container Platform on VMware vSphere Foundation (VVF) 9 and VMware Cloud Foundation (VCF) 9.
Note: The following additional VCF and VVF components are outside the scope of Red Hat support:
- Management: VCF Operations, VCF Automation, VCF Fleet Management, and VCF Identity Broker.
- Networking: VMware NSX Container Plugin (NCP).
- Migration: VMware HCX.
- Support for installing OpenShift Container Platform on Oracle Database Appliance (ODA)
With this update, you can install a cluster on Oracle Database Appliance by using the Assisted Installer.
For more information, see Installing a cluster on Oracle Database Appliance by using the Assisted Installer.
- Installing a cluster on AWS with a user-provisioned DNS (Technology Preview)
You can enable a user-provisioned domain name server (DNS) instead of the default cluster-provisioned DNS solution. For example, your organization’s security policies might not allow the use of public DNS services such as Amazon Web Services (AWS) DNS. As a result, you can manage the API and Ingress DNS records in your own system rather than adding the records to the DNS of the cloud. If you use this feature, you must provide your own DNS solution that includes records for api.<cluster_name>.<base_domain>. and *.apps.<cluster_name>.<base_domain>.. Enabling a user-provisioned DNS is available as a Technology Preview feature.
For more information, see Enabling a user-managed DNS and Provisioning your own DNS records.
- Installing a cluster on Microsoft Azure using NAT Gateways
With this update, you can install a cluster on Azure by using NAT Gateways as your outbound routing strategy. NAT Gateways can minimize the risk of SNAT port exhaustion that can occur with other outbound routing strategies. You can configure NAT Gateways by using the platform.azure.outboundType parameter in the install-config.yaml file.
For more information, see Additional Azure configuration parameters.
- Installing a cluster using Google Cloud private and restricted API endpoints
With this release, you can use Google Cloud Private Service Connect (PSC) endpoints when installing your OpenShift Container Platform cluster so that your installation meets your organization’s strict regulatory policies.
For more information, see Optional configuration parameters.
- Dell iDRAC10 supported for bare metal installation using Redfish virtual media
- Dell iDRAC10 versions 1.20.25.00, 1.20.60.50, and 1.20.70.50 have been tested and verified to work for installer-provisioned OpenShift Container Platform clusters deployed by using Redfish virtual media. iDRAC10 has not been tested with installations that use a provisioning network. For more information, see Firmware requirements for installing with virtual media.
- Providing a local or self-signed CA certificate for Baseboard Management Controllers (BMCs) when installing a cluster on bare metal
With this update, you can provide your own local or self-signed CA certificate to secure communication with BMCs when installing a cluster on bare metal. You can configure this certificate by using the platform.baremetal.bmcCACert parameter in the install-config.yaml file. If you do not use a trusted CA certificate, you can secure BMC communication by providing your own CA certificate. You can also configure a local or self-signed CA certificate after installation, whether the cluster was installed with a different BMC CA certificate or with no BMC CA certificate.
For more information, see Additional installation configuration parameters and Configuring a local or self-signed Baseboard Management Controller CA certificate.
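As a minimal sketch, the bmcCACert parameter takes a PEM-encoded certificate in install-config.yaml; the certificate body below is a placeholder.

```yaml
platform:
  baremetal:
    # PEM-encoded CA certificate used to verify BMC endpoints (placeholder contents)
    bmcCACert: |
      -----BEGIN CERTIFICATE-----
      <your_local_or_self_signed_ca_certificate>
      -----END CERTIFICATE-----
```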
- Running firmware upgrades for hosts in deployed bare metal clusters (Generally Available)
For hosts in deployed bare metal clusters, you can update firmware attributes and the firmware image. As a result, you can run firmware upgrades and update BIOS settings for hosts that are already provisioned without fully deprovisioning them. Performing a live update to the HostFirmwareComponents, HostFirmwareSettings, or HostUpdatePolicy resource can be a destructive and destabilizing action. Perform these updates only after careful consideration.
This feature was introduced in OpenShift Container Platform 4.18 with Technology Preview status. This feature is now generally available in OpenShift Container Platform 4.21.
For more information, see Performing a live update to the HostFirmwareSettings resource, Performing a live update to the HostFirmwareComponents resource, and Setting the HostUpdatePolicy resource.
- Testing for Amazon Web Services m7 instance type
- As of OpenShift Container Platform 4.21, m7 instance types have been tested for installations on Amazon Web Services. For more information about tested instance types, see Tested instance types for AWS.
- Installing a cluster on Microsoft Azure with a user-provisioned DNS (Technology Preview)
You can enable a user-provisioned domain name server (DNS) instead of the default cluster-provisioned DNS solution. For example, your organization’s security policies might not allow the use of public DNS services such as Microsoft Azure DNS. You can manage your DNS only for the IP addresses of the API and Ingress servers. If you use this feature, you must provide your own DNS solution that includes records for api.<cluster_name>.<base_domain>. and *.apps.<cluster_name>.<base_domain>.. Enabling a user-provisioned DNS is available as a Technology Preview feature.
For more information, see Enabling a user-managed DNS and Provisioning your own DNS records.
1.3.10. Machine Config Operator
- Boot image management for Azure and vSphere clusters promoted to GA
- Updating boot images has been promoted to GA for Microsoft Azure and VMware vSphere clusters. For more information, see Boot image management.
- Configuring image mode for OpenShift during installation is now supported
- You can now apply a custom layered image to your nodes during OpenShift Container Platform installation. For more information, see Applying a custom layered image during OpenShift Container Platform installation.
- Image mode for OpenShift status reporting improvements
- The output of the oc describe machineconfignodes <mcp_name> command now contains an ImageBuildDegraded error that indicates whether an image mode for OpenShift build has failed. For more information, see About node status during updates.
- Image mode for OpenShift status reporting improvements (Technology Preview)
The oc describe machineconfigpool <mcp_name> output, as a Technology Preview feature, now includes the following fields that report the status of machine config updates when image mode for OpenShift is enabled:
- Spec.ConfigImage.DesiredImage. This is the desired image for that node.
- Status.ConfigImage.CurrentImage. This is the current image on that node.
- Status.Conditions.ImagePulledFromRegistry. This reports whether an image was pulled correctly during an image mode update.
For more information, see About node status during updates.
- Boot image management for control plane nodes is now supported (Technology Preview)
- Updating boot images for control plane nodes is now supported as a Technology Preview feature for VMware vSphere clusters. This feature allows you to configure your cluster to update the node boot image whenever you update your cluster. Previously, updating boot images was supported only for worker nodes. For more information, see Boot image management.
- Overriding storage or partition setup (Technology Preview)
- You can now use a MachineConfig object to change the installed disk partition schema, file systems, and RAID configurations for new nodes. Previously, for security reasons, you were blocked from changing these configurations from what was established during the cluster installation. For more information, see "Overriding storage and partition setup".
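As a minimal sketch, assuming the override is expressed through the standard Ignition storage schema embedded in a MachineConfig, the following adds a dedicated partition for container storage on a hypothetical secondary disk; a systemd mount unit, omitted here, is normally also required.

```yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 98-worker-containers-partition
  labels:
    machineconfiguration.openshift.io/role: worker
spec:
  config:
    ignition:
      version: 3.4.0
    storage:
      disks:
      - device: /dev/nvme1n1           # hypothetical secondary disk
        partitions:
        - label: containers
          sizeMiB: 0                   # 0 means use the remaining space
      filesystems:
      - device: /dev/disk/by-partlabel/containers
        format: xfs
        wipeFilesystem: true
```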
1.3.11. Machine management
- Creating Google Cloud Spot VMs by using compute machine sets
With this release, OpenShift Container Platform supports deploying Machine API compute machines on Spot VMs in Google Cloud clusters. Google Cloud recommends using Spot VMs over their predecessor, preemptible VMs, because they include new features that preemptible VMs do not support.
For more information, see Machine sets that deploy machines as Spot VMs.
- Additional control plane machine set failure domain options for Azure
This release includes additional configuration options for control plane machine set failure domains on Microsoft Azure.
For more information, see Sample Azure failure domain configuration.
- Configuration of throughput for Amazon Web Services gp3 volumes on EBS devices
This release includes support for configuring the maximum throughput for gp3 drives on EBS devices for clusters installed on Amazon Web Services (AWS).
For more information, see Configuring storage throughput for gp3 drives (Machine API) and Configuring storage throughput for gp3 drives (Cluster API).
In OpenShift Container Platform version 4.21.1 and later, this feature also works with control plane machine sets.
- Bare-metal nodes on VMware vSphere clusters (Technology Preview)
- You can now add bare-metal compute machines to an existing OpenShift Container Platform cluster on vSphere. This capability enables you to migrate workloads to physical hardware without reinstalling the cluster. For instructions on adding these machines, see Adding bare-metal compute machines to a vSphere cluster.
1.3.12. Monitoring
The monitoring stack documentation is now available as a separate documentation set. The 4.21 monitoring release notes are available at Release notes for OpenShift monitoring.
1.3.13. Networking
- MetalLB Operator status reporting
You can now use enhanced MetalLB Operator reporting features to view real-time operational data for IP address allocation and Border Gateway Protocol (BGP) connectivity. Previously, viewing this information required manual log inspection across multiple controllers. With this release, you can monitor your network health and resolve connectivity issues directly through the following custom resources:
- IPAddressPool: Monitor cluster-wide IP address allocation through the status field to track usage and prevent address exhaustion.
- ServiceBGPStatus: Verify which service IP addresses are announced to specific BGP peers to ensure correct route advertisements.
- BGPSessionStatus: Check the real-time state of BGP and Bidirectional Forwarding Detection sessions to quickly identify connectivity drops.
For more information, see Monitoring MetalLB configuration status.
- Applying unassisted holdover for boundary clocks and time synchronous clocks
OpenShift Container Platform 4.20 introduced unassisted holdover for boundary clocks and time synchronous clocks as a Technology Preview feature. This feature is now Generally Available (GA).
For more information, see Applying unassisted holdover for boundary clocks and time slave clocks.
- SR-IOV Operator supports ARM architecture
- The Single Root I/O Virtualization (SR-IOV) Operator can now communicate with ARM hardware. You can now complete tasks such as configuring network cards that are already plugged into an ARM server and using these cards in your applications. For instructions on how to search for ARM hardware that the SR-IOV Operator supports, see About Single Root I/O Virtualization (SR-IOV) hardware networks.
- Support for Red Hat OpenShift Service Mesh version 3.2
- OpenShift Container Platform 4.21 updates Service Mesh to version 3.2. This version update incorporates essential CVE fixes and ensures that your OpenShift Container Platform instances receive the latest fixes, features, and enhancements. See the Service Mesh 3.2 release notes for more information.
- PTP Operator introduces GNSS-to-NTP failover for high-precision timing
With this release, the PTP Operator introduces an active GNSS-to-NTP failover configuration to ensure time synchronization continuity in environments requiring extremely high time accuracy.
When the primary Global Navigation Satellite System (GNSS) signal is lost or compromised, for example because of satellite jamming, the system automatically fails over to Network Time Protocol (NTP) to maintain time accuracy. When the GNSS signal is restored, the system automatically recovers back to using GNSS as the primary time source.
This feature is particularly important in telco environments that require high-precision time synchronization with built-in redundancy. To enable GNSS-to-NTP failover, you configure the PtpConfig resource with the ntpfailover plugin enabled and configure both chronyd and ts2phc settings.
For more information, see Configuring GNSS failover to NTP for time synchronization continuity.
- Network policies for additional namespaces
- With this release, OpenShift Container Platform continues to deploy Kubernetes network policies to additional system namespaces to control ingress and egress traffic. It is anticipated that future releases might include network policies for additional system namespaces and Red Hat Operators.
- Ingress network flow analysis with the commatrix plugin
- With this release, you can use the commatrix plugin to generate ingress network flow data from your cluster. You can also use the plugin to identify any differences between open ports on the host and expected ingress flows for your environment.
For more information, see Ingress network flow analysis with the commatrix plugin.
- Configure the dnsRecordsType parameter (Technology Preview)
- During cluster installation, you can specify the dnsRecordsType parameter in the install-config.yaml file to set whether the internal DNS service or an external source provides the necessary records for the api, api-int, and ingress DNS records. For more information about DNS requirements, see User-provisioned DNS requirements.
1.3.14. Nodes
- Allocating specific GPUs to pods (DRA) is now generally available
- Attribute-Based GPU Allocation, which allows pods to request GPUs based on specific device attributes by using a Dynamic Resource Allocation (DRA) driver, is now generally available. For more information, see Allocating GPUs to Pods.
- The default openshift cluster image policy is now generally available
- The default openshift cluster image policy is now generally available and active by default. For more information, see Manage secure signatures with sigstore.
If your OpenShift Container Platform 4.20 or earlier cluster has a cluster image policy named openshift, the upgrade to OpenShift Container Platform 4.21 marks the cluster as not updatable (Upgradeable=False) because of this default openshift cluster image policy. You must remove your openshift cluster image policy to clear the Upgradeable=False condition and proceed with the update. You can optionally create your own cluster image policy with a different name before removing your openshift cluster image policy.
- Support for sigstore BYOPKI is now generally available
- Support for using a certificate from your own public key infrastructure as a Sigstore root of trust is now generally available. For more information, see Manage secure signatures with sigstore.
- Automatically calculate and apply CPU and memory resources for system components
- OpenShift Container Platform now automatically calculates and reserves a portion of the CPU and memory resources for use by the underlying node and system components. Previously, you needed to enable the feature by creating a KubeletConfig custom resource (CR) with the autoSizingReserved: true parameter. For clusters updated to OpenShift Container Platform 4.21, you can enable the feature by deleting the 50-worker-auto-sizing-disabled machine config. After you delete the machine config, the nodes reboot with the new resource settings. If you manually configured system-reserved CPU or memory resources, these settings remain upon update and do not change. For more information on this new feature, see Automatically allocating resources for nodes.
- Linux PSI monitoring can now be enabled
- You can now enable Linux Pressure Stall Information (PSI) monitoring, which makes PSI metrics for CPU, memory, and I/O available for your cluster, by using a MachineConfig object. For more information, see Enabling Pressure Stall Information (PSI) monitoring.
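As a minimal sketch, assuming PSI is switched on through the psi=1 kernel argument, a MachineConfig for the worker pool might look like this:

```yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 99-worker-enable-psi
  labels:
    machineconfiguration.openshift.io/role: worker
spec:
  # Enable Pressure Stall Information accounting in the kernel (assumed mechanism)
  kernelArguments:
    - psi=1
```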
1.3.15. OpenShift CLI (oc)
- Signature mirroring enabled by default for oc-mirror v2
- With this update, the oc-mirror v2 plugin mirrors image signatures by default. This enhancement ensures that image integrity is automatically preserved during the mirroring process without requiring additional configuration. If your environment does not require signature validation, you can manually disable this feature by using the --remove-signatures command-line flag. For more information, see Disabling signature mirroring for oc-mirror plugin v2.
1.3.16. Operator development
- Supported Operator base images
With this release, the following base images for Operator projects are updated for compatibility with OpenShift Container Platform 4.21. The runtime functionality and configuration APIs for these base images are supported for bug fixes and for addressing CVEs.
- The base image for Ansible-based Operator projects
- The base image for Helm-based Operator projects
For more information, see Updating the base image for existing Ansible- or Helm-based Operator projects for OpenShift Container Platform 4.19 and later (Red Hat Knowledgebase).
1.3.17. Postinstallation configuration
- Enabling hardware metrics monitoring on bare-metal clusters (Technology Preview)
With this update, you can enable your cluster to collect hardware metrics from the Redfish-compatible baseboard management controllers of your bare-metal nodes. Metrics include temperature, power consumption, fan status, and drive health. You enable this Technology Preview feature by enabling the Ironic Prometheus Exporter in your cluster as a postinstallation task.
For more information, see Hardware metrics in the Monitoring stack.
1.3.18. Scalability and performance
- Pod-level IRQ affinity introduces housekeeping mode
For latency-sensitive workloads, you can now configure the irq-load-balancing.crio.io pod annotation to use housekeeping mode. This mode enables a subset of pinned CPUs to handle system interrupts while isolating the remaining pinned CPUs for latency-sensitive workloads. This reduces the overall CPU footprint by eliminating the need for dedicated housekeeping CPUs for IRQ handling. When you configure housekeeping mode, the first pinned CPU and its thread siblings handle interrupts for the system.
For more information, see Configuring interrupt processing for individual pods.
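As a minimal sketch, assuming housekeeping mode is selected by setting the annotation value to housekeeping on a pod with pinned (guaranteed) CPUs, a pod specification might include the following; the runtime class and image are illustrative.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: latency-sensitive-app
  annotations:
    # Route device interrupts to the first pinned CPU and its thread siblings (assumed value)
    irq-load-balancing.crio.io: "housekeeping"
spec:
  runtimeClassName: performance-example   # illustrative PerformanceProfile runtime class
  containers:
  - name: app
    image: registry.example.com/app:latest # illustrative image
    resources:
      requests:
        cpu: "4"
        memory: 2Gi
      limits:
        cpu: "4"
        memory: 2Gi
```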
1.3.19. Storage
- Volume Attributes Classes is generally available
Volume Attributes Classes provide a way for administrators to describe "classes" of storage they offer. Different classes might correspond to different quality-of-service levels. Volume Attributes Classes was introduced in OpenShift Container Platform 4.19, and is now generally available in 4.21.
Volume Attributes Classes is available only with AWS Elastic Block Storage (EBS) and Google Cloud Platform (GCP) persistent disk (PD) Container Storage Interface (CSI).
You can apply a Volume Attributes Class to a persistent volume claim (PVC). If a new Volume Attributes Class becomes available in the cluster, you can update the PVC with the new Volume Attributes Class if needed.
Volume Attributes Classes have parameters that describe volumes belonging to them. If a parameter is omitted, the default is used at volume provisioning. If a user applies the PVC with a different Volume Attributes Class with omitted parameters, the default value of the parameters might be used depending on the CSI driver implementation. For more information, see the related CSI driver documentation.
For more information, see Volume Attributes Classes.
- Azure File CSI supporting snapshots feature is generally available
A snapshot represents the state of the storage volume in a cluster at a particular point in time. Volume snapshots can be used to provision a new volume.
OpenShift Container Platform 4.17 introduced volume snapshot support for the Microsoft Azure File Container Storage Interface (CSI) Driver Operator as a Technology Preview feature. In 4.21, this feature is generally available. Also, Azure File snapshots now support Network File System (NFS) in addition to Server Message Block (SMB).
For more information, see CSI drivers supported by OpenShift Container Platform and CSI volume snapshots.
- Azure File CSI supporting volume cloning feature is generally available
Volume cloning duplicates an existing persistent volume (PV) to help protect against data loss in OpenShift Container Platform. You can also use a volume clone just as you would use any standard volume.
OpenShift Container Platform 4.16 introduced volume cloning for the Microsoft Azure File Container Storage Interface (CSI) Driver Operator as a Technology Preview feature. In 4.21, this feature is generally available. Also, Azure File cloning now supports Network File System (NFS) in addition to Server Message Block (SMB).
For more information, see Azure File CSI Driver Operator and CSI volume cloning.
- oVirt CSI Driver Operator is removed from OpenShift Container Platform 4.21
- Red Hat Virtualization (RHV) as a host platform for OpenShift Container Platform was deprecated in version 4.14 and is no longer supported. In OpenShift Container Platform 4.21, the oVirt CSI Driver Operator is removed.
- CIFS/SMB CSI Driver Operator supports IBM Power
In OpenShift Container Platform 4.21, the CIFS/SMB CSI Driver Operator supports IBM Power (ppc64le).
For more information, see CIFS/SMB CSI Driver Operator.
- Introduction of new field to track the status of volume resize attempts
OpenShift Container Platform 4.19 introduced resizing recovery that stops the expansion controller from indefinitely attempting to expand a volume to an unsupported size request. This feature allows you to recover and provide another smaller resize value for the persistent volume claim (PVC). The new value must be larger than the original volume size.
OpenShift Container Platform 4.21 introduces the pvc.Status.AllocatedResourceStatus field, which shows the status of volume resize attempts. If a user changes the size of their PVCs, this new field allows resource quota to be tracked accurately.
For more information about resizing volumes, see Expanding persistent volumes.
For more information about recovering when resizing volumes, see Recovering from failure when expanding volumes.
- Mutable CSI node allocatable property (Technology Preview)
This feature allows for dynamically updating the maximum number of storage volumes a node can handle. Without this feature, volume limits are essentially immutable when a node first joins the cluster. If the environment changes—for example, if you attach a new network interface (ENI) that shares a hardware "slot" with your storage—OpenShift Container Platform does not recognize it has fewer slots available for disks, leading to pods becoming stuck.
This feature is only supported on AWS Elastic Block Storage (EBS).
Mutable CSI node allocatable property is supported in OpenShift Container Platform 4.21 as a Technology Preview feature. To enable this feature, you must enable the required feature gates.
For more information about enabling Technology Preview features, see Feature Gates.
- Reducing permissions while using the GCP PD CSI Driver Operator is generally available
The default installation allows the Google Cloud Platform (GCP) persistent disk (PD) Container Storage Interface (CSI) Driver to impersonate any service account in the Google Cloud project. You can reduce the scope of permissions granted to the GCP PD CSI Driver service account in your Google Cloud project to only the required node service accounts.
For more information about this feature, see Reducing permissions while using the GCP PD CSI Driver Operator.
- Volume group snapshots API updated (Technology Preview)
The API for the Container Storage Interface (CSI) volume group snapshot feature is updated from v1beta1 to v1beta2. This feature is supported at the Technology Preview level.
For more information, see CSI volume group snapshots.
- Updated release of the Secrets Store CSI Driver Operator
- The Secrets Store CSI Driver Operator version v4.21 is now based on the upstream version v1.5.4 release of secrets-store-csi-driver.
1.3.20. Web console
- OLM v1 software catalog in the web console (Technology Preview)
- With this update, you can preview the Operator Lifecycle Manager (OLM) v1 software catalog in the web console. Select Ecosystem → Software Catalog → Operators to preview this feature. To see the OLM (Classic) software catalog, click the Enable OLM v1 (Tech Preview) toggle.
1.4. Notable technical changes
This section includes several technical changes for OpenShift Container Platform 4.21.
- VMware vSphere 7 and VMware Cloud Foundation 4 end of general support
- Broadcom has ended general support for VMware vSphere 7 and VMware Cloud Foundation (VCF) 4. If your existing OpenShift Container Platform cluster is running on either of these platforms, you must plan to migrate or upgrade your VMware infrastructure to a supported version. OpenShift Container Platform supports installation on vSphere 8 Update 1 or later, or VCF 5 or later.
- Bare Metal Operator has read-only root filesystem by default
- As of this update, the Bare Metal Operator has the readOnlyRootFilesystem security context setting enabled to meet common hardening recommendations.
- With this update, MachineSets advertise the architecture of their nodes, which allows the autoscaler to intelligently scale MachineSets in multi-architecture environments.
- Dynamic Resource Allocation plugin is disabled in the NUMA-aware scheduler
- With this update, the Dynamic Resource Allocation (DRA) plugin is disabled so that the secondary scheduler, managed by the numaresources operator, does not need to handle DRA resources. The Kubernetes project has recently enabled DRA by default, setting specific expectations on the scheduler-plugins project to support watching DRA resources. As a result, the OpenShift topology-aware scheduler (TAS) disables the DRA plugin in its profile, and therefore ignores DRA-related custom resources (CRs) when making scheduling decisions.
1.5. Deprecated and removed features
1.5.1. Images deprecated and removed features
| Feature | 4.19 | 4.20 | 4.21 |
|---|---|---|---|
| Cluster Samples Operator | Deprecated | Deprecated | Deprecated |
1.5.2. Installation deprecated and removed features
| Feature | 4.19 | 4.20 | 4.21 |
|---|---|---|---|
| | Deprecated | Deprecated | Deprecated |
| CoreDNS wildcard queries for the cluster.local domain | Deprecated | Deprecated | Deprecated |
| | Deprecated | Deprecated | Deprecated |
| | Deprecated | Deprecated | Deprecated |
| | Deprecated | Deprecated | Deprecated |
| Package-based RHEL compute machines | Removed | Removed | Removed |
| | Deprecated | Deprecated | Deprecated |
| Installing a cluster on AWS with compute nodes in AWS Outposts | Deprecated | Deprecated | Deprecated |
| Deploying managed clusters using the SiteConfig CR | Deprecated | Deprecated | Removed |
| Installing a cluster using Fujitsu iRMC drivers on bare-metal machines | General Availability | General Availability | Deprecated |
1.5.3. Machine Management deprecated and removed features
| Feature | 4.19 | 4.20 | 4.21 |
|---|---|---|---|
| Confidential Computing with AMD Secure Encrypted Virtualization for Google Cloud | General Availability | Deprecated | Deprecated |
| Managing bare-metal machines using Fujitsu iRMC drivers | General Availability | General Availability | Deprecated |
1.5.4. Networking deprecated and removed features
| Feature | 4.19 | 4.20 | 4.21 |
|---|---|---|---|
| iptables | Deprecated | Deprecated | Deprecated |
1.5.5. Node deprecated and removed features
| Feature | 4.19 | 4.20 | 4.21 |
|---|---|---|---|
| | Deprecated | Deprecated | Deprecated |
| Kubernetes topology label failure-domain.beta.kubernetes.io/zone | Deprecated | Deprecated | Deprecated |
| Kubernetes topology label failure-domain.beta.kubernetes.io/region | Deprecated | Deprecated | Deprecated |
| cgroup v1 | Removed | Removed | Removed |
1.5.6. OpenShift CLI (oc) deprecated and removed features
| Feature | 4.19 | 4.20 | 4.21 |
|---|---|---|---|
| oc-mirror plugin v1 | Deprecated | Deprecated | Deprecated |
| Docker v2 registries | General Availability | Deprecated | Deprecated |
1.5.7. Operator lifecycle and development deprecated and removed features
| Feature | 4.19 | 4.20 | 4.21 |
|---|---|---|---|
| Operator SDK | Removed | Removed | Removed |
| Scaffolding tools for Ansible-based Operator projects | Removed | Removed | Removed |
| Scaffolding tools for Helm-based Operator projects | Removed | Removed | Removed |
| Scaffolding tools for Go-based Operator projects | Removed | Removed | Removed |
| Scaffolding tools for Hybrid Helm-based Operator projects | Removed | Removed | Removed |
| Scaffolding tools for Java-based Operator projects | Removed | Removed | Removed |
| SQLite database format for Operator catalogs | Deprecated | Deprecated | Deprecated |
1.5.8. Storage deprecated and removed features
| Feature | 4.19 | 4.20 | 4.21 |
|---|---|---|---|
| Shared Resources CSI Driver Operator | Removed | Removed | Removed |
1.5.9. Web console deprecated and removed features
| Feature | 4.19 | 4.20 | 4.21 |
|---|---|---|---|
| | Deprecated | Deprecated | Deprecated |
| Patternfly 4 | Removed | Removed | Removed |
1.5.10. Workloads deprecated and removed features
| Feature | 4.19 | 4.20 | 4.21 |
|---|---|---|---|
| DeploymentConfig objects | Deprecated | Deprecated | Deprecated |
1.6. Deprecated features
- Deprecation of Fujitsu Integrated Remote Management Controller (iRMC) driver for bare-metal machines
As of OpenShift Container Platform 4.21, support for the Fujitsu iRMC baseboard management controller (BMC) driver has been deprecated and will be removed in a future release. If a BareMetalHost resource contains a BMC address with irmc:// as its URI scheme, the resource must be updated to use another BMC scheme, such as redfish:// or ipmi://. After support for this driver is removed, hosts that use irmc:// URI schemes will become unmanageable.
For information about updating the BareMetalHost resource, see Editing a BareMetalHost resource.
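As a minimal sketch, a BareMetalHost that previously used an irmc:// address could be updated to a supported scheme such as redfish://; the address path, secret name, and MAC address are illustrative assumptions.

```yaml
apiVersion: metal3.io/v1alpha1
kind: BareMetalHost
metadata:
  name: worker-0
  namespace: openshift-machine-api
spec:
  bmc:
    # Previously irmc://<bmc_ip>; replace with a supported scheme such as redfish:// or ipmi://
    address: redfish://192.0.2.10/redfish/v1/Systems/1
    credentialsName: worker-0-bmc-secret   # illustrative secret with BMC credentials
    disableCertificateVerification: true
  bootMACAddress: "52:54:00:aa:bb:cc"      # illustrative MAC address
  online: true
```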
1.7. Removed features
This section includes removed features for OpenShift Container Platform 4.21.
- Swap memory support has been removed
- The ability to configure swap memory is no longer available in OpenShift Container Platform because using swap memory can prevent a node from properly safeguarding itself from failures.
1.8. Fixed issues
The following issues are fixed for this release:
1.8.1. Installer
- Before this update, the vSphere platform configuration lacked a validation check to prevent the simultaneous definition of both a custom virtual machine template and a clusterOSImage parameter. As a consequence, users could provide both parameters in the installation configuration, leading to ambiguity and potential deployment failures. With this release, the vSphere validation logic has been updated to ensure that the template and clusterOSImage parameters are treated as mutually exclusive, returning a specific error message if both fields are populated. (OCPBUGS-63584)
- Before this update, a race condition occurred when multiple reconciliation loops or concurrent processes attempted to add a virtual machine (VM) to a vSphere Host Group simultaneously because the provider lacked a check to see if the VM was already a member. Consequently, the vSphere API could return errors during the cluster reconfiguration task, leading to reconciliation failures and preventing the VM from being correctly associated with its designated zone or host group. With this release, the zonal logic has been updated to verify the VM’s membership within the target host group before initiating a reconfiguration task, ensuring the operation is only performed if the VM is not already present. (OCPBUGS-60765)
1.8.2. Kube Controller Manager
- Before this update, relatedObjects in the ClusterOperator Custom Resource object omitted the ClusterRoleBinding object, resulting in incomplete debugging information. With this release, the related objects for ClusterOperator are expanded to include ClusterRoleBinding. As a result, the kube-controller-manager ClusterRoleBinding is included in the ClusterOperator output. (OCPBUGS-65502)
1.8.3. Kube Scheduler
- Before this update, the ClusterOperator object failed to reference ClusterRoleBinding in the relatedObjects array, which caused the kube-scheduler ClusterRoleBinding to be omitted from the ClusterOperator output. As a consequence, debugging difficulties occurred. With this release, the ClusterRoleBinding for the kube-scheduler Operator is added to the ClusterOperator relatedObjects. (OCPBUGS-65503)
1.8.4. Networking
- Before this update, an incorrect private key containing certificate data caused an HAProxy reload failure in OpenShift Container Platform 4.14. As a consequence, incorrect certificate configuration caused HAProxy router pods to fail reloads, which led to a partial outage. With this release, haproxy now validates certificates. As a result, router reload failures with invalid certificates are prevented. (OCPBUGS-49769)
- To maintain compatibility with Kubernetes 1.34, CoreDNS has been updated to version 1.13.1. This update resolves intermittent DNS resolution issues reported in previous versions and includes upstream performance fixes and security patches.
1.8.5. Clock state metrics degrade correctly after upstream clock loss
- Previously, when the upstream clock connection was lost and the clock entered the unlocked state, the ptp4l and ts2phc clock state metrics did not degrade as expected. This behavior caused inconsistent time synchronization state reporting. This issue has been fixed. When the upstream clock connection is lost, the ptp4l and ts2phc clock state metrics now degrade correctly, providing consistent time synchronization state reporting.
1.8.6. Node Tuning Operator
- Before this update, the Performance Profile Creator (PPC) tool failed to analyze a must-gather archive if the archive contained a custom namespace directory ending with the suffix nodes. With this release, the PPC now correctly excludes the namespaces directory when processing the must-gather data to create a suggested PerformanceProfile. (OCPBUGS-60218)
1.8.7. OpenShift API Server
- Before this update, there was an incorrect error field path for hostUsers when validating a security context constraint. It was shown as .spec.securityContext.hostUsers, but hostUsers is not in the security context. With this release, the error field path is now .spec.hostUsers. (OCPBUGS-65727)
1.9. Technology Preview features status
Some features in this release are currently in Technology Preview. These experimental features are not intended for production use. Note the following scope of support on the Red Hat Customer Portal for these features:
Technology Preview Features Support Scope
In the following tables, features are marked with the following statuses:
- Not Available
- Technology Preview
- General Availability
- Deprecated
- Removed
1.9.1. Authentication and authorization Technology Preview features
| Feature | 4.19 | 4.20 | 4.21 |
|---|---|---|---|
| Pod security admission restricted enforcement | Technology Preview | Technology Preview | Technology Preview |
| Direct authentication with an external OIDC identity provider | Technology Preview | General Availability | General Availability |
1.9.2. Edge computing Technology Preview features
| Feature | 4.19 | 4.20 | 4.21 |
|---|---|---|---|
| Accelerated provisioning of GitOps ZTP | Technology Preview | Technology Preview | Technology Preview |
| Enabling disk encryption with TPM and PCR protection | Technology Preview | Technology Preview | Technology Preview |
| Configuring a local arbiter node | Technology Preview | General Availability | General Availability |
| Configuring a two-node OpenShift Container Platform cluster with fencing | Not Available | Technology Preview | Technology Preview |
1.9.3. Extensions Technology Preview features
| Feature | 4.19 | 4.20 | 4.21 |
|---|---|---|---|
| Operator Lifecycle Manager (OLM) v1 | General Availability | General Availability | General Availability |
| OLM v1 runtime validation of container images using sigstore signatures | Technology Preview | Technology Preview | Technology Preview |
| OLM v1 permissions preflight check for cluster extensions | Technology Preview | Technology Preview | Technology Preview |
| OLM v1 deploying a cluster extension in a specified namespace | Technology Preview | Technology Preview | Technology Preview |
| OLM v1 deploying a cluster extension that uses webhooks | Not Available | Technology Preview | General Availability |
| OLM v1 software catalog | Not Available | Not Available | Technology Preview |
1.9.4. Installation Technology Preview features
| Feature | 4.19 | 4.20 | 4.21 |
|---|---|---|---|
| Adding kernel modules to nodes with kvc | Technology Preview | Technology Preview | Technology Preview |
| Enabling NIC partitioning for SR-IOV devices | General Availability | General Availability | General Availability |
| User-defined labels and tags for Google Cloud | General Availability | General Availability | General Availability |
| Installing a cluster on Alibaba Cloud by using Assisted Installer | Technology Preview | Technology Preview | Technology Preview |
| Installing a cluster on Microsoft Azure with confidential VMs | General Availability | General Availability | General Availability |
| Dedicated disk for etcd on Microsoft Azure | Not Available | Technology Preview | Technology Preview |
| Mount shared entitlements in BuildConfigs in RHEL | Technology Preview | Technology Preview | Technology Preview |
| OpenShift zones support for vSphere host groups | Technology Preview | Technology Preview | Technology Preview |
| Selectable Cluster Inventory | Technology Preview | Technology Preview | Technology Preview |
| Installing a cluster on Google Cloud using the Cluster API implementation | General Availability | General Availability | General Availability |
| Enabling a user-provisioned DNS on Google Cloud | Technology Preview | Technology Preview | General Availability |
| Enabling a user-provisioned DNS on Microsoft Azure | Not Available | Not Available | Technology Preview |
| Enabling a user-provisioned DNS on Amazon Web Services (AWS) | Not Available | Not Available | Technology Preview |
| Installing a cluster using Google Cloud private and restricted API endpoints | Not Available | Not Available | General Availability |
| Installing a cluster on VMware vSphere with multiple network interface controllers | Technology Preview | General Availability | General Availability |
| Using bare metal as a service | Technology Preview | Technology Preview | Technology Preview |
| Running firmware upgrades for hosts in deployed bare metal clusters | Technology Preview | Technology Preview | General Availability |
| Changing the CVO log level | Not Available | Technology Preview | Technology Preview |
1.9.5. Machine Config Operator Technology Preview features
| Feature | 4.19 | 4.20 | 4.21 |
|---|---|---|---|
| Boot image management for Azure and vSphere | Not Available | Technology Preview | General Availability |
| Boot image management for control plane nodes | Not Available | Not Available | Technology Preview |
| Image mode for OpenShift status reporting improvements | Not Available | Not Available | Technology Preview |
| Overriding storage or partition setup | Not Available | Not Available | Technology Preview |
1.9.6. Machine management Technology Preview features
| Feature | 4.19 | 4.20 | 4.21 |
|---|---|---|---|
| Managing machines with the Cluster API for Amazon Web Services | Technology Preview | Technology Preview | Technology Preview |
| Managing machines with the Cluster API for Google Cloud | Technology Preview | Technology Preview | Technology Preview |
| Managing machines with the Cluster API for IBM Power® Virtual Server | Technology Preview | Technology Preview | Technology Preview |
| Managing machines with the Cluster API for Microsoft Azure | Technology Preview | Technology Preview | Technology Preview |
| Managing machines with the Cluster API for RHOSP | Technology Preview | Technology Preview | Technology Preview |
| Managing machines with the Cluster API for VMware vSphere | Technology Preview | Technology Preview | Technology Preview |
| Managing machines with the Cluster API for bare metal | Technology Preview | Technology Preview | Technology Preview |
| Cloud controller manager for IBM Power® Virtual Server | Technology Preview | Technology Preview | Technology Preview |
| Adding multiple subnets to an existing VMware vSphere cluster by using compute machine sets | Technology Preview | Technology Preview | Technology Preview |
| Configuring Trusted Launch for Microsoft Azure virtual machines by using machine sets | General Availability | General Availability | General Availability |
| Configuring Azure confidential virtual machines by using machine sets | General Availability | General Availability | General Availability |
| Bare-metal nodes on VMware vSphere clusters | Not Available | Not Available | Technology Preview |
1.9.7. Multi-Architecture Technology Preview features
| Feature | 4.19 | 4.20 | 4.21 |
|---|---|---|---|
| | Technology Preview | General Availability | General Availability |
| | Technology Preview | General Availability | General Availability |
| | Technology Preview | General Availability | General Availability |
| Support for configuring the image stream import mode behavior | Technology Preview | Technology Preview | Technology Preview |
1.9.8. Networking Technology Preview features
| Feature | 4.19 | 4.20 | 4.21 |
|---|---|---|---|
| eBPF manager Operator | Technology Preview | Technology Preview | Technology Preview |
| Advertise using L2 mode the MetalLB service from a subset of nodes, using a specific pool of IP addresses | Technology Preview | Technology Preview | Technology Preview |
| Updating the interface-specific safe sysctls list | Technology Preview | Technology Preview | Technology Preview |
| Egress service custom resource | Technology Preview | Technology Preview | Technology Preview |
| VRF specification in | Technology Preview | Technology Preview | Technology Preview |
| VRF specification in | General Availability | General Availability | General Availability |
| Host network settings for SR-IOV VFs | General Availability | General Availability | General Availability |
| Integration of MetalLB and FRR-K8s | General Availability | General Availability | General Availability |
| Automatic leap seconds handling for PTP grandmaster clocks | General Availability | General Availability | General Availability |
| PTP events REST API v2 | General Availability | General Availability | General Availability |
| OVN-Kubernetes customized | General Availability | General Availability | General Availability |
| OVN-Kubernetes customized | Technology Preview | Technology Preview | Technology Preview |
| Live migration to OVN-Kubernetes from OpenShift Container Platform SDN | Not Available | Not Available | Not Available |
| User-defined network segmentation | General Availability | General Availability | General Availability |
| Dynamic configuration manager | Technology Preview | Technology Preview | Technology Preview |
| SR-IOV Network Operator support for Intel C741 Emmitsburg Chipset | Technology Preview | Technology Preview | Technology Preview |
| SR-IOV Network Operator support on ARM architecture | General Availability | General Availability | General Availability |
| Gateway API and Istio for Ingress management | General Availability | General Availability | General Availability |
| Dual-port NIC for PTP ordinary clock | Technology Preview | Technology Preview | Technology Preview |
| DPU Operator | Technology Preview | Technology Preview | Technology Preview |
| Fast IPAM for the Whereabouts IPAM CNI plugin | Technology Preview | Technology Preview | Technology Preview |
| Unnumbered BGP peering | Technology Preview | General Availability | General Availability |
| Load balancing across the aggregated bonded interface with xmitHashPolicy | Not Available | Technology Preview | Technology Preview |
| PF Status Relay Operator for high availability with SR-IOV networks | Not Available | Technology Preview | Technology Preview |
| Preconfigured user-defined network end points using MTV | Not Available | Technology Preview | Technology Preview |
| Unassisted holdover for PTP devices | Not Available | Technology Preview | General Availability |
1.9.9. Node Technology Preview features
| Feature | 4.19 | 4.20 | 4.21 |
|---|---|---|---|
| | Technology Preview | Technology Preview | Technology Preview |
| sigstore support | Technology Preview | General Availability | General Availability |
| Default sigstore | Technology Preview | Technology Preview | General Availability |
| Linux user namespace support | Technology Preview | General Availability | General Availability |
| Attribute-Based GPU Allocation | Not Available | Technology Preview | General Availability |
1.9.10. OpenShift CLI (oc) Technology Preview features
| Feature | 4.19 | 4.20 | 4.21 |
|---|---|---|---|
| oc-mirror plugin v2 | General Availability | General Availability | General Availability |
| oc-mirror plugin v2 enclave support | General Availability | General Availability | General Availability |
| oc-mirror plugin v2 delete functionality | General Availability | General Availability | General Availability |
1.9.11. Operator lifecycle and development Technology Preview features
| Feature | 4.19 | 4.20 | 4.21 |
|---|---|---|---|
| Operator Lifecycle Manager (OLM) v1 | General Availability | General Availability | General Availability |
| Scaffolding tools for Hybrid Helm-based Operator projects | Removed | Removed | Removed |
| Scaffolding tools for Java-based Operator projects | Removed | Removed | Removed |
1.9.12. Red Hat OpenStack Platform (RHOSP) Technology Preview features
| Feature | 4.19 | 4.20 | 4.21 |
|---|---|---|---|
| RHOSP integration into the Cluster CAPI Operator | Technology Preview | Technology Preview | Technology Preview |
| Hosted control planes on RHOSP 17.1 | Technology Preview | Technology Preview | Technology Preview |
1.9.13. Scalability and performance Technology Preview features
| Feature | 4.19 | 4.20 | 4.21 |
|---|---|---|---|
| factory-precaching-cli tool | Technology Preview | Technology Preview | Technology Preview |
| Hyperthreading-aware CPU manager policy | Technology Preview | Technology Preview | Technology Preview |
| Mount namespace encapsulation | Technology Preview | Technology Preview | Technology Preview |
| Node Observability Operator | Technology Preview | Technology Preview | Technology Preview |
| Increasing the etcd database size | Technology Preview | Technology Preview | Technology Preview |
| Managing etcd size by setting the | Not available | Not available | Technology Preview |
| Using RHACM | General Availability | General Availability | General Availability |
| Pinned Image Sets | Technology Preview | Technology Preview | Technology Preview |
| Configuring NUMA-aware scheduler replicas and high availability | Not available | Technology Preview | Technology Preview |
1.9.14. Storage Technology Preview features
| Feature | 4.19 | 4.20 | 4.21 |
|---|---|---|---|
| AWS EFS One Zone volume | Not Available | General Availability | General Availability |
| Automatic device discovery and provisioning with Local Storage Operator | Technology Preview | Technology Preview | Technology Preview |
| Azure File CSI cloning support | Technology Preview | Technology Preview | General Availability |
| Azure File CSI snapshot support | Technology Preview | Technology Preview | General Availability |
| Azure File cross-subscription support | General Availability | General Availability | General Availability |
| Azure Disk performance plus | Not Available | General Availability | General Availability |
| Configuring fsGroupChangePolicy per namespace | Not Available | General Availability | General Availability |
| Shared Resources CSI Driver in OpenShift Builds | Technology Preview | Technology Preview | Technology Preview |
| Secrets Store CSI Driver Operator | General Availability | General Availability | General Availability |
| CIFS/SMB CSI Driver Operator | General Availability | General Availability | General Availability |
| VMware vSphere multiple vCenter support | General Availability | General Availability | General Availability |
| Disabling/enabling storage on vSphere | General Availability | General Availability | General Availability |
| Increasing max number of volumes per node for vSphere | Technology Preview | Technology Preview | Technology Preview |
| RWX/RWO SELinux mount option | Developer Preview | Technology Preview | Technology Preview |
| Migrating CNS Volumes Between Datastores | General Availability | General Availability | General Availability |
| CSI volume group snapshots | Technology Preview | Technology Preview | Technology Preview |
| GCP PD supports C3/N4 instance types and hyperdisk-balanced disks | General Availability | General Availability | General Availability |
| OpenStack Manila support for CSI resize | General Availability | General Availability | General Availability |
| Volume Attribute Classes | Technology Preview | Technology Preview | General Availability |
| Volume populators | Technology Preview | General Availability | General Availability |
1.9.15. Web console Technology Preview features
| Feature | 4.19 | 4.20 | 4.21 |
|---|---|---|---|
| Red Hat OpenShift Lightspeed in the OpenShift Container Platform web console | Technology Preview | Technology Preview | Technology Preview |
1.10. Known issues
This section includes several known issues for OpenShift Container Platform 4.21.
- Currently, due to a known issue, the OpenShift Container Platform 4.21 versions of the Cluster Resource Override Operator and the DPU Operator will be available in an upcoming 4.21 maintenance release. (OCPBUGS-74224)
- If you mirrored the OpenShift Container Platform release images to the registry of a disconnected environment by using the oc adm release mirror command, the release image Sigstore signature is not mirrored with the image. This has become an issue in OpenShift Container Platform 4.21 because the openshift cluster image policy is now deployed to the cluster by default. This policy causes CRI-O to automatically verify the Sigstore signature when pulling images into a cluster. (OCPBUGS-70297)
Without the Sigstore signature, after updating to OpenShift Container Platform 4.21 in a disconnected environment, future Cluster Version Operator pods might fail to run. You can avoid this problem by installing the oc-mirror plugin v2 and using the oc mirror command to mirror the OpenShift Container Platform release image. The oc-mirror plugin v2 mirrors both the release image and its Sigstore signature to the mirror registry in your disconnected environment.
If you cannot use the oc-mirror plugin v2, you can use the oc image mirror command to mirror the Sigstore signature into your mirror registry by using a command similar to the following:
$ oc image mirror "quay.io/openshift-release-dev/ocp-release:${RELEASE_DIGEST}.sig" "${LOCAL_REGISTRY}/${LOCAL_RELEASE_IMAGES_REPOSITORY}:${RELEASE_DIGEST}.sig"
where:
RELEASE_DIGEST
Specifies the release image digest with the : character replaced by a - character. For example, sha256:884e1ff5effeaa04467fab9725900e7f0ed1daa89a7734644f14783014cebdee becomes sha256-884e1ff5effeaa04467fab9725900e7f0ed1daa89a7734644f14783014cebdee.sig.
For information on the oc-mirror plugin v2, see Mirroring images for a disconnected installation by using the oc-mirror plugin v2. A minimal worked example of this workaround appears after this list.
- Starting with OpenShift Container Platform 4.21, the default maximum open files soft limit for containers is lower than in previous releases. As a consequence, end users might experience application failures. To work around this issue, increase the container runtime (CRI-O) ulimit configuration by using a method of your choice, such as the ulimit command. Note that if you upgrade your cluster from OpenShift Container Platform 4.20 to 4.21, the existing maximum open files limit is retained. (OCPBUGS-62095)
- Currently, on clusters with SR-IOV network virtual functions configured, a race condition might occur between system services responsible for network device renaming and the TuneD service managed by the Node Tuning Operator. As a consequence, the TuneD profile might become degraded after the node restarts, leading to performance degradation. As a workaround, restart the TuneD pod to restore the profile state. (OCPBUGS-41934)
- Currently, pods that use a guaranteed QoS class and request whole CPUs might not restart automatically after a node reboot or kubelet restart. The issue might occur on nodes configured with a static CPU Manager policy and the full-pcpus-only specification, and when most or all CPUs on the node are already allocated by such workloads. As a workaround, manually delete and re-create the affected pods. (OCPBUGS-43280)
- On systems using specific AMD EPYC processors, some low-level system interrupts, for example AMD-Vi, might contain CPUs in the CPU mask that overlap with CPU-pinned workloads. This behavior is a result of the hardware design. These specific error-reporting interrupts are generally inactive and there is currently no known performance impact. (OCPBUGS-57787)
- While Day 2 firmware updates and BIOS attribute reconfiguration for bare-metal hosts are generally available with this release, the Bare Metal Operator (BMO) does not provide a native mechanism to cancel a firmware update request after it is initiated. If a firmware update or setting change for HostFirmwareComponents or HostFirmwareSettings resources fails, returns an error, or becomes indefinitely stuck, you can try to recover by using the following steps:
  - Remove the changes to the HostFirmwareComponents and HostFirmwareSettings resources.
  - Set the node to online: false to trigger a reboot.
  - If the issue persists, delete the Ironic pod.
A native abort capability for servicing operations might be planned for a future release.
- There is a known issue with the ability to configure the maximum throughput of gp3 storage volumes in an AWS cluster. This feature does not work with control plane machine sets. There is no workaround in 4.21.0; the issue is fixed in 4.21.1. (OCPBUGS-74478)
- When installing a private cluster on Google Cloud behind a proxy with user-provisioned DNS, you might encounter installation errors indicating the bootstrap failed to complete or the cluster initialization failed. In both cases, the installation can succeed, resulting in a healthy cluster. As a workaround, install the private cluster on a bastion host that is within the same virtual private cloud (VPC) as the cluster to be deployed. (OCPBUGS-54901)
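The following is a minimal, illustrative sketch of the oc image mirror workaround described earlier in this list. The registry host and repository name are placeholder values; substitute the values for your disconnected environment, and derive RELEASE_DIGEST from your release image digest as described above.
$ RELEASE_DIGEST="sha256-884e1ff5effeaa04467fab9725900e7f0ed1daa89a7734644f14783014cebdee"
$ LOCAL_REGISTRY="mirror.example.com:5000"
$ LOCAL_RELEASE_IMAGES_REPOSITORY="ocp/release-images"
$ oc image mirror "quay.io/openshift-release-dev/ocp-release:${RELEASE_DIGEST}.sig" "${LOCAL_REGISTRY}/${LOCAL_RELEASE_IMAGES_REPOSITORY}:${RELEASE_DIGEST}.sig"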
1.11. Asynchronous errata updates
Security, bug fix, and enhancement updates for OpenShift Container Platform 4.21 are released as asynchronous errata through the Red Hat Network. All OpenShift Container Platform 4.21 errata is available on the Red Hat Customer Portal. See the OpenShift Container Platform Life Cycle for more information about asynchronous errata. Red Hat Customer Portal users can enable errata notifications in the account settings for Red Hat Subscription Management (RHSM). When errata notifications are enabled, users are notified through email whenever new errata relevant to their registered systems are released.
Red Hat Customer Portal user accounts must have systems registered and consuming OpenShift Container Platform entitlements for OpenShift Container Platform errata notification emails to generate.
This section will continue to be updated over time to provide notes on enhancements and bug fixes for future asynchronous errata releases of OpenShift Container Platform 4.21. Versioned asynchronous releases, for example with the form OpenShift Container Platform 4.21.z, will be detailed in subsections. In addition, releases in which the errata text cannot fit in the space provided by the advisory will be detailed in subsections that follow.
For any OpenShift Container Platform release, always review the instructions on updating your cluster properly.
1.11.1. RHBA-2026:2637 - OpenShift Container Platform 4.21.2 fixed issues
Issued: 17 February 2026
OpenShift Container Platform release 4.21.2 is now available. The list of bug fixes that are included in the update is documented in the RHBA-2026:2637 advisory. The RPM packages that are included in the update are provided by the RHBA-2026:2630 advisory.
Space precluded documenting all of the container images for this release in the advisory.
You can view the container images in this release by running the following command:
$ oc adm release info 4.21.2 --pullspecs
1.11.1.1. Enhancements
This release contains the following enhancements:
- With this update, a denial of service vulnerability in the Logrus library is addressed, specifically when handling large single-line payloads without newline characters. The fix ensures that the Logrus library does not fail to process large input data because of the token too long error. As a result, applications using Logrus do not experience unavailability due to this issue. This improvement is available in Logrus versions 1.8.3, 1.9.1, and 1.9.3 and later. (OCPBUGS-74282)
1.11.1.2. Fixed issues
The following issues are fixed for this release:
- Before this update, the cluster-node-tuning-operator used a hard-coded client certificate authority source that failed on HyperShift. As a consequence, the cluster-node-tuning-operator pod failed to listen on port 60000. With this release, the cluster-node-tuning-operator listens on port 60000 and the inactive target is resolved. (OCPBUGS-55399)
- Before this update, the kubevirt virtual machine (VM) eviction strategy was not set to external during a node drain process. With this release, the eviction strategy for kubevirt VMs is customizable. As a result, kubevirt VMs in hosted control planes correctly use the specified eviction strategy, ensuring smooth node draining during infrastructure cluster upgrades. (OCPBUGS-58397)
- Before this update, the agent did not compare the physical and usable RAM sizes correctly on certain virtual machines (VMs), causing discrepancies. With this release, the incorrect RAM size calculation on certain VMs is fixed and a consistent method is used for all cases. As a result, the RAM calculation accuracy is improved, ensuring correct host memory reporting for users. (OCPBUGS-66374)
- Before this update, unauthorized attempts to access API resources on vSphere clusters caused a 500 error when accessing metrics. With this release, the port 8445 issue on vSphere is fixed and a 401 response is returned instead, improving metrics access in vSphere environments. (OCPBUGS-74569)
- Before this update, autoscaling imbalance occurred because ignored labels were not added to worker nodes. As a consequence, nodes in three pools scaled unevenly, which caused workload imbalance. With this release, the autoscaler ignores certain labels for even distribution in node pools. As a result, nodes in the three pools scale up more evenly, improving the distribution of the workload. (OCPBUGS-74893)
- Before this update, the Subscription details list page was empty because of a missing code fix in the release team build cycle. With this release, the Subscription details list page is populated, improving user navigation. (OCPBUGS-74998)
- Before this update, the collect-profiles job was affected by maintenance difficulties, causing confusion for customers and support engineers during troubleshooting. With this release, the collect-profiles job is removed, improving user experience and reducing maintenance effort. (OCPBUGS-76266)
- Before this update, the upgrade to OpenShift Container Platform 4.21.0 enforced short name mode, which caused image pull failures with multiple sources. As a consequence, end users experienced image pull failures due to enforced short name mode. With this release, short name mode is disabled starting with OpenShift Container Platform 4.21.0-0.nightly-2026-02-10-112229, which resolves the image pull failure issue and prevents CRI-O failures when pulling images with an unqualified search. (OCPBUGS-76356)
1.11.1.3. Updating
To update an OpenShift Container Platform 4.21 cluster to this latest release, see Updating a cluster using the CLI.
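For reference, a typical CLI-based update looks similar to the following; the target version shown is illustrative, and the versions available to you depend on your cluster's update channel and graph:
$ oc adm upgrade
$ oc adm upgrade --to=4.21.2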
1.11.2. RHSA-2026:2129 - OpenShift Container Platform 4.21.1 fixed issues and security update
Issued: 10 February 2026
OpenShift Container Platform release 4.21.1 is now available. The list of bug fixes that are included in the update is documented in the RHSA-2026:2129 advisory. The RPM packages that are included in the update are provided by the RHSA-2026:2082 advisory.
Space precluded documenting all of the container images for this release in the advisory.
You can view the container images in this release by running the following command:
$ oc adm release info 4.21.1 --pullspecs
1.11.2.1. Enhancements
This release contains the following enhancements:
- With this update, dual-stack networking support is added for deploying a hosted control plane on OpenShift Virtualization with KubeVirt. The Cluster Network Operator (CNO) now recognizes KubeVirt as a supported platform for dual-stack networking, which enables the successful deployment of hosted control planes with IPv4/IPv6 dual-stack networking. This enhancement ensures a smoother deployment process for dual-stack networking configurations. (OCPBUGS-69941)
- With this update, the API request limits in the csi-snapshot-controller are increased, which addresses low limits that caused throttling during snapshot processing. This enhancement ensures smoother and more efficient handling of snapshot operations, which improves the scalability and performance of the csi-snapshot-controller. (OCPBUGS-72391)
1.11.2.2. Known issues
This release contains the following known issues:
- If you try to use the OpenShift Container Platform web console to create a Kueue custom resource (CR) by using the form view, the web console shows an error and the resource cannot be created. As a workaround, use the YAML view to create a Kueue CR instead. (OCPBUGS-58118)
- When you select Ecosystem → Software Catalog in the unified software catalog view of the web console, you must enter an existing project name or create a new project to view the software catalog. The project selection field does not affect how catalog content is installed on the cluster. As a workaround, enter any existing project name to view the software catalog. (OCPBUGS-61870)
- Starting with OpenShift Container Platform 4.21, there is a decrease in the default maximum open files soft limit for containers. As a consequence, you might experience application failures. To work around this problem, workloads can set their ulimit inside the container up to the new default hard limit of 524288, as shown in the sketch after this list. (OCPBUGS-62327)
- Before this update, event logs for the Global Network Resource-Device (GNR-D) interfaces were ambiguous due to identical three-letter prefixes ("eno"). As a consequence, affected interfaces were not clearly identified during state changes. With this release, the interfaces used by the ptp-operator are changed to follow the "path" naming convention, which ensures that per-clock events are identified correctly based on interface names and clearly indicate which clock is affected by state changes. (OCPBUGS-62817)
- Using a Telecom Time Synchronous Clock (T-TSC) configuration causes the ts2phc metrics to report "unlocked" instead of "locked". As a result, you might encounter inaccurate Precision Time Protocol (PTP) clock state reporting. To work around this issue, remove the ts2phc metric. (OCPBUGS-63158)
- The PTP Daemon failed to lock after reboot due to an initial offset convergence issue, which caused long reboot times for the PTP Daemon and the clock to never lock. This issue is planned to be fixed in a later release. (OCPBUGS-66252)
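As an illustration of the open files workaround mentioned in the known issue above, a workload can raise its own soft limit up to the hard limit when its container starts. The pod name, image, and command below are placeholders used only to demonstrate the idea:
$ oc run ulimit-demo --image=registry.access.redhat.com/ubi9/ubi --restart=Never --command -- /bin/sh -c 'ulimit -n 524288 && ulimit -n'
$ oc logs ulimit-demo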
1.11.2.3. Fixed issues
The following issues are fixed for this release:
- Before this update, the HyperShift CLI instantiated Microsoft Azure SDK clients without passing cloud configuration options, which caused all clients to default to Azure Public Cloud. As a consequence, hosted clusters that were created or managed in Azure Government Cloud or Azure China Cloud failed because the Azure SDK clients could not connect to the correct cloud endpoints. With this release, a GetAzureCloudConfiguration() helper function is added to convert cloud names to Azure SDK cloud configurations. All Azure SDK client instantiations are also updated across 15 locations in the HyperShift CLI and control-plane-operator to use the proper cloud configuration from HostedCluster.Spec.Platform.Azure.Cloud (for cluster commands and control-plane-operator). As a result, the HyperShift CLI and control-plane-operator correctly support creating and managing hosted clusters in Azure Government Cloud and Azure China Cloud in addition to Azure Public Cloud. (OCPBUGS-33372)
- Before this update, alerts in the project overview were not visible because the application was querying an incorrect API. With this release, the application now queries the correct API and displays the project alerts. (OCPBUGS-33879)
- Before this update, the bootstrap logs could not be collected as the SSH connection was dropped because a security-group rule for the bootstrap machine was missing. With this release, an additional security-group is added to the bootstrap machine to enable SSH connection and to enable log collection. (OCPBUGS-34950)
- Before this update, the hostedcontrolplane controller crashed when the hcp.Spec.Platform.AWS.CloudProviderConfig.Subnet.ID parameter was undefined because the code accessed the config.Subnet.ID parameter without first checking if the config.Subnet value was nil. As a consequence, the control plane Operator crashed while reconciling the HostedControlPlane resource. With this release, the code checks that the config.Subnet value is not nil before accessing the Subnet.ID parameter, and the check uses ptr.Deref(config.Subnet.ID, "") for safe dereferencing. As a result, the control plane Operator no longer crashes when the CloudProviderConfig.Subnet parameter is not specified. Instead, the control plane Operator uses an empty string for the subnet ID parameter to gracefully handle the missing field. (OCPBUGS-38358)
- Before this update, the controller created and deleted a file with a random name when it was setting up authentication to Amazon Web Services (AWS). As a consequence, the controller continuously allocated more memory. With this release, the same file name is used instead of a random file name. As a result, the kernel re-uses the dentry instead of requesting a new one for each file. (OCPBUGS-38759)
- Before this update, if the ovnkube-controller on a node failed to process updates and configure its local OVN database, the OVN-controller could connect to this stale database. This caused the OVN-controller to consume outdated EgressIP configurations and send incorrect Gratuitous ARPs (GARPs) for an IP address that might have already moved to a different node. With this release, the OVN-controller is blocked from sending these GARPs during the time when the ovnkube-controller is not processing updates. As a result, network disruptions are prevented by ensuring GARPs are not sent based on stale database information. (OCPBUGS-42303)
- Before this update, the CA certificate reference to HelmChartRepository caused chart installation failure. As a consequence, Helm chart installation failed because it could not find the chart. With this release, the issue has been fixed. As a result, the CA certificate configuration no longer breaks the Helm chart installation. (OCPBUGS-44235)
- Before this update, if NetworkManager was restarted or crashed on a node with a br-ex interface managed by NMState, the node lost network connectivity. With this release, a fallback check in the dispatcher script was added to detect NMState-managed br-ex interfaces by checking for the br-ex-br bridge ID when the standard br-ex bridge ID is not found. As a result, nodes with this interface type do not lose network connectivity when NetworkManager restarts or crashes. (OCPBUGS-54682)
- Before this update, unintended executable permissions were set on generated files during the mirror operation, which caused potential unexpected behavior in user workflows. With this release, the unintended executable permissions are removed from files generated during the oc-mirror v2 operation. As a result, the stability of the oc-mirror tool is improved. (OCPBUGS-55489)
- Before this update, the operand details page would incorrectly show information using only half of the screen’s viewport. With this release, the operand details take up the full page width as expected. (OCPBUGS-55746)
- Before this update, concurrent map iteration and write in the kube-apiserver caused crashes on audit log events. As a consequence, when an API server crashed, the other API servers were stormed with kubelet LIST/WATCH requests that caused disruptions. With this release, the concurrent map iteration and write error has been resolved. As a result, API disruptions are prevented. (OCPBUGS-56594)
- Before this update, setting an invalid certificate secret name in the service annotation service.beta.openshift.io/serving-cert-secret-name parameter caused the service certificate authority (CA) Operator to hotloop. With this release, the Operator stops retrying to create the secret after 10 tries. The number of retries cannot be changed. (OCPBUGS-56599)
- Before this update, the Microsoft Azure machine provider was not passing the dataDisks configuration from the machine set into the virtual machine creation API request for the Azure Stack Hub. As a consequence, new machines were created without the specified data disks because the configuration was silently ignored during the VM creation process. With this release, the VM creation for the Azure Stack Hub is updated to include the dataDisks configuration. An additional update manually implements the behavior of the deletionPolicy: Delete parameter in the controller because the Azure Stack Hub does not natively support this option. As a result, data disks are correctly provisioned on the Azure Stack Hub VMs. The Delete policy is also functionally supported, which ensures that disks are properly removed when their machines are removed. (OCPBUGS-56664)
- Before this update, the oc adm must-gather volume checker script assumed that the directory was empty. As a consequence, if the directory was initially not empty, the node disk size was incorrectly calculated. With this release, the size of the directory is always correctly calculated. (OCPBUGS-56691)
- Before this update, when the LookupDefaultOCPVersion function was called without a specified release stream, the system might have tried to use a multi-architecture OpenShift Container Platform version that was newer than what the currently installed HyperShift Operator supported. As a consequence, potential compatibility issues occurred. With this release, the logic for determining the default OpenShift Container Platform version is updated so that it consults the supported-versions config map to identify the latest stable multi-architecture OpenShift Container Platform release image that is compatible with the HyperShift Operator. As a result, when no release stream is provided, the system defaults to an OpenShift Container Platform version that is guaranteed to be supported by the installed HyperShift Operator, which prevents compatibility problems. (OCPBUGS-56701)
- Before this update, any unrelated changes to a netpol resource triggered a full reconcile of the object, including deleting and re-adding rules. With this release, a netpol object fully reconciles when required. Otherwise, the object reconciliation is skipped. (OCPBUGS-56749)
- Before this update, the installation did not fail if you annotated the AgentClusterInstall manifest with incorrect edits to the install-config.yaml file overrides. Instead, the installation continued to proceed by using incomplete data from the AgentClusterInstall manifest, excluding the install-config.yaml file overrides. As a consequence, any error in the install-config overrides meant that all configurations passed in the install-config.yaml file overrides were ignored, including the Federal Information Processing Standard (FIPS) mode setting. With this release, the cluster installation proceeds only when the data is valid for both the manifest and its annotations. As a result, cluster installation does not proceed unless all of the install-config overrides are applied successfully. (OCPBUGS-56913)
- Before this update, some error messages in the login page of the web console were not localized. As a consequence, those messages were always displayed in English. With this release, these error messages now adapt to your preferred language. (OCPBUGS-56915)
- With this release, when service endpoints are deleted or updated, the cleanup process correctly uses the service port to match and remove stale conntrack entries. This change ensures that network connectivity continues to work reliably across endpoint lifecycle events. (OCPBUGS-57053)
- Before this update, for clusters that used multi-architecture release payload images with the ClusterVersion object spec.desiredUpdate.architecture field set to Multi, the Cluster Version Operator (CVO) was not populated with update recommendations for later releases. With this release, a comparison of incorrect values is fixed. As a result, the update recommendations are populated. (OCPBUGS-57646)
- Before this update, a pod with a secondary interface in an OVN-Kubernetes localnet network (mapped to the br-ex bridge) could communicate with pods on the same node that used the default network for connectivity only if the localnet IP addresses were within the same subnet as the host network. With this release, the localnet IP addresses can be drawn from any subnet. In this generalized case, an external router outside the cluster is expected to connect the localnet subnet to the host network. (OCPBUGS-59657)
- Before this update, a problem with signal handling in the keepalived container caused an unnecessary delay in failover when the nodes were restarted. As a consequence, access to the API and ingress services was disrupted. With this release, the signal handling is fixed so failover occurs immediately. As a result, disruption to the API and ingress services is minimized. (OCPBUGS-59925)
- Before this update, the must-gather script failed because of an improper pkill command syntax with multiple patterns. As a consequence, the script continued to run despite disk space issues and a pkill failure. With this release, the must-gather script correctly handles pkill with multiple patterns. As a result, the script stops running when disk space is insufficient, which improves system stability. (OCPBUGS-59951)
- Before this update, any custom label and annotation added to the openshift-nmstate namespace was incorrectly removed. With this release, a fix is applied to OpenShift Container Platform so that any custom label and annotation added to the openshift-nmstate namespace is not removed in error. (OCPBUGS-60083)
- Before this update, gRPC connection logs were set at a highly verbose log level. This generated an excessive number of messages, which caused the logs to overflow. With this release, the gRPC connection logs have been moved to the V(4) log level. Consequently, the logs no longer overflow, because these specific messages are now less verbose by default. (OCPBUGS-60108)
- Before this update, cluster namespaces with the "nodes" suffix in their name would cause the Performance Profile Creator (PPC) to fail by mistaking incorrect namespace directories for the must-gather nodes directories in the must-gather data processed by the PPC. With this release, the PPC correctly excludes the namespaces directory when processing the must-gather data to create a suggested PerformanceProfile value. (OCPBUGS-60218)
- Before this update, during failover, the system’s duplicate address detection (DAD) could incorrectly disable the Egress IPv6 address if it was briefly present on both nodes, breaking the connection. With this release, the Egress IPv6 address is configured to skip the DAD check during failover, guaranteeing uninterrupted egress IPv6 traffic after an Egress IP address successfully moves to a different node and ensuring greater network stability. (OCPBUGS-60468)
- Before this update, an external actor could uncordon a node that the Machine Config Operator (MCO) is draining. As a consequence, the MCO and the scheduler would schedule and unschedule pods at the same time, which prolonged the drain process. With this release, the MCO attempts to cordon the node again if an external actor uncordons it during the drain process. As a result, the MCO and scheduler no longer schedule and remove pods at the same time. (OCPBUGS-60537)
- Before this update, the bare-metal installer-provisioned infrastructure deployment accepted non-integer mtu values in the install-config.yaml file. As a consequence, invalid mtu values in the YAML file caused runtime errors. With this release, the installation program validates mtu as an integer in the install-config.yaml file. As a result, runtime errors are prevented. (OCPBUGS-60752)
- Before this update, when a MachineDeployment object was in the process of upgrading its machines and the Cluster Autoscaler was also scaling the MachineDeployment object, the Cluster Autoscaler could remove new machines by scaling down the MachineDeployment object for under-utilized nodes. With this release, scale down does not occur when a MachineDeployment object is in the process of upgrading its machines. (OCPBUGS-60790)
- Before this update, the MachineHealthCheck custom resource definition (CRD) did not document the default value for the maxUnhealthy field. With this release, the CRD documents the default value. (OCPBUGS-60901)
- Before this update, a bug prevented the ValidatingAdmissionPolicy resource from applying to certain OpenShift Container Platform API resources, such as the BuildConfig and DeploymentConfig resources. As a consequence, custom admission policies were not enforced on these specific resources, which potentially allowed configurations that did not meet organizational standards to be created or updated. With this release, the validation logic has been corrected to ensure that the ValidatingAdmissionPolicy resource now correctly identifies and applies to all intended OpenShift Container Platform resources. As a result, users can consistently enforce policies across their entire cluster, including the BuildConfig and DeploymentConfig resources. (OCPBUGS-61056)
- Before this update, the YAML editor in the web console would default to indenting YAML files with four spaces. With this release, the default indentation has changed to two spaces to align with recommendations. (OCPBUGS-61393)
- Before this update, when the Cluster Ingress Operator pod restarted with existing IngressController resources in Available or Degraded status, the ingress_controller_conditions metric disappeared from the Operator’s /metrics endpoint. As a consequence, you could not monitor the IngressController status following a pod restart. With this release, the ingress_controller_conditions metric is set during every reconciliation cycle, regardless of whether an Ingress Controller status update occurred, which ensures reliable and continuous monitoring of the IngressController health. (OCPBUGS-61508)
- Before this update, the Operand details page in the web console would show additional status items in a third column, which resulted in the content appearing squashed. With this update, only two columns display in the Operand details page. (OCPBUGS-61519)
- Before this update, when you resized the persistent volume claim (PVC) close to creation time, the PV bound to the PVC would sometimes not be found. With this release, the PV bound to the PVC is found when you resize the PVC close to creation time. (OCPBUGS-61547)
- Before this update, an NMState service failure occurred in OpenShift Container Platform deployments because of a NetworkManager-wait-online dependency issue in bare metal and multiple network interface controller (NIC) environments. As a consequence, an incorrect network configuration caused deployment failures. With this release, the NetworkManager-wait-online dependency for bare metal deployments is updated, which reduces deployment failures and ensures NMState service stability. (OCPBUGS-61695)
- Before this update, deploying hosted control planes on OpenShift Container Platform 4.20 and later with user-supplied ignition-server-serving-cert and ignition-server-ca-cert secrets parameters, along with the disable-pki-reconciliation annotation parameter, caused the system to remove the user-supplied ignition secrets and the ignition-server pods to fail. With this release, the ignition-server secrets are preserved during reconciliation after removing the delete action for the disable-pki-reconciliation annotation, which ensures that ignition-server pods start up completely. (OCPBUGS-61776)
- Before this update, OpenShift Container Platform 4.16 and later versions failed to respect the timeout http-keep-alive setting due to a known upstream HAProxy bug, preventing users from effectively managing connection persistence. This lack of control resulted in inconsistent connection behaviors, where long-lived sessions might be terminated unexpectedly or held open longer than normal. With this release, the HTTPKeepAliveTimeout tuning option has been integrated into the IngressController API, providing a formal way for customers to configure and enforce this specific timeout. As a result, cluster administrators have the granular control necessary to align connection persistence with specific application needs. (OCPBUGS-61858)
- Before this update, when a HyperShift HostedCluster used external Domain Name Service (DNS) domains and endpoint access with PublicAndPrivate, the allowedCIDRBlocks parameters were incorrectly applied only to the internal kube-apiserver, which left the control plane Operator in an error state. With this release, the control plane Operator functions correctly and the LoadBalancerSourceRanges configuration is added to the external router LoadBalancer service. As a result, the external kube-apiserver access is properly restricted to the specified allowedCIDRBlocks parameter. (OCPBUGS-61941)
- Before this update, after disabling the local Alertmanager, Prometheus retained Alertmanager configuration references, which caused failed Domain Name Service (DNS) queries and false alerts. With this release, the Cluster Monitoring Operator removes Alertmanager endpoints from Prometheus when Alertmanager is disabled. As a result, false alerts and failed DNS queries are prevented. (OCPBUGS-62160)
- Before this update, two identical copies of the same controller were updating the same Certificate Authority (CA) bundle in a ConfigMap causing them to receive different metadata inputs, rewrite each other’s changes, and create duplicate events. With this release, the controllers use optimistic updating and server-side apply to avoid update events and handle update conflicts. As a result, metadata updates do not trigger duplicate events, and the expected metadata is set correctly. (OCPBUGS-62255)
- Before this update, the AdminNetworkPolicy, AdminPolicyBasedRouteListers, EgressFirewall, EgressQoS, and NetworkQoS objects kept the managedFields status entries for nodes that had been deleted. As a consequence, a buildup of stale data occurred in etcd for large clusters with frequent node churn. With this release, the cleanup logic is fixed for all of these resource types. As a result, stale data buildup does not occur. (OCPBUGS-62262)
- Before this update, when you directly navigated to a page created by a web console dynamic plugin, the web console might have redirected you to a different URL. With this release, the URL redirect is removed. (OCPBUGS-62296)
- Before this update, node log length was not limited. In the case of an extremely large log, this unlimited length could cause the log to not display or the browser to crash. With this release, the node log length is limited to 1000 lines so the log displays correctly. (OCPBUGS-62483)
- Before this update, certain InfiniBand hardware configurations could trigger invalid responses that disrupted the metrics collection process. As a consequence, no InfiniBand metrics were generated. With this release, node-exporter properly handles these hardware-level reporting errors, which ensures continuous monitoring and data availability. (OCPBUGS-62727)
- Before this update, changes in Red Hat Enterprise Linux CoreOS (RHCOS) image layering increased space usage on the rendezvous node ephemeral temporary file system to approximately 9.4GB during cluster bootstrapping. As a consequence, because the ephemeral temporary file system was capped at 50% of available RAM, installation would fail on hosts with less than 19GiB of memory due to insufficient space for container images. With this release, the additional data is moved to a separate temporary file system. As a result, any rendezvous host meeting the minimum RAM requirement for a control plane node (16GB) has sufficient capacity to successfully bootstrap a cluster. (OCPBUGS-62790)
- Before this update, a bug in the construction of the oc adm inspect --all-namespaces command prevented the must-gather utility from correctly identifying and capturing specific resource types. As a consequence, there was incomplete diagnostic data because information regarding the leases and csistoragecapacities resources, and the assisted-installer namespace, was missing from generated bundles. With this release, the command construction logic is corrected to ensure that these resources are properly targeted during the inspection process. As a result, the must-gather utility provides a comprehensive set of logs and metadata for these components, which enables more effective troubleshooting of storage and installation issues. (OCPBUGS-63189)
- Before this update, the Observe → Metrics page used the cluster-wide metrics API even when you did not have cluster-wide metrics API permissions. As a consequence, the query input showed an error and the autofill for the query input did not work without cluster-wide metrics API access. With this release, the namespace-tenancy metrics API is used if you do not have cluster-wide metrics API permissions. As a result, an error does not occur and autofill is available for the metrics within the selected namespace. (OCPBUGS-63429)
- Before this update, some Actions drop-down menus in the OpenShift Container Platform console contained only one menu item, which required you to open a menu for a single task. With this release, the single-item drop-down menus have been simplified to buttons. (OCPBUGS-63471)
- Before this update, during rolling cluster updates from etcd 3.5.19 to a release of 3.6, the wrong membership data could be propagated to new members. As a consequence, cluster updates failed with an error about too many learner members in the cluster. With this release, etcd is updated to 3.5.24, which includes fixes so that the membership-related errors do not occur. (OCPBUGS-63473)
- Before this update, the oc-mirror tool generated ImageDigestMirrorSet (IDMS) files that included an unnecessary, empty status: line at the end of the configuration. As a consequence, automated deployment failures and API validation errors occurred because the empty field was often flagged as an invalid schema or an incomplete resource during the synchronization process. With this release, the file generation logic has been corrected to completely strip the empty status line from all generated IDMS files. As a result, automated workflows are streamlined by ensuring that mirrored files are immediately compatible with Kubernetes-native tools. This compatibility enables more reliable GitOps integrations and reduces manual intervention during large-scale deployments. (OCPBUGS-63480)
- Before this update, the ccoctl utility would always generate a new keypair when the private key was not found in the output directory. Several documented procedures instruct the user to extract only the public key from an existing cluster before using the ccoctl utility in order to reduce the risk of the private key being compromised. As a result, users following these processes experienced service outages because the newly generated keypair did not match the cluster itself. With this release, a new keypair is never generated when a public key is specified with the --public-key-file parameter, and this parameter is available on all of the create-all functions to extend this functionality. As a result, specifying --public-key-file ensures the specified public key is used and the cluster continues to function as expected. (OCPBUGS-63541)
- Before this update, the ccoctl utility did not support pagination when retrieving CloudFront distributions. As a result, if the distribution to be deleted was not included in the first batch of results, the CloudFront distribution and its associated origin access identity could not be deleted successfully during the ccoctl Amazon Web Services (AWS) delete operation. With this release, pagination support is added to the ccoctl utility when fetching CloudFront distributions. As a result, the distribution can be located and deleted properly. (OCPBUGS-63561)
- Before this update, newer vSphere clusters using the YAML-based cloud configuration format were failing to honor the minimum read-only permissions provided for ResourcePools. As a consequence, clusters ignored specified permission constraints, which might have led to deployment errors or security policy violations when interacting with vSphere resources. With this release, the cloud configuration logic is corrected to ensure that YAML-based configurations strictly enforce the required read-only permissions for ResourcePools. As a result, cluster operations correctly respect the provided vSphere permission set, which ensures stable and secure resource management within restricted environments. (OCPBUGS-63598)
- Before this update, the CLI’s GenerateNodePools() function incorrectly set AzureMarketplace to nil when you specified the --image-generation flag without additional marketplace flags, which discarded your preference. Also, the nodepool controller failed to set the ImageGeneration flag when creating images from the release payload, which caused them to default to Gen2. As a consequence, when users attempted to create Microsoft Azure hosted clusters using --image-generation Gen1, the NodePools parameters were incorrectly provisioned with Gen2 images, which ignored the explicit configuration. With this release, the CLI is modified to preserve your preference by creating a proper AzureMarketplaceImage structure, and the nodepool controller explicitly sets the generation field based on the release payload (mapping Gen1 for HyperVGen1 and Gen2 for HyperVGen2). As a result, the --image-generation flag is now fully respected, which allows you to successfully deploy NodePools with their chosen image generation without being overwritten by system defaults. (OCPBUGS-63613)
- Before this update, the Microsoft Azure Machine API provider incorrectly attempted to use a default platformUpdateDomainCount parameter value of 5, even in specific regions, such as CentralUSEUAP, that are restricted to a single fault domain. As a consequence, machine creation failed for all node types in these affected regions because Azure supports only one update domain when the fault domain count is set to 1. With this release, the logic is updated to explicitly set the platformUpdateDomainCount value to 1 whenever a single fault domain is detected. As a result, Availability Sets are created with valid parameter combinations, which allows nodes to successfully provision in Azure regions that use a single fault domain. (OCPBUGS-63729)
- Before this update, Operator deployment templates with the hostUsers: false flag were not processed by the Cluster Version Operator (CVO), causing their omission in the resulting deployment. As a consequence, user deployments with the hostUsers: false flag were missing this field, which caused incomplete deployments. With this release, support for the hostUsers flag in resource merge is added, which resolves the missing hostUsers issue in deployments. As a result, Operator deployment templates with the hostUsers: false flag are correctly picked up by the CVO, which ensures complete deployments. (OCPBUGS-64732)
- Before this update, a race condition in the Redfish Power interface occurred during simultaneous power operations, which caused power operations to fail. As a consequence, you could not manage power settings reliably. With this release, the race condition in the Redfish Power interface is resolved, which ensures reliable power operations. (OCPBUGS-64845)
- Before this update, the firewall delete permission was missing in the service account for the destroy process. As a consequence, the destroy process ran indefinitely due to pending firewall deletions. With this release, the required compute.firewalls.delete permission for firewalls in the destroy process has been added. As a result, the destroy process no longer runs indefinitely, which improves the efficiency of the delete operation. (OCPBUGS-65512)
- Before this update, the Ironic API advertised an unreachable IP despite being routable. As a consequence, the unreachable Ironic API caused service disruption. With this release, the Ironic API advertised IP is now checked for reachability, in addition to routability. As a result, the unreachable Ironic API IP does not cause disruptions. (OCPBUGS-65518)
- Before this update, primary NIC deletion failure occurred in Red Hat OpenStack Platform (RHOSP) due to policy prevention before instance deletion in the old cluster-api-provider-openstack version. As a consequence, instance deletion failed on RHOSP cloud providers with custom policies. With this release, port deletion is moved after instance deletion in cluster-api-provider-openstack. As a result, primary NICs are not prevented from deletion on RHOSP cloud providers, which allows successful instance deletion. (OCPBUGS-65712)
- Before this update, when opening a terminal to a running pod, the session was disconnected whenever the annotations of the pod changed. With this release, the terminal session does not disconnect when this metadata is changed. (OCPBUGS-65776)
- Before this update, an incorrect MAC address conflict from HPE Virtual NIC occurred. As a consequence, two BareMetalHost nodes remained stuck in inspection due to a conflict with a virtual NIC, which caused repeated lookup failures and prevented hardware inspection completion. With this release, disabling HPE Virtual NIC resolves the MAC address conflict, which allows the completion of the hardware inspection. (OCPBUGS-65961)
- Before this update, the HostedCluster command failed due to an invalid API server service with the LoadBalancerSourceRanges parameter when you set the allowedCIDRBlocks, externalDns, and publicAndPrivate parameters. As a consequence, a control plane failure occurred due to an invalid API server service configuration. With this release, the issue with API server service invalidity when setting the allowedCIDRBlocks parameter with the externalDns and publicAndPrivate parameters is fixed. As a result, the control plane does not fail when setting the allowedCIDRBlocks parameter with the externalDns and publicAndPrivate parameters. (OCPBUGS-66067)
- Before this update, the OpenShift Container Platform console required the Quick starts feature to completely load before the console loaded. With this release, Quick starts load asynchronously, which optimizes the loading time of the OpenShift Container Platform console by about 30 to 50%. (OCPBUGS-66258)
- Before this update, cluster deletion got stuck during the inspection phase due to a power off stage transition. As a consequence, the cluster was not deleted. With this release, the bare-metal host (BMH) is prevented from getting stuck during deletion in a ZTP environment. As a result, the cluster removal no longer gets stuck during the inspection phase, which improves the efficiency of the ZTP environment. (OCPBUGS-68369)
- Before this update, the application selector on the Topology page was reset to All applications after you selected an application. With this release, the selected application is correctly applied on the Topology page. (OCPBUGS-69388)
- Before this update, the use of systemd in the container entry point prevented the ConfigMap mount from working correctly, which broke file permissions. As a consequence, users could not access configuration files because of the broken file permissions in the containers. With this release, the systemd entry point issue is resolved, and the config map mounts with the correct file permissions. As a result, file permissions are correct in the containers. (OCPBUGS-69669)
- Before this update, the instance architecture of worker machines did not match the Amazon Machine Image (AMI) architecture. As a consequence, installation failed on the arm64 architecture because of incorrect instance provisioning. With this release, the architecture mismatch is resolved. As a result, installations on the arm64 architecture are successful. (OCPBUGS-69965)
- Before this update, the default IRONIC_CACERT_FILE path in the Ironic image was read-only, which caused a failure when you copied certificate files for self-signed certificates. As a consequence, certificate files were not copied because of the read-only path in the Ironic image. With this release, the IRONIC_CACERT_FILE default path is changed from the read-only location to CUSTOM_CONFIG_DIR. As a result, the Ironic image successfully copies certificate files in self-signed scenarios. (OCPBUGS-70156)
- Before this update, Ironic wrote to the read-only /certs/ca/ironic path because of a missing ironic-ca-cert path setting. As a consequence, deployment failed. With this release, Ironic does not write to the read-only path, which improves system stability. (OCPBUGS-70163)
- Before this update, the storage Operator did not create the required RoleBinding object for Prometheus in the openshift-cluster-csi-drivers namespace, which caused monitoring of the CSI driver namespace to fail and prevented metric collection. With this release, the storage Operator creates the RoleBinding object for the Prometheus service account in the openshift-cluster-csi-drivers namespace. As a result, CSI drivers are successfully monitored during HyperShift Microsoft Azure Kubernetes Service (AKS) runs. For an illustrative manifest, see the RoleBinding example after this list. (OCPBUGS-72509)
- Before this update, inconsistent Amazon Web Services (AWS) responses caused an instance ID leak because status checks were not updated. As a consequence, machine creation leaked instances because of the inconsistent AWS responses. With this release, the instance ID storage inconsistency during machine creation is fixed. As a result, VM leaks do not occur, which ensures consistent machine creation. (OCPBUGS-72523)
- Before this update, when you installed on Amazon Web Services (AWS) and the installation program provisioned the Virtual Private Cloud (VPC), the AWS Availability Zone subnet information could differ between the machine set custom resources for control plane nodes and their corresponding EC2 instances. As a consequence, when the control plane nodes were spread across three Availability Zones and one node was recreated, the discrepancy could result in an unbalanced control plane with two nodes in the same Availability Zone. With this release, the subnet Availability Zone information is the same in the machine set custom resources and in the EC2 instances. (OCPBUGS-73773)
- Before this update, catalog sync triggered high I/O on control plane nodes and caused etcd leader elections and TTL counter resets. As a consequence, catalog sync caused high I/O and persisted etcd events, which affected user cluster performance. With this release, the catalog sync interval is increased from 10 minutes to four hours. As a result, I/O load and etcd events are reduced. (OCPBUGS-73881)
- Before this update, when navigating to the Software Catalog page with the Devfiles category disabled, the page showed a Devfile-related error in a disconnected cluster. With this release, this error is not shown. (OCPBUGS-74157)
- Before this update, upgrading to OpenShift Container Platform 4.18 caused a loss of network connectivity for virtual machine (VM) pods that use ovn-k8s-cni-overlay localnet NetworkAttachmentDefinitions. As a consequence, VM network connectivity was lost during the upgrade, which required pod or VM restarts. With this release, the upgrade process creates the logical switch ports for VMs during the 4.18 upgrade. As a result, VMs maintain network connectivity during and after upgrading to OpenShift Container Platform 4.18. For an illustrative configuration, see the NetworkAttachmentDefinition example after this list. (OCPBUGS-74267)
- Before this update, a duplicate channel in catalog.json caused a mirroring failure. As a consequence, the mirroring operation failed for some Operators because of a duplicate channel error. With this release, the mirroring issue is fixed. As a result, mirroring to enclaves succeeds without duplicate channel errors in version 4.21.0 and later. (OCPBUGS-74577)
- Before this update, DaemonSet objects incorrectly used the status field, which caused a message that displayed 0 of pods. As a consequence, incorrect pod counts were shown for DaemonSet objects. With this release, the DaemonSet status replicas are displayed correctly. (OCPBUGS-74587)
- Before this update, the Control Plane Machine Set Operator did not detect changes to the throughputMib field in the AWSMachineProviderConfig custom resource (CR). As a consequence, the Control Plane Machine Set had incorrect default values for this parameter and did not support increasing the throughput of gp3 storage volumes in an AWS cluster. With this release, the internal API definitions that the Control Plane Machine Set Operator uses are updated to recognize the throughputMib field. As a result, the Control Plane Machine Set Operator identifies, reconciles, and applies changes to the throughputMib field. (OCPBUGS-74588)
- Before this update, static pods terminated prematurely because kubelet ignored the priorityClassName field. As a consequence, long shutdown times and storage issues occurred in single-node OpenShift environments. With this release, the static pod shutdown order is fixed. As a result, long shutdown times and storage layer issues are reduced in single-node OpenShift environments. (OCPBUGS-74621)
- Before this update, unexpected issues occurred during the initial provisioning of a HostedCluster resource. With this release, the system uses the ControlPlaneComponent resource to ensure that the HostedCluster resource is reported as available only after all control plane components have been successfully rolled out. As a result, the rollout status of each control plane component is accurately tracked, which reduces the risk of unexpected issues. (OCPBUGS-74648)
- Before this update, Google Cloud installations failed because zones were not specified for the us-south1 and us-central1 regions, which caused installation issues. With this release, the GCP installer requires that you specify zones in install-config for regions with AI zones. As a result, Google Cloud installations do not fail in these regions. For an illustrative excerpt, see the install-config example after this list. (OCPBUGS-74672)
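The following is a minimal sketch of a Deployment pod template that sets the hostUsers field referenced in OCPBUGS-64732. The names and image are placeholders; the only field taken from the release note is hostUsers: false, which runs the pod in its own user namespace rather than the host user namespace.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-deployment          # placeholder name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      hostUsers: false              # field from OCPBUGS-64732; pod gets a separate user namespace
      containers:
      - name: example               # placeholder container
        image: registry.example.com/example:latest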
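The following is a hedged sketch of the kind of RoleBinding described in OCPBUGS-72509. It assumes the common OpenShift pattern of binding a Role named prometheus-k8s in the target namespace to the prometheus-k8s service account in the openshift-monitoring namespace; the exact object names that the storage Operator creates are not stated in the release note.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: prometheus-k8s                      # assumed name
  namespace: openshift-cluster-csi-drivers
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: prometheus-k8s                      # assumed Role granting read access to pods, services, and endpoints
subjects:
- kind: ServiceAccount
  name: prometheus-k8s
  namespace: openshift-monitoring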
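The following is an illustrative ovn-k8s-cni-overlay localnet NetworkAttachmentDefinition of the type referenced in OCPBUGS-74267. The name and namespace are placeholders; the type and topology fields follow the documented localnet configuration.
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: localnet-network              # placeholder name
  namespace: default                  # placeholder namespace
spec:
  config: |-
    {
      "cniVersion": "0.3.1",
      "name": "localnet-network",
      "type": "ovn-k8s-cni-overlay",
      "topology": "localnet",
      "netAttachDefName": "default/localnet-network"
    }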
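The following is an illustrative excerpt from an install-config.yaml file that explicitly sets zones for a Google Cloud installation, as required by the fix in OCPBUGS-74672. The domain, project, cluster name, and zone values are placeholders.
apiVersion: v1
baseDomain: example.com                 # placeholder
metadata:
  name: example-cluster                 # placeholder
platform:
  gcp:
    projectID: example-project          # placeholder
    region: us-central1
controlPlane:
  name: master
  platform:
    gcp:
      zones:                            # zones set explicitly for the region
      - us-central1-a
      - us-central1-b
      - us-central1-c
compute:
- name: worker
  platform:
    gcp:
      zones:
      - us-central1-a
      - us-central1-b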
1.11.2.4. Updating
To update an OpenShift Container Platform 4.21 cluster to this latest release, see Updating a cluster using the CLI.
Chapter 2. Additional release notes
Release notes for additional related components and products not included in the core OpenShift Container Platform 4.21 release notes are available in the following documentation.
The following release notes are for downstream Red Hat products only; upstream or community release notes for related products are not included.
- A
- AWS Load Balancer Operator
- B
- Builds for Red Hat OpenShift
- C
- cert-manager Operator for Red Hat OpenShift
- D
- Red Hat Developer Hub Operator
- E
- F
- File Integrity Operator
- K
- L
- M
- N
- O
- OpenShift API for Data Protection (OADP)
- Red Hat OpenShift Distributed Tracing Platform
- Red Hat OpenShift Local (Upstream CRC documentation)
- OpenShift sandboxed containers
- Red Hat OpenShift Service Mesh 2.x
- Red Hat OpenShift Service Mesh 3.x
- P
- Power monitoring for Red Hat OpenShift
- R
- Run Once Duration Override Operator
- S
- W
- Red Hat OpenShift support for Windows Containers
- Z
- Zero Trust Workload Identity Manager
Legal Notice
Copyright © Red Hat
OpenShift documentation is licensed under the Apache License 2.0 (https://www.apache.org/licenses/LICENSE-2.0).
Modified versions must remove all Red Hat trademarks.
Portions adapted from https://github.com/kubernetes-incubator/service-catalog/ with modifications by Red Hat.
Red Hat, Red Hat Enterprise Linux, the Red Hat logo, the Shadowman logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of the OpenJS Foundation.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation’s permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.