Chapter 1. OpenShift Container Platform 4.20 release notes
Red Hat OpenShift Container Platform provides developers and IT organizations with a hybrid cloud application platform for deploying both new and existing applications on secure, scalable resources with minimal configuration and management. OpenShift Container Platform supports a wide selection of programming languages and frameworks, such as Java, JavaScript, Python, Ruby, and PHP.
Built on Red Hat Enterprise Linux (RHEL) and Kubernetes, OpenShift Container Platform provides a more secure and scalable multitenant operating system for today’s enterprise-class applications, while delivering integrated application runtimes and libraries. OpenShift Container Platform enables organizations to meet security, privacy, compliance, and governance requirements.
1.1. About this release
OpenShift Container Platform (RHSA-2025:9562) is now available. This release uses Kubernetes 1.33 with CRI-O runtime. New features, changes, and known issues that pertain to OpenShift Container Platform 4.20 are included in this topic.
OpenShift Container Platform 4.20 clusters are available at https://console.redhat.com/openshift. From the Red Hat Hybrid Cloud Console, you can deploy OpenShift Container Platform clusters to either on-premises or cloud environments.
You must use RHCOS machines for the control plane and for the compute machines.
Starting from OpenShift Container Platform 4.14, the Extended Update Support (EUS) phase for even-numbered releases increases the total available lifecycle to 24 months on all supported architectures, including x86_64
, 64-bit ARM (aarch64
), IBM Power® (ppc64le
), and IBM Z® (s390x
) architectures. Beyond this, Red Hat also offers a 12-month additional EUS add-on, denoted as Additional EUS Term 2, that extends the total available lifecycle from 24 months to 36 months. The Additional EUS Term 2 is available on all architecture variants of OpenShift Container Platform. For more information about support for all versions, see the Red Hat OpenShift Container Platform Life Cycle Policy.
OpenShift Container Platform is designed for FIPS. When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64
, ppc64le
, and s390x
architectures.
For more information about the NIST validation program, see Cryptographic Module Validation Program. For the latest NIST status for the individual versions of RHEL cryptographic libraries that have been submitted for validation, see Compliance Activities and Government Standards.
1.2. OpenShift Container Platform layered and dependent component support and compatibility
The scope of support for layered and dependent components of OpenShift Container Platform changes independently of the OpenShift Container Platform version. To determine the current support status and compatibility for an add-on, refer to its release notes. For more information, see the Red Hat OpenShift Container Platform Life Cycle Policy.
1.3. New features and enhancements
This release adds improvements related to the following components and concepts:
1.3.1. API server
1.3.1.1. Extended loopback certificate validity to three years for kube-apiserver
Before this update, the self-signed loopback certificate for the Kubernetes API Server expired after one year. With this release, the expiration date of the certificate is extended to three years.
1.3.1.2. Dry-run option is connected to 'oc delete istag'
Before this update, deleting an istag resource with the --dry-run=server option unintentionally caused actual deletion of the image from the server. This unexpected deletion occurred because the dry-run option was implemented incorrectly in the oc delete istag command. With this release, the dry-run option is correctly wired to the oc delete istag command. As a result, accidental deletion of image objects is prevented and the istag object remains intact when you use the --dry-run=server option.
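For example, to preview the deletion of an image stream tag without removing it, you can run a command similar to the following; the tag and project names are illustrative:

$ oc delete istag myapp:latest -n my-project --dry-run=server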
1.3.1.3. No service interruptions for certificate-related issues
With this update, self-signed loopback certificates in API servers are prevented from expiring, which ensures a stable and secure connection within OpenShift Container Platform 4.16.z. This enhancement backports a solution from a newer version by cherry-picking a specific pull request and applying it to the selected version. This reduces the likelihood of service interruptions due to certificate-related issues, providing a more reliable user experience in OpenShift Container Platform 4.16.z deployments.
1.3.1.4. Enhanced communication matrix for TCP ports
With this update, the communication flows matrix for OpenShift Container Platform is enhanced. The feature automatically generates services for open ports 17697 (TCP) and 6080 (TCP) on the primary node, and ensures that all open ports have corresponding endpoint slices. This results in an accurate and up-to-date communication flows matrix that improves overall security and efficiency and provides more comprehensive and reliable information for users.
1.3.2. Edge computing
1.3.2.1. NetworkPolicy support for the LVM Storage Operator
The LVM Storage Operator now applies Kubernetes NetworkPolicy objects during installation to restrict network communication to only the required components. This feature enforces default network isolation for LVM Storage deployments on OpenShift Container Platform clusters.
1.3.2.2. Support for hostname labelling for persistent volumes created by using the LVM Storage Operator
When you create a persistent volume (PV) by using the LVM Storage Operator, the PV now includes the kubernetes.io/hostname label. This label shows which node the PV is located on, making it easier to identify the node associated with a workload. This change only applies to newly created PVs. Existing PVs are not modified.
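For example, to confirm which node a newly created PV is located on, you can inspect its labels; the PV name is illustrative:

$ oc get pv lvms-vg1-example --show-labels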
1.3.2.3. Default namespace for the LVM Storage Operator
The default namespace for the LVM Storage Operator is now openshift-lvm-storage. You can still install LVM Storage in a custom namespace.
1.3.2.4. SiteConfig CR to ClusterInstance CR migration tool
OpenShift Container Platform 4.20 introduces the siteconfig-converter tool to help migrate managed clusters from using a SiteConfig custom resource (CR) to a ClusterInstance CR. Using a SiteConfig CR to define a managed cluster is deprecated and will be removed in a future release. The ClusterInstance CR provides a more unified and generic approach to defining clusters and is the preferred method for managing cluster deployments in the GitOps ZTP workflow.
Using the siteconfig-converter tool, you can convert SiteConfig CRs to ClusterInstance CRs and then incrementally migrate one or more clusters at a time. Existing and new pipelines run in parallel, so you can migrate clusters in a controlled, phased manner and without downtime.
The siteconfig-converter tool does not convert SiteConfig CRs that use the deprecated spec.clusters.extraManifestPath field.
For more information, see Migrating from SiteConfig CRs to ClusterInstance CRs.
1.3.3. etcd
With this update, the Cluster etcd Operator introduces alert levels for the etcdDatabaseQuotaLowSpace alert, offering administrators timely notifications about low etcd quota usage. This proactive alert system aims to prevent API server instability and allows for effective resource management in managed OpenShift clusters. The alert levels are info, warning, and critical, providing a more granular approach to monitoring etcd quota usage, which results in dynamic etcd quota management and improved overall cluster performance.
1.3.3.1. Configuring a local arbiter node
You can configure an OpenShift Container Platform cluster with two control plane nodes and one local arbiter node to retain high availability (HA) while reducing infrastructure costs for your cluster.
A local arbiter node is a lower-cost, co-located machine that participates in control plane quorum decisions. Unlike a standard control plane node, the arbiter node does not run the full set of control plane services. You can use this configuration to maintain HA in your cluster with only two fully provisioned control plane nodes instead of three.
This feature is now Generally Available.
For more information, see Configuring a local arbiter node.
1.3.3.2. Configuring a two-node OpenShift cluster with fencing (Technology Preview)
A two-node OpenShift cluster with fencing provides high availability (HA) with a reduced hardware footprint. This configuration is designed for distributed or edge environments where deploying a full three-node control plane cluster is not practical.
A two-node cluster does not include compute nodes. The two control plane machines run user workloads in addition to managing the cluster.
You can deploy a two-node OpenShift cluster with fencing by using either the user-provisioned infrastructure method or the installer-provisioned infrastructure method.
For more information, see Preparing to install a two-node OpenShift cluster with fencing.
1.3.4. Extensions (OLM v1)
1.3.4.1. Deploying cluster extensions that use webhooks (Technology Preview)
With this release, you can deploy cluster extensions that use webhooks on clusters with the TechPreviewNoUpgrade feature set enabled.
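For example, you can enable the TechPreviewNoUpgrade feature set by patching the cluster FeatureGate resource. Note that enabling this feature set cannot be undone and prevents minor version updates, so use it only on non-production clusters:

$ oc patch featuregate cluster --type merge -p '{"spec":{"featureSet":"TechPreviewNoUpgrade"}}'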
For more information, see Supported extensions.
1.3.5. Hosted control planes
Because hosted control planes release asynchronously from OpenShift Container Platform, they have their own release notes. For more information, see Hosted control planes release notes.
1.3.6. IBM Power
The IBM Power® release on OpenShift Container Platform 4.20 adds improvements and new capabilities to OpenShift Container Platform components.
This release introduces support for the following features on IBM Power:
- Enable accelerators on IBM Power®
1.3.7. IBM Z and IBM LinuxONE
The IBM Z® and IBM® LinuxONE release on OpenShift Container Platform 4.20 adds improvements and new capabilities to OpenShift Container Platform components.
This release introduces support for the following features on IBM Z® and IBM® LinuxONE:
- Enable accelerators on IBM Z®
1.3.8. IBM Power, IBM Z, and IBM LinuxONE support matrix
Starting in OpenShift Container Platform 4.14, Extended Update Support (EUS) is extended to the IBM Power® and the IBM Z® platform. For more information, see the OpenShift EUS Overview.
Feature | IBM Power® | IBM Z® and IBM® LinuxONE |
---|---|---|
Cloning | Supported | Supported |
Expansion | Supported | Supported |
Snapshot | Supported | Supported |
Feature | IBM Power® | IBM Z® and IBM® LinuxONE |
---|---|---|
Bridge | Supported | Supported |
Host-device | Supported | Supported |
IPAM | Supported | Supported |
IPVLAN | Supported | Supported |
Feature | IBM Power® | IBM Z® and IBM® LinuxONE |
---|---|---|
Adding compute nodes to on-premise clusters using OpenShift CLI (oc) | Supported | Supported |
Alternate authentication providers | Supported | Supported |
Agent-based Installer | Supported | Supported |
Assisted Installer | Supported | Supported |
Automatic Device Discovery with Local Storage Operator | Unsupported | Supported |
Automatic repair of damaged machines with machine health checking | Unsupported | Unsupported |
Cloud controller manager for IBM Cloud® | Supported | Unsupported |
Controlling overcommit and managing container density on nodes | Unsupported | Unsupported |
CPU manager | Supported | Supported |
Cron jobs | Supported | Supported |
Descheduler | Supported | Supported |
Egress IP | Supported | Supported |
Encrypting data stored in etcd | Supported | Supported |
FIPS cryptography | Supported | Supported |
Helm | Supported | Supported |
Horizontal pod autoscaling | Supported | Supported |
Hosted control planes | Supported | Supported |
IBM Secure Execution | Unsupported | Supported |
Installer-provisioned Infrastructure Enablement for IBM Power® Virtual Server | Supported | Unsupported |
Installing on a single node | Supported | Supported |
IPv6 | Supported | Supported |
Monitoring for user-defined projects | Supported | Supported |
Multi-architecture compute nodes | Supported | Supported |
Multi-architecture control plane | Supported | Supported |
Multipathing | Supported | Supported |
Network-Bound Disk Encryption - External Tang Server | Supported | Supported |
Non-volatile memory express drives (NVMe) | Supported | Unsupported |
nx-gzip for Power10 (Hardware Acceleration) | Supported | Unsupported |
oc-mirror plugin | Supported | Supported |
OpenShift CLI (oc) | Supported | Supported |
Operator API | Supported | Supported |
OpenShift Virtualization | Unsupported | Supported |
OVN-Kubernetes, including IPsec encryption | Supported | Supported |
PodDisruptionBudget | Supported | Supported |
Precision Time Protocol (PTP) hardware | Unsupported | Unsupported |
Red Hat OpenShift Local | Unsupported | Unsupported |
Scheduler profiles | Supported | Supported |
Secure Boot | Unsupported | Supported |
Stream Control Transmission Protocol (SCTP) | Supported | Supported |
Support for multiple network interfaces | Supported | Supported |
The | Supported | Unsupported |
Three-node cluster support | Supported | Supported |
Topology Manager | Supported | Unsupported |
z/VM Emulated FBA devices on SCSI disks | Unsupported | Supported |
4K FCP block device | Supported | Supported |
Feature | IBM Power® | IBM Z® and IBM® LinuxONE |
---|---|---|
cert-manager Operator for Red Hat OpenShift | Supported | Supported |
Cluster Logging Operator | Supported | Supported |
Cluster Resource Override Operator | Supported | Supported |
Compliance Operator | Supported | Supported |
Cost Management Metrics Operator | Supported | Supported |
File Integrity Operator | Supported | Supported |
HyperShift Operator | Supported | Supported |
IBM Power® Virtual Server Block CSI Driver Operator | Supported | Unsupported |
Ingress Node Firewall Operator | Supported | Supported |
Local Storage Operator | Supported | Supported |
MetalLB Operator | Supported | Supported |
Network Observability Operator | Supported | Supported |
NFD Operator | Supported | Supported |
NMState Operator | Supported | Supported |
OpenShift Elasticsearch Operator | Supported | Supported |
Vertical Pod Autoscaler Operator | Supported | Supported |
Feature | IBM Power® | IBM Z® and IBM® LinuxONE |
---|---|---|
Persistent storage using iSCSI | Supported [1] | Supported [1],[2] |
Persistent storage using local volumes (LSO) | Supported [1] | Supported [1],[2] |
Persistent storage using hostPath | Supported [1] | Supported [1],[2] |
Persistent storage using Fibre Channel | Supported [1] | Supported [1],[2] |
Persistent storage using Raw Block | Supported [1] | Supported [1],[2] |
Persistent storage using EDEV/FBA | Supported [1] | Supported [1],[2] |
- Persistent shared storage must be provisioned by using either Red Hat OpenShift Data Foundation or other supported storage protocols.
- Persistent non-shared storage must be provisioned by using local storage, such as iSCSI, FC, or by using LSO with DASD, FCP, or EDEV/FBA.
1.3.9. Insights Operator
1.3.9.1. Support for obtaining virt-launcher logs across the cluster
With this release, command line logs from virt-launcher pods can be collected across a Kubernetes cluster. JSON-encoded logs are saved at the path namespaces/<namespace-name>/pods/<pod-name>/virt-launcher.json, which facilitates troubleshooting and debugging of virtual machines.
1.3.10. Installation and update
1.3.10.1. Changing the CVO log level (Technology Preview)
With this release, the Cluster Version Operator (CVO) log level verbosity can be changed by the cluster administrator.
For more information, see Changing CVO log level.
1.3.10.2. Installing a cluster on VMware vSphere with multiple network interface controllers (Generally Available)
OpenShift Container Platform 4.18 enabled you to install a VMware vSphere cluster with multiple network interface controllers (NICs) for a node as a Technology Preview feature. This feature is now Generally Available.
For more information, see Configuring multiple NICs.
For an existing vSphere cluster, you can add multiple subnets by using compute machine sets.
1.3.10.3. Installing a cluster on Google Cloud into a shared VPC specifying a DNS private zone in a third project
With this release, you can specify the location of a DNS private zone when installing a cluster on Google Cloud into a shared VPC. The private zone can be located in a service project that is distinct from the host project or main service project.
For more information, see Additional Google Cloud configuration parameters.
1.3.10.4. Installing a cluster on Microsoft Azure with virtual network encryption
With this release, you can install a cluster on Azure using encrypted virtual networks. You are required to use Azure virtual machines that have the premiumIO parameter set to true. See Microsoft’s documentation about Creating a virtual network with encryption and Requirements and Limitations for more information.
1.3.10.5. Firewall requirements when installing a cluster that uses IBM Cloud Paks
With this release, if you install a cluster using IBM Cloud Paks, you must allow outbound access to icr.io and cp.icr.io on port 443. This access is required for IBM Cloud Pak container images. For more information, see Configuring your firewall.
1.3.10.6. Installing a cluster on Microsoft Azure using Intel TDX Confidential VMs
With this release, you can install a cluster on Azure using Intel-based Confidential VMs. The following machine sizes are now supported:
- DCesv5-series
- DCedsv5-series
- ECesv5-series
- ECedsv5-series
For more information, see Enabling confidential VMs.
1.3.10.7. Dedicated disk for etcd on Microsoft Azure (Technology Preview)
With this release, you can install your OpenShift Container Platform cluster on Azure with a dedicated data disk for etcd. This configuration attaches a separate managed disk to each control plane node and uses it only for etcd data, which can improve cluster performance and stability. This feature is available as a Technology Preview. For more information, see Configuring a dedicated disk for etcd.
1.3.10.8. Multi-architecture support for bare metal
With this release, you can install a bare-metal environment that supports multi-architecture capabilities. You can provision both x86_64 and aarch64 architectures from an existing x86_64 cluster by using virtual media, meaning you can manage a diverse hardware environment more efficiently.
For more information, see Configuring your cluster with multi-architecture compute machines.
1.3.10.9. Support for updating the host firmware components of NICs for bare metal
With this release, the HostFirmwareComponents resource for bare metal describes network interface controllers (NICs). To update NIC host firmware components, the server must support Redfish and must permit you to use Redfish to update NIC firmware.
For more information, see About the HostFirmwareComponents resource.
1.3.10.10. Required administrator acknowledgment when updating from OpenShift Container Platform 4.19 to 4.20
In OpenShift Container Platform 4.17, a previously removed Kubernetes API was inadvertently reintroduced. It has been removed again in OpenShift Container Platform 4.20.
Before a cluster can be updated from OpenShift Container Platform 4.19 to 4.20, a cluster administrator must manually provide acknowledgment. This safeguard helps to prevent update issues that could occur if workloads, tools, or other components still depend on the Kubernetes API that has been removed in OpenShift Container Platform 4.20.
Administrators must take the following actions before proceeding with the cluster update:
- Evaluate the cluster for the use of APIs that will be removed.
- Migrate the affected manifests, workloads, and API clients to use the supported API version.
- Provide the administrator acknowledgment that all necessary updates have been made.
All OpenShift Container Platform 4.19 clusters require this administrator acknowledgment before they can be updated to OpenShift Container Platform 4.20.
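The acknowledgment is typically provided by patching the admin-acks config map in the openshift-config namespace. The acknowledgment key shown here follows the pattern used in earlier releases and is an assumption; confirm the exact key reported for your cluster before running the command:

$ oc -n openshift-config patch cm admin-acks --type merge -p '{"data":{"ack-4.19-kube-1.33-api-removals-in-4.20":"true"}}'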
For more information, see Kubernetes API removals.
1.3.10.11. Using UUIDs for a Transit Gateway and Virtual Private Cloud (VPC)
Previously, when installing a cluster on IBM Power Virtual Server, you could only specify a name for an existing Transit Gateway or Virtual Private Cloud (VPC). As the uniqueness of names was not guaranteed, this could cause conflicts and installation failures. With this release, you can use Universally Unique Identifiers (UUIDs) for a Transit Gateway and VPC. By using unique identifiers, the installation program can unambiguously identify the correct Transit Gateway or VPC, which prevents naming conflicts and resolves the issue.
1.3.11. Machine Config Operator
1.3.11.1. Updated boot images for vSphere now supported (Technology Preview)
Updated boot images are now supported as a Technology Preview feature for VMware vSphere clusters. This feature allows you to configure your cluster to update the node boot image whenever you update your cluster. By default, the boot image in your cluster is not updated along with your cluster. For more information, see Updated boot images.
1.3.11.2. On-cluster image mode reboot improvements
The following machine configuration changes no longer cause a reboot of nodes with on-cluster custom layered images:
- Modifying the configuration files in the /var or /etc directory
- Adding or modifying a systemd service
- Changing SSH keys
- Removing mirroring rules from ICSP, ITMS, and IDMS objects
- Changing the trusted CA, by updating the user-ca-bundle configmap in the openshift-config namespace
For more information, see On-cluster image mode known limitations.
1.3.11.3. On-cluster image mode status reporting improvements
When image mode for OpenShift is configured, there are improvements to error reporting, including the following changes:
- In certain scenarios after the custom layered image has been built and pushed, errors could cause the build process to fail. If this happens, the MCO now reports the errors, and the machineosbuild object and builder pod are reported as failed.
- The oc describe mcp output has a new ImageBuildDegraded status field that reports if a custom layered image build has failed.
1.3.11.4. Setting the kernel type parameter is now supported on on-cluster image mode nodes
You can now use the kernelType parameter in a MachineConfig object on nodes with on-cluster custom layered images in order to install a realtime kernel on the node. Previously, on nodes with on-cluster custom layered images, the kernelType parameter was ignored. For more information, see Adding a real-time kernel to nodes.
1.3.11.5. Pinning images to nodes
In clusters with slow, unreliable connections to an image registry, you can use a PinnedImageSet object to pull the images in advance, before they are needed, and then associate those images with a machine config pool. This ensures that the images are available to the nodes in that pool when needed. The must-gather for the Machine Config Operator includes all PinnedImageSet objects in the cluster. For more information, see Pinning images to nodes.
1.3.11.6. Improved MCO state reporting is now generally available
The machine config nodes custom resource, which you can use to monitor the progress of machine configuration updates to nodes, is now generally available.
You can now view the status of updates to custom machine config pools in addition to the control plane and worker pools. The functionality for the feature has not changed. However, some of the information in the command output and in the status fields in the MachineConfigNode object has been updated. The must-gather for the Machine Config Operator now includes all MachineConfigNode objects in the cluster. For more information, see About checking machine config node status.
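For example, you can list the MachineConfigNode objects and their update progress with the following command; the exact output columns can vary by release:

$ oc get machineconfignodes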
1.3.11.7. Enabling direct host file system access with the new hostmount-anyuid-v2 SCC
This release includes a new security context constraint (SCC), named hostmount-anyuid-v2. This SCC provides the same features as the hostmount-anyuid SCC, but contains seLinuxContext: RunAsAny. This SCC was added because the hostmount-anyuid SCC was intended to allow trusted pods to access any paths on the host, but SELinux prevents containers from accessing most paths. The hostmount-anyuid-v2 SCC allows host file system access as any UID, including UID 0, and is intended to be used instead of the privileged SCC. Grant with caution.
1.3.12. Machine management
1.3.12.1. Additional AWS Capacity Reservation configuration options
On clusters that manage machines with the Cluster API, you can specify additional constraints to determine whether your compute machines use AWS capacity reservations. For more information, see Capacity Reservation configuration options.
1.3.12.2. Cluster autoscaler scale up delay
You can now configure a delay before the cluster autoscaler recognizes newly pending pods and schedules the pods to a new node by using the spec.scaleUp.newPodScaleUpDelay parameter in the ClusterAutoscaler CR. If the pod remains unscheduled after the delay, the cluster autoscaler can scale up a new node. This delay gives the cluster autoscaler additional time to locate an appropriate node or to wait for space to become available on an existing node. For more information, see Configuring the cluster autoscaler.
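A minimal sketch of the setting follows, assuming a 10-second delay is appropriate; other ClusterAutoscaler fields are omitted:

$ oc apply -f - <<EOF
apiVersion: autoscaling.openshift.io/v1
kind: ClusterAutoscaler
metadata:
  name: default
spec:
  scaleUp:
    newPodScaleUpDelay: 10s
EOF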
1.3.13. Monitoring
The in-cluster monitoring stack for this release includes the following new and modified features:
1.3.13.1. Updates to monitoring stack components and dependencies
This release includes the following version updates for in-cluster monitoring stack components and dependencies:
- Prometheus to 3.5.0
- Prometheus Operator to 0.85.0
- Metrics Server to 0.8.0
- Thanos to 0.39.2
- kube-state-metrics agent to 2.16.0
- prom-label-proxy to 0.12.0
1.3.13.2. Changes to alerting rules
Red Hat does not guarantee backward compatibility for recording rules or alerting rules.
- The expression for the AlertmanagerClusterFailedToSendAlerts alert has changed. The alert now evaluates the rate over a longer time period, from 5m to 15m.
1.3.13.3. Support for log verbosity configuration for Metrics Server
With this release, you can configure log verbosity for Metrics Server. You can set a numeric verbosity level to control the amount of logged information, where higher numbers increase the logging detail.
For more information, see Setting log levels for monitoring components.
1.3.14. Networking
1.3.14.1. Support for Gateway API Inference Extension
OpenShift Container Platform 4.20 updates Red Hat OpenShift Service Mesh to version 3.1.0, which now supports Red Hat OpenShift AI. This version update incorporates essential CVE fixes, resolves other bugs, and upgrades Istio to version 1.26.2 for improved security and performance. See the Service Mesh 3.1.0 release notes for more information.
1.3.14.2. Support for the BGP routing protocol
The Cluster Network Operator (CNO) now supports enabling Border Gateway Protocol (BGP) routing. With BGP, you can import and export routes to the underlying provider network and use multi-homing, link redundancy, and fast convergence. BGP configuration is managed with the FRRConfiguration custom resource (CR).
When upgrading from an earlier version of OpenShift Container Platform in which you installed the MetalLB Operator, you must manually migrate your custom frr-k8s configurations from the metallb-system namespace to the openshift-frr-k8s namespace. To move these CRs, complete the following steps:
- To create the openshift-frr-k8s namespace, enter the following command:

  $ oc create namespace openshift-frr-k8s

- To automate the migration, create a migrate.sh file with the migration script content and run it by entering the following command (an illustrative sketch of the script follows these steps):

  $ bash migrate.sh

- To verify that the migration succeeded, enter the following command:

  $ oc get frrconfigurations.frrk8s.metallb.io -n openshift-frr-k8s
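The content of the original migrate.sh script is not reproduced here. The following is a minimal illustrative sketch only, assuming that re-creating each FRRConfiguration CR in the new namespace is sufficient for your configuration; it requires the jq utility:

for name in $(oc get frrconfigurations.frrk8s.metallb.io -n metallb-system -o jsonpath='{.items[*].metadata.name}'); do
  # Strip server-managed metadata and re-create the CR in the openshift-frr-k8s namespace.
  oc get frrconfigurations.frrk8s.metallb.io "${name}" -n metallb-system -o json \
    | jq 'del(.metadata.resourceVersion, .metadata.uid, .metadata.creationTimestamp, .metadata.generation, .metadata.managedFields, .status) | .metadata.namespace = "openshift-frr-k8s"' \
    | oc apply -f -
done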
After the migration is complete, you can remove the FRR-K8s custom resources from the metallb-system namespace.
For more information, see About BGP routing.
1.3.14.3. Support for route advertisements for cluster user-defined networks (CUDNs) with Border Gateway Protocol (BGP)
With route advertisements enabled, the OVN-Kubernetes network plugin supports the direct advertisement of routes for pods and services associated with cluster user-defined networks (CUDNs) to the provider network. This feature provides the following benefits:
- Learns routes to pods dynamically
- Advertises routes dynamically
- Enables layer 3 notifications of EgressIP failovers in addition to the layer 2 ones based on gratuitous ARPs.
- Supports external route reflectors, which reduces the number of BGP connections required in large networks
For more information, see About route advertisements.
1.3.14.4. Preconfigured user-defined network endpoints only for use with Migration Toolkit for Virtualization (MTV) (Technology Preview)
Preconfigured user-defined network endpoints are available as a Technology Preview and controlled by the feature gate PreconfiguredUDNAddresses. You can now explicitly control the overlay network configuration, including the IP address, MAC address, and default gateway. This feature is available for Layer 2 as part of the ClusterUserDefinedNetwork (CUDN) custom resource (CR). Administrators can preconfigure endpoints to migrate KubeVirt virtual machines (VMs) without disruption. To enable the feature, use the new fields reservedSubnets, infrastructureSubnets, and defaultGatewayIPs, found in the CUDN CR. For more information about the configurations, see Additional configuration details for user-defined networks. Currently, static IP addresses are only supported for the ClusterUserDefinedNetworks CR and only for use with MTV.
1.3.14.5. Support for migrating a configured br-ex bridge to NMState
If you used the configure-ovs.sh shell script to set a br-ex bridge during cluster installation, you can migrate the br-ex bridge to NMState as a postinstallation task. For more information, see Migrating a configured br-ex bridge to NMState.
1.3.14.6. Configuring enhanced PTP logging
You can now configure enhanced log reduction for the PTP Operator to reduce the volume of logs generated by the linuxptp-daemon.
This feature provides a periodic summary of filtered logs, which is not available with basic log reduction. Optionally, you can set a specific interval for the summary logs and a threshold in nanoseconds for the master offset logs.
For more information, see Configuring enhanced PTP logging.
1.3.14.7. PTP ordinary clocks with added redundancy on AArch64 nodes (Technology Preview)
With this release, you can configure PTP ordinary clocks with added redundancy on AArch64 architecture nodes that use the following dual-port NICs only:
- NVIDIA ConnectX-7 series
- NVIDIA BlueField-3 series, in NIC mode
This feature is available as a Technology Preview. For more information, see Using dual-port NICs to improve redundancy for PTP ordinary clocks.
1.3.14.8. Load balancing configuration with bond CNI plugin (Technology Preview)
In this release, you can now specify the transmit hash policy for load balancing across the aggregated interfaces with the xmitHashPolicy parameter as part of the bond CNI plugin configuration. This feature is available as a Technology Preview.
For more information, see Configuration for a Bond CNI secondary network.
1.3.14.9. SR-IOV network management in application namespaces
With OpenShift Container Platform 4.20, you can now create and manage SR-IOV networks directly within your application namespaces. This new feature provides greater control over your network configurations and helps simplify your workflow.
Previously, creating an SR-IOV network required a cluster administrator to configure it for you. Now, you can manage these resources directly in your own namespace, which offers several key benefits:
- Increased autonomy and control: You can now create your own SriovNetwork objects, removing the need to involve a cluster administrator for network configuration tasks.
- Enhanced security: Managing resources within your own namespace improves security by providing better separation between applications and helps prevent unintentional misconfigurations.
- Simplified permissions: You can now simplify permissions and reduce operational overhead by using namespaced SR-IOV networks.
For more information, see Configuring namespaced SR-IOV resources.
1.3.14.10. Unnumbered BGP peering
With this release, OpenShift Container Platform includes unnumbered BGP peering. This was previously available as a Technology Preview feature. You can use the spec.interface field of the BGP peer custom resource to configure unnumbered BGP peering.
For more information, see Configuring the integration of MetalLB and FRR-K8s.
1.3.14.11. High-availability for pod-level bonding on SR-IOV networks (Technology Preview)
This Technology Preview feature introduces the PF Status Relay Operator. The Operator uses Link Aggregation Control Protocol (LACP) as a health check to detect upstream switch failures, enabling high availability for workloads that use pod-level bonding with SR-IOV network virtual functions (VF).
Without this feature, an upstream switch can fail while the underlying physical function (PF) still reports an up state. VFs attached to the PF also remain up, causing pods to send traffic to a dead endpoint and leading to packet loss.
The PF Status Relay Operator prevents this by monitoring the LACP status of the PF. When a failure is detected, the Operator forces the link state of the attached VFs down, triggering the pod’s bond to fail over to a backup path. This ensures the workload remains available and minimizes packet loss.
For more information, see High availability for pod-level bonds on SR-IOV networks.
1.3.14.12. Network policies for additional namespaces
With this release, OpenShift Container Platform deploys Kubernetes network policies to additional system namespaces to control ingress and egress traffic. It is anticipated that future releases might include network policies for additional system namespaces and Red Hat Operators.
1.3.14.13. Unassisted holdover for PTP devices (Technology Preview)
With this release, the PTP Operator provides unassisted holdover as a Technology Preview feature. When the upstream timing signal is lost, the PTP Operator automatically places PTP devices configured as either a boundary clock or a time slave clock into holdover mode. Automatic placement into holdover mode helps to maintain a continuous and stable time source for cluster nodes, minimizing time synchronization disruptions.
This feature is available only for nodes with Intel E810-XXVDA4T network interface cards.
For more information, see Configuring PTP devices.
1.3.15. Nodes
1.3.15.1. sigstore support is now generally available
Support for sigstore ClusterImagePolicy and ImagePolicy objects is now generally available. The API version is now config.openshift.io/v1. For more information, see Manage secure signatures with sigstore.
The default openshift cluster image policy is Technology Preview and is active only in clusters that have enabled Technology Preview features.
1.3.16. Support for sigstore bring your own PKI (BYOPKI) image validation
You can now use sigstore ClusterImagePolicy and ImagePolicy objects to generate BYOPKI configuration in the policy.json file, enabling you to verify image signatures with BYOPKI. For more information, see About cluster and image policy parameters.
1.3.16.1. Linux user namespace support is now generally available
Support for deploying pods and containers into Linux user namespaces is now generally available and enabled by default. Running pods and containers in individual user namespaces can mitigate several vulnerabilities that a compromised container can pose to other pods and the node itself. This change also includes two new security context constraints, restricted-v3 and nested-container, that are specifically designed for use with user namespaces. You can also configure the /proc file system in pods as unmasked. For more information, see Running pods in Linux user namespaces.
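A minimal sketch of a pod that opts in to a user namespace by setting hostUsers: false; the pod name and image are illustrative:

$ oc apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: userns-demo
spec:
  hostUsers: false
  containers:
  - name: app
    image: registry.access.redhat.com/ubi9/ubi-minimal:latest
    command: ["sleep", "infinity"]
EOF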
1.3.16.2. Adjust pod resource levels without pod disruption
By using the in-place pod resizing feature, you can apply a resize policy to change the CPU and memory resources for containers within a running pod without re-creating or restarting the pod. For more information, see Manually adjust pod resource levels.
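A minimal sketch of a per-container resize policy, assuming you want CPU changes applied without a restart and memory changes to restart the container; the pod name, image, and resource values are illustrative:

$ oc apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: resize-demo
spec:
  containers:
  - name: app
    image: registry.access.redhat.com/ubi9/ubi-minimal:latest
    command: ["sleep", "infinity"]
    resizePolicy:
    - resourceName: cpu
      restartPolicy: NotRequired
    - resourceName: memory
      restartPolicy: RestartContainer
    resources:
      requests:
        cpu: 100m
        memory: 128Mi
EOF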
1.3.16.3. Mounting an OCI image into a pod
You can use an image volume to mount an Open Container Initiative (OCI)-compliant container image or artifact directly into a pod. For more information, see Mounting an OCI image into a pod.
1.3.16.4. Allocating specific GPUs to pods (Technology Preview)
You can now enable pods to request GPUs based on specific device attributes, such as product name, GPU memory capacity, compute capability, vendor name, and driver version. These attributes are exposed by a third-party Dynamic Resource Allocation (DRA) resource driver that you install. For more information, see Allocating GPUs to pods.
1.3.17. OpenShift CLI (oc)
1.3.17.1. Introducing the oc adm upgrade recommend command (General Availability)
Formerly Technology Preview and now Generally Available, the oc adm upgrade recommend command allows system administrators to perform a pre-update check on their OpenShift Container Platform clusters using the command line interface (CLI). The pre-update check helps identify potential issues, enabling users to address them before initiating an update. By running the precheck command and inspecting the output, users can prepare for updating their cluster and make informed decisions about when to start an update.
For more information, see Updating a cluster by using the CLI.
1.3.17.2. Introducing the oc adm upgrade status command (General Availability)
Formerly Technology Preview and now Generally Available, the oc adm upgrade status command allows cluster administrators to get high-level summary information about the state of their OpenShift Container Platform cluster update using the command line interface (CLI). Three types of information are provided when you enter the command: control plane information, worker node information, and health insights.
The command is not currently supported on Hosted Control Plane (HCP) clusters.
For more information, see Updating a cluster by using the CLI.
1.3.17.3. oc-mirror v2 mirrors container images in environment variables of deployment templates
Operand images, dynamically deployed by Operator controllers at runtime, are typically referenced by environment variables within the controller’s deployment template.
Before OpenShift Container Platform 4.20, while oc-mirror plugin v2 could access these environment variables, it attempted to mirror all values, including non-image references such as log levels, which led to failures. With this update, OpenShift Container Platform identifies and mirrors only the container images referenced in these environment variables.
For more information, see ImageSet configuration parameters for oc-mirror plugin v2.
1.3.18. Operator development
1.3.18.1. Supported Operator base images
With this release, the following base images for Operator projects are updated for compatibility with OpenShift Container Platform 4.20. The runtime functionality and configuration APIs for these base images are supported for bug fixes and for addressing CVEs.
- The base image for Ansible-based Operator projects
- The base image for Helm-based Operator projects
For more information, see Updating the base image for existing Ansible- or Helm-based Operator projects for OpenShift Container Platform 4.19 and later (Red Hat Knowledgebase).
1.3.19. Operator lifecycle
1.3.19.1. Red Hat Operator catalogs moved from OperatorHub to the software catalog in the console
With this release, the Red Hat-provided Operator catalogs have moved from OperatorHub to the software catalog and the Operators navigation item is renamed to Ecosystem in the console. The unified software catalog presents Operators, Helm charts, and other installable content in the same console view.
- To access the Red Hat-provided Operator catalogs in the console, select Ecosystem → Software Catalog.
- To manage, update, and remove installed Operators, select Ecosystem → Installed Operators.
Currently, the console only supports managing Operators by using Operator Lifecycle Manager (OLM) Classic. If you want to use OLM v1 to install and manage cluster extensions, such as Operators, you must use the CLI.
To manage the default or custom catalog sources, you still interact with the OperatorHub custom resource (CR) in the console or CLI.
1.3.20. Postinstallation configuration
1.3.20.1. Enabling Amazon Web Services Security Token Service (STS) on an existing cluster
With this release, you can configure your AWS OpenShift Container Platform cluster to use STS even if you did not do so during installation.
For more information, see Enabling AWS Security Token Service (STS) on an existing cluster.
1.3.21. Red Hat Enterprise Linux CoreOS (RHCOS)
1.3.21.1. Investigate kernel crashes with kdump (General Availability)
With this update, kdump is now Generally Available for all supported architectures, including x86_64, arm64, s390x, and ppc64le. This enhancement enables users to diagnose and resolve kernel problems more efficiently.
1.3.21.2. Ignition update to version 2.20.0
RHCOS introduces version 2.20.0 of Ignition. This enhancement supports partitioning disks with mounted partitions using the partx utility, which is now included with dracut module installations. Additionally, this update adds support for Proxmox Virtual Environment.
1.3.21.3. Butane update to version 0.23.0
RHCOS now includes Butane version 0.23.0.
1.3.21.4. Afterburn update to version 5.7.0
RHCOS now includes Afterburn version 5.7.0. This update adds support for Proxmox Virtual Environment.
1.3.21.5. coreos-installer update to version 0.23.0
With this release, the coreos-installer utility is updated to version 0.23.0.
1.3.22. Scalability and performance
1.3.22.1. Configuring NUMA-aware scheduler replicas and high availability (Technology Preview)
In OpenShift Container Platform 4.20, the NUMA Resources Operator automatically enables high availability (HA) mode by default. In this mode, the NUMA Resources Operator creates one scheduler replica for each control-plane node in the cluster to ensure redundancy. This default behavior occurs if the spec.replicas field is not specified in the NUMAResourcesScheduler custom resource. Alternatively, you can explicitly set a specific number of scheduler replicas to override the default HA behavior or disable the scheduler entirely by setting the spec.replicas field to 0. The maximum number of replicas is 3, even if the number of control plane nodes exceeds 3.
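For example, to explicitly request two scheduler replicas, you can patch the NUMAResourcesScheduler custom resource; the instance name numaresourcesscheduler used here is an assumption and might differ in your cluster:

$ oc patch numaresourcesscheduler numaresourcesscheduler --type merge -p '{"spec":{"replicas":2}}'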
For more information, see Managing high availability (HA) for the NUMA-aware scheduler.
1.3.22.2. NUMA Resources Operator now supports schedulable control plane nodes
With this release, the NUMA Resources Operator can now manage control plane nodes that are configured as schedulable. This capability allows you to deploy topology-aware workloads on control plane nodes, which is especially useful in resource-constrained environments like compact clusters.
This enhancement helps the NUMA Resources Operator schedule your NUMA-aware pods on the node with the most suitable NUMA topology, even on control plane nodes.
For more information, see NUMA Resources Operator support for schedulable control-plane nodes.
1.3.22.3. Receive Packet Steering (RPS) is now disabled by default
With this release, Receive Packet Steering (RPS) is no longer configured when Performance Profile is applied. The RPS configuration affects containers that perform networking system calls, such as send, directly within latency-sensitive threads. To avoid latency impacts when RPS is not configured, move networking calls to helper threads or processes.
The previous RPS configuration resolved latency issues at the expense of overall pod kernel networking performance. The current default configuration promotes transparency by requiring developers to address the underlying application design instead of obscuring performance impacts.
To revert to the previous behavior, add the performance.openshift.io/enable-rps annotation to the PerformanceProfile manifest:
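For example, the following command adds the annotation to an existing profile; the profile name is illustrative, and the value true is an assumption because the documented value is not included in this extract:

$ oc annotate performanceprofile example-profile performance.openshift.io/enable-rps=true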
This action restores the prior functionality at the cost of globally reducing networking performance for all pods.
1.3.22.4. Performance tuning for worker nodes with Intel Sierra Forest CPUs
With this release, you can use the PerformanceProfile custom resource to configure worker nodes on machines equipped with Intel Sierra Forest CPUs. These CPUs are supported when configured with a single NUMA domain (NPS=1).
1.3.22.5. Performance tuning for worker nodes with AMD Turin CPUs
With this release, you can use the PerformanceProfile custom resource to configure worker nodes on machines equipped with AMD Turin CPUs. These CPUs are fully supported when configured with a single NUMA domain (NPS=1).
1.3.22.6. Hitless TLS certificate rotation for the Kubernetes API
This new feature enhances TLS certificate rotations in OpenShift Container Platform, ensuring 95% expected cluster availability. It is particularly beneficial for high-transaction-rate clusters and single-node OpenShift deployments, ensuring seamless operation even under heavy loads.
1.3.22.7. Additional cluster latency requirements for etcd
With this update, the etcd product documentation is updated to include additional requirements for reducing OpenShift Container Platform cluster latency. This update clarifies the prerequisites and setup procedures for using etcd, resulting in an improved user experience. As a result, this feature introduces support for Transport Layer Security (TLS) 1.3 in etcd, which enhances security and performance for data transmission, and enables etcd to comply with the latest security standards, reducing potential vulnerabilities. The improved encryption ensures more secure communication between etcd and its clients. For more information, see Cluster latency requirements for etcd.
1.3.23. Storage
1.3.23.1. NetworkPolicy support for the Secrets Store CSI Driver Operator
The Secrets Store CSI Driver Operator version 4.20 is now based on the upstream v1.5.2 release. The Secrets Store CSI Driver Operator now applies Kubernetes NetworkPolicy objects during installation to restrict network communication to only the required components.
1.3.23.2. Volume populators are generally available
The volume populators feature allows you to create pre-populated volumes.
OpenShift Container Platform 4.20 introduces a new field, dataSourceRef, for volume populator functionality that expands the objects that can be used as a data source for pre-population of volumes, from only persistent volume claims (PVCs) and snapshots, to any appropriate custom resource (CR).
OpenShift Container Platform now ships volume-data-source-validator, which reports events on PVCs that use a volume populator without a corresponding VolumePopulator instance. Previous OpenShift Container Platform versions did not require VolumePopulator instances, so if you are upgrading from 4.12, or later, you might receive events about unregistered populators. If you installed volume-data-source-validator yourself previously, you can remove your version.
The volume populators feature, which was introduced in OpenShift Container Platform 4.12 as a Technology Preview feature, is now supported as generally available.
Volume population is enabled by default. However, OpenShift Container Platform does not ship with any volume populators.
For more information about volume populators, see Volume populators.
1.3.23.3. Performance plus for Azure Disk is generally available
By enabling performance plus, the input/output operations per second (IOPS) and throughput limits can be increased for the following types of disks that are 513 GiB and larger:
- Azure Premium solid-state drives (SSD)
- Standard SSDs
- Standard hard disk drives (HDD)
This feature is generally available in OpenShift Container Platform 4.20.
For more information about performance plus, see Performance plus for Azure Disk.
1.3.23.4. Changed block tracking (Developer Preview)
Changed block tracking enables efficient and incremental backups and disaster recovery for persistent volumes (PVs) managed by Container Storage Interface (CSI) drivers that support this feature.
Changed block tracking allows consumers to request a list of blocks that have changed between two snapshots, which is useful for backup solution vendors. By backing up only the changed blocks, rather than entire volumes, backup processes are more efficient.
Changed block tracking is a Developer Preview feature only. Developer Preview features are not supported by Red Hat in any way and are not functionally complete or production-ready. Do not use Developer Preview features for production or business-critical workloads. Developer Preview features provide early access to upcoming product features in advance of their possible inclusion in a Red Hat product offering, enabling customers to test functionality and provide feedback during the development process. These features might not have any documentation, are subject to change or removal at any time, and testing is limited. Red Hat might provide ways to submit feedback on Developer Preview features without an associated SLA.
For more information about changed block tracking, see this KB article.
1.3.23.5. AWS EFS One Zone volume support is generally available
OpenShift Container Platform 4.20 introduces AWS Elastic File Storage (EFS) One Zone volume support as generally available. With this feature, if file system Domain Name System (DNS) resolution fails, the EFS CSI driver can fall back to mount targets. A mount target serves as a network endpoint that allows AWS EC2 instances or other AWS compute instances within a Virtual Private Cloud (VPC) to connect to, and mount, an EFS file system.
For more information about One Zone, see Support for One Zone.
1.3.23.6. Configuring fsGroupChangePolicy and seLinuxChangePolicy at namespace and pod level
Certain operations of a volume can cause pod startup delays, which might cause pod timeouts.
fsGroup: For volumes with many files, pod startup timeouts can occur because, by default, OpenShift Container Platform recursively changes ownership and permissions for the contents of each volume to match the fsGroup specified in a pod’s securityContext when that volume is mounted. This can be time consuming, slowing pod startup. You can use the fsGroupChangePolicy parameter inside a securityContext to control the way that OpenShift Container Platform checks and manages ownership and permissions for a volume.
Changing this parameter at the pod level was introduced in OpenShift Container Platform 4.10. In 4.20, you can set this parameter at the namespace level, in addition to the pod level, as a generally available feature.
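A minimal sketch of the pod-level setting, assuming the OnRootMismatch policy is appropriate for the workload; the pod, image, and claim names are illustrative:

$ oc apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: fsgroup-demo
spec:
  securityContext:
    fsGroup: 2000
    fsGroupChangePolicy: OnRootMismatch
  containers:
  - name: app
    image: registry.access.redhat.com/ubi9/ubi-minimal:latest
    command: ["sleep", "infinity"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: example-pvc
EOF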
SELinux: SELinux (Security-Enhanced Linux) is a security mechanism that assigns security labels (contexts) to all objects (files, processes, network ports, and so on) on a system. These labels determine what a process can access. When a pod starts, the container runtime recursively relabels all files on a volume to match a pod’s SELinux context. For volumes with a lot of files, this can significantly increase pod startup times. The mount option approach avoids recursively relabeling all files by attempting to mount the volume with the correct SELinux label directly, by using the -o context mount option, which helps to avoid pod timeout problems.
RWOP and the SELinux mount option: ReadWriteOncePod (RWOP) persistent volumes use the SELinux mount feature by default. The mount option was introduced in OpenShift Container Platform 4.15 as a Technology Preview feature and became generally available in 4.16.
RWO, RWX, and the SELinux mount option: ReadWriteOnce (RWO) and ReadWriteMany (RWX) volumes use recursive relabeling by default. The mount option for RWO and RWX volumes was introduced in OpenShift Container Platform 4.17 as a Developer Preview feature and is now supported in 4.20 as a Technology Preview feature.
In a future OpenShift Container Platform version, RWO and RWX volumes will use the mount option by default.
To assist you with the upcoming move to the mount option default, OpenShift Container Platform 4.20 reports SELinux-related conflicts when creating pods, and on running pods, to make you aware of potential conflicts, and to help you resolve them. For more information about this reporting, see this KB article.
If you are unable to resolve the SELinux-related conflicts, you can proactively opt out of the future move to the mount option default for selected pods or namespaces.
In OpenShift Container Platform 4.20, you can evaluate the mount option feature for RWO and RWX volumes as a Technology Preview feature.
RWO/RWX SELinux mount is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
For more information about fsGroup, see Reducing pod timeouts using fsGroup.
For more information about SELinux, see Reducing pod timeouts using seLinuxChangePolicy.
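The following is a minimal sketch of how the two policies might be combined in a pod specification. The pod name, image, and claim name are placeholders, and the seLinuxChangePolicy field shown mirrors the upstream Kubernetes field of the same name; see the linked documentation for the values supported in this release.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-app              # placeholder name
spec:
  securityContext:
    fsGroup: 2000
    # Skip the recursive ownership change when the volume root already
    # matches fsGroup, shortening startup for volumes with many files.
    fsGroupChangePolicy: OnRootMismatch
    # Request -o context style SELinux labeling instead of recursive
    # relabeling; the value shown follows the upstream Kubernetes field.
    seLinuxChangePolicy: MountOption
  containers:
  - name: app
    image: registry.example.com/app:latest   # placeholder image
    volumeMounts:
    - name: data
      mountPath: /var/lib/data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: data-pvc        # placeholder claim
```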
1.3.23.7. Always honor persistent volume reclaim policy is generally available
Before OpenShift Container Platform 4.18, the persistent volume (PV) reclaim policy was not always applied.
For a bound PV and persistent volume claim (PVC) pair, the ordering of PV-PVC deletion determined whether the PV delete reclaim policy was applied or not. The PV applied the reclaim policy if the PVC was deleted before deleting the PV. However, if the PV was deleted before deleting the PVC, then the reclaim policy was not applied. As a result of that behavior, the associated storage asset in the external infrastructure was not removed.
Starting with OpenShift Container Platform 4.18, the PV reclaim policy is always applied, initially as a Technology Preview feature. With OpenShift Container Platform 4.20, this feature is generally available.
For more information, see Reclaim policy for persistent volumes.
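As an illustration, the reclaim policy is set in the PV spec (for dynamically provisioned volumes it is inherited from the storage class). The PV name, CSI driver, and volume handle below are placeholders:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv                      # placeholder name
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  # With this feature, the Delete policy is honored even when the PV is
  # deleted before its bound PVC, so the backing storage asset is removed.
  persistentVolumeReclaimPolicy: Delete
  csi:
    driver: example.csi.vendor.io       # placeholder CSI driver
    volumeHandle: vol-0123456789abcdef  # placeholder volume ID
```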
1.3.23.8. Manila CSI driver allows multiple CIDRs when creating NFS volumes is generally available
By default, OpenShift Container Platform creates Manila storage classes that provide access to all IPv4 clients, with the possibility of restricting access to a single IP address or subnet. In OpenShift Container Platform 4.20, you can limit client access by defining custom storage classes that specify multiple client IP addresses or subnets through the nfs-ShareClient parameter.
This feature is generally available in OpenShift Container Platform 4.20.
For more information, see Customizing Manila share access rules.
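A sketch of such a custom storage class follows. The storage class name, provisioner, share type, and the comma-separated value format for nfs-ShareClient are assumptions for illustration only; see the linked procedure for the exact syntax:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-manila-nfs-restricted      # placeholder name
provisioner: manila.csi.openstack.org  # assumed Manila CSI provisioner
parameters:
  type: default                        # Manila share type (assumed)
  # Limit share access to specific client subnets instead of all IPv4
  # clients; the comma-separated CIDR list shown here is illustrative.
  nfs-ShareClient: "10.0.0.0/24,192.168.10.0/24"
```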
1.3.23.9. AWS EFS cross account procedure revision
To enhance usability and provide both Security Token Service (STS) and non-STS support, the Amazon Web Services (AWS) Elastic File System (EFS) cross-account support procedure has been revised.
To view the revised procedure, see AWS EFS cross account support.
1.3.24. Web console
1.3.24.1. Support for custom application icons in the Import flow
Before this update, the Container image form flow provided only a limited set of predefined icons for applications.
With this update, you can add custom icons when you import applications through the Container image form. For existing applications, apply the app.openshift.io/custom-icon annotation to add a custom icon to the corresponding Topology node.
As a result, you can better identify applications in the Topology view and organize your projects more clearly.
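For an existing workload, the annotation might be applied as in the following sketch. The Deployment name, image, and icon URL are placeholders, and the expected value format for the annotation is an assumption; see the web console documentation for details:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
  annotations:
    # Placeholder value; point this at the icon image the Topology
    # node should display.
    app.openshift.io/custom-icon: "https://example.com/icons/example-app.svg"
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
      - name: example-app
        image: registry.example.com/example-app:latest   # placeholder image
```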
1.4. Notable technical changes
1.4.1. MachineOSConfig naming changes
The name of the MachineOSConfig object used with on-cluster image mode must now be the same as the machine config pool where you want to deploy the custom layered image. Previously, you could use any name. This change was made to prevent attempts to use multiple MachineOSConfig objects with each machine config pool.
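A minimal sketch of the naming requirement follows, assuming a machine config pool named worker. The apiVersion and the omitted build and push fields are assumptions based on the on-cluster image mode API; only the naming rule is the point illustrated here:

```yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineOSConfig
metadata:
  # The object name must match the machine config pool it targets.
  name: worker
spec:
  machineConfigPool:
    name: worker
  # Image build and push settings for the custom layered image are
  # omitted from this sketch.
```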
1.4.2. oc-mirror plugin v2 verifies credentials and certificates before mirroring operations
With this update, the oc-mirror plugin v2 now verifies information such as registry credentials, DNS name, and SSL certificates before populating the cache and beginning mirroring operations. This prevents users from discovering certain problems only after the cache is populated and mirroring has begun.
1.5. Deprecated and removed features
1.5.1. Images deprecated and removed features
| Feature | 4.18 | 4.19 | 4.20 |
| --- | --- | --- | --- |
| Cluster Samples Operator | Deprecated | Deprecated | Deprecated |
1.5.2. Installation deprecated and removed features
| Feature | 4.18 | 4.19 | 4.20 |
| --- | --- | --- | --- |
| | Deprecated | Deprecated | Deprecated |
| CoreDNS wildcard queries for the | Deprecated | Deprecated | Deprecated |
| | Deprecated | Deprecated | Deprecated |
| | Deprecated | Deprecated | Deprecated |
| | Deprecated | Deprecated | Deprecated |
| Package-based RHEL compute machines | Deprecated | Removed | Removed |
| | Deprecated | Deprecated | Deprecated |
| Installing a cluster on AWS with compute nodes in AWS Outposts | Deprecated | Deprecated | Deprecated |
1.5.3. Machine Management deprecated and removed features
| Feature | 4.18 | 4.19 | 4.20 |
| --- | --- | --- | --- |
| Confidential Computing with AMD Secure Encrypted Virtualization for Google Cloud | General Availability | General Availability | Deprecated |
1.5.4. Networking deprecated and removed features
| Feature | 4.18 | 4.19 | 4.20 |
| --- | --- | --- | --- |
| iptables | Deprecated | Deprecated | Deprecated |
1.5.5. Node deprecated and removed features
| Feature | 4.18 | 4.19 | 4.20 |
| --- | --- | --- | --- |
| | Deprecated | Deprecated | Deprecated |
| Kubernetes topology label | Deprecated | Deprecated | Deprecated |
| Kubernetes topology label | Deprecated | Deprecated | Deprecated |
| cgroup v1 | Deprecated | Removed | Removed |
1.5.6. OpenShift CLI (oc) deprecated and removed features
| Feature | 4.18 | 4.19 | 4.20 |
| --- | --- | --- | --- |
| oc-mirror plugin v1 | Deprecated | Deprecated | Deprecated |
| Docker v2 registries | General Availability | General Availability | Deprecated |
1.5.7. Operator lifecycle and development deprecated and removed features
| Feature | 4.18 | 4.19 | 4.20 |
| --- | --- | --- | --- |
| Operator SDK | Deprecated | Removed | Removed |
| Scaffolding tools for Ansible-based Operator projects | Deprecated | Removed | Removed |
| Scaffolding tools for Helm-based Operator projects | Deprecated | Removed | Removed |
| Scaffolding tools for Go-based Operator projects | Deprecated | Removed | Removed |
| Scaffolding tools for Hybrid Helm-based Operator projects | Removed | Removed | Removed |
| Scaffolding tools for Java-based Operator projects | Removed | Removed | Removed |
| SQLite database format for Operator catalogs | Deprecated | Deprecated | Deprecated |
1.5.8. Storage deprecated and removed features
| Feature | 4.18 | 4.19 | 4.20 |
| --- | --- | --- | --- |
| Shared Resources CSI Driver Operator | Removed | Removed | Removed |
1.5.9. Web console deprecated and removed features
| Feature | 4.18 | 4.19 | 4.20 |
| --- | --- | --- | --- |
| | General Availability | Deprecated | Deprecated |
| Patternfly 4 | Deprecated | Removed | Removed |
1.5.10. Workloads deprecated and removed features
| Feature | 4.18 | 4.19 | 4.20 |
| --- | --- | --- | --- |
| | Deprecated | Deprecated | Deprecated |
1.5.11. Deprecated features
1.5.11.1. Deprecation of AMD Secure Encrypted Virtualization
The use of Confidential Computing with AMD Secure Encrypted Virtualization (AMD SEV) on Google Cloud has been deprecated and might be removed in a future release.
You can use AMD Secure Encrypted Virtualization Secure Nested Paging (AMD SEV-SNP) instead.
1.5.11.2. Docker v2 registries deprecated
Support for Docker v2 registries is deprecated and is planned for removal in a future release. A registry that supports the Open Container Initiative (OCI) specification will be required for all mirroring operations in a future release. Additionally, oc-mirror plugin v2 now only generates custom catalog images in the OCI format, whereas the deprecated oc-mirror plugin v1 still supports the Docker v2 format.
1.5.11.3. Red Hat Marketplace is deprecated
The Red Hat Marketplace is deprecated. Customers who use the partner software from the Marketplace should contact the software vendor about how to migrate from the Marketplace Operator to an Operator in the Red Hat Ecosystem Catalog. It is expected that the Marketplace index will be removed in an upcoming OpenShift Container Platform release. For more information, see Sunset of the Red Hat Marketplace, operated by IBM.
1.5.12. Removed features
1.5.12.1. Removed Kubernetes APIs
OpenShift Container Platform 4.20 removed the following Kubernetes APIs. You must migrate your manifests, automation, and API clients to use the new, supported API versions before updating to 4.20. For more information about migrating removed APIs, see the Kubernetes documentation.
| Resource | Removed API | Migrate to | Notable changes |
| --- | --- | --- | --- |
| | | | |
| | | | |
| | | | |
| | | | |
1.6. Bug fixes
1.6.1. Bare Metal Hardware Provisioning
- Before this update, when installing a dual-stack cluster on bare metal by using installer-provisioned infrastructure, the installation failed because the Virtual Media URL was IPv4 instead of IPv6. As IPv4 was unreachable, the bootstrap failed on the virtual machine (VM) and cluster nodes were not created. With this release, when you install a dual-stack cluster on bare metal for installer-provisioned infrastructure, the dual-stack cluster uses the Virtual Media URL IPv6 and the issue is resolved. (OCPBUGS-60240)
- Before this update, when installing a cluster with the bare metal as a service (BMaaS) API, an ambiguous validation error was reported. When you set an image URL without a checksum, BMaaS failed to validate the deployment image source information. With this release, when you do not provide a required checksum for an image, a clear message is reported. (OCPBUGS-57472)
-
Before this update, when installing a cluster using bare metal, if cleaning was not disabled, the hardware tried to delete any Software RAID configuration before it ran the
coreos-installer
tool. With this release, the issue is resolved. (OCPBUGS-56029) -
Before this update, by using a Redfish system ID, such as
redfish://host/redfish/v1/
instead ofredfish://host/redfish/v1/Self
, in a Baseboard Management Console (BMC) URL, a registration error about an invalid JSON was reported. This issue was caused by a bug in the Bare Metal Operator (BMO). With this release, BMO now handles URLs without a Redfish system ID as a valid address without causing a JSON parsing issue. This fix improves the software handling of a missing Redfish system ID in BMC URLs. (OCPBUGS-55717) -
Before this update, virtual media boot attempts sometimes failed because some models of SuperMicro such as
ars-111gl-nhr
used a different virtual media device string than other SuperMicro machines. With this release, an extra conditional check is added to sushy library code to check for the specific model affected and to adjust its behavior. As a result, Supermicroars-111gl-nhr
can boot from virtual media. (OCPBUGS-55434) - Before this update, RAM Disk logs did not include clear file separators, which occasionally caused the content to overlap on a single line. As a consequence, users could not parse RAM Disk logs. With this release, RAM Disk logs include clear file headers to indicate the boundary between the content of each file. As a result, the readability of RAM Disk logs for users is improved. (OCPBUGS-55381)
-
Before this update, during Ironic Python Agent (IPA) deployments, the RAM disk logs in the
metal3-ramdisk-logs
container did not includeNetworkManager
logs. The absence ofNetworkManager
logs hindered effective debugging, which affected network issue resolution. With this release, the existing RAM disk logs in themetal3-ramdisk-logs
container of a metal3 pod include the entire journal from the host rather than just thedmesg
and IPA logs. As a result, IPA logs provide comprehensive NetworkManager
data for improved debugging. (OCPBUGS-55350) - Before this update, when the provisioning network was disabled in the cluster configuration, you could create a bare-metal host with a driver that required a network boot, for example Intelligent Platform Management Interface (IPMI) or Redfish without virtual media. As a result, boot failures occurred during inspection or provisioning because the correct DHCP options could not be identified. With this release, when you create a bare-metal host in this scenario the host fails to register and the reported error references the disabled provisioning network. To create the host, you must enable the provisioning network or use a virtual-media-based driver, for example, Redfish virtual media. (OCPBUGS-54965)
1.6.2. Cloud Compute
-
Before this update, AWS compute machine sets could include a null value for the
userDataSecret
parameter. Using a null value sometimes caused machines to get stuck in theProvisioning
state. With this release, theuserDataSecret
parameter requires a value. (OCPBUGS-55135) -
Before this update, OpenShift Container Platform clusters on AWS that were created with version 4.13 or earlier could not update to version 4.19. Clusters that were created with version 4.14 and later have an AWS
cloud-conf
ConfigMap by default, and this ConfigMap is required starting in OpenShift Container Platform 4.19. With this release, the Cloud Controller Manager Operator creates a defaultcloud-conf
ConfigMap when none is present on the cluster. This change enables clusters that were created with version 4.13 or earlier to update to version 4.19. (OCPBUGS-59251) -
Before this update, a
failed to find machine for node …
message appeared in the logs when the InternalDNS
address for a machine was not set as expected. As a consequence, the user might interpret this error as the machine not existing. With this release, the log message readsfailed to find machine with InternalDNS matching …
. As a result, the user has a clearer indication of why the match is failing. (OCPBUGS-19856) - Before this update, a bug fix altered the availability set configuration by changing the fault domain count to use the maximum available value instead of being fixed at 2. This inadvertently caused scaling issues for compute machine sets that were created prior to the bug fix, because the controller attempted to modify immutable availability sets. With this release, availability sets are no longer modified after creation, allowing affected compute machine sets to scale properly. (OCPBUGS-56380)
-
Before this update, compute machine sets migrating from the Cluster API to the Machine API got stuck in the
Migrating
state. As a consequence, the compute machine set could not finish transitioning to use a different authoritative API or perform further reconciliation of theMachineSet
object status. With this release, the migration controllers watch for changes in Cluster API resources and react to authoritative API transitions. As a result, compute machine sets successfully transition from the Cluster API to the Machine API. (OCPBUGS-56487) -
Before this update, for the
maxUnhealthy
field in theMachineHealthCheck
custom resource definition (CRD), the default value was not documented. With this release, the CRD documents the default value. (OCPBUGS-61314) -
Before this update, it was possible to specify the use of the
CapacityReservationsOnly
capacity reservation behavior and Spot Instances in the same machine template. As a consequence, machines with these two incompatible settings were created. With this release, validation of machine templates ensures that these two incompatible settings are not used in the same machine template. As a result, machines with these two incompatible settings cannot be created. (OCPBUGS-60943) - Before this update, on clusters that support migrating Machine API resources to Cluster API resources, deleting a nonauthoritative machine did not delete the corresponding authoritative machine. As a consequence, orphaned machines that should have been cleaned up remained on the cluster and could cause a resource leak. With this release, deleting a nonauthoritative machine triggers propagation of the deletion to the corresponding authoritative machine. As a result, deletion requests on nonauthoritative machine correctly cascade, preventing orphaned authoritative machines and ensuring consistency in machine cleanup. (OCPBUGS-55985)
-
Before this update, on clusters that support migrating Machine API resources to Cluster API resources, the Cluster CAPI Operator could create an authoritative Cluster API compute machine set in the
Paused
state. As a consequence, the newly created Cluster API compute machine set could not reconcile or scale machines even though it was using the authoritative API. With this release, the Operator now ensures that Cluster API compute machine sets are created in an unpaused state when the Cluster API is authoritative. As a result, newly created Cluster API compute machine sets are reconciled immediately and scaling and machine lifecycle operations proceed as intended when the Cluster API is authoritative. (OCPBUGS-56604) - Before this update, scaling large numbers of nodes was slow because scaling requires reconciling each machine several times and each machine was reconciled individually. With this release, up to ten machines can be reconciled concurrently. This change improves the processing speed for machines during scaling. (OCPBUGS-59376)
- Before this update, the Cluster CAPI Operator status controller used an unsorted list of related objects, leading to status updates when there were no functional changes. As a consequence, users would see significant noise in the Cluster CAPI Operator object and in logs due to continuous and unnecessary status updates. With this release, the status controller logic sorts the list of related objects before comparing them for changes. As a result, a status update only occurs when there is a change to the Operator’s state. (OCPBUGS-56805, OCPBUGS-58880)
-
Before this update, the
config-sync-controller
component of the Cloud Controller Manager Operator did not display logs. The issue is resolved in this release. (OCPBUGS-56508) - Before this update, the Control Plane Machine Set configuration used availability zones from compute machine sets. This is not a valid configuration. As a consequence, the Control Plane Machine Set could not be generated when the control plane machines were in a single zone while compute machine sets spanned multiple zones. With this release, the Control Plane Machine Set derives an availability zone configuration from existing control plane machines. As a result, the Control Plane Machine Set generates a valid zone configuration that accurately reflects the current control plane machines. (OCPBUGS-52448)
-
Before this update, the controller that annotates a Machine API compute machine set did not check whether the Machine API was authoritative before adding scale-from-zero annotations. As a consequence, the controller repeatedly added these annotations and caused a loop of continuous changes to the
MachineSet
object. With this release, the controller checks the value of theauthoritativeAPI
field before adding scale-from-zero annotations. As a result, the controller avoids the looping behavior by only adding these annotations to a Machine API compute machine set when the Machine API is authoritative. (OCPBUGS-57581) -
Before this update, the Machine API Operator attempted to reconcile
Machine
resources on platforms other than AWS where the.status.authoritativeAPI
field was not populated. As a consequence, compute machines remained in theProvisioning
state indefinitely and never became operational. With this release, the Machine API Operator now populates the empty.status.authoritativeAPI
field with the corresponding value in the machine specification. A guard is also added to the controllers to handle cases where this field might still be empty. As a result,Machine
andMachineSet
resources are reconciled properly and compute machines no longer remain in theProvisioning
state indefinitely. (OCPBUGS-56849) - Before this update, the Machine API Provider Azure used an old version of the Azure SDK, which used an old API version that did not support referencing a Capacity Reservation group. As a consequence, creating a Machine API machine that referenced a Capacity Reservation group in another subscription resulted in an Azure API error. With this release, the Machine API Provider Azure uses a version of the Azure SDK that supports this configuration. As a result, creating a Machine API machine that references a Capacity Reservation group in another subscription works as expected. (OCPBUGS-55372)
- Before this update, the two-way synchronization controller on clusters that support migrating Machine API resources to Cluster API resources did not correctly compare the machine specification when converting an authoritative Cluster API machine template to a Machine API machine set. As a consequence, changes to the Cluster API machine template specification were not synchronized to the Machine API machine set. With this release, changes to the comparison logic resolve the issue. As a result, the Machine API machine set synchronizes correctly after the Cluster API machine set references the new Cluster API machine template. (OCPBUGS-56010)
-
Before this update, the two-way synchronization controller on clusters that support migrating Machine API resources to Cluster API resources did not delete the machine template when its corresponding Machine API machine set was deleted. As a consequence, unneeded Cluster API machine templates persisted in the cluster and cluttered the
openshift-cluster-api
namespace. With this release, the two-way synchronization controller correctly handles deletion synchronization for the machine template. As a result, deleting a Machine API authoritative machine set deletes the corresponding Cluster API machine template. (OCPBUGS-57195) - Before this update, the two-way synchronization controller on clusters that support migrating Machine API resources to Cluster API resources prematurely reported a successful migration. As a consequence, if any errors occurred when updating the status of related objects, the operation was not retried. With this release, the controller ensures that all related object statuses are written before reporting a successful status. As a result, the controller handles errors during migration better. (OCPBUGS-57040)
1.6.3. Cloud Credential Operator
-
Before this update, the
ccoctl
command unnecessarily required thebaseDomainResourceGroupName
parameter when creating the OpenID Connect (OIDC) issuer and managed identities for a private cluster by using Microsoft Entra Workload ID. As a consequence, an error displayed whenccoctl
tried to create private clusters. With this release, thebaseDomainResourceGroupName
parameter is removed as a requirement. As a result, the process for creating a private cluster on Microsoft Azure is logical and consistent with expectations. (OCPBUGS-34993)
1.6.4. Cluster Autoscaler
- Before this update, the cluster autoscaler attempted to include machine objects that were in a deleting state. As a consequence, the cluster autoscaler count of machines was inaccurate. This issue caused the cluster autoscaler to add additional taints that were not needed. With this release, the autoscaler accurately counts the machines. (OCPBUGS-60035)
-
Before this update, when you created a cluster autoscaler object with the Cluster Autoscaler Operator enabled in the cluster, two
cluster-autoscaler-default
pods in theopenshift-machine-api
namespace were sometimes created at the same time, and one of the pods was immediately killed. With this release, only one pod is created. (OCPBUGS-57041)
1.6.5. Cluster Version Operator
-
Before this update, the status of the
ClusterVersion
condition could incorrectly showImplicitlyEnabled
instead ofImplicitlyEnabledCapabilities
. With this release, theClusterVersion
condition type is fixed and changed fromImplicitlyEnabled
toImplicitlyEnabledCapabilities
. (OCPBUGS-56114)
1.6.6. config-operator
-
Before this update, the cluster incorrectly switched to the
CustomNoUpgrade
state without the correctfeatureGate
configuration. As a consequence, emptyfeatureGates
and subsequent controller panics occurred. With this release, thefeatureGate
configuration for theCustomNoUpgrade
cluster state matches the default which prevents emptyfeatureGates
and subsequent controller panics. (OCPBUGS-57187)
1.6.7. Dev Console
- Before this update, some entries on the Quick Starts page displayed duplicate link buttons. With this update, the duplicates are removed, and the link buttons are correctly displayed. (OCPBUGS-60373)
- Before this update, the onboarding modal that displayed when you first logged in was missing visuals and images, which made the modal messaging unclear. With this release, the missing elements are added to the modal. As a result, the onboarding experience provides complete visuals consistent with the overall console design. (OCPBUGS-57392)
- Before this update, importing multiple files in the YAML editor copied the existing content and appended the new file, which created duplicates. With this release, the import behavior is fixed. As a result, the YAML editor displays only the new file content without duplication. (OCPBUGS-45297)
1.6.8. etcd
- Before this update, the timeout on one etcd member caused context deadlines to exceed. As a consequence, all members were declared unhealthy, even though some were reachable. With this release, if one member times out, other members are no longer incorrectly marked as unhealthy. (OCPBUGS-60941)
- Before this update, when you deployed single-node OpenShift with many IPs on the primary interface, the IP in the etcd certificate mismatched with the IP in the config map that the API server used to connect to etcd. As a consequence, the API server pod failed during single-node OpenShift deployment, which caused cluster initialization issues. With this release, the single IP in the etcd config map matches the IP in the certificate for single-node OpenShift deployments. As a result, the API server connects to etcd by using the correct IP included in the etcd certificate, which prevents pod failure during cluster initialization. (OCPBUGS-55404)
-
Before this update, during temporary downtime of the API server, the Cluster etcd Operator reported incorrect information, such as messages that the
openshift-etcd
namespace was non-existent. With this update, the Cluster etcd Operator status message correctly indicates API server unavailability instead of suggesting the absence of theopenshift-etcd
namespace. As a result, the Cluster etcd Operator status accurately reflects the presence of theopenshift-etcd
namespace, enhancing system reliability. (OCPBUGS-44570)
1.6.9. Extensions (OLM v1)
- Before this update, the preflight custom resource definition (CRD) safety check in OLM v1 blocked updates if it detected changes in the description fields of a CRD. With this update, the preflight CRD safety check does not block updates when there are changes to documentation fields. (OCPBUGS-55051)
-
Before this update, the catalogd and Operator Controller components did not display the correct version and commit information in the OpenShift CLI (
oc
). With this update, the correct commit and version information is displayed. (OCPBUGS-23055)
1.6.10. Installer
- Before this update, when you installed a Konflux-built cluster on IBM Power® Virtual Server, the installation could fail due to errors in semantic versioning (SemVer) parsing. With this release, the parsing issue has been resolved so that the installation can continue successfully. (OCPBUGS-61120)
- Before this update, when you installed a cluster on Azure Stack Hub with a user-provisioned infrastructure, the API and API-int load balancers could fail to be created. As a consequence, the installation failed. With this release, the user-provisioned infrastructure templates are updated so that the load balancers are created. As a result, installation is successful. (OCPBUGS-60545)
-
Before this update, when you installed a cluster on Google Cloud, the installation program read and processed the
install-config.yaml
file even when an unrecoverable error was reported about not finding a matching public DNS zone. This error was due to an invalidbaseDomain
parameter. As a consequence, cluster administrators recreated theinstall-config.yaml
file unnecessarily. With this release, when the installation program reports this error, the installation program does not read and process the install-config.yaml
file. (OCPBUGS-59430) - Before this update, IBM Cloud was omitted from the list of platforms that supported single-node OpenShift installation in the validation code. As a consequence, users could not install a single-node configuration on IBM Cloud because of a validation error. With this release, IBM Cloud support for single-node installations is enabled. As a result, users can complete single-node installations on IBM Cloud. (OCPBUGS-59220)
-
Before this update, installing single-node OpenShift on
platform: None
with user-provisioned infrastructure was not supported, which led to installation failures. With this release, single-node OpenShift installation onplatform: None
is supported. (OCPBUGS-58216) -
Before this update, when you installed OpenShift Container Platform on Amazon Web Services (AWS), the logic for disabling Machine Config Operator (MCO) boot image management failed to check edge compute machine pools. When determining whether to disable boot image management, the installation program only checked the first compute machine pool entry in the
install-config.yaml
. As a consequence, when you specified multiple compute pools but only the second had a custom Amazon Machine Image (AMI), the installation program did not disable MCO boot image management and the MCO could overwrite the custom AMI. With this release, the installation program checks all edge compute machine pools for custom images. As a result, boot image management is disabled when a custom image is specified in any machine pool. (OCPBUGS-57803) -
Before this update, the Agent-based Installer set the permissions for the etcd directory
/var/lib/etcd/member
as0755
when using an single-node OpenShift deployment instead of0700
, which is correctly set on a multi-node deployment. With this release, the etcd directory/var/lib/etcd/member
permissions are set to0700
for single-node OpenShift deployments. (OCPBUGS-57201) -
Before this update, when you used the Agent-based Installer, pressing the TAB key immediately after escaping the Network Manager Text User Interface (TUI) sometimes failed to register, which caused the cursor to remain on
Configure Network
instead of moving toQuit
. As a consequence, you were not able to quit the agent console application that verifies whether the current host can retrieve release images. With this release, the TAB key is always registered. (OCPBUGS-56934) - Before this update, when you used the Agent-based Installer, exiting the NetworkManager TUI would sometimes result in a blank screen, rather than displaying an error or proceeding with the installation. With this update, the blank screen is not displayed. (OCPBUGS-56880)
-
Before this update, installing a cluster on VMware vSphere failed when the API VIP and the ingress VIP used one load balancer IP address. With this release, the API VIP and the ingress VIP are now distinct in
machineNetworks
and the issue is resolved. (OCPBUGS-56601) -
Before this update, when you use the Agent-based Installer, setting the
additionalTrustBundlePolicy
field would have no effect. As a consequence, other overrides such thefips
parameter were ignored. With this update, theadditionalTrustBundlePolicy
parameter is correctly imported and other overrides are not ignored. (OCPBUGS-56596) - Before this update, the lack of detailed logging in the cluster destroy logic for VMware vSphere meant it was unclear why virtual machines (VMs) were not properly removed. Additionally, missing power state information could cause the destroy operation to enter an infinite loop. With this update, logging for the destroy operation is enhanced to indicate when specific cleanup actions begin, include vCenter names, and display a warning if the operation fails to find VMs. As a result, the destroy process provides detailed, actionable logs. (OCPBUGS-56262)
- Before this update, when you used the Agent-based Installer to install a cluster in a disconnected environment, exiting the NetworkManager Text User Interface (TUI) returned you to the agent console application that checks whether release images can be pulled from a registry. With this update, you are not returned to the agent console application when you exit the NetworkManager TUI. (OCPBUGS-56223)
- Before this update, the Agent-based Installer did not validate the values used to enable disk encryption, which potentially prevented disk encryption from being enabled. With this release, validation for correct disk encryption values is performed during image creation. (OCPBUGS-54885)
- Before this update, the resources containing the configuration for vSphere connection could get broken due to a mismatch between the UI and API. With this release, the UI uses the updated API definition. (OCPBUGS-54434)
-
Before this update, when you used the Agent-based Installer, some validation checks for the
hostPrefix
parameter were not performed when generating the ISO image. As a consequence, invalidhostPrefix
values were detected only when users failed to boot using the ISO. With this update, these validation checks are performed during ISO generation and causes an immediate failure. (OCPBUGS-53473) - Before this update, some systemd services in the Agent-based Installer continued to run after being stopped, which caused confusing log messages during cluster installation. With this update, these services are correctly stopped. (OCPBUGS-53107)
- Before this update, if the proxy configuration for a Microsoft Azure cluster was deleted while installing a cluster, the program reported an unreadable error and the proxy connection timed out. With this release, when the proxy configuration for the cluster is deleted while installing a cluster, the program reports a readable error message and the issue is resolved. (OCPBUGS-45805)
-
Before this update, after an installation was completed, the
kubeconfig
file generated by the Agent-based Installer did not contain the ingress router certificate authority (CA). With this release, thekubeconfig
file contains the ingress router CA upon the completion of a cluster installation. (OCPBUGS-45256) - Before this update, the Agent-based Installer announced a complete cluster installation without first checking whether Operators were in a stable state. Consequently, messages about a completed installation might have appeared even if there were still issues with any of the Operators. With this release, the Agent-based Installer waits until Operators are in a stable state before declaring the cluster installation to be complete. (OCPBUGS-18658)
- Before this update, the installation program did not prevent you from attempting to install single-node OpenShift on bare metal on the installer-provisioned infrastructure. As a consequence, the installation failed because it was not supported. With this release, OpenShift Container Platform prevents single-node OpenShift cluster installations on unsupported platforms. (OCPBUGS-6508)
1.6.11. Kube Controller Manager
-
Before this update, the
cluster-policy-controller
was crashing when an invalid volume type was provided. With this release, the code no longer panics. As a result, thecluster-policy-controller
logs an error to inform about invalidity of a volume type. (OCPBUGS-62053) -
Before this update, the
cluster-policy-controller
container was exposing the10357
port for all networks (the bind address was set to 0.0.0.0). The port was exposed outside the node’s host network because the KCM pod manifest set 'hostNetwork` totrue
. This port is used solely for the container’s probe. With this enhancement, the bind address was updated to listen on the localhost only. As result, the node security is improved because the port is not exposed outside the node network. (OCPBUGS-53290)
1.6.12. Kubernetes API Server
-
Before this update, concurrent map iteration and kube-apiserver validation caused crashes. As a consequence, API server disruptions and
list watch
storms occurred. With this release, the concurrent map iteration and validation issue is resolved. As a result, API server crashes are prevented, and cluster stability is improved. (OCPBUGS-61347) -
Before this update, the resource quantity and
IntOrString
fields validation cost was incorrectly calculated due to improper consideration of the maximum field length in Common Expression Language (CEL) validation. As a consequence, users encountered validation errors due to incorrect string length consideration in CEL validation. With this release, CEL validation correctly accounts for the maximum length of IntOrString fields
. As a result, users can submit valid resource requests without CEL validation errors. (OCPBUGS-59756) -
Before this update, the
node-system-admin-signer
validity was limited to one year and was not extended or refreshed at 2.5 years. This issue prevented issuing thenode-system-admin-client
for two years. With this release, thenode-system-admin-signer
validity is extended to three years, and issuing thenode-system-admin-client
for a two-year period is enabled. (OCPBUGS-59527) -
Before this update, a cluster installation failure occurred on IBM and Microsoft Azure systems due to incompatibility with the
ShortCertRotation
feature gate. As a consequence, the cluster installation failed, and caused nodes to remain offline. With this release, the fix removes theShortCertRotation
feature gate during a cluster installation on IBM and Microsoft Azure systems. As a result, cluster installations are successful on these platforms. (OCPBUGS-57202) -
Before this update, the
admissionregistration.k8s.io/v1beta1
API was served incorrectly in OpenShift Container Platform version 4.17, despite being intended for deprecation and removal. This led to dependency issues for users. With this release, the deprecated API filter is registered for a phased removal, and requires administrative acknowledgment for upgrades. As a result, users do not encounter deprecated API errors in OpenShift Container Platform version 4.20, and the system stability is improved. (OCPBUGS-55465) - Before this update, the certificate rotation controller copied and rewrote all of their changes, and caused excessive event spamming. As a consequence, users experienced excessive event spamming and potential etcd overload. With this release, the certificate rotation controller conflict is resolved, and reduces excessive event spamming. As a result, excessive event spamming in the certificate rotation controller is resolved, reduces the load on etcd, and improves the system stability.(OCPBUGS-55217)
-
Before this update, user secrets were logged in audit logs after enabling
WriteRequestBodies
profile settings. As a consequence, sensitive data was visible in the audit log. With this release, theMachineConfig
object is removed from the audit log response, and prevents user secrets from being logged. As a result, secrets and credentials do not appear in audit logs. (OCPBUGS-52466) - Before this update, testing Operator conditions using synthesized methods instead of deploying and scheduling pods by using the deployment controller caused incorrect test results. As a consequence, users experienced test failures due to the incorrect use of synthesized conditions instead of real pod creation. With this release, the Kubernetes deployment controller is used for testing Operator conditions, and improves pod deployment reliability. (OCPBUGS-43777)
1.6.13. Machine Config Operator
- Before this update, an external actor could uncordon a node that the Machine Config Operator (MCO) was draining. As a consequence, the MCO and the scheduler would schedule and unschedule pods at the same time, prolonging the drain process. With this release, the MCO attempts to recordon the node if an external actor uncordons it during the drain process. As a result, the MCO and scheduler no longer schedule and remove pods at the same time. (OCPBUGS-61516)
-
Before this update, during an update from OpenShift Container Platform 4.18.21 to OpenShift Container Platform 4.19.6, the Machine Config Operator (MCO) failed due to multiple labels in the
capacity.cluster-autoscaler.kubernetes.io/labels
annotation in one or more machine sets. With this release, the MCO now accepts multiple labels in thecapacity.cluster-autoscaler.kubernetes.io/labels
annotation and no longer fails during the update to OpenShift Container Platform 4.19.6. (OCPBUGS-60119) - Before this update, the Machine Config Operator (MCO) certificate management failed during an Azure Red Hat OpenShift (ARO) upgrade to 4.19 due to missing infrastructure status fields. As a consequence, certificates were refreshed without the required Subject Alternative Name (SAN) IPs, causing connectivity issues for upgraded ARO clusters. With this release, the MCO now adds and retains SAN IPs during certificate management in ARO, preventing immediate rotation on upgrade to 4.19. (OCPBUGS-59780)
-
Before this update, when updating from a version of OpenShift Container Platform prior to 4.15, the
MachineConfigNode
Custom Resource Definitions (CRDs) feature was installed as a Technology Preview (TP) feature, causing the update to fail. This feature was fully introduced in OpenShift Container Platform 4.16. With this release, the update no longer deploys the Technology Preview CRDs, ensuring a successful upgrade. (OCPBUGS-59723) - Before this update, the Machine Config Operator (MCO) was updating node boot images without checking whether the current boot image was from Google Cloud or Amazon Web Services (AWS) Marketplace. As a consequence, the MCO would override a marketplace boot image with a standard OpenShift Container Platform image. With this release, for AWS images, the MCO has a lookup table that has all of the standard OpenShift Container Platform installer Amazon Machine Images (AMIs), which it references before updating the boot image. For Google Cloud images, the MCO checks the URL header before updating the boot image. As a result, the MCO no longer updates machine sets that have a marketplace boot image. (OCPBUGS-57426)
-
Before this update, OpenShift Container Platform updates that shipped a change to Core DNS templates would restart the
coredns
pod before the image pull for the updated base operating system (OS) image. As a consequence, a race occurred when the operating system update manager failed the image pull because of network errors, causing the update to stall. With this release, a retry update operation is added to the Machine Config Operator (MCO) to work around this race condition. (OCPBUGS-43406)
1.6.14. Management Console
- Before this update, the YAML editor in the web console would default to indenting YAML files with 4 spaces. With this release, the default indentation has changed to 2 spaces to align with recommendations. (OCPBUGS-61990)
- Before this update, expanding the terminal in the web console caused the session to close because the OpenShift Container Platform logo and header overlapped the terminal view. With this release, the terminal layout is fixed so that it expands correctly. As a result, you can expand or collapse the terminal without connection loss or input interruption. (OCPBUGS-61819)
-
Before this update, visiting the
/auth/error
page without the required state cookie showed a blank page and prevented error details from displaying. With this release, error handling is improved in the front-end code. As a result, the/auth/error
page displays error content, making it easier to diagnose and resolve problems. (OCPBUGS-60912) - Before this update, the order of items in the PersistentVolumeClaim action menu was not defined, causing the Delete PersistentVolumeClaim option to display in the middle of the list. With this release, the option is reordered so it now displays last in the menu. As a result, the action list is consistent and easier to navigate. (OCPBUGS-60756)
-
Before this update, clicking Download log on the Build logs page added
undefined
to the downloaded file name, and clicking Raw logs did not open the raw log in a new tab. With this release, the file name is corrected ensuring that clicking Raw logs opens the raw log as expected. (OCPBUGS-60753) - Before this update, entering a wrong value in an OpenShift console form field caused multiple exclamation icons to display. With this release, only one icon displays when a field value is invalid. As a result, error messages in all fields now display clearly. (OCPBUGS-60428)
- Before this update, some entries on the Quick Starts page displayed duplicate link buttons. With this release, the duplicates are removed, and links now display as intended, resulting in a cleaner and clearer page layout. (OCPBUGS-60373)
-
Before this update, the console included an outdated security instruction
X-XSS-Protection
when sending pages to your browser. With this release, the instruction is removed. As a result, the console runs securely in modern browsers. (OCPBUGS-60130) - Before this update, the error message in the events page would erroneously show the placeholder "{ error }" instead of an error message. With this release, the error message is shown. (OCPBUGS-60010)
-
Before this update, the console displayed the Registry poll interval drop-down menu for managed
CatalogSource
objects, but any change you made was automatically reverted. With this release, the drop-down menu is hidden for managed sources. As a result, the console no longer shows a menu option that cannot be applied. (OCPBUGS-59725) - Before this update, selecting the Resource menu on the Deploy from image page caused the view to jump to the top due to improper focus handling. With this release, the focus behavior is corrected so the page stays in place when you open the menu. As a result, your scroll position is preserved during selection. (OCPBUGS-59586)
- Before this update, the Get started message occupied too much space when you did not have a project, preventing the No resources found message from fully displaying. This update reduces the space used by the Get started message. As a result, all messages now display completely on the page. (OCPBUGS-59483)
-
Before this update, improperly nested
flags
withinproperties
inconsole-crontab-plugin.json
caused the plugin to break. With this release, the nesting in the JSON file is fixed, resolving the conflict with OCPBUGS-58858. As a result, the plugin now loads and displays theCronTabs
correctly. (OCPBUGS-59418) -
Before this update, starting a job from the console always reset its
backoffLimit
to 6, overriding your configured value. With this release, the configuredbackoffLimit
is preserved when you start a job in the console. As a result, jobs behave consistently between the console and the CLI. (OCPBUGS-59382) - Before this update, the YAML editor component did not handle some edge cases where the content could not be parsed into a JavaScript object, which caused errors in some situations. With this release, the component was updated to handle these edge cases reliably and the errors no longer occur. (OCPBUGS-59196)
- Before this update, the Namespace column displayed on the MachineSets list page even when you viewed a single project, because the code did not correctly scope the columns. With this release, the column logic is fixed. As a result, the MachineSets list no longer shows the Namespace column for project-scoped views. (OCPBUGS-58334)
-
Before this update, navigating to a storage class page with multiple path elements in the
href
displayed a blank tab. With this release, the plugin is fixed so that the tab content displays correctly after switching. As a result, storage class pages no longer show blank tabs. (OCPBUGS-58258) -
Before this update, editing a
HorizontalPodAutoscaler
(HPA) with aContainerResource
type caused a runtime error because the code did not define thee.resource
variable. With this release, thee.resource
is defined and the runtime error is fixed in the form editor. As a result, editing an HPA with theContainerResource
type no longer fails. (OCPBUGS-58208) -
Before this update, the
TELEMETER_CLIENT_DISABLED
setting in theConsoleConfig
ConfigMap caused gaps in the telemetry, which limited troubleshooting. With this release, the telemetry client is temporarily disabled to resolve "Too Many Requests" errors. As a result, telemetry data is collected reliably, removing limits on troubleshooting. (OCPBUGS-58094) -
Before this update, clicking Configure in
AlertmanagerReceiversNotConfigured
failed with the errornavigate is not a function
because the code did not handle the configuration correctly. With this release, the issue is fixed. As a result,AlertmanagerReceiversNotConfigured
now opens as expected. (OCPBUGS-56986) -
Before this update, the CronTab list page returned an error when a
CronTab
resource was missing optional entries in itsspec
because the console did not validate them properly. With this release, the necessary validation is added. As a result, the CronTab list page loads correctly even when some spec
fields are not defined. (OCPBUGS-56830) - Before this update, users without a project saw only part of the Roles list because of insufficient role-based access control (RBAC) permissions. With this release, the access logic is fixed. As a result, these users can no longer open the Roles page, keeping sensitive data secure. (OCPBUGS-56707)
- Before this release, when there were no Quick Starts on the Quick Starts page, a plain text message was shown. With this release, cluster administrators are given actions to add or manage Quick Starts. (OCPBUGS-56629)
-
Before this update, the generated console dynamic plugin API documentation used the wrong
k8s
utility function names, such ask8sGetResource
instead ofk8sGet
. With this update, the documentation uses the correct function names with their export name aliases. As a result, the API documentation is clearer for console dynamic plugin developers working withk8s
utility functions. (OCPBUGS-56248) - Before this update, unused code in the deployment and deployment configuration menus caused unnecessary menu items to display. With this release, the unused menu item definitions are removed, improving code maintainability and reducing potential issues in future updates. (OCPBUGS-56245)
-
Before this update, the
/metrics
endpoint was not correctly parsing a bearer token from the authorization header on internal Prometheus scrape requests, which causedTokenReviews
to fail and all of these requests to be denied with a 401 response. This triggered aTargetDown
alert for the console metrics endpoint. With this release, the metrics endpoint handler was updated to correctly parse a bearer token from the authorization header forTokenReview
. This made theTokenReview
step behave as expected, and resolved theTargetDown
alert. (OCPBUGS-56148) -
Before this update, creating a node without a disk triggered a JavaScript
TypeError
when you accessed nodes in the console. With this release, the filter property initializes correctly. As a result, the node list displays without errors. (OCPBUGS-56050) -
Before this update, the
VirtualizedTable
hid theStarted
column on smaller screens, which broke default sorting and disrupted thePipelineRun
list. With this release, the default sorted column adjusts based on screen size, preventing the table from breaking. As a result, thePipelineRun
list page remains stable and displays correctly on smaller screens. (OCPBUGS-56044) - Before this update, the cluster switcher allowed users to access Red Hat Advanced Cluster Management (RHACM) by choosing the All Clusters option. With this release, the RHACM is accessed from the perspective selector by choosing the Fleet Management perspective. (OCPBUGS-55946)
- Before this update, the web console displayed an outdated message about a 60-day update limit in versions 4.16 and later, even though the limit was removed. With this update, the outdated message is removed. As a result, the web console shows only current updated information. (OCPBUGS-55919)
-
Before this update, the web console home page showed the wrong icon for
Info
alerts, which caused a mismatch in alert severity. With this release, the severity icons are fixed so they match correctly. As a result, the console shows alert severity clearly. (OCPBUGS-55806) -
Before this update, a dependency issue prevented the Console Operator from including the required
FeatureGate
resource for Content Security Policy (CSP) APIs. With this release, the missing FeatureGate
resource is added to theopenshift/api
dependency. As a result, CSP APIs now work as expected in the console. (OCPBUGS-55698) - Before this update, clicking the accordion in the Critical alerts section of the notification drawer did nothing, so the section stayed expanded. With this release, the accordion is fixed. As a result, you can now collapse the section when critical alerts are present. (OCPBUGS-55633)
- Before this update, additional HTTP client configurations increased the plugin initial loading time, which slowed overall OpenShift Container Platform performance. With this update, the client configuration is fixed, reducing plugin load time and improving page load speed. (OCPBUGS-55514)
- Before this update, the custom masthead logo replaced the default OpenShift logo in all themes, even when the light theme was set to use the default. With this release, the correct behavior is restored so the default OpenShift logo displays in the light theme when no custom logo is set. As a result, logos now display correctly in both light and dark themes, improving visual consistency. (OCPBUGS-55208)
- Before this update, changing or removing a custom logo in the Console Operator configuration left outdated `ConfigMaps` in the `openshift-console` namespace due to delayed synchronization. With this release, the console operator removes these outdated `ConfigMaps` when the custom logo configuration changes. As a result, `ConfigMaps` in the `openshift-console` namespace remain accurate and up-to-date. (OCPBUGS-54780)
- Before this update, the Raw logs page decoded Chinese log messages incorrectly, making them unreadable. With this release, the decoding is corrected. As a result, the page now displays Chinese log messages correctly. (OCPBUGS-52165)
- Before this update, opening a modal on a Networking page caused some web console plugin panels, such as the OpenShift Lightspeed UI or the Troubleshooting panel, to disappear. With this release, the conflict is resolved between networking modals and web console plugins. As a result, modals on the Networking pages no longer hide other console panels. (OCPBUGS-49709)
- Before this update, the console server did not handle Content Security Policy (CSP) directives correctly when run locally with JSON input because it did not support the `MultiValue` type. With this release, the console accepts CSP directives as `MultiValue` instead of JSON for local use. As a result, you can now pass separate CSP directives more easily during console development. (OCPBUGS-49291)
- Before this update, importing multiple files in the YAML editor copied the existing content and appended the new file, creating duplicates. With this release, the import behavior is fixed. As a result, the YAML editor displays only the new file content without duplication. (OCPBUGS-45297)
- Before this update, only one plugin using the `CreateProjectModal` extension could display its modal, causing conflicts when multiple plugins used the same extension point. As a result, there was no way to control which plugin extension was rendered. With this release, the plugin extensions resolve in the same order as their definitions in the cluster console Operator configuration. As a result, administrators can control which `CreateProjectModal` extension displays in the console by reordering the list, as shown in the sketch after this list. (OCPBUGS-43792)
- Before this update, the console did not display the header defined by the `ResourceYAMLEditor` property, so the YAML view opened without it. With this release, the property is fixed. As a result, headers such as Simple pod now display correctly. (OCPBUGS-32157)
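The following sketch illustrates the ordering behavior described for the `CreateProjectModal` fix above. It assumes two hypothetical plugins named `plugin-a` and `plugin-b`; the resource and field names come from the cluster console Operator configuration, and the patch simply reorders the existing `spec.plugins` list:

    # Show the current plugin order (plugin names here are placeholders).
    $ oc get consoles.operator.openshift.io cluster -o jsonpath='{.spec.plugins}'
    ["plugin-a","plugin-b"]

    # Reorder the list so that plugin-b resolves its CreateProjectModal extension first.
    $ oc patch consoles.operator.openshift.io cluster --type=merge -p '{"spec":{"plugins":["plugin-b","plugin-a"]}}'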
1.6.15. Monitoring
- Before this update, the `KubeNodeNotReady` and `KubeNodeReadinessFlapping` alerts did not filter out cordoned nodes. As a consequence, users received alerts for nodes under maintenance, resulting in false positives. With this release, cordoned nodes are filtered from alerts. As a result, the number of false positives during maintenance is reduced. (OCPBUGS-60692)
- Before this update, the `KubeAggregatedAPIErrors` alert was based on the sum of errors across all instances of an API. As a consequence, users were more likely to be alerted as the number of instances grew, because the error threshold was reached sooner when evaluated cluster-wide. With this release, the alert is evaluated at the instance level rather than the API level. As a result, the number of false alarms is reduced. (OCPBUGS-60691)
- Before this update, the `KubeStatefulSetReplicasMismatch` alert did not fire when the `StatefulSet` controller failed to create pods. As a consequence, users were not notified when the `StatefulSet` did not reach the desired number of replicas. With this release, the alert fires correctly when the controller cannot create pods. As a result, users are alerted whenever the `StatefulSet` replicas do not match the configured amount. (OCPBUGS-60689)
- Before this update, the Cluster Monitoring Operator logged warnings about insecure Transport Layer Security (TLS) ciphers, which could raise concerns about security. With this release, secure TLS settings are configured, removing the cipher warnings from the logs and ensuring that the Operator reports correct, secure TLS configurations. (OCPBUGS-58475)
- Before this update, the monitoring dashboard in the OpenShift Container Platform web console sometimes displayed large negative CPU utilization values due to incorrect assumptions about intermediate results. As a consequence, users could see negative CPU utilization in the web console. With this release, CPU utilization values are properly calculated and the web console no longer shows negative utilization values. (OCPBUGS-57481)
- Before this update, when a new secret was created or updated in any namespace, Alertmanager reconciled even if that secret was not referenced in the `AlertmanagerConfig` resource. As a consequence, the Prometheus Operator generated excessive API calls, causing increased CPU usage on control plane nodes. With this release, Alertmanager only reconciles secrets that the `AlertmanagerConfig` resource explicitly references. (OCPBUGS-56158)
- Before this update, Metrics Server logged the following warning even though functionality was not affected:

    setting componentGlobalsRegistry in SetFallback. We recommend calling componentGlobalsRegistry.Set() right after parsing flags to avoid using feature gates before their final values are set by the flags.

  With this release, the warning message no longer appears in the `metrics-server` logs. (OCPBUGS-41851)
- Before this update, the `KubeCPUOvercommit` alert would not trigger on multi-node clusters even after CPU-consuming spikes over the permitted limits. With this release, the alert expression is adjusted to correctly account for multi-node clusters. As a result, the `KubeCPUOvercommit` alert triggers correctly after such instances. (OCPBUGS-35095)
- Before this update, users could set `prometheus`, `prometheus_replica`, or `cluster` as Prometheus external labels in the `cluster-monitoring-config` and `user-workload-monitoring-config` config maps. This was not recommended and could cause issues with the cluster. With this release, the config maps no longer accept these reserved external labels. The sketch after this list shows where external labels are configured. (OCPBUGS-18282)
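For reference, Prometheus external labels for platform monitoring are set under `externalLabels` in the `cluster-monitoring-config` config map in the `openshift-monitoring` namespace. The following sketch shows a label that is still accepted; the `datacenter` label name and value are placeholder examples, not product defaults:

    $ oc apply -f - <<EOF
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: cluster-monitoring-config
      namespace: openshift-monitoring
    data:
      config.yaml: |
        prometheusK8s:
          externalLabels:
            datacenter: eu-west    # accepted; "prometheus", "prometheus_replica", and "cluster" are now rejected
    EOF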
1.6.16. Networking
- Before this update, an `NMState` service failure occurred in OpenShift Container Platform deployments because of a `NetworkManager-wait-online` dependency issue in baremetal and multiple network interface controller (NIC) environments. As a consequence, an incorrect network configuration caused deployment failures. With this release, the `NetworkManager-wait-online` dependency for baremetal deployments is updated, which reduces deployment failures and ensures `NMState` service stability. (OCPBUGS-61824)
- Before this release, the event data was not immediately available when the `cloud-event-proxy` container or pod rebooted. This caused the `getCurrentState` function to incorrectly return a `clockclass` of `0`. With this release, the `getCurrentState` function no longer returns an incorrect `clockclass` and instead returns an HTTP `400 Bad Request` or `404 Not Found` error. (OCPBUGS-59969)
- Before this update, the `HorizontalPodAutoscaler` object temporarily scaled the `istiod-openshift-gateway` deployment to two replicas. This caused a Continuous Integration (CI) failure because the tests expected one replica. With this release, the `HorizontalPodAutoscaler` object scaling verifies that the `istiod-openshift-gateway` resource has at least one replica to continue deployment. (OCPBUGS-59894)
- Previously, the DNS Operator did not set the `readOnlyRootFilesystem` parameter to `true` in its configuration or for the configuration of its operands. As a result, the DNS Operator and its operands had `write` access to root file systems. With this release, the DNS Operator now sets the `readOnlyRootFilesystem` parameter to `true`, so that the DNS Operator and its operands now have `read-only` access to root file systems. This update provides enhanced security for your cluster. (OCPBUGS-59781)
- Before this update, when the Gateway API feature was enabled, it installed an Istio control plane configured with one pod replica and an associated `PodDisruptionBudget` setting. The `PodDisruptionBudget` setting prevented the only pod replica from being evicted, blocking cluster upgrades. With this release, the Ingress Operator prevents the Istio control plane from being configured with the `PodDisruptionBudget` setting. Cluster upgrades are no longer blocked by the pod replica. (OCPBUGS-58358)
- Before this update, the Cluster Network Operator (CNO) stopped during a cluster upgrade when the `whereabouts-shim` network attachment was enabled. This issue occurred because of a missing `release.openshift.io/version` annotation in the `openshift-multus` namespace. With this release, the missing annotation is added to the cluster, so that the CNO no longer stops during a cluster upgrade when the `whereabouts-shim` attachment is enabled. The cluster upgrade can now continue as expected. (OCPBUGS-57643)
- Before this update, the Ingress Operator added resources, most notably gateway resources, to the `status.relatedObjects` parameter of the Cluster Operator even if the CRDs for those resources did not exist. Additionally, the Ingress Operator specified a namespace for the `Istio` and `GatewayClass` resources, which are both cluster-scoped resources. As a result of these configurations, the `relatedObjects` parameter contained misleading information. With this release, an update to the status controller of the Ingress Operator ensures that the controller checks whether these resources already exist and also checks the related feature gates before adding any of these resources to the `relatedObjects` parameter. The controller no longer specifies namespaces for the `GatewayClass` and `Istio` resources. This update ensures that the `relatedObjects` parameter contains accurate information for the `GatewayClass` and `Istio` resources. (OCPBUGS-57433)
- Before this update, a cluster upgrade caused inconsistent egress IP address allocation due to stale Network Address Translation (NAT) handling. This issue occurred only when you deleted an egress IP pod while the OVN-Kubernetes controller for an egress node was down. As a consequence, duplicate Logical Router Policies and egress IP address usage occurred, which caused inconsistent traffic flow and outages. With this release, egress IP address allocation cleanup ensures consistent and reliable egress IP address allocation in OpenShift Container Platform 4.20 clusters. (OCPBUGS-57179)
- Previously, when on-premise installer-provisioned infrastructure (IPI) deployments used the Cilium container network interface (CNI), the firewall rule that redirected traffic to the load balancer was ineffective. With this release, the rule works with the Cilium CNI and `OVNKubernetes`. (OCPBUGS-57065)
- Before this update, one of the `keepalived` health check scripts was failing due to missing permissions. This could cause the ingress VIP to be misplaced when shared ingress services were in use. With this release, the necessary permission was added back to the container so the health check now works correctly. (OCPBUGS-55681)
- Before this update, stale IP addresses existed in the `address_set` list of the corresponding DNS rule for the `EgressFirewall` CRD. Instead of being removed, these stale addresses continued to be added to the `address_set`, causing memory leak issues. With this release, when the time-to-live (TTL) expiration for an IP address is reached, the IP address is removed from the `address_set` list after a 5-second grace period. (OCPBUGS-38735)
- Before this update, certain traffic patterns with large packets running between OpenShift Container Platform nodes and pods triggered an OpenShift Container Platform host to send an Internet Control Message Protocol (ICMP) "fragmentation needed" message to another OpenShift Container Platform host. This situation lowered the viable maximum transmission unit (MTU) in the cluster. As a consequence, running the `ip route show cache` command displayed a cached route with a lower MTU than the physical link. Packets were dropped and OpenShift Container Platform components degraded because the host did not send pod-to-pod traffic with the large packets. With this release, `nftables` rules prevent the OpenShift Container Platform nodes from lowering their MTU in response to these traffic patterns. (OCPBUGS-37733)
- Before this update, you could not override the node IP address selection process for deployments that ran on installer-provisioned infrastructure. This limitation impacted user-managed load balancers that did not use VIP addresses on a machine network, and this caused problems in environments that had multiple IP addresses. With this release, deployments that run on installer-provisioned infrastructure support the `NODEIP_HINT` parameter for the `nodeip-configuration` systemd service; a brief sketch follows this list. This support update ensures that the correct node IP address is used, even when the VIP addresses are not on the same subnet. (OCPBUGS-36859)
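As a minimal sketch of the `NODEIP_HINT` override mentioned above: the `nodeip-configuration` service reads the hint from the `/etc/default/nodeip-configuration` environment file, which is typically delivered to nodes through a `MachineConfig`. The IP address below is a placeholder; the service uses the hint to decide which of the node's addresses to select as the node IP:

    # Example contents of /etc/default/nodeip-configuration (placeholder address).
    $ cat /etc/default/nodeip-configuration
    NODEIP_HINT=192.0.2.1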
1.6.17. Node
- Before this update, in certain configurations, the kubelet’s `podresources` API might have reported memory that was assigned to both active and terminated pods, instead of reporting memory assigned to only active pods. As a consequence, this inaccurate reporting might have affected workload placement by the NUMA-aware scheduler. With this release, the kubelet’s `podresources` API no longer reports resources for terminated pods, which results in accurate workload placement by the NUMA-aware scheduler. (OCPBUGS-56785)
- Before this release, the CRI-O container runtime failed to recognize the terminated state of a stateful set pod when the backend storage went down, causing the pod to remain in a `Terminating` state because CRI-O could not detect that the container process no longer existed. This caused resource inefficiency and potential service disruption. With this release, CRI-O correctly recognizes terminated pods, improving the `StatefulSet` termination flow. (OCPBUGS-55485)
- Before this update, if a CPU-pinned container within a Guaranteed QoS pod had a cgroups quota defined, rounding and small delays in kernel CPU time accounting could cause throttling of the CPU-pinned process, even if the quota was set to allow 100% consumption for each allocated CPU. With this release, when `cpu-manager-policy=static` is set and the qualifications for static CPU assignment are satisfied, that is, containers have Guaranteed QoS with integer CPU requests, the CFS quota is disabled. An example of a qualifying pod follows this list. (OCPBUGS-14051)
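For context on the CFS quota change above, a container qualifies for static CPU assignment when its pod has the Guaranteed QoS class and the container requests whole CPUs with requests equal to limits. The following is a minimal illustrative pod; the name, image, and sizes are placeholders:

    $ oc apply -f - <<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: pinned-example
    spec:
      containers:
      - name: app
        image: registry.example.com/app:latest   # placeholder image
        resources:
          requests:
            cpu: "2"        # whole CPUs with requests equal to limits: Guaranteed QoS, static CPU pinning
            memory: 1Gi
          limits:
            cpu: "2"
            memory: 1Gi
    EOF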
1.6.18. Node Tuning Operator (NTO)
- Before this update, the `iommu.passthrough=1` kernel argument caused an NVIDIA GPU validator failure on Advanced RISC Machine (ARM) CPUs in OpenShift Container Platform 4.18. With this release, the kernel argument is removed from the default `Tuned` CR for ARM-based environments. (OCPBUGS-52853)
1.6.19. Observability
- Before this update, the linked URL was in the Developer perspective, but the perspective did not switch when you clicked the link. As a consequence, a blank page was displayed. With this release, the perspective changes when you click the link and the page displays correctly. (OCPBUGS-59215)
- Before this update, the Troubleshooting panel only worked in the admin perspective even though you could open the panel in all perspectives. As a consequence, when opening the panel in another perspective, the panel was non-operational. With this release, the Troubleshooting panel can only be opened from the admin perspective. (OCPBUGS-58166)
1.6.20. oc-mirror
- Before this update, `oc-mirror` counted mirrored Helm images incorrectly and failed to record all of them. As a consequence, an incorrect Helm image count was displayed. With this release, the count is fixed and `oc-mirror` correctly mirrors all Helm images. As a result, the total mirrored images count for Helm charts in `oc-mirror` is accurate. (OCPBUGS-59949)
- Before this update, the `--parallel-images` flag accepted invalid input, such as values less than 1 or greater than the total number of images. As a consequence, parallel image copying could fail or limit the number of images that could be mirrored. With this release, invalid `--parallel-images` values are rejected, and only values between 1 and the total number of images are accepted. As a result, users can set the `--parallel-images` flag to any value in the valid range, as shown in the sketch after this list. (OCPBUGS-58467)
- Before this update, high `oc-mirror v2` concurrency defaults caused registry overload and led to request rejections. As a consequence, container image pushes failed. With this release, concurrency defaults for `oc-mirror v2` are reduced to avoid registry rejections, and the image push success rate is improved. (OCPBUGS-57370)
- Before this update, a bug occurred due to a mismatch between image digests and blocked image tags in the `ImageSetConfig` parameter. This bug caused users to see images from various cloud providers in a mirrored set, although they were blocked. With this release, the `ImageSetConfig` parameter is updated to support regular expressions in the `blockedImages` list for more flexible image exclusion, and allows the exclusion of images that match a regular expression pattern in the `blockedImages` list. (OCPBUGS-56117)
- Before this update, the system umask value was set to `0077` for Security Technical Implementation Guide (STIG) compliance, which caused the `disk2mirror` workflow to stop uploading OpenShift Container Platform release images. As a consequence, users could not upload OpenShift Container Platform release images due to the umask restriction. With this release, `oc-mirror` handles the faulty umask value and alerts the user. The OpenShift Container Platform release images are uploaded correctly when the system umask is set to `0077`. (OCPBUGS-55374)
- Before this update, an invalid Helm chart that was incorrectly included in an image set configuration (ISC) caused an unclear error message while running the `m2d` workflow. With this release, the error message for invalid Helm charts in `m2d` workflows is updated, and error message clarity is improved. (OCPBUGS-54473)
- Before this update, multiple release collections occurred due to duplicate channel selection. As a consequence, duplicate release images were collected, causing unnecessary storage usage. With this release, duplicate release collection is fixed and each release is collected only once. As a result, storage is used efficiently with faster access. (OCPBUGS-52562)
- Before this update, `oc-mirror` did not check the availability of the specified OpenShift Container Platform version, and continued with non-existent versions. As a consequence, users assumed that the mirroring was successful because no error messages were received. With this release, `oc-mirror` returns an error when a non-existent OpenShift Container Platform version is specified, in addition to a reason for the issue. As a result, users are aware of unavailable versions and can take appropriate action. (OCPBUGS-51157)
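The following sketch shows the corrected `--parallel-images` validation in use. It assumes an existing `imageset-config.yaml` and a local archive destination, both of which are placeholders; the value must be at least 1 and no greater than the total number of images being mirrored:

    # Mirror to a local archive, copying up to 8 images in parallel (placeholder paths).
    $ oc-mirror --v2 --config imageset-config.yaml --parallel-images 8 file://mirror-archive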
1.6.21. OpenShift API Server
- Before this update, on a cluster upgraded from OpenShift Container Platform 4.16 or earlier, previously generated image pull secrets could not be deleted if the internal image registry was removed, because of the `openshift.io/legacy-token` finalizer. With this release, the issue no longer occurs. (OCPBUGS-52193)
- Before this update, deleting an `istag` resource with the `--dry-run=server` option unintentionally caused actual deletion of the image from the server. This unexpected deletion occurred because the `dry-run` option was implemented incorrectly in the `oc delete istag` command. With this release, the `dry-run` option is correctly wired into the `oc delete istag` command. As a result, the accidental deletion of image objects is prevented and the `istag` object remains intact when using the `--dry-run=server` option, as shown in the example after this list. (OCPBUGS-35855)
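For example, a server-side dry run of an image stream tag deletion now leaves the tag and its image intact; the image stream name, tag, and namespace below are placeholders:

    # Simulates the deletion on the server without removing the tag or its image.
    $ oc delete istag myapp:latest -n my-project --dry-run=server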
1.6.22. OpenShift CLI (oc)
- Before this update, the `oc adm node-image create` command failed to create an International Organization for Standardization (ISO) image if the target cluster did not have a debug SSH key stored in the `99-worker-ssh` config map, even though the key is not a requirement for generating an image. With this release, the ISO image can be created successfully without this key in the `99-worker-ssh` config map. (OCPBUGS-60600)
- Before this update, a panic occurred during the `oc describe templateinstance` command due to a nil pointer dereference in `TemplateInstanceDescriber`. With this release, the nil pointer dereference in the `oc describe templateinstance` command is fixed by checking for a nil secret before describing parameters. (OCPBUGS-60281)
- Before this update, the `oc login -u` command in external OIDC environments succeeded but removed user credentials, causing subsequent `oc` commands to fail. With this release, the `oc login -u` command no longer modifies the kubeconfig, preventing the removal of user credentials and ensuring that subsequent `oc` commands work correctly. (OCPBUGS-58393)
- Before this update, when using the `oc adm node-image create` command, the command did not provide descriptive error messages after failures. With this release, the command provides error messages when it fails. (OCPBUGS-55048)
- Before this update, the must-gather pod could be scheduled on a node marked with a `NotReady` taint, resulting in deployment to an unavailable node and subsequent log collection failures. With this release, the scheduler now accounts for node taints and automatically applies a node selector to the pod specification. This change ensures that must-gather pods are not scheduled on tainted nodes, thereby preventing log collection failures. (OCPBUGS-50992)
- Before this update, when using the `oc adm node-image create` command to add nodes to a cluster, the command erroneously modified the existing permissions of the target assets folder when saving the ISO on disk. With this release, the copy operation preserves the destination folder permissions. (OCPBUGS-49897)
1.6.23. OpenShift Controller
- Before this update, the build controller looked for secrets that were linked for general use, not specifically for the image pull. With this release, when searching for default image pull secrets, builds use the `ImagePullSecrets` that are linked to the service account. (OCPBUGS-57918)
- Before this update, incorrectly formatted proxy environment variables in the build pod led to build failures because the external build binary rejected their format. With this release, builds no longer fail due to incorrectly formatted proxy environment variables because such variables are now excluded. (OCPBUGS-54695)
1.6.24. Operator Lifecycle Manager (OLM) Classic
- Before this update, bundle unpack jobs did not inherit control plane tolerations for the catalog Operator when they were created. As a result, bundle unpack jobs ran on worker nodes only. If no worker nodes were available due to taints, cluster administrators could not install or update Operators on the cluster. With this release, OLM (Classic) adopts control plane tolerations for bundle unpack jobs and the jobs can run as part of the control plane. (OCPBUGS-58349)
- Before this update, when an Operator supplied more than one API in an Operator group namespace, OLM (Classic) made unnecessary update calls to the cluster roles that were created for the Operator group. As a result, these unnecessary calls caused churn for etcd and the API server. With this update, OLM (Classic) does not make unnecessary update calls to the cluster role objects in Operator groups. (OCPBUGS-57222)
- Before this update, if the `olm-operator` pod crashed during cluster updates due to mislabeled resources, the notification message used the `info` label. With this update, crash notification messages due to mislabeled resources use the `error` label instead. (OCPBUGS-53161)
- Before this update, the catalog Operator scheduled catalog snapshots every 5 minutes. On clusters with many namespaces and subscriptions, snapshots failed and cascaded across catalog sources. As a result, the spikes in CPU loads effectively blocked installing and updating Operators. With this update, catalog snapshots are scheduled every 30 minutes to allow enough time for the snapshots to resolve. (OCPBUGS-43966)
1.6.25. Service Catalog
- Before this update, setting an invalid certificate secret name in the service annotation `service.beta.openshift.io/serving-cert-secret-name` caused the service Certificate Authority (CA) Operator to hot loop. With this release, the Operator stops retrying to create the secret after 10 tries. The number of retries cannot be changed. The example that follows shows how the annotation is set. (OCPBUGS-61966)
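For reference, the annotation is set on a service as shown below; the service and secret names are placeholders. If the named secret cannot be created, the Operator now stops retrying after 10 attempts:

    # Request a serving certificate; the service CA Operator writes the key pair into the named secret.
    $ oc annotate service my-service service.beta.openshift.io/serving-cert-secret-name=my-service-tls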
1.6.26. Storage
- Before this update, resizing or cloning small Google Cloud Hyperdisk volumes (e.g., from 4Gi to 5Gi) would fail due to an Input/Output Operations Per Second (IOPS) validation error from the Google Cloud API. This occurred because the Container Storage Interface (CSI) driver did not automatically adjust the provisioned IOPS to meet the minimum requirements of the new volume size. With this release, the driver has been updated to correctly calculate and provide the required IOPS during volume expansion operations. Users can now successfully resize and clone these smaller Hyperdisk volumes. (OCPBUGS-62117)
- Before this update, a race condition would sometimes cause an intermittent failure, or flake, when a Persistent Volume Claim (PVC) was resized too quickly after being created. This resulted in an error where the system would incorrectly report that the bound Persistent Volume (PV) could not be found. With this release, the timing issue was fixed, so resizing a PVC right after its creation works. (OCPBUGS-61546)
1.7. Technology Preview features status
Some features in this release are currently in Technology Preview. These experimental features are not intended for production use. Note the following scope of support on the Red Hat Customer Portal for these features:
Technology Preview Features Support Scope
In the following tables, features are marked with the following statuses:
- Not Available
- Technology Preview
- General Availability
- Deprecated
- Removed
1.7.1. Authentication and authorization Technology Preview features
Feature | 4.18 | 4.19 | 4.20 |
---|---|---|---|
Pod security admission restricted enforcement | Technology Preview | Technology Preview | Technology Preview |
Direct authentication with an external OIDC identity provider | Not Available | Technology Preview | Technology Preview |
1.7.2. Edge computing Technology Preview features
Feature | 4.18 | 4.19 | 4.20 |
---|---|---|---|
Accelerated provisioning of GitOps ZTP | Technology Preview | Technology Preview | Technology Preview |
Enabling disk encryption with TPM and PCR protection | Technology Preview | Technology Preview | Technology Preview |
Configuring a local arbiter node | Not Available | Technology Preview | General Availability |
Configuring a two-node OpenShift cluster with fencing | Not Available | Not Available | Technology Preview |
1.7.3. Extensions Technology Preview features
Feature | 4.18 | 4.19 | 4.20 |
---|---|---|---|
Operator Lifecycle Manager (OLM) v1 | General Availability | General Availability | General Availability |
OLM v1 runtime validation of container images using sigstore signatures | Technology Preview | Technology Preview | Technology Preview |
OLM v1 permissions preflight check for cluster extensions | Not Available | Technology Preview | Technology Preview |
OLM v1 deploying a cluster extension in a specified namespace | Not Available | Technology Preview | Technology Preview |
OLM v1 deploying a cluster extension that uses webhooks | Not Available | Not Available | Technology Preview |
1.7.4. Installation Technology Preview features
Feature | 4.18 | 4.19 | 4.20 |
---|---|---|---|
Adding kernel modules to nodes with kvc | Technology Preview | Technology Preview | Technology Preview |
Enabling NIC partitioning for SR-IOV devices | General Availability | General Availability | General Availability |
User-defined labels and tags for Google Cloud | General Availability | General Availability | General Availability |
Installing a cluster on Alibaba Cloud by using Assisted Installer | Technology Preview | Technology Preview | Technology Preview |
Installing a cluster on Microsoft Azure with confidential VMs | Technology Preview | General Availability | General Availability |
Dedicated disk for etcd on Microsoft Azure | Not Available | Not Available | Technology Preview |
Mount shared entitlements in BuildConfigs in RHEL | Technology Preview | Technology Preview | Technology Preview |
OpenShift zones support for vSphere host groups | Not Available | Technology Preview | Technology Preview |
Selectable Cluster Inventory | Technology Preview | Technology Preview | Technology Preview |
Installing a cluster on Google Cloud using the Cluster API implementation | General Availability | General Availability | General Availability |
Enabling a user-provisioned DNS on Google Cloud | Not Available | Technology Preview | Technology Preview |
Installing a cluster on VMware vSphere with multiple network interface controllers | Technology Preview | Technology Preview | General Availability |
Using bare metal as a service | Not Available | Technology Preview | Technology Preview |
Changing the CVO log level | Not Available | Not Available | Technology Preview |
1.7.5. Machine Config Operator Technology Preview features
Feature | 4.18 | 4.19 | 4.20 |
---|---|---|---|
Improved MCO state reporting ( | Technology Preview | Technology Preview | General Availability |
Image mode for OpenShift/On-cluster RHCOS image layering for AWS and Google Cloud | Technology Preview | General Availability | General Availability |
Image mode for OpenShift/On-cluster RHCOS image layering for vSphere | Not available | Not available | Technology Preview |
1.7.6. Machine management Technology Preview features
Feature | 4.18 | 4.19 | 4.20 |
---|---|---|---|
Managing machines with the Cluster API for Amazon Web Services | Technology Preview | Technology Preview | Technology Preview |
Managing machines with the Cluster API for Google Cloud | Technology Preview | Technology Preview | Technology Preview |
Managing machines with the Cluster API for IBM Power® Virtual Server | Technology Preview | Technology Preview | Technology Preview |
Managing machines with the Cluster API for Microsoft Azure | Technology Preview | Technology Preview | Technology Preview |
Managing machines with the Cluster API for RHOSP | Technology Preview | Technology Preview | Technology Preview |
Managing machines with the Cluster API for VMware vSphere | Technology Preview | Technology Preview | Technology Preview |
Managing machines with the Cluster API for bare metal | Not Available | Technology Preview | Technology Preview |
Cloud controller manager for IBM Power® Virtual Server | Technology Preview | Technology Preview | Technology Preview |
Adding multiple subnets to an existing VMware vSphere cluster by using compute machine sets | Technology Preview | Technology Preview | Technology Preview |
Configuring Trusted Launch for Microsoft Azure virtual machines by using machine sets | Technology Preview | General Availability | General Availability |
Configuring Azure confidential virtual machines by using machine sets | Technology Preview | General Availability | General Availability |
1.7.7. Monitoring Technology Preview features
Feature | 4.18 | 4.19 | 4.20 |
---|---|---|---|
Metrics Collection Profiles | Technology Preview | General Availability | General Availability |
1.7.8. Multi-Architecture Technology Preview features
Feature | 4.18 | 4.19 | 4.20 |
---|---|---|---|
| Technology Preview | Technology Preview | General Availability |
| Technology Preview | Technology Preview | General Availability |
| Technology Preview | Technology Preview | General Availability |
Support for configuring the image stream import mode behavior | Technology Preview | Technology Preview | Technology Preview |
1.7.9. Networking Technology Preview features
Feature | 4.18 | 4.19 | 4.20 |
---|---|---|---|
eBPF manager Operator | Technology Preview | Technology Preview | Technology Preview |
Advertise using L2 mode the MetalLB service from a subset of nodes, using a specific pool of IP addresses | Technology Preview | Technology Preview | Technology Preview |
Updating the interface-specific safe sysctls list | Technology Preview | Technology Preview | Technology Preview |
Egress service custom resource | Technology Preview | Technology Preview | Technology Preview |
VRF specification in | Technology Preview | Technology Preview | Technology Preview |
VRF specification in | Technology Preview | General Availability | General Availability |
Host network settings for SR-IOV VFs | General Availability | General Availability | General Availability |
Integration of MetalLB and FRR-K8s | General Availability | General Availability | General Availability |
Automatic leap seconds handling for PTP grandmaster clocks | General Availability | General Availability | General Availability |
PTP events REST API v2 | General Availability | General Availability | General Availability |
OVN-Kubernetes customized | General Availability | General Availability | General Availability |
OVN-Kubernetes customized | Technology Preview | Technology Preview | Technology Preview |
Live migration to OVN-Kubernetes from OpenShift SDN | Not Available | Not Available | Not Available |
User-defined network segmentation | General Availability | General Availability | General Availability |
Dynamic configuration manager | Technology Preview | Technology Preview | Technology Preview |
SR-IOV Network Operator support for Intel C741 Emmitsburg Chipset | Technology Preview | Technology Preview | Technology Preview |
Gateway API and Istio for Ingress management | Technology Preview | General Availability | General Availability |
Dual-port NIC for PTP ordinary clock | Not Available | Technology Preview | Technology Preview |
DPU Operator | Not Available | Technology Preview | Technology Preview |
Fast IPAM for the Whereabouts IPAM CNI plugin | Not Available | Technology Preview | Technology Preview |
Unnumbered BGP peering | Not Available | Technology Preview | General Availability |
Load balancing across the aggregated bonded interface with xmitHashPolicy | Not Available | Not Available | Technology Preview |
PF Status Relay Operator for high availability with SR-IOV networks | Not Available | Not Available | Technology Preview |
Preconfigured user-defined network end points using MTV | Not Available | Not Available | Technology Preview |
Unassisted holdover for PTP devices | Not Available | Not Available | Technology Preview |
1.7.10. Node Technology Preview features
Feature | 4.18 | 4.19 | 4.20 |
---|---|---|---|
| Technology Preview | Technology Preview | Technology Preview |
sigstore support | Technology Preview | Technology Preview | General Availability |
Default sigstore | Technology Preview | Technology Preview | Technology Preview |
Linux user namespace support | Technology Preview | Technology Preview | General Availability |
Attribute-Based GPU Allocation | Not Available | Not Available | Technology Preview |
1.7.11. OpenShift CLI (oc) Technology Preview features
Feature | 4.18 | 4.19 | 4.20 |
---|---|---|---|
oc-mirror plugin v2 | General Availability | General Availability | General Availability |
oc-mirror plugin v2 enclave support | General Availability | General Availability | General Availability |
oc-mirror plugin v2 delete functionality | General Availability | General Availability | General Availability |
1.7.12. Operator lifecycle and development Technology Preview features
Feature | 4.18 | 4.19 | 4.20 |
---|---|---|---|
Operator Lifecycle Manager (OLM) v1 | General Availability | General Availability | General Availability |
Scaffolding tools for Hybrid Helm-based Operator projects | Removed | Removed | Removed |
Scaffolding tools for Java-based Operator projects | Removed | Removed | Removed |
1.7.13. Red Hat OpenStack Platform (RHOSP) Technology Preview features
Feature | 4.18 | 4.19 | 4.20 |
---|---|---|---|
RHOSP integration into the Cluster CAPI Operator | Technology Preview | Technology Preview | Technology Preview |
Control plane with | General Availability | General Availability | General Availability |
Hosted control planes on RHOSP 17.1 | Not Available | Technology Preview | Technology Preview |
1.7.14. Scalability and performance Technology Preview features
Feature | 4.18 | 4.19 | 4.20 |
---|---|---|---|
factory-precaching-cli tool | Technology Preview | Technology Preview | Technology Preview |
Hyperthreading-aware CPU manager policy | Technology Preview | Technology Preview | Technology Preview |
Mount namespace encapsulation | Technology Preview | Technology Preview | Technology Preview |
Node Observability Operator | Technology Preview | Technology Preview | Technology Preview |
Increasing the etcd database size | Technology Preview | Technology Preview | Technology Preview |
Using RHACM | Technology Preview | General Availability | General Availability |
Pinned Image Sets | Technology Preview | Technology Preview | Technology Preview |
Configuring NUMA-aware scheduler replicas and high availability | Not available | Not available | Technology Preview |
1.7.15. Storage Technology Preview features
Feature | 4.18 | 4.19 | 4.20 |
---|---|---|---|
AWS EFS One Zone volume | Not Available | Not Available | General Availability |
Automatic device discovery and provisioning with Local Storage Operator | Technology Preview | Technology Preview | Technology Preview |
Azure File CSI snapshot support | Technology Preview | Technology Preview | Technology Preview |
Azure File cross-subscription support | Not Available | General Availability | General Availability |
Azure Disk performance plus | Not Available | Not Available | General Availability |
Configuring fsGroupChangePolicy per namespace | Not Available | Not Available | General Availability |
Shared Resources CSI Driver in OpenShift Builds | Technology Preview | Technology Preview | Technology Preview |
Secrets Store CSI Driver Operator | General Availability | General Availability | General Availability |
CIFS/SMB CSI Driver Operator | General Availability | General Availability | General Availability |
VMware vSphere multiple vCenter support | General Availability | General Availability | General Availability |
Disabling/enabling storage on vSphere | Technology Preview | General Availability | General Availability |
Increasing max number of volumes per node for vSphere | Not Available | Technology Preview | Technology Preview |
RWX/RWO SELinux mount option | Developer Preview | Developer Preview | Technology Preview |
Migrating CNS Volumes Between Datastores | Developer Preview | General Availability | General Availability |
CSI volume group snapshots | Technology Preview | Technology Preview | Technology Preview |
GCP PD supports C3/N4 instance types and hyperdisk-balanced disks | General Availability | General Availability | General Availability |
OpenStack Manila support for CSI resize | General Availability | General Availability | General Availability |
Volume Attribute Classes | Not Available | Technology Preview | Technology Preview |
Volume populators | Technology Preview | Technology Preview | General Availability |
1.7.16. Web console Technology Preview features
Feature | 4.18 | 4.19 | 4.20 |
---|---|---|---|
Red Hat OpenShift Lightspeed in the OpenShift Container Platform web console | Technology Preview | Technology Preview | Technology Preview |
1.8. Known issues
There is a known issue with Gateway API and Amazon Web Services (AWS), Google Cloud, and Microsoft Azure private clusters. The load balancer that is provisioned for a gateway is always configured to be external, which can cause errors or unexpected behavior:
- In an AWS private cluster, the load balancer becomes stuck in the `pending` state and reports the error: `Error syncing load balancer: failed to ensure load balancer: could not find any suitable subnets for creating the ELB`.
- In Google Cloud and Azure private clusters, the load balancer is provisioned with an external IP address, when it should not have an external IP address.
There is no supported workaround for this issue. (OCPBUGS-57440)
-
When running a pod in an isolated user namespace, the UID/GID inside a pod container no longer matches the UID/GID on the host. For file system ownership to work correctly, the Linux kernel uses ID-mapped mounts, which translate user IDs between the container and the host at the virtual file system (VFS) layer.
However, not all file systems currently support ID-mapped mounts, such as Network File Systems (NFS) and other network or distributed file systems. Because such file systems do not support ID-mapped mounts, pods running within user namespaces can fail to access mounted NFS volumes. This behavior is not specific to OpenShift Container Platform. It applies to all Kubernetes distributions from Kubernetes v1.33 and later.
When upgrading to OpenShift Container Platform 4.20, clusters are unaffected until you opt in to user namespaces. After enabling user namespaces, any pod that is using an NFS-backed persistent volume from a vendor that does not support ID-mapped mounts might experience access or permission issues when running in a user namespace. For more information about enabling user namespaces, see Configuring Linux user namespace support.
Note: Existing OpenShift Container Platform 4.19 clusters are unaffected until you explicitly enable user namespaces, which is a Technology Preview feature in OpenShift Container Platform 4.19.
- When installing a cluster on Azure, if you set any of the `compute.platform.azure.identity.type`, `controlplane.platform.azure.identity.type`, or `platform.azure.defaultMachinePlatform.identity.type` field values to `None`, your cluster is unable to pull images from the Azure Container Registry. You can avoid this issue by providing a user-assigned identity or by leaving the identity field blank. In both cases, the installation program generates a user-assigned identity. (OCPBUGS-56008)
- There is a known issue in the unified software catalog view of the console. When you select Ecosystem → Software Catalog, you must enter an existing project name or create a new project to view the software catalog. The project selection field does not affect how catalog content is installed on the cluster. As a workaround, enter any existing project name to view the software catalog. (OCPBUGS-61870)
- Starting with OpenShift Container Platform 4.20, there is a decrease in the default maximum open files soft limit for containers. As a consequence, end users might experience application failures. To work around this problem, increase the container runtime (CRI-O) ulimit configuration. (OCPBUGS-62095)
- Deleting and recreating test workloads with a BlueField-3 NIC causes clock jumps due to inconsistent PTP synchronization. This disrupts time synchronization in test workloads. The time synchronization stabilizes when the workloads are stable. (RHEL-93579)
- Event logs for GNR-D interfaces are ambiguous due to identical three-letter prefixes ("eno"). As a consequence, affected interfaces are not clearly identified during state changes. To work around this problem, change interfaces used by ptp-operator to follow the "path" naming convention, ensuring per clock events are identified correctly based on interface names and clearly indicate which clock is affected by state changes. For more information, see Network interface naming policies. (OCPBUGS-62817)
- When you install a cluster on AWS, if you do not configure AWS credentials before running any `openshift-install create` command, the installation program fails. For one way to configure credentials first, see the example after this list. (OCPBUGS-56658)
- On systems using specific AMD EPYC processors, some low-level system interrupts, for example `AMD-Vi`, might contain CPUs in the CPU mask that overlap with CPU-pinned workloads. This behavior is because of the hardware design. These specific error-reporting interrupts are generally inactive and there is currently no known performance impact. (OCPBUGS-57787)
- Currently, pods that use a `guaranteed` QoS class and request whole CPUs might not restart automatically after a node reboot or kubelet restart. The issue might occur in nodes configured with a static CPU Manager policy and using the `full-pcpus-only` specification, and when most or all CPUs on the node are already allocated by such workloads. As a workaround, manually delete and re-create the affected pods. (OCPBUGS-43280)
- The Performance Profile Creator tool fails to analyze a `must-gather` archive if the archive contains a custom namespace directory that ends with the suffix `nodes`. The failure occurs because of the tool’s search logic, which incorrectly reports an error for multiple matches. As a workaround, rename the custom namespace directory so that it does not end with the `nodes` suffix, and run the tool again. (OCPBUGS-60218)
- Currently, on clusters with SR-IOV network virtual functions configured, a race condition might occur between system services responsible for network device renaming and the TuneD service managed by the Node Tuning Operator. As a consequence, the TuneD profile might become degraded after the node restarts, leading to performance degradation. As a workaround, restart the TuneD pod to restore the profile state. (OCPBUGS-41934)
- The SuperMicro ARS-111GL-NHR server is unable to access virtual media during boot when the virtual media image is served through an IPv6 address. As a consequence, you cannot use virtual media on the SuperMicro ARS-111GL-NHR server model with an IPv6 network configuration. (OCPBUGS-60070)
- A known latency issue currently affects systems running on 4th Gen Intel Xeon processors. (OCPBUGS-46528)
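For the AWS credentials known issue listed earlier (OCPBUGS-56658), one way to configure credentials before running any `openshift-install create` command is shown below. The profile name and directory are placeholders, and this assumes a matching entry already exists in `~/.aws/credentials`:

    # Make existing AWS credentials available to the installation program, then run the installer.
    $ export AWS_PROFILE=my-openshift-profile
    $ openshift-install create install-config --dir ./install-dir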
1.9. Asynchronous errata updates
Security, bug fix, and enhancement updates for OpenShift Container Platform 4.20 are released as asynchronous errata through the Red Hat Network. All OpenShift Container Platform 4.20 errata is available on the Red Hat Customer Portal. See the OpenShift Container Platform Life Cycle for more information about asynchronous errata.
Red Hat Customer Portal users can enable errata notifications in the account settings for Red Hat Subscription Management (RHSM). When errata notifications are enabled, users are notified through email whenever new errata relevant to their registered systems are released.
Red Hat Customer Portal user accounts must have systems registered and consuming OpenShift Container Platform entitlements for OpenShift Container Platform errata notification emails to generate.
This section will continue to be updated over time to provide notes on enhancements and bug fixes for future asynchronous errata releases of OpenShift Container Platform 4.20. Versioned asynchronous releases, for example with the form OpenShift Container Platform 4.20.z, will be detailed in subsections. In addition, releases in which the errata text cannot fit in the space provided by the advisory will be detailed in subsections that follow.
For any OpenShift Container Platform release, always review the instructions on updating your cluster properly.
1.9.1. RHSA-2025:9562 - OpenShift Container Platform 4.20.0 image release, bug fix, and security update advisory
Issued: 21 Oct 2025
OpenShift Container Platform release 4.20.0, which includes security updates, is now available. The list of bug fixes that are included in the update is documented in the RHSA-2025:9562 advisory. The RPM packages that are included in the update are provided by the RHEA-2025:4782 advisory.
Space precluded documenting all of the container images for this release in the advisory.
You can view the container images in this release by running the following command:
$ oc adm release info 4.20.0 --pullspecs
1.9.1.1. Updating
To update an OpenShift Container Platform 4.20 cluster to this latest release, see Updating a cluster using the CLI.