Chapter 1. OpenShift Container Platform 4.18 release notes
Red Hat OpenShift Container Platform provides developers and IT organizations with a hybrid cloud application platform for deploying both new and existing applications on secure, scalable resources with minimal configuration and management. OpenShift Container Platform supports a wide selection of programming languages and frameworks, such as Java, JavaScript, Python, Ruby, and PHP.
Built on Red Hat Enterprise Linux (RHEL) and Kubernetes, OpenShift Container Platform provides a more secure and scalable multitenant operating system for today’s enterprise-class applications, while delivering integrated application runtimes and libraries. OpenShift Container Platform enables organizations to meet security, privacy, compliance, and governance requirements.
1.1. About this release
OpenShift Container Platform (RHSA-2024:6122) is now available. This release uses Kubernetes 1.31 with the CRI-O runtime. New features, changes, and known issues that pertain to OpenShift Container Platform 4.18 are included in this topic.
OpenShift Container Platform 4.18 clusters are available at https://console.redhat.com/openshift. From the Red Hat Hybrid Cloud Console, you can deploy OpenShift Container Platform clusters to either on-premises or cloud environments.
OpenShift Container Platform 4.18 is supported on Red Hat Enterprise Linux (RHEL) 8.8 and a later version of RHEL 8 that is released before End of Life of OpenShift Container Platform 4.18. OpenShift Container Platform 4.18 is also supported on Red Hat Enterprise Linux CoreOS (RHCOS). To understand RHEL versions used by RHCOS, see RHEL Versions Utilized by Red Hat Enterprise Linux CoreOS (RHCOS) and OpenShift Container Platform (Knowledgebase article).
You must use RHCOS machines for the control plane, and you can use either RHCOS or RHEL for compute machines. RHEL machines are deprecated in OpenShift Container Platform 4.16 and will be removed in a future release.
Starting from OpenShift Container Platform 4.14, the Extended Update Support (EUS) phase for even-numbered releases increases the total available lifecycle to 24 months on all supported architectures, including `x86_64`, 64-bit ARM (`aarch64`), IBM Power® (`ppc64le`), and IBM Z® (`s390x`). Beyond this, Red Hat also offers a 12-month additional EUS add-on, denoted as Additional EUS Term 2, that extends the total available lifecycle from 24 months to 36 months. The Additional EUS Term 2 is available on all architecture variants of OpenShift Container Platform. For more information about support for all versions, see the Red Hat OpenShift Container Platform Life Cycle Policy.
Commencing with the OpenShift Container Platform 4.14 release, Red Hat is simplifying the administration and management of Red Hat shipped cluster Operators with the introduction of three new life cycle classifications: Platform Aligned, Platform Agnostic, and Rolling Stream. These life cycle classifications provide additional ease and transparency for cluster administrators to understand the life cycle policies of each Operator and form cluster maintenance and upgrade plans with predictable support boundaries. For more information, see OpenShift Operator Life Cycles.
OpenShift Container Platform is designed for FIPS. When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the `x86_64`, `ppc64le`, and `s390x` architectures.
For more information about the NIST validation program, see Cryptographic Module Validation Program. For the latest NIST status for the individual versions of RHEL cryptographic libraries that have been submitted for validation, see Compliance Activities and Government Standards.
1.2. OpenShift Container Platform layered and dependent component support and compatibility
The scope of support for layered and dependent components of OpenShift Container Platform changes independently of the OpenShift Container Platform version. To determine the current support status and compatibility for an add-on, refer to its release notes. For more information, see the Red Hat OpenShift Container Platform Life Cycle Policy.
1.3. New features and enhancements
This release adds improvements related to the following components and concepts:
1.3.1. Authentication and authorization
1.3.1.1. Rotating OIDC bound service account signer keys
With this release, you can use the Cloud Credential Operator (CCO) utility (`ccoctl`) to rotate the OpenID Connect (OIDC) bound service account signer key for clusters installed on the following cloud providers:
1.3.2. Backup and restore
1.3.2.1. Hibernating a cluster for up to 90 days
With this release, you can now hibernate your OpenShift Container Platform cluster for up to 90 days and expect the cluster to recover successfully. Before this release, you could only hibernate for up to 30 days.
For more information, see Hibernating an OpenShift Container Platform cluster.
1.3.2.2. Enhanced etcd backup and restore documentation
The etcd disaster recovery documentation was updated and simplified for quicker recovery of the cluster, both in a normal disaster recovery situation and in cases where a full cluster restoration from a previous backup is not necessary.
Two scripts, `quorum-restore.sh` and `cluster-restore.sh`, are introduced to complete many of the steps in the recovery procedure.
In addition, a procedure was added to more quickly recover the cluster when at least one good node exists. If any of the surviving nodes meets specific criteria, you can use it to run the recovery.
For more information, see About disaster recovery.
1.3.3. Edge computing
1.3.3.1. Shutting down and restarting single-node OpenShift clusters up to 1 year after cluster installation
With this release, you can shut down and restart single-node OpenShift clusters up to 1 year after cluster installation. If certificates expired while the cluster was shut down, you must approve certificate signing requests (CSRs) upon restarting the cluster.
Before this update, you could shut down and restart single-node OpenShift clusters for only 120 days after cluster installation.
Evacuate all workload pods from the single-node OpenShift cluster before you shut it down.
For more information, see Shutting down the cluster gracefully.
1.3.4. Extensions (OLM v1)
1.3.4.1. Operator Lifecycle Manager (OLM) v1 (General Availability)
Operator Lifecycle Manager (OLM) has been included with OpenShift Container Platform 4 since its initial release and has helped enable and grow a substantial ecosystem of solutions and advanced workloads running as Operators.
OpenShift Container Platform 4.18 introduces OLM v1, the next-generation Operator Lifecycle Manager, as a General Availability (GA) feature, designed to improve how you manage Operators on OpenShift Container Platform.
With OLM v1 now generally available, starting in OpenShift Container Platform 4.18, the existing version of OLM that has been included since the launch of OpenShift Container Platform 4 is now known as OLM (Classic).
Previously available as a Technology Preview feature only, the updated framework in OLM v1 evolves many of the concepts that have been part of OLM (Classic) by simplifying Operator management, enhancing security, and boosting reliability.
- Starting in OpenShift Container Platform 4.18, OLM v1 is now enabled by default, alongside OLM (Classic). OLM v1 is a cluster capability that administrators can optionally disable before installation of OpenShift Container Platform.
- OLM (Classic) remains fully supported throughout the OpenShift Container Platform 4 lifecycle.
- Simplified API
- OLM v1 simplifies Operator management with a new, user-friendly API: the `ClusterExtension` object. By managing Operators as integral extensions of the cluster, OLM v1 caters to the special lifecycle requirements of custom resource definitions (CRDs). This design aligns more closely with Kubernetes principles, treating Operators, which consist of custom controllers and CRDs, as cluster-wide singletons. OpenShift Container Platform continues to give you access to the latest Operator packages, patches, and updates through default Red Hat Operator catalogs, which are enabled by default for OLM v1 in OpenShift Container Platform 4.18. With OLM v1, you can install an Operator package by creating and applying a `ClusterExtension` API object in your cluster (see the example after this list). By interacting with `ClusterExtension` objects, you can manage the lifecycle of Operator packages, quickly understand their status, and troubleshoot issues.
- Streamlined declarative workflows
- Leveraging the simplified API, you can define your desired Operator states in a declarative way and, when integrating with tools like Git and Zero Touch Provisioning, let OLM v1 automatically maintain those states. This minimizes human error and unlocks a wider range of use cases.
- Uninterrupted operations with continuous reconciliation and optional rollbacks
- OLM v1 enhances reliability through continuous reconciliation. Rather than relying on single attempts, OLM v1 proactively addresses Operator installation and update failures, automatically retrying until the issue is resolved. This eliminates the manual steps previously required, such as deleting `InstallPlan` API objects, and greatly simplifies the resolution of off-cluster issues, such as missing container images or catalog problems. In addition, OLM v1 provides optional rollbacks, allowing you to revert Operator version updates under specific conditions after carefully assessing any potential risks.
- Granular update control for deployments
- With granular update control, you can select a specific Operator version or define a range of acceptable versions. For example, if you have tested and approved version `1.2.3` of an Operator in a stage environment, instead of hoping the latest version works just as well in production, you can use version pinning. By specifying `1.2.3` as the desired version, you can ensure that this is the exact version that is deployed for a safe and predictable update. Alternatively, automatic z-stream updates provide a seamless and secure experience by automatically applying security fixes without manual intervention, minimizing operational disruptions.
- Enhanced security with user-provided service accounts
- OLM v1 prioritizes security by minimizing its permission requirements and providing greater control over access. By using user-provided `ServiceAccount` objects for Operator lifecycle operations, OLM v1 access is restricted to only the necessary permissions, significantly reducing the control plane attack surface and improving overall security. In this way, OLM v1 adopts a least-privilege model to minimize the impact of a compromise.
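For illustration, the following is a minimal sketch of a `ClusterExtension` object that installs an Operator package with a pinned version range and a user-provided service account. The package name, namespace, service account name, and version are placeholders, and the exact field names and API version should be verified against the Extensions guide for your cluster.

```yaml
apiVersion: olm.operatorframework.io/v1
kind: ClusterExtension
metadata:
  name: example-extension                    # placeholder name
spec:
  namespace: example-extension-ns            # namespace the extension is installed into
  serviceAccount:
    name: example-extension-installer        # user-provided service account with the required RBAC
  source:
    sourceType: Catalog
    catalog:
      packageName: example-operator          # placeholder package name from an enabled catalog
      channels:
        - stable
      version: "1.2.x"                       # version pinning: restrict updates to 1.2 z-streams
```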
The documentation for OLM v1 exists as a stand-alone guide called Extensions. Previously, OLM v1 documentation was a subsection of the Operators guide, which otherwise documents the OLM (Classic) feature set.
The updated location and guide name reflect a more focused documentation experience and aim to differentiate between OLM v1 and OLM (Classic).
1.3.4.2. OLM v1 supported extensions
Currently, Operator Lifecycle Manager (OLM) v1 supports installing cluster extensions that meet all of the following criteria:
- The extension must use the `registry+v1` bundle format introduced in OLM (Classic).
- The extension must support installation via the `AllNamespaces` install mode.
- The extension must not use webhooks.
- The extension must not declare dependencies by using any of the following file-based catalog properties:
  - `olm.gvk.required`
  - `olm.package.required`
  - `olm.constraint`

OLM v1 checks that the extension you want to install meets these constraints. If the extension that you want to install does not meet these constraints, an error message is printed in the cluster extension’s conditions.
1.3.4.3. Disconnected environment support in OLM v1
Starting in OpenShift Container Platform 4.18, OLM v1 supports internet-disconnected environments. Cluster administrators who prioritize high security, especially for mission-critical production workloads, often run their clusters in these environments.
After using the oc-mirror plugin for the OpenShift CLI (`oc`) to mirror the images required for your cluster to a mirror registry in your fully or partially disconnected environments, OLM v1 can function properly in these environments by using the sets of resources generated by either oc-mirror plugin v1 or v2.
For more information, see Disconnected environment support in OLM v1.
1.3.4.4. Improved catalog selection in OLM v1
With this release, you can perform the following actions to control the selection of catalog content when you install or update a cluster extension:
- Specify labels to select the catalog
- Use match expressions to filter across catalogs
- Set catalog priority
For more information, see Catalog content resolution.
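The following sketch shows one way catalog selection might look on a `ClusterExtension` object; the selector field location and the catalog label are assumptions for illustration and should be checked against the Catalog content resolution documentation.

```yaml
apiVersion: olm.operatorframework.io/v1
kind: ClusterExtension
metadata:
  name: example-extension
spec:
  namespace: example-extension-ns
  serviceAccount:
    name: example-extension-installer
  source:
    sourceType: Catalog
    catalog:
      packageName: example-operator
      selector:
        matchLabels:
          example.com/support-tier: production   # hypothetical label applied to a ClusterCatalog
```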
1.3.4.5. Basic support for proxied environments and trusted CA certificates
With this release, Operator Controller and catalogd can now run in proxied environments and include basic support for trusted CA certificates.
1.3.4.6. Compatibility with OpenShift Container Platform versions
Before cluster administrators can update their OpenShift Container Platform cluster to its next minor version, they must ensure that all installed Operators are updated to a bundle version that is compatible with the next minor version (4.y+1) of a cluster.
Starting in OpenShift Container Platform 4.18, OLM v1 supports the `olm.maxOpenShiftVersion` annotation in the cluster service version (CSV) of an Operator, similar to the behavior in OLM (Classic), to prevent administrators from updating the cluster before updating the installed Operator to a compatible version.
For more information, see Compatibility with OpenShift Container Platform versions.
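As a sketch, the annotation is carried in the Operator's CSV through the `olm.properties` annotation; the CSV name and version value below are placeholders.

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  name: example-operator.v1.2.3              # placeholder CSV name
  annotations:
    # Block cluster updates beyond the stated minor version until the Operator is updated
    olm.properties: '[{"type": "olm.maxOpenShiftVersion", "value": "4.18"}]'
```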
1.3.4.7. User access to extension resources
After a cluster extension has been installed and is being managed by Operator Lifecycle Manager (OLM) v1, the extension can often provide `CustomResourceDefinition` objects (CRDs) that expose new API resources on the cluster. Cluster administrators typically have full management access to these resources by default, whereas non-cluster administrator users, or regular users, might lack sufficient permissions.
OLM v1 does not automatically configure or manage role-based access control (RBAC) for regular users to interact with the APIs provided by installed extensions. Cluster administrators must define the required RBAC policy to create, view, or edit these custom resources (CRs) for such users.
For more information, see User access to extension resources.
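A minimal sketch of such an RBAC policy is shown below; the API group, resource, namespace, and group name are hypothetical and depend on the CRDs that your extension provides.

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: widget-editor
  namespace: example-project
rules:
  - apiGroups: ["example.com"]               # API group served by the extension's CRD (hypothetical)
    resources: ["widgets"]                   # custom resource provided by the extension (hypothetical)
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: widget-editors
  namespace: example-project
subjects:
  - kind: Group
    name: widget-team                        # hypothetical group of regular users
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: widget-editor
  apiGroup: rbac.authorization.k8s.io
```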
1.3.4.8. Runtime validation of container images using sigstore signatures in OLM v1 (Technology Preview)
Starting in OpenShift Container Platform 4.18, OLM v1 support for handling runtime validation of sigstore signatures for container images is available as a Technology Preview (TP) feature.
1.3.4.9. OLM v1 known issues
Operator Lifecycle Manager (OLM) v1 does not support the `OperatorConditions` API introduced in OLM (Classic).
If an extension relies on only the `OperatorConditions` API to manage updates, the extension might not install correctly. Most extensions that rely on this API fail at start time, but some might fail during reconciliation.
As a workaround, you can pin your extension to a specific version. When you want to update your extension, consult the extension’s documentation to find out when it is safe to pin the extension to a new version.
1.3.4.10. Deprecation of SiteConfig v1
SiteConfig v1 is deprecated starting with OpenShift Container Platform 4.18. Equivalent and improved functionality is now available through the SiteConfig Operator using the `ClusterInstance` custom resource. For more information, see the Red Hat Knowledge Base solution Procedure to transition from SiteConfig CRs to the ClusterInstance API.
For more information about the SiteConfig Operator, see SiteConfig.
1.3.5. Hosted control planes
Because hosted control planes is released asynchronously from OpenShift Container Platform, it has its own release notes. For more information, see Hosted control planes release notes.
1.3.6. IBM Power
The IBM Power® release on OpenShift Container Platform 4.18 adds improvements and new capabilities to OpenShift Container Platform components.
This release introduces support for the following features on IBM Power:
- Added four new data centers to PowerVS Installer Provisioned Infrastructure deployments
- Adding compute nodes to on-premise clusters using OpenShift CLI (`oc`)
1.3.7. IBM Z and IBM LinuxONE
With this release, IBM Z® and IBM® LinuxONE are now compatible with OpenShift Container Platform 4.18. You can perform the installation with z/VM, LPAR, or Red Hat Enterprise Linux (RHEL) Kernel-based Virtual Machine (KVM). For installation instructions, see Installation methods.
Compute nodes must run Red Hat Enterprise Linux CoreOS (RHCOS).
IBM Z and IBM LinuxONE notable enhancements
The IBM Z® and IBM® LinuxONE release on OpenShift Container Platform 4.18 adds improvements and new capabilities to OpenShift Container Platform components and concepts.
This release introduces support for the following features on IBM Z® and IBM® LinuxONE:
- Adding compute nodes to on-premise clusters using OpenShift CLI (`oc`)
IBM Power, IBM Z, and IBM LinuxONE support matrix
Starting in OpenShift Container Platform 4.14, Extended Update Support (EUS) is extended to the IBM Power® and IBM Z® platforms. For more information, see the OpenShift EUS Overview.
Feature | IBM Power® | IBM Z® and IBM® LinuxONE |
---|---|---|
Adding compute nodes to on-premise clusters using OpenShift CLI (`oc`) | Supported | Supported |
Alternate authentication providers | Supported | Supported |
Agent-based Installer | Supported | Supported |
Assisted Installer | Supported | Supported |
Automatic Device Discovery with Local Storage Operator | Unsupported | Supported |
Automatic repair of damaged machines with machine health checking | Unsupported | Unsupported |
Cloud controller manager for IBM Cloud® | Supported | Unsupported |
Controlling overcommit and managing container density on nodes | Unsupported | Unsupported |
CPU manager | Supported | Supported |
Cron jobs | Supported | Supported |
Descheduler | Supported | Supported |
Egress IP | Supported | Supported |
Encrypting data stored in etcd | Supported | Supported |
FIPS cryptography | Supported | Supported |
Helm | Supported | Supported |
Horizontal pod autoscaling | Supported | Supported |
Hosted control planes | Supported | Supported |
IBM Secure Execution | Unsupported | Supported |
Installer-provisioned Infrastructure Enablement for IBM Power® Virtual Server | Supported | Unsupported |
Installing on a single node | Supported | Supported |
IPv6 | Supported | Supported |
Monitoring for user-defined projects | Supported | Supported |
Multi-architecture compute nodes | Supported | Supported |
Multi-architecture control plane | Supported | Supported |
Multipathing | Supported | Supported |
Network-Bound Disk Encryption - External Tang Server | Supported | Supported |
Non-volatile memory express drives (NVMe) | Supported | Unsupported |
nx-gzip for Power10 (Hardware Acceleration) | Supported | Unsupported |
oc-mirror plugin | Supported | Supported |
OpenShift CLI (`oc`) | Supported | Supported |
Operator API | Supported | Supported |
OpenShift Virtualization | Unsupported | Supported |
OVN-Kubernetes, including IPsec encryption | Supported | Supported |
PodDisruptionBudget | Supported | Supported |
Precision Time Protocol (PTP) hardware | Unsupported | Unsupported |
Red Hat OpenShift Local | Unsupported | Unsupported |
Scheduler profiles | Supported | Supported |
Secure Boot | Unsupported | Supported |
Stream Control Transmission Protocol (SCTP) | Supported | Supported |
Support for multiple network interfaces | Supported | Supported |
The | Supported | Supported |
Three-node cluster support | Supported | Supported |
Topology Manager | Supported | Unsupported |
z/VM Emulated FBA devices on SCSI disks | Unsupported | Supported |
4K FCP block device | Supported | Supported |
Feature | IBM Power® | IBM Z® and IBM® LinuxONE |
---|---|---|
Persistent storage using iSCSI | Supported [1] | Supported [1],[2] |
Persistent storage using local volumes (LSO) | Supported [1] | Supported [1],[2] |
Persistent storage using hostPath | Supported [1] | Supported [1],[2] |
Persistent storage using Fibre Channel | Supported [1] | Supported [1],[2] |
Persistent storage using Raw Block | Supported [1] | Supported [1],[2] |
Persistent storage using EDEV/FBA | Supported [1] | Supported [1],[2] |
- Persistent shared storage must be provisioned by using either Red Hat OpenShift Data Foundation or other supported storage protocols.
- Persistent non-shared storage must be provisioned by using local storage, such as iSCSI, FC, or by using LSO with DASD, FCP, or EDEV/FBA.
Feature | IBM Power® | IBM Z® and IBM® LinuxONE |
---|---|---|
cert-manager Operator for Red Hat OpenShift | Supported | Supported |
Cluster Logging Operator | Supported | Supported |
Cluster Resource Override Operator | Supported | Supported |
Compliance Operator | Supported | Supported |
Cost Management Metrics Operator | Supported | Supported |
File Integrity Operator | Supported | Supported |
HyperShift Operator | Supported | Supported |
IBM Power® Virtual Server Block CSI Driver Operator | Supported | Unsupported |
Ingress Node Firewall Operator | Supported | Supported |
Local Storage Operator | Supported | Supported |
MetalLB Operator | Supported | Supported |
Network Observability Operator | Supported | Supported |
NFD Operator | Supported | Supported |
NMState Operator | Supported | Supported |
OpenShift Elasticsearch Operator | Supported | Supported |
Vertical Pod Autoscaler Operator | Supported | Supported |
Feature | IBM Power® | IBM Z® and IBM® LinuxONE |
---|---|---|
Bridge | Supported | Supported |
Host-device | Supported | Supported |
IPAM | Supported | Supported |
IPVLAN | Supported | Supported |
Feature | IBM Power® | IBM Z® and IBM® LinuxONE |
---|---|---|
Cloning | Supported | Supported |
Expansion | Supported | Supported |
Snapshot | Supported | Supported |
1.3.8. Insights Operator
1.3.8.1. Insights Runtime Extractor (Technology Preview)
In this release, the Insights Operator introduces the workload data collection Insights Runtime Extractor feature to help Red Hat better understand the workload of your containers. Available as a Technology Preview, the Insights Runtime Extractor feature gathers runtime workload data and sends it to Red Hat. Red Hat uses the collected runtime workload data to gain insights that can help you make investment decisions that will drive and optimize how you use your OpenShift Container Platform containers. For more information, see Enabling features using feature gates.
1.3.8.2. Rapid Recommendations
In this release, enhancements have been made to the Rapid Recommendations mechanism for remotely configuring the rules that determine the data that the Insights Operator collects.
The Rapid Recommendations feature is version-independent, and builds on the existing conditional data gathering mechanism.
The Insights Operator connects to a secure remote endpoint service running on console.redhat.com to retrieve definitions that contain the rules for determining which container log messages are filtered and collected by Red Hat.
The conditional data-gathering definitions get configured through an attribute named `conditionalGathererEndpoint` in the `pod.yml` configuration file.

```yaml
conditionalGathererEndpoint: https://console.redhat.com/api/gathering/v2/%s/gathering_rules
```
In earlier iterations, the rules for determining the data that the Insights Operator collects were hard-coded and tied to the corresponding OpenShift Container Platform version.
The preconfigured endpoint URL now provides a placeholder (`%s`) for defining a target version of OpenShift Container Platform.
1.3.8.3. More data collected and recommendations added
The Insights Operator now gathers more data to detect the following scenarios, which other applications can use to generate remedial recommendations to proactively manage your OpenShift Container Platform deployments:
- Collects resources from the `nmstate.io/v1` API group.
- Collects data from `clusterrole.rbac.authorization.k8s.io/v1` instances.
1.3.9. Installation and update
1.3.9.1. New version of the Cluster API Provider IBM Cloud
The installation program now uses a newer version of the Cluster API Provider IBM Cloud that includes Transit Gateway fixes. Because of the cost of Transit Gateways in IBM Cloud, you can now use OpenShift Container Platform to create a Transit Gateway when creating an OpenShift Container Platform cluster. For more information, see (OCPBUGS-37588) and (OCPBUGS-41938).
1.3.9.2. Configuring the ovn-kubernetes join subnet during cluster installation
With this release, you can configure the IPv4 join subnet that is used internally by `ovn-kubernetes` when installing a cluster. You can set the `internalJoinSubnet` parameter in the `install-config.yaml` file and deploy the cluster into an existing Virtual Private Cloud (VPC).
For more information, see Network configuration parameters.
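The following install-config.yaml excerpt is a sketch only; the exact nesting of the join subnet field should be confirmed against the Network configuration parameters reference, and the subnet value is a placeholder.

```yaml
networking:
  networkType: OVNKubernetes
  ovnKubernetesConfig:                  # assumed nesting; verify against the parameter reference
    ipv4:
      internalJoinSubnet: 100.65.0.0/16 # placeholder non-default IPv4 join subnet
```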
1.3.9.3. Introducing the oc adm upgrade recommend command (Technology Preview)
When updating your cluster, the `oc adm upgrade` command returns a list of the next available versions. As long as you are using the 4.18 `oc` client binary, you can use the `oc adm upgrade recommend` command to narrow down your suggestions and recommend a new target release before you launch your update. This feature is available for OpenShift Container Platform version 4.16 and newer clusters that are connected to an update service.
For more information, see Updating a cluster by using the CLI.
1.3.9.4. Support for Nutanix Cloud Clusters (NC2) on Amazon Web Services (AWS) and NC2 on Microsoft Azure
With this release, you can install OpenShift Container Platform on Nutanix Cloud Clusters (NC2) on AWS or NC2 on Azure.
For more information, see Infrastructure requirements.
1.3.9.5. Installing a cluster on Google Cloud Platform using the C4 and C4A machine series
With this release, you can deploy a cluster on GCP using the C4 and C4A machine series for compute or control plane machines. The supported disk type of these machines is `hyperdisk-balanced`. If you use an instance type that requires Hyperdisk storage, all of the nodes in your cluster must support Hyperdisk storage, and you must change the default storage class to use Hyperdisk storage.
For more information about configuring machine types, see Installation configuration parameters for GCP, C4 machine series (Compute Engine docs), and C4A machine series (Compute Engine docs).
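As an illustration, a default storage class that uses Hyperdisk Balanced disks with the GCP PD CSI driver might look like the following sketch; verify the parameter names against the GCP PD CSI driver documentation.

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hyperdisk-balanced
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"   # make this the cluster default
provisioner: pd.csi.storage.gke.io
parameters:
  type: hyperdisk-balanced
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
```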
1.3.9.6. Provide your own private hosted zone when installing a cluster on Google Cloud Platform
With this release, you can provide your own private hosted zone when installing a cluster on GCP into a shared VPC. If you do, the requirements for the bring your own (BYO) zone are that the zone must use a DNS name such as `<cluster_name>.<base_domain>.` and that you bind the zone to the VPC network of the cluster.
For more information, see Prerequisites for installing a cluster on GCP into a shared VPC and Prerequisites for installing a cluster into a shared VPC on GCP using Deployment Manager templates.
1.3.9.7. Installing a cluster on Nutanix by using a preloaded RHCOS image object
With this release, you can install a cluster on Nutanix by using the named, preloaded RHCOS image object from the private cloud or the public cloud. Rather than creating and uploading an RHCOS image object for each OpenShift Container Platform cluster, you can use the `preloadedOSImageName` parameter in the `install-config.yaml` file.
For more information, see Additional Nutanix configuration parameters.
1.3.9.8. Single-stack IPv6 clusters on RHOSP
You can now deploy single-stack IPv6 clusters on RHOSP.
You must configure RHOSP prior to deploying your OpenShift Container Platform cluster. For more information, see Configuring a cluster with single-stack IPv6 networking.
1.3.9.9. Installing a cluster on Nutanix with multiple subnets
With this release, you can install a Nutanix cluster with more than one subnet for the Prism Element into which you are deploying an OpenShift Container Platform cluster.
For more information, see Configuring failure domains and Additional Nutanix configuration parameters.
For an existing Nutanix cluster, you can add multiple subnets by using compute or control plane machine sets.
1.3.9.10. Installing a cluster on VMware vSphere with multiple network interface controllers (Technology Preview)
With this release, you can install a VMware vSphere cluster with multiple network interface controllers (NICs) for a node.
For more information, see Configuring multiple NICs.
For an existing vSphere cluster, you can add multiple subnets by using compute machine sets.
1.3.9.11. Configuring 4 and 5 node control planes with the Agent-based Installer
With this release, if you are using the Agent-based Installer, you can now configure your cluster to be installed with either 4 or 5 nodes in the control plane. This feature is enabled by setting the `controlPlane.replicas` parameter to either `4` or `5` in the `install-config.yaml` file.
For more information, see Optional configuration parameters for the Agent-based Installer.
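A trimmed install-config.yaml sketch with a five-node control plane follows; the domain, cluster name, and replica counts are placeholders, and platform-specific sections are omitted.

```yaml
apiVersion: v1
baseDomain: example.com            # placeholder base domain
metadata:
  name: example-cluster            # placeholder cluster name
controlPlane:
  name: master
  replicas: 5                      # 4 and 5 control plane nodes are now supported
compute:
  - name: worker
    replicas: 2
```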
1.3.9.12. Minimal ISO image support for the Agent-based Installer
With this release, the Agent-based Installer supports creating a minimal ISO image on all supported platforms. Previously, minimal ISO images were supported only on the `external` platform.
This feature is enabled using the `minimalISO` parameter in the `agent-config.yaml` file.
For more information, see Optional configuration parameters for the Agent-based Installer.
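A sketch of an agent-config.yaml that enables a minimal ISO follows; the API version, cluster name, and rendezvous IP are placeholders, so match them to your own generated configuration.

```yaml
apiVersion: v1beta1                # match the API version in your generated agent-config.yaml
kind: AgentConfig
metadata:
  name: example-cluster            # placeholder cluster name
rendezvousIP: 192.168.111.80       # placeholder rendezvous IP
minimalISO: true                   # generate a minimal ISO instead of a full ISO
```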
1.3.9.13. Internet Small Computer System Interface (iSCSI) boot support for the Agent-based Installer
With this release, the Agent-based Installer supports creating assets that can be used to boot an OpenShift Container Platform cluster from an iSCSI target.
For more information, see Preparing installation assets for iSCSI booting.
1.3.10. Machine Config Operator
1.3.10.1. Updated boot images for AWS clusters promoted to GA
The updated boot images feature has been promoted to GA for Amazon Web Services (AWS) clusters. For more information, see Updated boot images.
1.3.10.2. Expanded machine config nodes information (Technology Preview)
The machine config nodes custom resource, which you can use to monitor the progress of machine configuration updates to nodes, now presents more information about the update. The output of the `oc get machineconfignodes` command now reports on the following and other conditions. You can use these statuses to follow the update, or to troubleshoot the node if it experiences an error during the update:
- If each node was cordoned and uncordoned
- If each node was drained
- If each node was rebooted
- If a node had a CRI-O reload
- If a node had the operating system and node files updated
1.3.10.3. On-cluster layering changes (Technology Preview)
There are several important changes to the on-cluster layering feature:
- You can now install extensions onto an on-cluster custom layered image by using a `MachineConfig` object.
- Updating the Containerfile in a `MachineOSConfig` object now triggers a build to be performed.
- You can now revert an on-cluster custom layered image back to the base image by removing a label from the `MachineOSConfig` object.
- The `must-gather` for the Machine Config Operator now includes data on the `MachineOSConfig` and `MachineOSBuild` objects.
For more information about on-cluster layering, see Using on-cluster layering to apply a custom layered image.
1.3.11. Management console
1.3.11.1. Checkbox for enabling cluster monitoring is marked by default
With this update, the checkbox for enabling cluster monitoring is now checked by default when installing the OpenShift Lightspeed Operator. (OCPBUGS-42381)
1.3.12. Monitoring
The in-cluster monitoring stack for this release includes the following new and modified features:
1.3.12.1. Updates to monitoring stack components and dependencies
This release includes the following version updates for in-cluster monitoring stack components and dependencies:
- Metrics Server to 0.7.2
- Prometheus to 2.55.1
- Prometheus Operator to 0.78.1
- Thanos to 0.36.1
1.3.12.2. Added scrape and evaluation intervals for user workload monitoring Prometheus
With this update, you can configure the intervals between consecutive scrapes and between rule evaluations for Prometheus for user workload monitoring.
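For illustration, a user-workload-monitoring-config config map with custom intervals might look like the following sketch; the interval values are placeholders.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: user-workload-monitoring-config
  namespace: openshift-user-workload-monitoring
data:
  config.yaml: |
    prometheus:
      scrapeInterval: 1m        # interval between consecutive scrapes
      evaluationInterval: 1m    # interval between consecutive rule evaluations
```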
1.3.12.3. Added early validation for the monitoring configurations in monitoring config maps
This update introduces early validation for changes to monitoring configurations in the `cluster-monitoring-config` and `user-workload-monitoring-config` config maps to provide shorter feedback loops and enhance user experience.
1.3.12.4. Added the proxy environment variables to Alertmanager containers
With this update, Alertmanager uses the proxy environment variables. Therefore, if you configured an HTTP cluster-wide proxy, you can enable proxying by setting the `proxy_from_environment` parameter to `true` in your alert receivers or at the global config level in Alertmanager.
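An Alertmanager configuration excerpt that opts in to the proxy environment variables might look like the following sketch; the receiver name and webhook URL are placeholders.

```yaml
global:
  http_config:
    proxy_from_environment: true         # use the proxy environment variables globally
route:
  receiver: example-webhook
receivers:
  - name: example-webhook                # placeholder receiver
    webhook_configs:
      - url: https://alerts.example.com/hook   # placeholder endpoint
        http_config:
          proxy_from_environment: true   # or enable it per receiver
```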
1.3.12.5. Added cross-project user workload alerting and recording rules
With this update, you can create user workload alerting and recording rules that query multiple projects at the same time.
1.3.12.6. Correlating cluster metrics with RHOSO metrics
You can now correlate observability metrics for clusters that run on Red Hat OpenStack Services on OpenShift (RHOSO). By collecting metrics from both environments, you can monitor and troubleshoot issues across the infrastructure and application layers.
For more information, see Monitoring clusters that run on RHOSO.
1.3.13. Network Observability Operator
The Network Observability Operator releases updates independently from the OpenShift Container Platform minor version release stream. Updates are available through a single, rolling stream which is supported on all currently supported versions of OpenShift Container Platform 4. Information regarding new features, enhancements, and bug fixes for the Network Observability Operator is found in the Network Observability release notes.
1.3.14. Networking
1.3.14.1. Holdover in a grandmaster clock with GNSS as the source
With this release, you can configure the holdover behavior in a grandmaster (T-GM) clock with Global Navigation Satellite System (GNSS) as the source. Holdover allows the T-GM clock to maintain synchronization performance when the GNSS source is unavailable. During this period, the T-GM clock relies on its internal oscillator and holdover parameters to reduce timing disruptions.
You can define the holdover behavior by configuring the following holdover parameters in the `PtpConfig` custom resource (CR):

- `MaxInSpecOffset`
- `LocalHoldoverTimeout`
- `LocalMaxHoldoverOffSet`
For more information, see Holdover in a grandmaster clock with GNSS as the source.
1.3.14.2. Support for configuring a multi-network policy for IPVLAN and Bond CNI
With this release, you can configure a multi-network policy for the following network types:
- IP Virtual Local Area Network (IPVLAN)
- Bond Container Network Interface (CNI) over SR-IOV
For more information, see Configuring multi-network policy.
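As a sketch, a deny-all ingress policy applied to an IPVLAN secondary network might look like the following; the namespace and network attachment name are placeholders.

```yaml
apiVersion: k8s.cni.cncf.io/v1beta1
kind: MultiNetworkPolicy
metadata:
  name: deny-by-default
  namespace: example-namespace
  annotations:
    # Apply the policy to traffic on the named secondary network (placeholder name)
    k8s.v1.cni.cncf.io/policy-for: example-namespace/ipvlan-net
spec:
  podSelector: {}          # select all pods attached to the secondary network
  policyTypes:
    - Ingress
  ingress: []              # no ingress rules: deny all ingress on this network
```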
1.3.14.3. Updated terminology for whitelist and blacklist annotations
The terminology for the `ip_whitelist` and `ip_blacklist` annotations has been updated to `ip_allowlist` and `ip_denylist`, respectively. Currently, OpenShift Container Platform still supports the `ip_whitelist` and `ip_blacklist` annotations. However, these annotations are planned for removal in a future release.
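For illustration, a route that uses the new annotation might look like the following sketch; the full annotation key shown here is assumed from the existing `ip_whitelist` naming, and the addresses are placeholders.

```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: example-route
  annotations:
    # Restrict access to the route to the listed source addresses (assumed annotation key)
    haproxy.router.openshift.io/ip_allowlist: 192.168.1.10 10.0.0.0/8
spec:
  to:
    kind: Service
    name: example-service
```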
1.3.14.4. Checking OVN-Kubernetes network traffic with OVS sampling using the CLI
OVN-Kubernetes network traffic can be viewed with OVS sampling via the CLI for the following network APIs:

- `NetworkPolicy`
- `AdminNetworkPolicy`
- `BaselineAdminNetworkPolicy`
- `UserDefinedNetwork` isolation
- `EgressFirewall`
- Multicast ACLs
Checking OVN-Kubernetes network traffic with OVS sampling using the CLI is intended to help with packet tracing. It can also be used while the Network Observability Operator is installed.
For more information, see Checking OVN-Kubernetes network traffic with OVS sampling using the CLI.
1.3.14.5. User-defined network segmentation (Generally Available)
With OpenShift Container Platform 4.18, user-defined network segmentation is generally available. User-defined networks (UDN) introduce enhanced network segmentation capabilities by allowing administrators to define custom network topologies using namespace-scoped UserDefinedNetwork and cluster-scoped ClusterUserDefinedNetwork custom resources.
With UDNs, administrators can create tailored network topologies with enhanced isolation, IP address management for workloads, and advanced networking features. Supporting both Layer 2 and Layer 3 topology types, user-defined network segmentation enables a wide range of network architectures and topologies, enhancing network flexibility, security, and performance. For more information on supported features, see UDN support matrix.
Use cases of UDN include providing virtual machines (VMs) with a lifetime duration for static IP addresses assignment as well as a Layer 2 primary pod network so that users can live migrate VMs between nodes. These features are all fully equipped in OpenShift Virtualization. Users can use UDNs to create a stronger, native multi-tenant environment, allowing you to secure your overlay Kubernetes network, which is otherwise open by default. For more information, see About user-defined networks.
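A minimal sketch of a namespace-scoped Layer 2 primary UDN follows; the namespace and subnet are placeholders, and the fields should be checked against About user-defined networks.

```yaml
apiVersion: k8s.ovn.org/v1
kind: UserDefinedNetwork
metadata:
  name: udn-layer2
  namespace: example-namespace       # placeholder namespace
spec:
  topology: Layer2
  layer2:
    role: Primary                    # use this UDN as the primary network for the namespace
    subnets:
      - 10.100.0.0/16                # placeholder subnet for workload IP assignment
```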
1.3.14.6. The dynamic configuration manager is enabled by default (Technology Preview)
You can reduce your memory footprint by using the dynamic configuration manager on Ingress Controllers. The dynamic configuration manager propagates endpoint changes through a dynamic API. This process enables the underlying routers to adapt to changes (scale ups and scale downs) without reloads.
To use the dynamic configuration manager, enable the `TechPreviewNoUpgrade` feature set by running the following command:

```
$ oc patch featuregates cluster -p '{"spec": {"featureSet": "TechPreviewNoUpgrade"}}' --type=merge
```
1.3.14.7. Additional environments for the network flow matrix
With this release, you can view network information for ingress flows to OpenShift Container Platform services in the following environments:
- OpenShift Container Platform on bare metal
- Single-node OpenShift on bare metal
- OpenShift Container Platform on Amazon Web Services (AWS)
- Single-node OpenShift on AWS
For more information, see OpenShift Container Platform network flow matrix.
1.3.14.8. MetalLB updates for Border Gateway Protocol
With this release, MetalLB includes a new field for the Border Gateway Protocol (BGP) peer custom resource. You can use the `dynamicASN` field to detect the Autonomous System Number (ASN) to use for the remote end of a BGP session. This is an alternative to explicitly setting an ASN in the `spec.peerASN` field.
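A BGP peer sketch that uses ASN detection follows; the ASN and peer address are placeholders.

```yaml
apiVersion: metallb.io/v1beta2
kind: BGPPeer
metadata:
  name: example-peer
  namespace: metallb-system
spec:
  myASN: 64500                 # placeholder local ASN
  peerAddress: 172.30.0.3      # placeholder peer address
  dynamicASN: external         # detect the remote ASN instead of setting spec.peerASN
```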
1.3.14.9. Configuring an RDMA subsystem for SR-IOV
With this release, you can configure a Remote Direct Memory Access (RDMA) Container Network Interface (CNI) on Single Root I/O Virtualization (SR-IOV) to enable high-performance, low-latency communication between containers. When you combine RDMA with SR-IOV, you provide a mechanism to expose hardware counters of Mellanox Ethernet devices to be used inside Data Plane Development Kit (DPDK) applications.
1.3.14.10. Support configuring the SR-IOV Network Operator on a Secure-Boot-enabled environment for Mellanox cards
With this release, you can configure the Single Root I/O Virtualization (SR-IOV) Network Operator when the system has secure boot enabled. The SR-IOV Operator is configured after you first manually configure the firmware for Mellanox devices. With secure boot enabled, the resilience of your system is enhanced, and a crucial layer of defense for the overall security of your computer is provided.
For more information, see Configuring the SR-IOV Network Operator on Mellanox cards when Secure Boot is enabled.
1.3.14.11. Support for pre-created RHOSP floating IP addresses in the Ingress Controller
With this release, you can now specify pre-created floating IP addresses in the Ingress Controller for your clusters running on RHOSP.
For more information, see Specifying a floating IP address in the Ingress Controller.
1.3.14.12. SR-IOV Network Operator support extension
The SR-IOV Network Operator now supports Intel NetSec Accelerator Cards and Marvell Octeon 10 DPUs. (OCPBUGS-43451)
1.3.14.13. Using a Linux bridge interface as the OVS default port connection
The OVN-Kubernetes plugin can now use a Linux bridge interface as the Open vSwitch (OVS) default port connection. This means that a network interface controller, such as SmartNIC, can now bridge the underlying network with a host. (OCPBUGS-39226)
1.3.14.14. Cluster Network Operator exposing network overlap metrics for an issue
When you start the limited live migration method and an issue exists with network overlap, the Cluster Network Operator (CNO) can now expose network overlap metrics for the issue. This is possible because the `openshift_network_operator_live_migration_blocked` metric now includes the new `NetworkOverlap` label. (OCPBUGS-39096)
1.3.14.15. Network attachments support dynamic reconfiguration
Previously, the `NetworkAttachmentDefinition` CR was immutable. With this release, you can edit an existing `NetworkAttachmentDefinition` CR. Support for editing makes it easier to accommodate changes in the underlying network infrastructure, such as adjusting the MTU of a network interface.
You must ensure that the configurations of each `NetworkAttachmentDefinition` CR that reference the same network `name` and `type: ovn-k8s-cni-overlay` are in sync. Only when these values are in sync is the network attachment update successful. If the configurations are not in sync, the behavior is undefined because there is no guarantee about which `NetworkAttachmentDefinition` CR OpenShift Container Platform uses for the configuration.
You still must restart any workloads that use the network attachment definition for the network changes to take effect for those pods.
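For illustration, an editable OVN-Kubernetes network attachment definition might look like the following sketch; the namespace, network name, and MTU are placeholders.

```yaml
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: l2-overlay
  namespace: example-namespace
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "name": "l2-overlay-network",
      "type": "ovn-k8s-cni-overlay",
      "topology": "layer2",
      "mtu": 1400,
      "netAttachDefName": "example-namespace/l2-overlay"
    }
```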
1.3.15. Nodes
1.3.15.1. crun is now the default container runtime
crun is now the default container runtime for new containers created in OpenShift Container Platform. The runC runtime is still supported, and you can change the default runtime to runC if needed. For more information about crun, see About the container engine and container runtime. For information about changing the default to runC, see Creating a ContainerRuntimeConfig CR to edit CRI-O parameters.
Updating from OpenShift Container Platform 4.17.z to OpenShift Container Platform 4.18 does not change your container runtime.
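For illustration, a ContainerRuntimeConfig that switches the worker pool back to runC might look like the following sketch; the object name is a placeholder.

```yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: ContainerRuntimeConfig
metadata:
  name: set-runc-default               # placeholder name
spec:
  machineConfigPoolSelector:
    matchLabels:
      pools.operator.machineconfiguration.openshift.io/worker: ""   # target the worker pool
  containerRuntimeConfig:
    defaultRuntime: runc               # switch the default runtime back to runC
```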
1.3.15.2. sigstore support (Technology Preview)
Available as a Technology Preview, you can use the sigstore project with OpenShift Container Platform to improve supply chain security. You can create signature policies at the cluster-wide level or for a specific namespace. For more information, see Manage secure signatures with sigstore.
1.3.15.3. Enhancements to process for adding nodes
Enhancements have been added to the process for adding worker nodes to an on-premise cluster that was introduced in OpenShift Container Platform 4.17. With this release, you can now generate Preboot Execution Environment (PXE) assets instead of an ISO image file, and you can configure reports to be generated regardless of whether the node creation process fails or not.
1.3.15.4. Node Tuning Operator properly selects kernel arguments
The Node Tuning Operator can now properly select kernel arguments and management options for Intel and AMD CPUs. (OCPBUGS-43664)
1.3.15.5. Default container runtime is not always set properly
The default container runtime that is set by the cluster Node Tuning Operator is always inherited from the cluster, and is not hard-coded by the Operator. Starting with this release, the default value is `crun`. (OCPBUGS-45450)
1.3.16. OpenShift CLI (oc)
1.3.16.1. oc-mirror plugin v2 (Generally Available)
oc-mirror plugin v2 is now generally available. To use it, add the `--v2` flag when running oc-mirror commands. The previous version (oc-mirror plugin v1), which runs when the `--v2` flag is not set, is now deprecated. It is recommended to transition to oc-mirror plugin v2 for continued support and improvements.
For more information, see Mirroring images for a disconnected installation by using the oc-mirror plugin v2.
oc-mirror plugin v2 now supports mirroring Helm charts. Also, oc-mirror plugin v2 can now be used in environments where an HTTP/S proxy is enabled, ensuring broader compatibility with enterprise setups.
oc-mirror plugin v2 introduces v1 retro-compatible filtering of Operator catalogs and generates filtered catalogs. This feature allows cluster administrators to view only the Operators that have been mirrored, rather than the complete list from the origin catalog.
1.3.17. Operator lifecycle
1.3.17.1. Existing version of Operator Lifecycle Manager now known as OLM (Classic)
With the release of Operator Lifecycle Manager (OLM) v1 as a General Availability (GA) feature, starting in OpenShift Container Platform 4.18, the existing version of OLM that has been included since the launch of OpenShift Container Platform 4 is now known as OLM (Classic).
OLM (Classic) remains enabled by default and fully supported throughout the OpenShift Container Platform 4 lifecycle.
For more information on the GA release of OLM v1, see the Extensions (OLM v1) release note sections. For full documentation focused on OLM v1, see the stand-alone Extensions guide.
For full documentation focused on OLM (Classic), continue referring to the Operators guide.
1.3.18. Machine management
1.3.18.1. Managing machines with the Cluster API for Microsoft Azure (Technology Preview)
This release introduces the ability to manage machines by using the upstream Cluster API, integrated into OpenShift Container Platform, as a Technology Preview for Microsoft Azure clusters. This capability is available in addition to, or as an alternative to, managing machines with the Machine API. For more information, see About the Cluster API.
1.3.19. Oracle(R) Cloud Infrastructure (OCI)
1.3.19.1. Bare-metal support on Oracle(R) Cloud Infrastructure (OCI)
OpenShift Container Platform cluster installations on Oracle® Cloud Infrastructure (OCI) are now supported for bare-metal machines. You can install bare-metal clusters on OCI by using either the Assisted Installer or the Agent-based Installer. To install a bare-metal cluster on OCI, choose one of the following installation options:
1.3.20. Postinstallation configuration
1.3.20.1. Migrating the x86 control plane to arm64 architecture on Amazon Web Services
With this release, you can migrate the control plane in your cluster from `x86` to `arm64` architecture on Amazon Web Services (AWS). For more information, see Migrating the x86 control plane to arm64 architecture on Amazon Web Services.
1.3.20.2. Configuring the image stream import mode behavior (Technology Preview)
This feature introduces a new field, `imageStreamImportMode`, in the `image.config.openshift.io/cluster` resource. The `imageStreamImportMode` field controls the import mode behavior of image streams. You can set the `imageStreamImportMode` field to either of the following values:

- `Legacy`
- `PreserveOriginal`
For more information, see Image controller configuration parameters.
You must enable the `TechPreviewNoUpgrade` feature set in the `FeatureGate` custom resource (CR) to enable the `imageStreamImportMode` feature. For more information, see Understanding feature gates.
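For illustration, the setting might be applied to the cluster image configuration as in the following sketch; the exact field placement should be verified against the Image controller configuration parameters documentation.

```yaml
apiVersion: config.openshift.io/v1
kind: Image
metadata:
  name: cluster
spec:
  imageStreamImportMode: PreserveOriginal   # or Legacy
```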
1.3.21. Red Hat Enterprise Linux CoreOS (RHCOS)
1.3.21.1. RHCOS uses RHEL 9.4
RHCOS uses Red Hat Enterprise Linux (RHEL) 9.4 packages in OpenShift Container Platform 4.18. These packages ensure that your OpenShift Container Platform instances receive the latest fixes, features, enhancements, hardware support, and driver updates.
1.3.22. Registry
Read-only registry enhancements
In previous versions of OpenShift Container Platform, storage mounted as read-only returned no specific metrics or information about storage errors. This could result in silent failures of a registry when the storage backend was read-only. With this release, the following alerts have been added to return storage information when the backend is set to read-only:
Alert Name | Message |
---|---|
| The image registry storage is read-only and no images will be committed to storage. |
| The image registry storage disk is full and no images will be committed to storage. |
1.3.23. Scalability and performance
1.3.23.1. Cluster validation with the cluster-compare plugin
The `cluster-compare` plugin is an OpenShift CLI (`oc`) plugin that compares a cluster configuration with a target configuration. The plugin reports configuration differences while suppressing expected variations by using configurable validation rules and templates.
For example, the plugin can highlight unexpected differences, such as mismatched field values, missing resources, or version discrepancies, while ignoring expected differences, such as optional components or hardware-specific fields. This focused comparison makes it easier to assess cluster compliance with the target configuration.
You can use the `cluster-compare` plugin in development, production, and support scenarios.
For more information about the `cluster-compare` plugin, see Overview of the cluster-compare plugin.
1.3.23.2. Node Tuning Operator: Deferred Tuning Updates
In this release, the Node Tuning Operator introduces support for deferring tuning updates. Administrators can schedule updates to be applied during a maintenance window with this feature.
For more information, see Deferring application of tuning changes.
1.3.23.3. NUMA Resources Operator now uses default SELinux policy
With this release, the NUMA Resources Operator no longer creates a custom SELinux policy to enable the installation of Operator components on a target node. Instead, the Operator uses a built-in container SELinux policy. This change removes the additional node reboot that was previously required when applying a custom SELinux policy during an installation.
In clusters with an existing NUMA-aware scheduler configuration, upgrading to OpenShift Container Platform 4.18 might result in an additional reboot for each configured node. For further information about how to manage an upgrade in this scenario and limit disruption, see the Red Hat Knowledgebase article Managing an upgrade to OpenShift Container Platform 4.18 or later for a cluster with an existing NUMA-aware scheduler configuration.
1.3.23.4. Node Tuning Operator platform detection
With this release, when you apply a performance profile, the Node Tuning Operator detects the platform and configures kernel arguments and other platform-specific options accordingly. This release adds support for detecting the following platforms:
- AMD64
- AArch64
- Intel 64
1.3.23.5. Support for worker nodes with AMD EPYC Zen 4 CPUs
With this release, you can use the `PerformanceProfile` custom resource (CR) to configure worker nodes on machines equipped with AMD EPYC Zen 4 CPUs (Genoa and Bergamo). These CPUs are fully supported.
The per pod power management feature is not functional on AMD EPYC Zen 4 CPUs.
1.3.24. Storage
1.3.24.1. Over-provisioning ratio update after LVMCluster custom resource creation
Previously, the `thinPoolConfig.overprovisionRatio` field in the `LVMCluster` custom resource (CR) could be configured only during the creation of the `LVMCluster` CR. With this release, you can now update the `thinPoolConfig.overprovisionRatio` field even after creating the `LVMCluster` CR.
1.3.24.2. Support for configuring metadata size for the thin pool
This feature provides the following new optional fields in the `LVMCluster` custom resource (CR):

- `thinPoolConfig.metadataSizeCalculationPolicy`: Specifies the policy to calculate the metadata size for the underlying volume group. You can set this field to either `Static` or `Host`. By default, this field is set to `Host`.
- `thinPoolConfig.metadataSize`: Specifies the metadata size for the thin pool. You can configure this field only when the `metadataSizeCalculationPolicy` field is set to `Static`.
For more information, see About the LVMCluster custom resource.
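A sketch of an LVMCluster CR that sets a static metadata size follows; the device class, sizes, and namespace are placeholders, and the nesting under `storage.deviceClasses` follows the LVMCluster documentation.

```yaml
apiVersion: lvm.topolvm.io/v1alpha1
kind: LVMCluster
metadata:
  name: my-lvmcluster
  namespace: openshift-storage
spec:
  storage:
    deviceClasses:
      - name: vg1
        default: true
        thinPoolConfig:
          name: thin-pool-1
          sizePercent: 90
          overprovisionRatio: 10                   # can now be updated after creation
          metadataSizeCalculationPolicy: Static    # or Host (default)
          metadataSize: 1Gi                        # valid only when the policy is Static
```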
1.3.24.3. Persistent storage using CIFS/SMB CSI Driver Operator is generally available
OpenShift Container Platform is capable of provisioning persistent volumes (PVs) with a Container Storage Interface (CSI) driver for the Common Internet File System (CIFS) dialect/Server Message Block (SMB) protocol. The CIFS/SMB CSI Driver Operator that manages this driver was introduced in OpenShift Container Platform 4.16 with Technology Preview status. In OpenShift Container Platform 4.18, it is now generally available.
For more information, see CIFS/SMB CSI Driver Operator.
1.3.24.4. Secret Store CSI Driver Operator is generally available
The Secrets Store Container Storage Interface (CSI) Driver Operator, `secrets-store.csi.k8s.io`, allows OpenShift Container Platform to mount multiple secrets, keys, and certificates stored in enterprise-grade external secrets stores into pods as an inline ephemeral volume. The Secrets Store CSI Driver Operator communicates with the provider using gRPC to fetch the mount contents from the specified external secrets store. After the volume is attached, the data in it is mounted into the container’s file system. The Secrets Store CSI Driver Operator was available in OpenShift Container Platform 4.14 as a Technology Preview feature. OpenShift Container Platform 4.18 introduces this feature as generally available.
For more information about the Secrets Store CSI driver, see Secrets Store CSI Driver Operator.
For information about using the Secrets Store CSI Driver Operator to mount secrets from an external secrets store to a CSI volume, see Providing sensitive data to pods by using an external secrets store.
1.3.24.5. Persistent volume last phase transition time parameter is generally available
OpenShift Container Platform 4.16 introduced a new parameter, `LastPhaseTransitionTime`, which has a timestamp that is updated every time a persistent volume (PV) transitions to a different phase (`pv.Status.Phase`). For OpenShift Container Platform 4.18, this feature is generally available.
For more information about using the persistent volume last phase transition time parameter, see Last phase transition time.
1.3.24.6. Multiple vCenter support for vSphere CSI is generally available
OpenShift Container Platform 4.17 introduced the ability to deploy OpenShift Container Platform across multiple vSphere clusters (vCenters) as a Technology Preview feature. In OpenShift Container Platform 4.18, Multiple vCenter support is now generally available.
For more information, see Multiple vCenter support for vSphere CSI and Installation configuration parameters for vSphere.
1.3.24.7. Always honor persistent volume reclaim policy (Technology Preview)
Prior to OpenShift Container Platform 4.18, the persistent volume (PV) reclaim policy was not always applied.
For a bound PV and persistent volume claim (PVC) pair, the ordering of PV-PVC deletion determined whether the PV delete reclaim policy was applied or not. The PV applied the reclaim policy if the PVC was deleted prior to deleting the PV. However, if the PV was deleted prior to deleting the PVC, then the reclaim policy was not applied. As a result of that behavior, the associated storage asset in the external infrastructure was not removed.
With OpenShift Container Platform 4.18, the PV reclaim policy is always applied consistently. This feature has Technology Preview status.
For more information, see Reclaim policy for persistent volumes.
1.3.24.8. Improved ability to easily remove LVs or LVSs for LSO is generally available
For the Local Storage Operator (LSO), OpenShift Container Platform 4.18 improves the ability to remove Local Volumes (LVs) and Local Volume Sets (LVSs) by automatically removing artifacts, thus reducing the number of steps required.
For more information, see Removing a local volume or local volume set.
1.3.24.9. CSI volume group snapshots (Technology Preview)
OpenShift Container Platform 4.18 introduces Container Storage Interface (CSI) volume group snapshots as a Technology Preview feature. This feature needs to be supported by the CSI driver. CSI volume group snapshots use a label selector to group multiple persistent volume claims (PVCs) for snapshotting. A volume group snapshot represents copies from multiple volumes that are taken at the same point-in-time. This can be useful for applications that contain multiple volumes.
OpenShift Data Foundation supports volume group snapshots.
For more information about CSI volume group snapshots, see CSI volume group snapshots.
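A minimal sketch of what such a request can look like, based on the upstream volume group snapshot API; the API version, class name, and labels shown here are assumptions and must match what your cluster and CSI driver actually provide:

```yaml
apiVersion: groupsnapshot.storage.k8s.io/v1beta1   # API version is an assumption; check your cluster
kind: VolumeGroupSnapshot
metadata:
  name: example-group-snapshot
  namespace: example-app
spec:
  volumeGroupSnapshotClassName: example-group-snapshot-class   # class supplied by a supporting CSI driver
  source:
    selector:
      matchLabels:
        group: example-app-data        # selects every PVC in the namespace with this label
```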
1.3.24.10. GCP PD CSI driver support for the C3 instance type for bare metal and N4 machine series is generally available
The Google Cloud Platform Persistent Disk (GCP PD) Container Storage Interface (CSI) driver supports the C3 instance type for bare metal and the N4 machine series. The C3 instance type and N4 machine series support hyperdisk-balanced disks.
Additionally, hyperdisk storage pools are supported for large-scale storage. A hyperdisk storage pool is a purchased collection of capacity, throughput, and IOPS, which you can then provision for your applications as needed.
For OpenShift Container Platform 4.18, this feature is generally available.
For more information, see C3 instance type for bare metal and N4 machine series.
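As a hedged illustration, a StorageClass for hyperdisk-balanced disks with the GCP PD CSI driver generally takes the following shape; the parameter values are assumptions to verify against the driver documentation:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: example-hyperdisk-balanced     # example name
provisioner: pd.csi.storage.gke.io     # GCP PD CSI driver
parameters:
  type: hyperdisk-balanced             # assumed value; verify against the driver documentation
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
```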
1.3.24.11. OpenStack Manila expanding persistent volumes is generally available
In OpenShift Container Platform 4.18, OpenStack Manila supports expanding Container Storage Interface (CSI) persistent volumes (PVs). This feature is generally available.
For more information, see Expanding persistent volumes and CSI drivers supported by OpenShift Container Platform.
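Expansion follows the usual CSI pattern: the StorageClass must allow volume expansion, and you increase the storage request on the bound PVC. The names below are illustrative only:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-manila-share            # example PVC backed by the Manila CSI driver
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: example-manila-sc   # example class with allowVolumeExpansion: true
  resources:
    requests:
      storage: 20Gi                     # increase this value to expand the bound volume
```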
1.3.24.12. GCP Filestore support for Workload Identity is generally available
In OpenShift Container Platform 4.18, Google Cloud Platform (GCP) Filestore Container Storage Interface (CSI) storage supports Workload Identity. This allows users to access Google Cloud resources by using federated identities instead of a service account key. For OpenShift Container Platform 4.18, this feature is generally available.
For more information, see Google Compute Platform Filestore CSI Driver Operator.
1.3.25. Web console
1.3.25.1. Administrator perspective
This release introduces the following updates to the Administrator perspective of the web console:
- A new setting hides the Getting started resources card on the Overview page, allowing for maximum use of the dashboard.
- A Start Job option was added to the CronJob List and Details pages, so you can start individual CronJobs manually in the web console without having to use the oc CLI.
- The Import YAML button in the masthead is now a Quick Create button that you can use for rapid deployment of workloads by importing from YAML, from Git, or by using container images.
- You can build your own generative AI chat bot with a chat bot sample. The generative AI chat bot sample is deployed with Helm and includes a full CI/CD pipeline. You can also run this sample on your cluster with no GPUs.
- You can import YAML into the console using OpenShift Lightspeed.
1.3.25.1.1. Content Security Policy (CSP)
With this release, the console Content Security Policy (CSP) is deployed in report-only mode. CSP violations are logged in the browser console, but the associated CSP directives are not enforced. Dynamic plugin creators can add their own policies, and you can report any plugins that break security policies. Administrators can disable any plugin that breaks those policies. This feature is behind a feature gate, so you must enable it manually.
For more information, see Content Security Policy (CSP) and Enabling feature sets using the web console.
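Feature sets are enabled cluster-wide through the FeatureGate resource named cluster. The following sketch assumes the TechPreviewNoUpgrade feature set; check the linked documentation for the feature set that actually carries this gate, and note that enabling TechPreviewNoUpgrade cannot be undone and prevents minor version updates:

```yaml
apiVersion: config.openshift.io/v1
kind: FeatureGate
metadata:
  name: cluster
spec:
  featureSet: TechPreviewNoUpgrade   # cannot be undone and blocks minor version updates
```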
1.3.25.2. Developer Perspective
This release introduces the following updates to the Developer perspective of the web console:
- Added an OpenShift Container Platform toolkit, Quarkus tools and JBoss EAP, and a Language Server Protocol plugin for Visual Studio Code and IntelliJ.
- Previously, when you switched the console between light and dark modes, the Monaco code editor did not follow the selected theme. With this update, the Monaco code editor matches the selected theme.
1.4. Notable technical changes
Uninstalling the SR-IOV Network Operator changed
From OpenShift Container Platform 4.18, to successfully uninstall the SR-IOV Network Operator, you must also delete the sriovoperatorconfigs custom resource and its custom resource definition.
For more information, see Uninstalling the SR-IOV Network Operator.
Changes to the iSCSI initiator name and service
Previously, the /etc/iscsi/initiatorname.iscsi file was present by default on RHCOS images. With this release, the initiatorname.iscsi file is no longer present by default. Instead, it is created at run time when the iscsi.service and subsequent iscsi-init.service services start. The iscsi.service service is not enabled by default, which might affect any CSI drivers that rely on reading the contents of the initiatorname.iscsi file before the service starts.
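If a CSI driver in your cluster depends on the initiator name being present before it starts, one possible approach is to enable the service through a MachineConfig. This is a sketch under assumptions (worker pool, Ignition 3.2.0), not a prescribed configuration from this release note:

```yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 99-worker-enable-iscsi                       # example name
  labels:
    machineconfiguration.openshift.io/role: worker   # assumed pool; adjust as needed
spec:
  config:
    ignition:
      version: 3.2.0
    systemd:
      units:
      - name: iscsi.service
        enabled: true      # generates /etc/iscsi/initiatorname.iscsi at boot
```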
Operator SDK 1.38.0
OpenShift Container Platform 4.18 supports Operator SDK 1.38.0. See Installing the Operator SDK CLI to install or update to this latest version.
Operator SDK 1.38.0 now supports Kubernetes 1.30 and uses Kubebuilder v4.
Metrics endpoints are now secured by using the native Kubebuilder metrics configuration instead of kube-rbac-proxy, which is now removed.
The following support has also been removed from Operator SDK:
- Scaffolding tools for Hybrid Helm-based Operator projects
- Scaffolding tools for Java-based Operator projects
If you have Operator projects that were previously created or maintained with Operator SDK 1.36.1, update your projects to keep compatibility with Operator SDK 1.38.0.
1.5. Deprecated and removed features
Some features available in previous releases have been deprecated or removed.
Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. For the most recent list of major functionality deprecated and removed within OpenShift Container Platform 4.18, refer to the table below. Additional details for more functionality that has been deprecated and removed are listed after the table.
In the following tables, features are marked with the following statuses:
- Not Available
- Technology Preview
- General Availability
- Deprecated
- Removed
Bare metal monitoring deprecated and removed features
Feature | 4.16 | 4.17 | 4.18 |
---|---|---|---|
Bare Metal Event Relay Operator | Deprecated | Removed | Removed |
Images deprecated and removed features
Feature | 4.16 | 4.17 | 4.18 |
---|---|---|---|
Cluster Samples Operator | Deprecated | Deprecated | Deprecated |
Installation deprecated and removed features
Feature | 4.16 | 4.17 | 4.18 |
---|---|---|---|
| Deprecated | Deprecated | Deprecated |
CoreDNS wildcard queries for the | Deprecated | Deprecated | Deprecated |
| Deprecated | Deprecated | Deprecated |
| Deprecated | Deprecated | Deprecated |
| Deprecated | Deprecated | Deprecated |
Package-based RHEL compute machines | Deprecated | Deprecated | Deprecated |
Managing machines with the Cluster API for Microsoft Azure | Not Available | Not Available | Technology Preview |
| Deprecated | Deprecated | Deprecated |
Installing a cluster on AWS with compute nodes in AWS Outposts | Deprecated | Deprecated | Deprecated |
Machine management deprecated and removed features
Feature | 4.16 | 4.17 | 4.18 |
---|---|---|---|
Managing machines with the Machine API for Alibaba Cloud | Removed | Removed | Removed |
Cloud controller manager for Alibaba Cloud | Removed | Removed | Removed |
Networking deprecated and removed features
Feature | 4.16 | 4.17 | 4.18 |
---|---|---|---|
OpenShift SDN network plugin | Deprecated | Removed | Removed |
iptables | Deprecated | Deprecated | Deprecated |
Node deprecated and removed features
Feature | 4.16 | 4.17 | 4.18 |
---|---|---|---|
| Deprecated | Deprecated | Deprecated |
Kubernetes topology label | Deprecated | Deprecated | Deprecated |
Kubernetes topology label | Deprecated | Deprecated | Deprecated |
cgroup v1 | Deprecated | Deprecated | Deprecated |
OpenShift CLI (oc) deprecated and removed features
Feature | 4.16 | 4.17 | 4.18 |
---|---|---|---|
oc-mirror plugin v1 | General Availability | General Availability | Deprecated |
Operator lifecycle and development deprecated and removed features
Feature | 4.16 | 4.17 | 4.18 |
---|---|---|---|
Operator SDK | Deprecated | Deprecated | Deprecated |
Scaffolding tools for Ansible-based Operator projects | Deprecated | Deprecated | Deprecated |
Scaffolding tools for Helm-based Operator projects | Deprecated | Deprecated | Deprecated |
Scaffolding tools for Go-based Operator projects | Deprecated | Deprecated | Deprecated |
Scaffolding tools for Hybrid Helm-based Operator projects | Deprecated | Deprecated | Removed |
Scaffolding tools for Java-based Operator projects | Deprecated | Deprecated | Removed |
SQLite database format for Operator catalogs | Deprecated | Deprecated | Deprecated |
Storage deprecated and removed features
Feature | 4.16 | 4.17 | 4.18 |
---|---|---|---|
AliCloud Disk CSI Driver Operator | General Availability | Removed | Removed |
Shared Resources CSI Driver Operator | Technology Preview | Deprecated | Removed |
Web console deprecated and removed features
Feature | 4.16 | 4.17 | 4.18 |
---|---|---|---|
PatternFly 4 | Deprecated | Deprecated | Deprecated |
React Router 5 | Deprecated | Deprecated | Deprecated |
Workloads deprecated and removed features
Feature | 4.16 | 4.17 | 4.18 |
---|---|---|---|
| Deprecated | Deprecated | Deprecated |
1.5.1. Removed features
1.5.1.1. The Shared Resource CSI Driver is removed
The Shared Resource CSI Driver feature was deprecated in OpenShift Container Platform 4.17, and is now removed from OpenShift Container Platform 4.18. This feature is now generally available in Builds for Red Hat OpenShift 1.1. To use this feature, ensure you are using Builds for Red Hat OpenShift 1.1 or later.
1.5.1.2. The selected bundles feature is removed in oc-mirror v2
The selected bundles feature is removed from the oc-mirror v2 Generally Available release. This change prevents issues where specifying the wrong Operator bundle version could break the Operators in a cluster. (OCPBUGS-49419)
1.5.2. Notice of future deprecation
1.5.2.1. Future Kubernetes API removals
The next minor release of OpenShift Container Platform is expected to use Kubernetes 1.32. Kubernetes 1.32 removed a deprecated API.
See the Deprecated API Migration Guide in the upstream Kubernetes documentation for the list of planned Kubernetes API removals.
See Navigating Kubernetes API deprecations and removals for information about how to check your cluster for Kubernetes APIs that are planned for removal.
1.6. Bug fixes
API Server and Authentication
- Previously, API validation did not prevent an authorized client from decreasing the current revision of a static pod operand, such as kube-apiserver, or prevent the operand from progressing concurrently on two nodes. With this release, requests that attempt to do either are now rejected. (OCPBUGS-48502)
- Previously, the oauth-server would crash when configuring an OAuth identity provider (IDP) with a callback path that contained spaces. With this release, the issue is resolved. (OCPBUGS-44099)
Bare Metal Hardware Provisioning
-
Previously, the Bare Metal Operator (BMO) created the HostFirmwareComponents custom resource for all bare-metal hosts (BMH), including hosts based on the intelligent platform management interface (IPMI), which did not support it. With this release, HostFirmwareComponents custom resources are created only for BMHs that support them. (OCPBUGS-49699)
- Previously, in bare-metal configurations where the provisioning network is disabled but the bootstrapProvisioningIP field is set, the bare-metal provisioning components might fail to start. These failures occur when the provisioning process reconfigures the external network interface on the bootstrap VM while container images are being pulled. With this release, dependencies were added to ensure that interface reconfiguration only occurs when the network is idle, preventing conflicts with other processes. As a result, the bare-metal provisioning components now start reliably, even when the bootstrapProvisioningIP field is set and the provisioning network is disabled. (OCPBUGS-36869)
- Previously, Ironic inspection failed if special or invalid characters existed in the serial number of a block device. This occurred because the lsblk command failed to escape the characters. With this release, the command escapes the characters, so this issue no longer persists. (OCPBUGS-36492)
- Previously, a check for unexpected IP addresses on the provisioning interface was triggered during metal3 pod startup. This issue occurred because of the presence of an IP address supplied by DHCP from a previous instance of the pod that ran on another node. With this release, the startup check looks only for IP addresses that exist outside the provisioning network subnet, so that a metal3 pod starts immediately, even if the pod has moved to a different node. (OCPBUGS-38507)
- Previously, enabling a provisioning network by editing the cluster-wide Provisioning resource was only possible on installer-provisioned infrastructure clusters with platform type baremetal. On bare metal, single-node OpenShift, and user-provisioned infrastructure clusters, editing this resource resulted in a validation error. With this release, the excessive validation check has been removed, and enabling a provisioning network is now possible on bare-metal clusters with platform type none. As with installer-provisioned infrastructure clusters, users are responsible for making sure that all networking requirements are met for this operation. (OCPBUGS-43371)
Cloud Compute
-
Previously, the availability set fault domain count was hardcoded to 2. This value works in most Microsoft Azure regions, because the fault domain count is typically at least 2, but it failed in the centraluseuap and eastusstg regions. With this release, the availability set fault domain count in a region is set dynamically. (OCPBUGS-48659)
- Previously, an updated zone API error message from Google Cloud Platform (GCP) with increased granularity caused the machine controller to mistakenly mark the machine as valid with a temporary cloud error instead of recognizing it as an invalid machine configuration error. This prevented the invalid machine from transitioning to a failed state. With this update, the machine controller handles the new error messages correctly, and machines with an invalid zone or project ID now transition properly to a failed state. (OCPBUGS-47790)
- Previously, the certificate signing request (CSR) approver included certificates from other systems when calculating whether it was overwhelmed and should stop approving certificates. In larger clusters with other subsystems that use CSRs, the CSR approver counted unrelated unapproved CSRs towards its total and prevented further approvals. With this release, the CSR approver only includes CSRs that it can approve, by using the signerName property as a filter. As a result, the CSR approver only prevents new approvals when there are a large number of unapproved CSRs for the relevant signerName values. (OCPBUGS-46425)
- Previously, some cluster autoscaler metrics were not initialized, and therefore were not available. With this release, these metrics are initialized and available. (OCPBUGS-46416)
- Previously, if an informer watch stream missed an event because of a temporary disconnection, the informer might return a special signal type after it reconnected to the network, especially when the informer recognizes that an EndpointSlice object was deleted during the temporary disconnection. The returned signal type indicated that the state of the event has stalled and that the object was deleted. The returned signal type was not accurate and might have caused confusion for a OpenShift Container Platform user. With this release, the Cloud Controller Manager (CCM) handles unexpected signal types so that OpenShift Container Platform users do not receive confusing information from returned types. (OCPBUGS-45972)
-
Previously, when the AWS DHCP option set was configured to use a custom domain name that contains a trailing period (.), OpenShift Container Platform installation failed. With this release, the logic that extracts the hostname of EC2 instances and turns it into kubelet node names trims trailing periods so that the resulting Kubernetes object name is valid. Trailing periods in the DHCP option set no longer cause installation to fail. (OCPBUGS-45889)
- Previously, installation of an AWS cluster failed in certain environments on existing subnets when the publicIp parameter for the MachineSet object was explicitly set to false. With this release, a configuration value set for publicIp no longer causes issues when the installation program provisions machines for your AWS cluster in those environments. (OCPBUGS-45130)
- Previously, enabling a provisioning network by editing the cluster-wide Provisioning resource was only possible on clusters with platform type baremetal, such as ones created by the IPI installer. On baremetal SNO and UPI clusters that would result in a validation error. The excessive validation has been removed, and enabling a provisioning network is now possible on baremetal clusters with platform type none. As with IPI, users are responsible for making sure that all networking requirements are met for this operation. (OCPBUGS-43371)
- Previously, the installation program populated the network.devices, template, and workspace fields in the spec.template.spec.providerSpec.value section of the VMware vSphere control plane machine set custom resource (CR). These fields should be set in the vSphere failure domain, and the installation program populating them caused unintended behaviors. Updating these fields did not trigger an update to the control plane machines, and these fields were cleared when the control plane machine set was deleted. With this release, the installation program no longer populates values that are included in the failure domain configuration. If these values are not defined in a failure domain configuration, for instance on a cluster that is updated to OpenShift Container Platform 4.18 from an earlier version, the values defined by the installation program are used. (OCPBUGS-42660)
- Previously, the cluster autoscaler would occasionally leave a node with a PreferNoSchedule taint during deletion. With this release, the maximum bulk deletion limit is disabled so that nodes with this taint no longer remain after deletion. (OCPBUGS-42132)
- Previously, the Cloud Controller Manager (CCM) liveness probe used on IBM Cloud cluster installations could not use loopback, which caused the probe to continuously restart. With this release, the probe can use loopback, so this issue no longer occurs. (OCPBUGS-41936)
- Previously, the approval mechanism for certificate signing requests (CSRs) failed because the node name and internal DNS entry for a CSR did not match in terms of character case differences. With this release, an update to the approval mechanism for CSRs skips case-sensitive checks so that a CSR with a matching node name and internal DNS entry does not fail the check because of character case differences. (OCPBUGS-36871)
- Previously, the cloud node manager had permission to update any node object when it needed to update only the node on which it was running. With this release, restrictions have been put in place to prevent the node manager on one node from updating the node object of another node. (OCPBUGS-22190)
Cloud Credential Operator
-
Previously, the aws-sdk-go-v2 software development kit (SDK) failed to authenticate an AssumeRoleWithWebIdentity API operation on an Amazon Web Services (AWS) Security Token Service (STS) cluster. With this release, pod-identity-webhook now includes a default region, so this issue no longer persists. (OCPBUGS-45937)
- Previously, secrets in the cluster were fetched in a single call. When there were a large number of secrets, this caused the API to time out. With this release, the Cloud Credential Operator fetches secrets in batches limited to 100 secrets. This change prevents timeouts when there are a large number of secrets in the cluster. (OCPBUGS-39531)
Cluster Resource Override Admission Operator
-
Previously, if you specified the forceSelinuxRelabel field in a ClusterResourceOverride custom resource (CR), and then modified it afterwards, the change would not be reflected in the clusterresourceoverride-configuration config map, which is used to apply the SELinux re-labeling workaround feature. With this update, the Cluster Resource Override Operator can track the change to the forceSelinuxRelabel feature in order to reconcile the config map object. As a result, the config map object is correctly updated when you change the ClusterResourceOverride CR field. (OCPBUGS-48692)
Cluster Version Operator
- Previously, a custom security context constraint (SCC) impacted any pod that was generated by the Cluster Version Operator from receiving a cluster version upgrade. With this release, OpenShift Container Platform now sets a default SCC to each pod, so that any custom SCC created does not impact a pod. (OCPBUGS-46410)
-
Previously, the Cluster Version Operator (CVO) did not filter internal errors that were propagated to the ClusterVersion Failing condition message. As a result, errors that did not negatively impact the update were shown in the ClusterVersion Failing condition message. With this release, the errors that are propagated to the ClusterVersion Failing condition message are filtered. (OCPBUGS-15200)
Developer Console
-
Previously, if a PipelineRun used a resolver, rerunning that PipelineRun resulted in an error. With this fix, you can rerun a PipelineRun that uses a resolver. (OCPBUGS-45228)
- Previously, if you edited a deployment config in Form view, the ImagePullSecrets values were duplicated. With this update, editing the form does not add duplicate entries. (OCPBUGS-45227)
- Previously, when you searched the OperatorHub or another catalog, you would experience periods of latency between each key press. With this update, the input on the catalog search bars is debounced. (OCPBUGS-43799)
- Previously, no option existed to close the Getting started resources section in the Administrator perspective. With this change, users can close the Getting started resources section. (OCPBUGS-38860)
- Previously, when cron jobs were created, pods were created too quickly, causing the component that fetches new pods of the cron job to fail. With this update, a 3-second delay was added before the pods of the cron job are fetched. (OCPBUGS-37584)
- Previously, resources created when a new user was created were not removed automatically when the user was deleted. This cluttered the cluster with config maps, roles, and role bindings. With this update, ownerRefs was added to the resources, so they are cleared when the user is deleted and the cluster is no longer cluttered. (OCPBUGS-37560)
- Previously, when importing a Git repository by using the serverless import strategy, the environment variables from the func.yaml file were not automatically loaded into the form. With this update, the environment variables are loaded upon import. (OCPBUGS-34764)
- Previously, users would erroneously see an option to import a repository by using the pipeline build strategy when the devfile import strategy was selected; however, this was not possible. With this update, the pipeline strategy option is removed when the devfile import strategy is selected. (OCPBUGS-32526)
- Previously, when using a custom template, you could not enter multi-line parameters, such as private keys. With this release, you can switch between single-line and multi-line modes so you can fill out template fields with multi-line inputs. (OCPBUGS-23080)
Image Registry
- Previously, you could not install a cluster on AWS in the ap-southeast-5 region or certain other regions because the OpenShift Container Platform internal registry did not support these regions. With this release, the internal registry is updated to include the following regions so that this issue no longer occurs:
  - ap-southeast-5
  - ap-southeast-7
  - ca-west-1
  - il-central-1
  - mx-central-1
- Previously, when the Image Registry Operator was configured with networkAccess: Internal in Microsoft Azure, it was not possible to successfully set managementState to Removed in the Operator configuration. This occurred because of an authorization error when the Operator tried to delete the storage container. With this update, the Image Registry Operator continues with the deletion of the storage account, which automatically deletes the storage container, resulting in a successful change into the Removed state. (OCPBUGS-42732)
- Previously, when configuring the image registry to use a Microsoft Azure storage account located in a resource group other than the cluster’s resource group, the Image Registry Operator would become degraded due to a validation error. This update changes the Image Registry Operator to allow for authentication by storage account key only, without validating for other authentication requirements. (OCPBUGS-42514)
- Previously, installation with the OpenShift installer used the Cluster API. Virtual networks created by the Cluster API use a different tag template. Consequently, setting .spec.storage.azure.networkAccess.type: Internal in the Image Registry Operator’s config.yaml file left the Image Registry Operator unable to discover the virtual network. With this update, the Image Registry Operator searches for both new and old tag templates, resolving the issue. (OCPBUGS-42196)
- Previously, the image registry would, in some cases, panic when attempting to purge failed uploads from S3-compatible storage providers. This was caused by the image registry’s S3 driver mishandling empty directory paths. With this update, the image registry properly handles empty directory paths, fixing the panic. (OCPBUGS-39108)
Installer
- Previously, installing a cluster with a Dynamic Host Configuration Protocol (DHCP) network on Nutanix caused a failure. With this release, this issue is resolved. (OCPBUGS-38118)
- Previously, installing an AWS cluster in either the Commercial Cloud Services (C2S) region or the Secret Commercial Cloud Services (SC2S) region failed because the installation program added unsupported security groups to the load balancer. With this release, the installation program no longer adds unsupported security groups to the load balancer for a cluster that needs to be installed in either the C2S region or SC2S region. (OCPBUGS-33311)
- Previously, when installing a Google Cloud Platform (GCP) cluster where instances required that IP forwarding was not set, the installation failed. With this release, IP forwarding is disabled for all GCP machines and the issue is resolved. (OCPBUGS-49842)
-
Previously, when installing a cluster on AWS in existing subnets, for bring your own virtual private cloud (BYO VPC) in edge zones, the installation program did not tag the subnet edge resource with kubernetes.io/cluster/<InfraID>:shared. With this release, all subnets that are used in the install-config.yaml file contain the required tags. (OCPBUGS-49792)
- Previously, a cluster that was created on Amazon Web Services (AWS) could fail to deprovision without the ec2:ReleaseAddress permission to release the EIP address. This issue occurred when the cluster was created with the minimum permissions in an existing virtual private cloud (VPC), including an unmanaged VPC or bring your own (BYO) VPC, and a BYO Public IPv4 Pool address. With this release, the ec2:ReleaseAddress permission is exported to the Identity and Access Management (IAM) policy generated during installation. (OCPBUGS-49735)
- Previously, when installing a cluster on Nutanix, the installation program could fail with a timeout while uploading images to Prism Central. This occurred in some slower Prism Central environments when the Prism API attempted to load the Red Hat Enterprise Linux CoreOS (RHCOS) image. The Prism API call timeout value was 5 minutes. With this release, the Prism API call timeout value is a configurable parameter, platform.nutanix.prismAPICallTimeout, in the install-config.yaml file, and the default timeout value is 10 minutes. (OCPBUGS-49148)
- Previously, the oc adm node-image monitor command failed because of a temporary API server disconnection and then displayed an error or end-of-file message. With this release, the installation program ignores a temporary API server disconnection and the monitor command tries to connect to the API server again. (OCPBUGS-48714)
- Previously, when you deleted backend service resources on Google Cloud Platform (GCP), some resources to be deleted were not found. For example, the associated forwarding rules, health checks, and firewall rules were not deleted. With this release, the installation program tries to find the backend service by name first, then searches for forwarding rules, health checks, and firewall rules before it determines whether those results match a backend service. The algorithm for associating resources is reversed and the appropriate resources are deleted. There are no leaked backend service resources and the issue is resolved. When you delete a private cluster, the forwarding rules, backend services, health checks, and firewall rules created by the Ingress Operator are not deleted. (OCPBUGS-48611)
- Previously, OpenShift Container Platform was not compliant with PCI-DSS/BAFIN regulations. With this release, the cross-tenant object replication in Microsoft Azure is unavailable. Consequently, the chance of unauthorized data access is reduced and the strict adherence to data governance policies is ensured. (OCPBUGS-48118)
-
Previously, when you installed OpenShift Container Platform on Amazon Web Services (AWS) and specified an edge machine pool without an instance type, in some instances the edge node failed. With this release, if you specify an edge machine pool without an instance type, you must use the ec2:DescribeInstanceTypeOfferings permission. The permission derives the correct instance type available, based on the AWS Local Zones or Wavelength Zones locations used. (OCPBUGS-47502)
- Previously, when the API server disconnected temporarily, the oc adm node-image monitor command reported an end of file (EOF) error. With this release, when the API server disconnects temporarily, the monitor command does not fail. (OCPBUGS-46391)
- Previously, when you specified the HostedZoneRole permission in the install-config.yaml file while creating a shared Virtual Private Cloud (VPC), you also had to specify the sts:AssumeRole permission. Otherwise, it caused an error. With this release, if you specify the HostedZoneRole permission, the installation program validates that the sts:AssumeRole permission is present. (OCPBUGS-46046)
- Previously, when the publicIpv4Pool configuration parameter was used during installation, the ec2:AllocateAddress and ec2:AssociateAddress permissions were not validated. As a consequence, permission failures could occur during installation. With this release, the required permissions are validated before the cluster is installed and the issue is resolved. (OCPBUGS-45711)
- Previously, during a disconnected installation, when the imageContentSources parameter was configured for more than one mirror for a source, the command to create the agent ISO image could fail, depending on the sequence of the mirror configuration. With this release, multiple mirrors are handled correctly when the agent ISO is created and the issue is resolved. (OCPBUGS-45630)
- Previously, when installing a cluster using the Cluster API on installer-provisioned infrastructure, the user provided a machineNetwork parameter. With this release, the installation program uses a random machineNetwork parameter. (OCPBUGS-45485)
- Previously, during an installation on Amazon Web Services (AWS), the installation program used the wrong load balancer when searching for the hostedZone ID, which caused an error. With this release, the correct load balancer is used and the issue is resolved. (OCPBUGS-45301)
- Previously, endpoint overrides in IBM Power Virtual Server were not conditional. As a consequence, endpoint overrides were created incorrectly and caused failures in Virtual Private Environments (VPE). With this release, endpoint overrides are conditional only for disconnected installations. (OCPBUGS-44922)
-
Previously, during a shared Virtual Private Cloud (VPC) installation, the installation program added the records to a private DNS zone created by the installation program instead of adding the records to the cluster’s private DNS zone. As a consequence, the installation failed. With this release, the installation program searches for an existing private DNS zone and, if found, pairs that zone with the network that is supplied by the install-config.yaml file, and the issue is resolved. (OCPBUGS-44641)
- Previously, the oc adm drain --delete-local-data command was not supported in the 4.18 oc CLI tool. With this release, the command has been updated to oc adm drain --delete-emptydir-data. (OCPBUGS-44318)
- Previously, the US East (wdc04), US South (dal13), Sydney (syd05), and Toronto (tor01) regions were not supported for IBM Power Virtual Server. With this release, these regions, which include PowerEdgeRouter (PER) capabilities, are supported for IBM Power Virtual Server. (OCPBUGS-44312)
- Previously, during a Google Cloud Platform (GCP) installation, when the installation program was creating filters with large numbers of returned data, for example for subnets, it exceeded the quota for the maximum number of times that a resource can be filtered in a specific period. With this release, all relevant filtering is moved to the client so that the filter quotas are not exceeded and the issue is resolved. (OCPBUGS-44193)
-
Previously, during an Amazon Web Services (AWS) installation, the installation program validated all the tags in the install-config.yaml file only when you set propogateTags to true. With this release, the installation program validates all the tags in the install-config.yaml file. (OCPBUGS-44171)
- Previously, if the RendezvousIP value matched a substring in the next-hop-address field of a compute node configuration, a validation error was reported. The RendezvousIP value must match a control plane host address only. With this release, the substring comparison for the RendezvousIP value is made against control plane host addresses only, so the error no longer occurs. (OCPBUGS-44167)
- Previously, when you deleted a cluster in IBM Power Virtual Server, the Transit Gateway connections were cleaned up. With this release, if the tgName parameter is set, Red Hat OpenStack Platform (RHOSP) does not clean up the Transit Gateway connection when you delete a cluster. (OCPBUGS-44162)
- Previously, when installing a cluster on an IBM platform and adding an existing VPC to the cluster, the Cluster API Provider IBM Cloud would not add ports 443, 5000, and 6443 to the security group of the VPC. This situation prevented the VPC from being added to the cluster. With this release, a fix ensures that the Cluster API Provider IBM Cloud adds the ports to the security group of the VPC so that the VPC gets added to your cluster. (OCPBUGS-44068)
-
Previously, the Cluster API Provider IBM Cloud module was very verbose. With this release, the verbosity of the module is reduced, which affects the output of the .openshift_install.log file. (OCPBUGS-44022)
- Previously, when you deployed a cluster on an IBM Power Virtual Server zone, the load balancers were slow to create. As a consequence, the cluster failed. With this release, the Cluster API Provider IBM Cloud no longer has to wait until all load balancers are ready and the issue is resolved. (OCPBUGS-43923)
- Previously, for the Agent-based Installer, all host validation status logs referred to the name of the first registered host. As a consequence, when a host validation failed, it was not possible to determine the problem host. With this release, the correct host is identified in each log message and now the host validation logs correctly show the host to which they correspond, and the issue is resolved. (OCPBUGS-43768)
-
Previously, when you used the oc adm node-image create command to generate the image while running the Agent-based Installer and the step failed, the accompanying error message did not show the container log. The oc adm node-image create command uses a container to generate the image, and when the image generation step fails, the basic error message does not show the underlying issue that caused the failure. With this release, to help troubleshooting, the oc adm node-image create command now shows the container log, so the underlying issue is displayed. (OCPBUGS-43757)
- Previously, the Agent-based Installer failed to parse the cloud_controller_manager parameter in the install-config.yaml configuration file. This resulted in the Assisted Service API failing because it received an empty string, which in turn caused the installation of the cluster to fail on Oracle® Cloud Infrastructure (OCI). With this release, an update to the parsing logic ensures that the Agent-based Installer correctly interprets the cloud_controller_manager parameter so that the Assisted Service API receives the correct string value. As a result, the Agent-based Installer can now install a cluster on OCI. (OCPBUGS-43674)
- Previously, an update to Azure SDK for Go removed the SendCertificateChain option, and this changed the behavior of sending certificates. As a consequence, the full certificate chain was not sent. With this release, the option to send a full certificate chain is available and the issue is resolved. (OCPBUGS-43567)
- Previously, when installing a cluster on Google Cloud Platform (GCP) using the Cluster API implementation, the installation program did not distinguish between internal and external load balancers while creating firewall rules. As a consequence, the firewall rule for internal load balancers was open to all IP address sources, that is, 0.0.0.0/0. With this release, the Cluster API Provider GCP is updated to restrict firewall rules to the machine CIDR when using an internal load balancer. The firewall rule for internal load balancers is correctly limited to machine networks, that is, nodes in the cluster, and the issue is resolved. (OCPBUGS-43520)
- Previously, when installing a cluster on IBM Power Virtual Server, the required security group rules were not created. With this release, the missing security group rules for installation are identified and created and the issue is resolved. (OCPBUGS-43518)
-
Previously, when you tried to add a compute node with the oc adm node-image command by using an instance that was previously created with Red Hat OpenStack Platform (RHOSP), the operation failed. With this release, the issue is resolved by correctly setting the user-managed networking configuration. (OCPBUGS-43513)
- Previously, when destroying a cluster on Google Cloud Platform (GCP), a forwarding rule incorrectly blocked the installation program. As a consequence, the destroy process failed to complete. With this release, the issue is resolved by the installation program setting its state correctly and marking all destroyed resources as deleted. (OCPBUGS-42789)
- Previously, when configuring the Agent-Based Installer installation in a disconnected environment with more than one mirror for the same source, the installation might fail. This occurred because one of the mirrors was not checked. With this release, all mirrors are used when multiple mirrors are defined for the same source and the issue is resolved. (OCPBUGS-42705)
-
Previously, you could not change the AdditionalTrustBundlePolicy parameter in the install-config.yaml file for the Agent-based Installer. The parameter was always set to ProxyOnly. With this release, you can set AdditionalTrustBundlePolicy to other values, for example, Always. By default, the parameter is set to ProxyOnly. (OCPBUGS-42670)
- Previously, when you installed a cluster and tried to add a compute node with the oc adm node-image command, it failed because the date, time, or both might have been inaccurate. With this release, the issue is resolved by applying the same Network Time Protocol (NTP) configuration in the target cluster MachineConfig chrony resource to the node ephemeral live environment. (OCPBUGS-42544)
- Previously, during installation, the name of the artifact that the oc adm node-image create command generated did not include <arch> in its file name. As a consequence, the file name was inconsistent with other generated ISOs. With this release, a patch fixes the name of the artifact that is generated by the oc adm node-image create command by also including the referenced architecture as part of the file name, and the issue is resolved. (OCPBUGS-42528)
- Previously, the Agent-based Installer set the assisted-service object to a debug logging mode. Unintentionally, the pprof module in the assisted-service object, which uses port 6060, was then turned on. As a consequence, there was a port conflict and the Cloud Credential Operator (CCO) did not run. When requested by the VMware vSphere Cloud Controller Manager (CCM), vSphere secrets were not generated, the CCM failed to initialize the nodes, and the cluster installation was blocked. With this release, the pprof module in the assisted-service object does not run when invoked by the Agent-based Installer. As a result, the CCO runs correctly and cluster installations on vSphere that use the Agent-based Installer succeed. (OCPBUGS-42525)
- Previously, when a compute node was trying to join a cluster, the rendezvous node rebooted before the process completed. Because the compute node could not communicate as expected with the rendezvous node, the installation was not successful. With this release, a patch fixes the race condition that caused the rendezvous node to reboot prematurely, and the issue is resolved. (OCPBUGS-41811)
-
Previously, when using the Assisted Installer, selecting a multi-architecture image for the s390x CPU architecture on the Red Hat Hybrid Cloud Console could cause the installation to fail. The installation program reported an error that the new cluster was not created because the skip MCO reboot was not compatible with the s390x CPU architecture. With this release, the issue is resolved. (OCPBUGS-41716)
- Previously, a coding issue caused the Ansible script on RHOSP user-provisioned infrastructure installation to fail during the provisioning of compact clusters. This occurred when IPv6 was enabled for a three-node cluster. With this release, the issue is resolved and you can provision compact three-node clusters. (OCPBUGS-41538)
- Previously, a coding issue caused the Ansible script on RHOSP user-provisioned installation infrastructure to fail during the provisioning of compact clusters. This occurred when IPv6 was enabled for a three-node cluster. With this release, the issue is resolved and you can provision compact three-node clusters on RHOSP for user-provisioned installation infrastructure. (OCPBUGS-39402)
-
Previously, the order of an Ansible Playbook was modified to run before the metadata.json file was created, which caused issues with older versions of Ansible. With this release, the playbook is more tolerant of missing files to accommodate older versions of Ansible and the issue is resolved. (OCPBUGS-39285)
- Previously, when you installed a cluster, there were issues using a compute node because the date, time, or both might have been inaccurate. With this release, a patch is applied to the live ISO time synchronization. The patch configures the /etc/chrony.conf file with the list of additional Network Time Protocol (NTP) servers that the user provides in the agent-config.yaml file, so that you can use a compute node without experiencing a cluster installation issue. (OCPBUGS-39231)
- Previously, when installing a cluster on bare metal using installer-provisioned infrastructure, the installation could time out if the network to the bootstrap virtual machine was slow. With this update, the timeout duration has been increased to cover a wider range of network performance scenarios. (OCPBUGS-39081)
-
Previously, the oc adm node-image create command failed when run against a cluster in a restricted environment with a proxy because the command ignored the cluster-wide proxy setting. With this release, when the command is run, it includes the cluster proxy resource settings, if available, to ensure the command runs successfully, and the issue is resolved. (OCPBUGS-38990)
- Previously, when installing a cluster on Google Cloud Platform (GCP) into a shared Virtual Private Cloud (VPC) with a bring your own (BYO) hosted zone, the installation could fail due to an error creating the private managed zone. With this release, a fix ensures that where there is a preexisting private managed zone the installation program skips creating a new one and the issue is resolved. (OCPBUGS-38966)
- Previously, an installer-provisioned installation on VMware vSphere to run OpenShift Container Platform 4.16 in a disconnected environment failed when the template could not be downloaded. With this release, the template is downloaded correctly and the issue is resolved. (OCPBUGS-38918)
-
Previously, during installation, the oc adm node-image create command used the kube-system/cluster-config-v1 resource to determine the platform type. With this release, the installation program uses the infrastructure resource, which provides more accurate information about the platform type. (OCPBUGS-38802)
- Previously, a rare condition on VMware vSphere Cluster API machines caused the vCenter session management to time out unexpectedly. With this release, the Keep Alive support is disabled in the current and later versions of Cluster API Provider vSphere, and the issue is resolved. (OCPBUGS-38657)
-
Previously, when a folder was undefined and the data center was located in a data center folder, a wrong folder structure was created starting from the root of the vCenter server, because the Govmomi DatacenterFolders.VmFolder value used the wrong path. With this release, the folder structure uses the data center inventory path and joins it with the virtual machine (VM) and cluster ID value, and the issue is resolved. (OCPBUGS-38599)
- Previously, the installation program on Google Cloud Platform (GCP) filtered addresses to find and delete internal addresses only. The addition of Cluster API Provider Google Cloud Platform (GCP) provisioned resources included changes to address resources. With this release, Cluster API Provider GCP creates external addresses and these must be included in a cluster cleanup operation. (OCPBUGS-38571)
-
Previously, if you specified an unsupported architecture in the install-config.yaml file, the installation program failed with a connection refused message. With this update, the installation program correctly validates that the specified cluster architecture is compatible with OpenShift Container Platform, leading to successful installations. (OCPBUGS-38479)
- Previously, when you used the Agent-based Installer to install a cluster, assisted-installer-controller timed out or exited the installation process depending on whether assisted-service was unavailable on the rendezvous host. This situation caused the cluster installation to fail during CSR approval checks. With this release, an update to assisted-installer-controller ensures that the controller does not time out or exit if assisted-service is unavailable. The CSR approval check now works as expected. (OCPBUGS-38466)
-
Previously, when the VMware vSphere vCenter cluster contained an ESXi host that did not have a standard port group defined and the installation program tried to select that host to import the OVA, the import failed and the error Invalid Configuration for device 0 was reported. With this release, the installation program verifies whether a standard port group for an ESXi host is defined and, if not, continues until it locates an ESXi host with a defined standard port group, or reports an error message if it fails to locate one, resolving the issue. (OCPBUGS-37945)
- Previously, due to an EFI Secure Boot failure in the SCOS, when the FCOS pivoted to the SCOS the virtual machine (VM) failed to boot. With this release, Secure Boot is disabled only when Secure Boot is enabled in the coreos.ovf configuration file, and the issue is resolved. (OCPBUGS-37736)
- Previously, when deprecated and supported fields were used with the installation program on VMware vSphere, a validation error message was reported. With this release, warning messages are added specifying that using deprecated and supported fields is not recommended with the installation program on VMware vSphere. (OCPBUGS-37628)
-
Previously, if you tried to install a second cluster using existing Azure Virtual Networks (VNet) on Microsoft Azure, the installation failed. When the front-end IP address of the API server load balancer was not specified, the Cluster API fixed the address to 10.0.0.100. Because this IP address was already taken by the first cluster, the second load balancer failed to install. With this release, a dynamic IP address check determines whether the default IP address is available. If it is unavailable, the next available address is selected and you can install the second cluster successfully with a different load balancer IP. (OCPBUGS-37442)
- Previously, the installation program attempted to download the OVA on VMware vSphere whether the template field was defined or not. With this update, the issue is resolved. The installation program verifies whether the template field is defined. If the template field is not defined, the OVA is downloaded. If the template field is defined, the OVA is not downloaded. (OCPBUGS-36494)
- Previously, when installing a cluster on IBM Cloud, the installation program checked only the first group of 50 subnets when searching for subnet details by name. With this release, pagination support is provided to search all subnets. (OCPBUGS-36236)
-
Previously, when installing Cluster API Provider Google Cloud Platform (GCP) into a shared Virtual Private Cloud (VPC) without the required compute.firewalls.create permission, the installation failed because no firewall rules were created. With this release, a fix ensures that the rule to create the firewall is skipped during installation and the issue is resolved. (OCPBUGS-35262)
- Previously, for the Agent-based Installer, the networking layout defined through nmstate might result in a configuration error if all hosts did not have an entry in the interfaces section that matched an entry in the networkConfig section. However, if the entry in the networkConfig section uses a physical interface name, then the entry in the interfaces section is not required. This fix ensures that the configuration does not result in an error if an entry in the networkConfig section has a physical interface name and does not have a corresponding entry in the interfaces table. (OCPBUGS-34849)
- Previously, the container tools module was enabled by default on the RHEL node. With this release, the container-tools module is disabled to install the correct package between conflicting repositories. (OCPBUGS-34844)
Insights Operator
- Previously, during entitled builds on a Red Hat OpenShift Container Platform cluster running on IBM Z hardware, repositories were not enabled. This issue has been resolved. You can now enable repositories during entitled builds on a Red Hat OpenShift Container Platform cluster running on IBM Z hardware. (OCPBUGS-32233)
Machine Config Operator
-
Previously, Red Hat Enterprise Linux (RHEL) CoreOS templates that were shipped by the Machine Config Operator (MCO) caused node scaling to fail on Red Hat OpenStack Platform (RHOSP). This happened because of an issue with systemd and the presence of a legacy boot image from older versions of OpenShift Container Platform. With this release, a patch fixes the issue with systemd and removes the legacy boot image, so that node scaling can continue as expected. (OCPBUGS-42324)
- Previously, if you enabled on-cluster layering (OCL) for your cluster and you attempted to configure kernel arguments in the machine configuration, machine config pools (MCPs) and nodes entered a degraded state. This happened because of a configuration mismatch. With this release, a check for kernel arguments on an OCL-enabled cluster ensures that the arguments are configured and applied to nodes in the cluster. This update prevents any mismatch that previously occurred between the machine configuration and the node configuration. (OCPBUGS-34647)
Management Console
- Previously, clicking the "Don’t show again" link in the Lightspeed modal dialog did not correctly navigate to the general User Preference tab when one of the other User Preference tabs was displayed. After this update, clicking the "Don’t show again" link correctly navigates to the general User Preference tab. (OCPBUGS-48106)
- Previously, multiple external link icons might show in the primary action button of the OperatorHub modal. With this update, only a single external link icon appears. (OCPBUGS-47742)
-
Previously, the web console was disabled when the authorization type was set to None in the cluster authentication configuration. With this update, the web console is no longer disabled when the authorization type is set to None. (OCPBUGS-46068)
- Previously, the MachineConfig Details tab displayed an error when one or more spec.config.storage.file entries did not include optional data. With this update, the error no longer occurs and the Details tab renders as expected. (OCPBUGS-44049)
- Previously, an extra name property was passed into resource list page extensions used to list related Operands on the CSV details page. As a result, the Operand list was filtered by the cluster service version (CSV) name and often returned an empty list. With this update, Operands are listed as expected. (OCPBUGS-42796)
- Previously, the Sample tab did not show when creating a new ConfigMap with one or more ConfigMap ConsoleYAMLSamples present on the cluster. After this update, the Sample tab shows with one or more ConfigMap ConsoleYAMLSamples present. (OCPBUGS-41492)
- Previously, the Events page resource type filter incorrectly reported the number of resources when three or more resources were selected. With this update, the filter always reports the correct number of resources. (OCPBUGS-38701)
- Previously, the version number text in the updates graph on the Cluster Settings page appeared as black text on a dark background while viewing the page using Firefox in dark mode. With this update, the text appears as white text. (OCPBUGS-37988)
- Previously, Alerting pages did not show resource information in their empty state. With this update, resource information is available on the Alerting pages. (OCPBUGS-36921)
- Previously, the Operator Lifecycle Manager (OLM) CSV annotation contained unexpected JSON, which was successfully parsed, but then threw a runtime error when attempting to use the resulting value. With this update, JSON values from OLM annotations are validated before use, errors are logged, and the console does not fail when unexpected JSON is received in an annotation. (OCPBUGS-35744)
- Previously, silenced alerts were visible on the Overview page of the OpenShift Container Platform web console. This occurred because the alerts did not include any external labels. With this release, silenced alerts include the external labels so they are filtered out and are not viewable. (OCPBUGS-31367)
Monitoring
- Previously, if the SMTP smarthost or from fields under the emailConfigs object were not specified at the global or receiver level in the AlertmanagerConfig custom resource (CR), Alertmanager would crash because these fields are required. With this release, the Prometheus Operator fails reconciliation if these fields are not specified. Therefore, the Prometheus Operator no longer pushes invalid configurations to Alertmanager, preventing it from crashing. (OCPBUGS-48050)
- Previously, the Cluster Monitoring Operator (CMO) did not mark configurations in the cluster-monitoring-config and user-workload-monitoring-config config maps as invalid for unknown (for example, no longer supported) or duplicated fields. With this release, stricter validation is added that helps identify such errors. (OCPBUGS-42671)
- Previously, it was not possible for a user to query the user workload monitoring Thanos API endpoint with POST requests. With this update, a cluster administrator can bind the new pod-metrics-reader cluster role with a role binding or cluster role binding to allow POST queries for a user or service account. (OCPBUGS-41158)
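For example, the following is a minimal sketch of granting POST access to a service account in one project; the binding name, namespace, and service account name are illustrative placeholders, not values prescribed by this release note:
$ oc create rolebinding thanos-post-queries \
    --clusterrole=pod-metrics-reader \
    --serviceaccount=my-namespace:metrics-reader-sa \
    -n my-namespace
Creating a cluster role binding instead, with oc create clusterrolebinding, grants the same access cluster-wide.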
- Previously, an invalid config map configuration for core platform monitoring, user workload monitoring, or both caused the Cluster Monitoring Operator (CMO) to report an InvalidConfiguration error. With this release, if only the user workload monitoring configuration is invalid, CMO reports UserWorkloadInvalidConfiguration, making it clear where the issue is located. (OCPBUGS-33863)
- Previously, telemeter-client containers showed a TelemeterClientFailures warning in multiple clusters. With this release, a runbook is added for the TelemeterClientFailures alert that explains the cause of the alert and provides resolution steps. (OCPBUGS-33285)
- Previously, AlertmanagerConfig objects with invalid child routes generated invalid Alertmanager configuration, leading to Alertmanager disruption. With this release, the Prometheus Operator rejects such AlertmanagerConfig objects, and users receive a warning about the invalid child routes in logs. (OCPBUGS-30122)
- Previously, the config-reloader for Prometheus for user-defined projects would fail if unset environment variables were used in the ServiceMonitor configuration, which resulted in Prometheus pods failing. With this release, the reloader no longer fails when an unset environment variable is encountered. Instead, unset environment variables are left as they are, while set environment variables are expanded as usual. Any expansion errors, suppressed or otherwise, can be tracked through the reloader_config_environment_variable_expansion_errors metric. (OCPBUGS-23252)
Networking
- Previously, enabling encapsulated security payload (ESP) hardware offload when using IPsec on Open vSwitch attached interfaces broke connectivity in your cluster. To resolve this issue, OpenShift Container Platform now disables ESP hardware offload on Open vSwitch attached interfaces by default. (OCPBUGS-42987)
- Previously, if you deleted the default sriovOperatorConfig custom resource (CR), you could not recreate the default sriovOperatorConfig CR because the ValidatingWebhookConfiguration was not also deleted. With this release, the Single Root I/O Virtualization (SR-IOV) Network Operator removes validating webhooks when you delete the sriovOperatorConfig CR, so that you can create a new sriovOperatorConfig CR. (OCPBUGS-41897)
- Previously, if you set custom annotations in a custom resource (CR), the SR-IOV Operator would override all the default annotations in the SriovNetwork CR. With this release, when you define custom annotations in a CR, the SR-IOV Operator does not override the default annotations. (OCPBUGS-41352)
- Previously, bonds that were configured in active-backup mode would have IPsec Encapsulating Security Payload (ESP) offload active even if underlying links did not support ESP offload. This caused IPsec associations to fail. With this release, ESP offload is disabled for bonds so that IPsec associations succeed. (OCPBUGS-39438)
- Previously, the Machine Config Operator (MCO) vSphere resolve-prepender script used systemd directives that were incompatible with old boot image versions used in OpenShift Container Platform 4. With this release, nodes can scale by using boot image versions 4.13 and later, through manual intervention, or by upgrading to a release that includes this fix. (OCPBUGS-38012)
- Previously, the Ingress Controller status incorrectly displayed as Degraded=False because of a migration time issue with the CanaryRepetitiveFailures condition. With this release, the Ingress Controller status is correctly marked as Degraded=True for the appropriate length of time that the CanaryRepetitiveFailures condition exists. (OCPBUGS-37491)
- Previously, when a pod was running on a node that had an egress IPv6 address assigned, the pod could not communicate with the Kubernetes service in a dual-stack cluster, because traffic for the IP family that the egress IP did not apply to was dropped. With this release, only the source network address translation (SNAT) for the IP family that the egress IPs apply to is deleted, eliminating the risk of traffic being dropped. (OCPBUGS-37193)
- Previously, the Single-Root I/O Virtualization (SR-IOV) Operator did not expire the acquired lease during the Operator’s shutdown operation. This impacted a new instance of the Operator, because the new instance had to wait for the lease to expire before the new instance was operational. With this release, an update to the Operator shutdown logic ensures that the Operator expires the lease when the Operator is shutting down. (OCPBUGS-23795)
- Previously, for an Ingress resource with an IngressWithoutClassName alert, the Ingress Controller did not delete the alert along with deletion of the resource. The alert continued to show on the OpenShift Container Platform web console. With this release, the Ingress Controller resets the openshift_ingress_to_route_controller_ingress_without_class_name metric to 0 before the controller deletes the Ingress resource, so that the alert is deleted and no longer shows on the web console. (OCPBUGS-13181)
- Previously, when either the clusterNetwork or serviceNetwork IP address pools overlapped with the default transit_switch_subnet value of 100.88.0.0/16, the custom value of transit_switch_subnet did not take effect and ovnkube-node pods crashed after the live migration operation. With this release, the custom value of transit_switch_subnet is passed to ovnkube-node pods, so that this issue no longer occurs. (OCPBUGS-43740)
- Previously, a change in OVN-Kubernetes that standardized the appProtocol value h2c to kubernetes.io/h2c was not recognized by OpenShift router. Consequently, specifying appProtocol: kubernetes.io/h2c on a service did not cause OpenShift router to use clear-text HTTP/2 to connect to the service endpoints. With this release, OpenShift router was changed to handle appProtocol: kubernetes.io/h2c the same way as it handles appProtocol: h2c, resolving the issue. (OCPBUGS-42972)
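For reference, the following is a minimal sketch of a Service that requests clear-text HTTP/2 to its endpoints; the Service name, namespace, selector, and port numbers are illustrative placeholders:
$ oc apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: example-h2c-backend
  namespace: example-app
spec:
  selector:
    app: example-h2c-backend
  ports:
  - name: http2
    port: 80
    targetPort: 8080
    appProtocol: kubernetes.io/h2c
EOF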
- Previously, instructions that guided the user after changing the LoadBalancer parameter from External to Internal were missing for IBM Power Virtual Server, Alibaba Cloud, and Red Hat OpenStack Platform (RHOSP). This caused the Ingress Controller to be put in a permanent Progressing state. With this release, the message The IngressController scope was changed from Internal to External is followed by To effectuate this change, you must delete the service, resolving the permanent Progressing state. (OCPBUGS-39151)
- Previously, no event was logged when an error occurred during a failed ingress-to-route conversion. With this update, the error appears in the event logs. (OCPBUGS-29354)
- Previously, an ovnkube-node pod on a node that uses cgroup v1 failed because it could not find the kubelet cgroup path. With this release, an ovnkube-node pod no longer fails if the node uses cgroup v1. However, the OVN-Kubernetes network plugin outputs a UDNKubeletProbesNotSupported event notification. If you enable cgroup v2 for each node, OVN-Kubernetes no longer outputs the event notification. (OCPBUGS-50513)
- Previously, after you finished the live migration of a kubevirt virtual machine (VM) that uses the Layer 2 topology, the old node still transmitted IPv4 egress traffic to the VM. With this release, the OVN-Kubernetes plugin updates the gateway MAC address for a kubevirt VM during the live migration process so that this issue no longer occurs. (OCPBUGS-49857)
- Previously, the DNS-based egress firewall incorrectly prevented creation of a firewall rule that contained a DNS name with uppercase characters. With this release, a fix to the egress firewall no longer prevents creation of a firewall rule that contains a DNS name with uppercase characters. (OCPBUGS-49589)
- Previously, when you attempted to use the Cluster Network Operator (CNO) to upgrade a cluster with existing localnet networks, ovnkube-control-plane pods failed to run. This happened because the ovnkube-cluster-manager container could not process an OVN-Kubernetes localnet topology network that did not have subnets defined. With this release, a fix ensures that the ovnkube-cluster-manager container can process an OVN-Kubernetes localnet topology network that does not have subnets defined. (OCPBUGS-44195)
- Previously, the SR-IOV Network Operator could not retrieve metadata when cloud-native network function (CNF) workers were deployed with a configuration drive on Red Hat OpenStack Platform (RHOSP). A configuration drive is often unmounted after a boot operation on immutable systems, so the Operator now dynamically mounts a configuration drive when required. The Operator can retrieve the metadata and then unmount the configuration drive, which means that you no longer need to manually mount and unmount the configuration drive. (OCPBUGS-41829)
- Previously, when you switched your cluster to use a different load balancer, the Ingress Operator did not remove the values from the classicLoadBalancer and networkLoadBalancer parameters in the IngressController custom resource (CR) status. This situation caused the status of the CR to report wrong information from the classicLoadBalancer and networkLoadBalancer parameters. With this release, after you switch your cluster to use a different load balancer, the Ingress Operator removes values from these parameters so that the CR reports a more accurate and less confusing status message. (OCPBUGS-38217)
- Previously, a duplicate feature gate, ExternalRouteCertificate, was added to the FeatureGate CR. With this release, ExternalRouteCertificate is removed because an OpenShift Container Platform cluster does not use this feature gate. (OCPBUGS-36479)
- Previously, after a user created a route, the user needed both create and update permissions on the routes/custom-host sub-resource to edit the .spec.tls.externalCertificate field of a route. With this release, this permission requirement has been fixed, so that a user only needs the create permission to edit the .spec.tls.externalCertificate field of a route. The update permission is now marked as an optional permission. (OCPBUGS-34373)
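As an illustration, the following is a minimal sketch of a namespaced Role that grants only the create permission described above; the Role name and namespace are illustrative placeholders, and you still need a role binding to attach it to a user or service account:
$ oc apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: route-external-cert-editor
  namespace: example-app
rules:
- apiGroups:
  - route.openshift.io
  resources:
  - routes/custom-host
  verbs:
  - create
EOF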
Node
- Previously, the cadvisor code that collected and reported container network metrics contained a bug that caused inaccurate results. With this release, the container network metrics are correctly reported. (OCPBUGS-38515)
Node Tuning Operator (NTO)
- Previously, CPU masks for interrupt and network handling CPU affinity were computed incorrectly on machines with more than 256 CPUs. This issue prevented proper CPU isolation and caused systemd unit failures during internal node configuration. This fix ensures accurate CPU affinity calculations, enabling correct CPU isolation on machines with more than 256 CPUs. (OCPBUGS-36431)
- Previously, entering an invalid value in any cpuset field under spec.cpu in the PerformanceProfile resource caused the webhook validation to crash. With this release, improved error handling for the PerformanceProfile validation webhook ensures that invalid values for these fields return an informative error. (OCPBUGS-45616)
- Previously, users could enter an invalid string for any CPU set in the performance profile, resulting in a broken cluster. With this release, the fix ensures that only valid strings can be entered, eliminating the risk of cluster breakage. (OCPBUGS-47678)
- Previously, configuring the Node Tuning Operator (NTO) by using PerformanceProfiles created the ocp-tuned-one-shot systemd service, which ran before kubelet and blocked its execution. The systemd service invoked Podman, which used the NTO image. When the NTO image was not present, Podman tried to fetch the image. With this release, support for cluster-wide proxy environment variables defined in /etc/mco/proxy.env is added. This support allows Podman to pull the NTO image in environments that need to use an http(s) proxy for out-of-cluster connections. (OCPBUGS-39005)
Observability
- Previously, a namespace was passed to a full cluster query on the alerts graph, and this caused the tenancy API path to be used. The API lacked permissions to retrieve data, so no data was shown on the alerts graph. With this release, the namespace is no longer passed to a full cluster query for an alert graph. A non-tenancy API path, which has the correct permissions to retrieve data, is now used, so data is now available on an alert graph. (OCPBUGS-46371)
- Previously, bounds were based on the first bar in a bar chart. If a bar was larger in size than the first bar, the bar would extend beyond the bar chart boundary. With this release, the bound for a bar chart is based on the largest bar, so no bars extend outside the boundary of a bar chart. (OCPBUGS-46059)
- Previously, a Red Hat Advanced Cluster Management (RHACM) Alerting UI refactor caused an isEmpty check to go missing on the Observe → Metrics menu. The missing check inverted the behavior of the Show all Series and Hide all Series states. This release re-adds the isEmpty check so that Show all Series is visible when series are hidden and Hide all Series is visible when the series are shown. (OCPBUGS-46047)
- Previously, on the Observe → Alerting → Silences tab, the DateTime component changed the ordering of an event and its value. Because of this issue, you could not edit the until parameter for a silence in either the Developer or the Administrator perspective. With this release, a fix to the DateTime component means that you can now edit the until parameter for a silence. (OCPBUGS-46021)
- Previously, when using the Developer perspective with custom editors, pressing the n key caused the Namespace menu to open unexpectedly. The issue happened because the keyboard shortcut did not account for custom editors. With this release, the Namespace menu accounts for custom editors and does not open when you type the n key. (OCPBUGS-38775)
- Previously, on the Observe → Alerting → Silences tab, the creator field was not autopopulated and was not designated as mandatory. This issue happened because the API leaves the field empty from OpenShift Container Platform 4.15 onwards. With this update, the field is marked as mandatory and populated with the current user for correct validation. (OCPBUGS-35048)
oc-mirror
- Previously, when you used the oc-mirror --v2 delete --generate command, the contents of the working-dir/cluster-resources directory were cleared. With this fix, the working-dir/cluster-resources directory is not cleaned when the delete feature is used. (OCPBUGS-48430)
- Previously, release images were signed by using a SHA-1 key. On RHEL 9 FIPS STIG-compliant machines, verification of release signatures by using the old SHA-1 key failed due to security restrictions on weak keys. With this release, release images are signed by using a new SHA-256 trusted key so that release signature verification no longer fails. (OCPBUGS-48314)
- Previously, when you used the --force-cache-delete flag to delete images from a remote registry, the deletion process did not work as expected. With this update, the issue has been resolved, ensuring that images are deleted properly when the flag is used. (OCPBUGS-47690)
- Previously, oc-mirror plugin v2 could not delete the graph image when the mirroring used a partially disconnected mirroring workflow (mirror-to-mirror). With this update, graph images can now be deleted regardless of the mirroring workflow used. (OCPBUGS-46145)
- Previously, if the same image was used by multiple OpenShift Container Platform release components, oc-mirror plugin v2 attempted to delete the image multiple times, but failed after the first attempt. This issue has been resolved by ensuring oc-mirror plugin v2 generates a list of unique images during the delete --generate phase. (OCPBUGS-45299)
- Previously, oci catalogs on disk were not mirrored correctly in the oc-mirror plugin v2. With this update, oci catalogs are now successfully mirrored. (OCPBUGS-44225)
- Previously, if you reran the oc-mirror command, the rebuild of the oci catalog failed and an error was generated. With this release, if you rerun the oc-mirror command, the workspace file is deleted so that the failed catalog issue does not happen. (OCPBUGS-45171)
- Previously, if you ran the oc adm node-image create command on the first attempt, sometimes an image can't be pulled error message was generated. With this release, a retry mechanism addresses temporary failures when pulling the image from the release payload. (OCPBUGS-44388)
- Previously, duplicate entries could appear in the signature ConfigMap YAML and JSON files created in the clusterresource object, leading to issues when applying them to the cluster. This update ensures that the generated files do not contain duplicates. (OCPBUGS-42428)
- Previously, the release signature ConfigMap for oc-mirror plugin v2 was incorrectly stored in an archived TAR file instead of in the cluster-resources folder. This caused mirror2disk to fail. With this release, the release signature ConfigMap for oc-mirror plugin v2, in JSON or YAML format compatible with oc-mirror plugin v1, is now stored in the cluster-resources folder. (OCPBUGS-38343) and (OCPBUGS-38233)
- Previously, using an invalid log-level flag value caused oc-mirror plugin v2 to panic. This update ensures that oc-mirror plugin v2 handles invalid log levels gracefully. Additionally, the loglevel flag has been renamed to log-level to align with tools like Podman for the convenience of the user. (OCPBUGS-37740)
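As an illustration only, a hypothetical mirror-to-disk invocation with the renamed flag might look like the following; the ImageSetConfiguration file name, destination directory, and the debug level are placeholders, and the available log levels can vary by oc-mirror version:
$ oc mirror --v2 -c imageset-config.yaml --log-level debug file://mirror-workspace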
OpenShift CLI (oc)
- Previously, the oc adm node-image create --pxe command did not generate only the Preboot Execution Environment (PXE) artifacts. Instead, the command created the PXE artifacts with other artifacts from a node-joiner pod and stored them all in the wrong subdirectory. Additionally, the PXE artifacts were incorrectly prefixed with agent instead of node. With this release, generated PXE artifacts are stored in the correct directory and receive the correct prefix. (OCPBUGS-46449)
- Previously, requests to the deploymentconfig/scale subresource would fail when there was an admission webhook matching the request. With this release, the issue is resolved and requests to the deploymentconfig/scale subresource succeed. (OCPBUGS-41136)
Operator Lifecycle Manager (OLM)
- Previously, concurrent reconciliation of the same namespace in Operator Lifecycle Manager (OLM) Classic led to ConstraintsNotSatisfiable errors on subscriptions. This update resolves the issue. (OCPBUGS-48660)
- Previously, excessive catalog source snapshots caused severe performance regressions. This update fixes the issue. (OCPBUGS-48644)
- Previously, when the kubelet terminated catalog registry pods with the TerminationByKubelet message, the registry pods were not recreated by the catalog Operator. This update fixes the issue. (OCPBUGS-46474)
- Previously, OLM (Classic) failed to upgrade Operator cluster service versions (CSVs) due to a TLS validation error. This update fixes the issue. (OCPBUGS-43581)
- Previously, service account tokens for Operator groups failed to generate automatically in Operator Lifecycle Manager (OLM) Classic. This update fixes the issue. (OCPBUGS-42360)
- Previously, when Operator Lifecycle Manager (OLM) v1 validated custom resource definition (CRD) upgrades, the message output when detecting changed default values was rendered in bytes instead of human-readable language. With this update, related messages show human-readable values. (OCPBUGS-41726)
- Previously, the status update function did not return an error when a connection error occurred in the Catalog Operator. As a result, the Operator might crash because the IP address returned a nil status. This update resolves the issue so that an error message is returned and the Operator no longer crashes. (OCPBUGS-37637)
- Previously, catalog source registry pods did not recover from cluster node failures. This update fixes the issue. (OCPBUGS-36661)
- Previously, Operators with many custom resources (CRs) exceeded API server timeouts. As a result, the install plan for the Operator got stuck in a pending state. This update fixes the issue by adding a paged view for listing CRs deployed on the cluster. (OCPBUGS-35358)
Performance Addon Operator
- Previously, the Performance Profile Creator (PPC) failed to build a performance profile for compute nodes that had different core ID numbering (core per socket) for their logical processors and the nodes existed under the same node pool. For example, the PPC failed in a situation for two compute nodes that have logical processors 2 and 18, where one node groups them as core ID 2 and the other node groups them as core ID 9. With this release, PPC no longer fails to create the performance profile because PPC can now build a performance profile for a cluster that has compute nodes that each have different core ID numbering for their logical processors. The PPC now outputs a warning message that indicates to use the generated performance profile with caution, because different core ID numbering might impact system optimization and isolated management of tasks. (OCPBUGS-45903)
- Previously, if you specified a long string of isolated CPUs in a performance profile, such as 0,1,2,…,512, the tuned, Machine Config Operator, and rpm-ostree components failed to process the string as expected. As a consequence, after you applied the performance profile, the expected kernel arguments were missing. The system failed silently with no reported errors. With this release, the string for isolated CPUs in a performance profile is converted to sequential ranges, such as 0-512. As a result, the kernel arguments are applied as expected in most scenarios. (OCPBUGS-45472)
  Note: The issue might still occur with some combinations of input for isolated CPUs in a performance profile, such as a long list of odd numbers 1,3,5,…,511.
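As a point of reference, CPU sets in a performance profile can be written as ranges directly, which avoids the long comma-separated form described above. The following is a minimal sketch only; the profile name, CPU ranges, and node selector are illustrative placeholders and must match your hardware and node labels:
$ oc apply -f - <<EOF
apiVersion: performance.openshift.io/v2
kind: PerformanceProfile
metadata:
  name: example-performanceprofile
spec:
  cpu:
    isolated: "4-511"
    reserved: "0-3"
  nodeSelector:
    node-role.kubernetes.io/worker-cnf: ""
EOF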
Red Hat Enterprise Linux CoreOS (RHCOS)
- Previously, the kdump initramfs would stop responding when trying to open a local encrypted disk. This occurred even when the kdump destination was a remote machine that did not need access to the local disk. With this release, the issue is fixed and the kdump initramfs successfully opens a local encrypted disk. (OCPBUGS-43040)
- Previously, explicitly disabling FIPS mode with fips=0 caused some systemd services that assume FIPS mode was requested to run and consequently fail. This issue resulted in RHCOS failing to boot. With this release, the relevant systemd services only run if FIPS mode is enabled by specifying fips=1. As a result, RHCOS now correctly boots without FIPS mode enabled when fips=0 is specified. (OCPBUGS-39536)
Scalability and performance
- Previously, you could configure the NUMA Resources Operator to map a nodeGroup to more than one MachineConfigPool. This implementation is contrary to the intended design of the Operator, which assumed a one-to-one mapping between a nodeGroup and a MachineConfigPool. With this release, if a nodeGroup maps to more than one MachineConfigPool, the Operator accepts the configuration, but the Operator state moves to Degraded. To retain the previous behavior, you can apply the config.node.openshift-kni.io/multiple-pools-per-tree: enabled annotation to the NUMA Resources Operator. However, the ability to assign a nodeGroup to more than one MachineConfigPool will be removed in a future release. (OCPBUGS-42523)
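The following sketch shows one way to apply that annotation, assuming the Operator's custom resource instance is named numaresourcesoperator; verify the resource type and instance name on your cluster before running it:
$ oc annotate numaresourcesoperator/numaresourcesoperator \
    config.node.openshift-kni.io/multiple-pools-per-tree=enabled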
Storage
- Previously, Portworx plugin Container Storage Interface (CSI) migration failed without the inclusion of an upstream patch. With this release, which updates Kubernetes to version 1.31, the Portworx plugin CSI translation copies the secret name and namespace, so that an upstream patch is not required. (OCPBUGS-49437)
- Previously, the VSphere Problem Detector Operator waited up to 24 hours to reflect a change in the clustercsidrivers.managementState parameter from Managed to Removed for a VMware vSphere cluster. With this release, the VSphere Problem Detector Operator reflects this state change in about 1 hour. (OCPBUGS-39358)
- Previously, the Azure File Driver attempted to reuse existing storage accounts. With this release, the Azure File Driver creates storage accounts during dynamic provisioning. This means that updated clusters using newly-created Persistent Volumes (PVs) also use a new storage account. PVs that were previously provisioned continue using the same storage account used before the cluster update. (OCPBUGS-38922)
- Previously, the configuration loader logged YAML unmarshall errors even when parsing the configuration as INI succeeded. With this release, the unmarshall errors are no longer logged when the INI parsing succeeds. (OCPBUGS-38368)
- Previously, the Storage Operator counted an incorrect number of control plane nodes that existed in a cluster. This count is needed for the Operator to determine the number of replicas for controllers. With this release, the Storage Operator counts the correct number of control plane nodes, leading to a more accurate count of controller replicas. (OCPBUGS-36233)
- Previously, the manila-csi-driver and node registrar pods had missing health checks because of a configuration issue. With this release, the health checks are added to both of these resources. (OCPBUGS-29240)
1.7. Technology Preview features status
Some features in this release are currently in Technology Preview. These experimental features are not intended for production use. Note the following scope of support on the Red Hat Customer Portal for these features:
Technology Preview Features Support Scope
In the following tables, features are marked with the following statuses:
- Not Available
- Technology Preview
- General Availability
- Deprecated
- Removed
Authentication and authorization Technology Preview features
Feature | 4.16 | 4.17 | 4.18 |
---|---|---|---|
Pod security admission restricted enforcement | Technology Preview | Technology Preview | Technology Preview |
Edge computing Technology Preview features
Feature | 4.16 | 4.17 | 4.18 |
---|---|---|---|
Accelerated provisioning of GitOps ZTP | Technology Preview | Technology Preview | Technology Preview |
Enabling disk encryption with TPM and PCR protection | Not Available | Technology Preview | Technology Preview |
Installation Technology Preview features
Feature | 4.16 | 4.17 | 4.18 |
---|---|---|---|
Adding kernel modules to nodes with kvc | Technology Preview | Technology Preview | Technology Preview |
Enabling NIC partitioning for SR-IOV devices | Technology Preview | General Availability | General Availability |
User-defined labels and tags for Google Cloud Platform (GCP) | Technology Preview | General Availability | General Availability |
Installing a cluster on Alibaba Cloud by using Assisted Installer | Technology Preview | Technology Preview | Technology Preview |
Mount shared entitlements in BuildConfigs in RHEL | Technology Preview | Technology Preview | Technology Preview |
OpenShift Container Platform on Oracle® Cloud Infrastructure (OCI) | General Availability | General Availability | General Availability |
Selectable Cluster Inventory | Technology Preview | Technology Preview | Technology Preview |
Installing a cluster on GCP using the Cluster API implementation | Technology Preview | General Availability | General Availability |
OpenShift Container Platform on Oracle Compute Cloud@Customer (C3) | Not Available | Not Available | General Availability |
OpenShift Container Platform on Oracle Private Cloud Appliance (PCA) | Not Available | Not Available | General Availability |
Installing a cluster on VMware vSphere with multiple network interface controllers | Not Available | Not Available | Technology Preview |
Machine Config Operator Technology Preview features
Feature | 4.16 | 4.17 | 4.18 |
---|---|---|---|
Improved MCO state reporting ( | Technology Preview | Technology Preview | Technology Preview |
On-cluster RHCOS image layering | Technology Preview | Technology Preview | Technology Preview |
Node disruption policies | Technology Preview | General Availability | General Availability |
Updating boot images for GCP clusters | Technology Preview | General Availability | General Availability |
Updating boot images for AWS clusters | Technology Preview | Technology Preview | General Availability |
Machine management Technology Preview features
Feature | 4.16 | 4.17 | 4.18 |
---|---|---|---|
Managing machines with the Cluster API for Amazon Web Services | Technology Preview | Technology Preview | Technology Preview |
Managing machines with the Cluster API for Google Cloud Platform | Technology Preview | Technology Preview | Technology Preview |
Managing machines with the Cluster API for VMware vSphere | Technology Preview | Technology Preview | Technology Preview |
Cloud controller manager for IBM Power® Virtual Server | Technology Preview | Technology Preview | Technology Preview |
Defining a vSphere failure domain for a control plane machine set | General Availability | General Availability | General Availability |
Cloud controller manager for Alibaba Cloud | Removed | Removed | Removed |
Adding multiple subnets to an existing VMware vSphere cluster by using compute machine sets | Not Available | Not Available | Technology Preview |
Monitoring Technology Preview features
Feature | 4.16 | 4.17 | 4.18 |
---|---|---|---|
Metrics Collection Profiles | Technology Preview | Technology Preview | Technology Preview |
Web console Technology Preview features
Feature | 4.16 | 4.17 | 4.18 |
---|---|---|---|
Red Hat OpenShift Lightspeed in the OpenShift Container Platform web console | Technology Preview | Technology Preview | Technology Preview |
Multi-Architecture Technology Preview features
Feature | 4.16 | 4.17 | 4.18 |
---|---|---|---|
| Technology Preview | Technology Preview | Technology Preview |
| Technology Preview | Technology Preview | Technology Preview |
| Technology Preview | Technology Preview | Technology Preview |
Multiarch Tuning Operator | General Availability | General Availability | General Availability |
Support for configuring the image stream import mode behavior | Not Available | Not Available | Technology Preview |
Networking Technology Preview features
Feature | 4.16 | 4.17 | 4.18 |
---|---|---|---|
eBPF manager Operator | N/A | Technology Preview | Technology Preview |
Advertise using L2 mode the MetalLB service from a subset of nodes, using a specific pool of IP addresses | Technology Preview | Technology Preview | Technology Preview |
Updating the interface-specific safe sysctls list | Technology Preview | Technology Preview | Technology Preview |
Egress service custom resource | Technology Preview | Technology Preview | Technology Preview |
VRF specification in | Technology Preview | Technology Preview | Technology Preview |
VRF specification in | Technology Preview | Technology Preview | Technology Preview |
Host network settings for SR-IOV VFs | Technology Preview | General Availability | General Availability |
Integration of MetalLB and FRR-K8s | Technology Preview | General Availability | General Availability |
Automatic leap seconds handling for PTP grandmaster clocks | Not Available | General Availability | General Availability |
PTP events REST API v2 | Not Available | General Availability | General Availability |
Customized | Technology Preview | Technology Preview | General Availability |
Live migration to OVN-Kubernetes from OpenShift SDN | Not Available | General Availability | Not Available |
User defined network segmentation | Not Available | Technology Preview | General Availability |
Dynamic configuration manager | Not Available | Not Available | Technology Preview |
SR-IOV Network Operator support for Intel C741 Emmitsburg Chipset | Not Available | Not Available | Technology Preview |
Node Technology Preview features
Feature | 4.16 | 4.17 | 4.18 |
---|---|---|---|
| Technology Preview | Technology Preview | Technology Preview |
sigstore support | Not Available | Technology Preview | Technology Preview |
OpenShift CLI (oc) Technology Preview features
Feature | 4.16 | 4.17 | 4.18 |
---|---|---|---|
oc-mirror plugin v2 | Technology Preview | Technology Preview | General Availability |
oc-mirror plugin v2 enclave support | Technology Preview | Technology Preview | General Availability |
oc-mirror plugin v2 delete functionality | Technology Preview | Technology Preview | General Availability |
Extensions Technology Preview features
Feature | 4.16 | 4.17 | 4.18 |
---|---|---|---|
Operator Lifecycle Manager (OLM) v1 | Technology Preview | Technology Preview | General Availability |
OLM v1 runtime validation of container images using sigstore signatures | Not Available | Not Available | Technology Preview |
Operator lifecycle and development Technology Preview features
Feature | 4.16 | 4.17 | 4.18 |
---|---|---|---|
Operator Lifecycle Manager (OLM) v1 | Technology Preview | Technology Preview | General Availability |
Scaffolding tools for Hybrid Helm-based Operator projects | Deprecated | Deprecated | Removed |
Scaffolding tools for Java-based Operator projects | Deprecated | Deprecated | Removed |
Red Hat OpenStack Platform (RHOSP) Technology Preview features
Feature | 4.16 | 4.17 | 4.18 |
---|---|---|---|
RHOSP integration into the Cluster CAPI Operator | Technology Preview | Technology Preview | Technology Preview |
Control Plane with | Technology Preview | General Availability | General Availability |
Scalability and performance Technology Preview features
Feature | 4.16 | 4.17 | 4.18 |
---|---|---|---|
factory-precaching-cli tool | Technology Preview | Technology Preview | Technology Preview |
Hyperthreading-aware CPU manager policy | Technology Preview | Technology Preview | Technology Preview |
Mount namespace encapsulation | Technology Preview | Technology Preview | Technology Preview |
Node Observability Operator | Technology Preview | Technology Preview | Technology Preview |
Increasing the etcd database size | Technology Preview | Technology Preview | Technology Preview |
Using RHACM | Technology Preview | Technology Preview | Technology Preview |
Pinned Image Sets | Technology Preview | Technology Preview | Technology Preview |
Storage Technology Preview features
Feature | 4.16 | 4.17 | 4.18 |
---|---|---|---|
AWS EFS storage CSI usage metrics | Not Available | General Availability | General Availability |
Automatic device discovery and provisioning with Local Storage Operator | Technology Preview | Technology Preview | Technology Preview |
Azure File CSI snapshot support | Not Available | Technology Preview | Technology Preview |
Read Write Once Pod access mode | General Availability | General Availability | General Availability |
Shared Resources CSI Driver in OpenShift Builds | Technology Preview | Technology Preview | Technology Preview |
Secrets Store CSI Driver Operator | Technology Preview | Technology Preview | General Availability |
CIFS/SMB CSI Driver Operator | Technology Preview | Technology Preview | General Availability |
VMware vSphere multiple vCenter support | Not Available | Technology Preview | General Availability |
Disabling/enabling storage on vSphere | Not Available | Technology Preview | Technology Preview |
RWX/RWO SELinux Mount | Not Available | Developer Preview | Developer Preview |
Migrating CNS Volumes Between Datastores | Not Available | Developer Preview | Developer Preview |
CSI volume group snapshots | Not Available | Not Available | Technology Preview |
GCP PD supports C3/N4 instance types and hyperdisk-balanced disks | Not Available | Not Available | General Availability |
GCP Filestore supports Workload Identity | Not Available | General Availability | General Availability |
OpenStack Manila support for CSI resize | Not Available | Not Available | General Availability |
1.8. Known issues
- Previously, when you attempted to set the policy for a Google Cloud Platform (GCP) service account, the API reported a 400: Bad Request validation error. When you create a service account, it might take up to 60 seconds for the account to become active, and this causes the validation error. If this error occurs, retry the operation with a true exponential backoff that lasts at least 60 seconds. (OCPBUGS-48187)
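A generic retry wrapper along these lines can serve as the backoff; the gcloud subcommand, role, and SA_EMAIL variable shown here are placeholders for whatever policy call failed in your workflow, not values prescribed by this known issue:
$ for delay in 2 4 8 16 32 64; do
    # retry the policy call until it succeeds, doubling the wait each time (total > 60 seconds)
    gcloud iam service-accounts add-iam-policy-binding "${SA_EMAIL}" \
      --member="serviceAccount:${SA_EMAIL}" \
      --role="roles/iam.serviceAccountUser" && break
    sleep "${delay}"
  done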
- An installation can succeed when installing a cluster on a Google Cloud Platform shared virtual private network (VPC) by using the minimum permissions and without specifying the controlPlane.platform.gcp.serviceAccount field in the install-config.yaml file. Firewall rules in Kubernetes (K8s) are created in the shared VPC, but destroying the cluster does not delete these firewall rules because the host project lacks the required permissions. (OCPBUGS-38689)
- oc-mirror plugin v2 currently returns an exit status of 0, meaning "success", even when mirroring errors occur. As a result, do not rely on the exit status in automated workflows. Until this issue is resolved, manually check the mirroring_errors_XXX_XXX.txt file generated by oc-mirror for errors. (OCPBUGS-49880)
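In an automation pipeline, one hedged way to approximate that manual check is to search for an errors file after the oc-mirror run and fail the job if one exists; the search path below is a placeholder because the exact location of the file can vary by workflow:
$ if find . -name 'mirroring_errors_*.txt' | grep -q .; then
    echo "oc-mirror reported mirroring errors" >&2
    exit 1
  fi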
- The DNF package manager included in Red Hat Enterprise Linux CoreOS (RHCOS) images cannot be used at runtime, because DNF relies on additional packages to access entitled nodes in a cluster that are under a Red Hat subscription. As a workaround, use the rpm-ostree command instead. (OCPBUGS-35247)
- A regression in the behavior of libreswan caused some nodes with IPsec enabled to lose communication with pods on other nodes in the same cluster. To resolve this issue, consider disabling IPsec for your cluster. (OCPBUGS-43713)
- There is a known issue in OpenShift Container Platform version 4.18 that prevents configuring multiple subnets in the failure domain of a Nutanix cluster during installation. There is no workaround for this issue. (OCPBUGS-49885)
- The following known issues exist for configuring multiple subnets for an existing Nutanix cluster by using a control plane machine set:
  - Adding subnets above the existing subnet in the subnets stanza causes a control plane node to become stuck in the Deleting state. As a workaround, only add subnets below the existing subnet in the subnets stanza.
  - Sometimes, after adding a subnet, the updated control plane machines appear in the Nutanix console but the OpenShift Container Platform cluster is unreachable. There is no workaround for this issue.
  These issues occur on clusters that use a control plane machine set to configure subnets regardless of whether subnets are specified in a failure domain or the provider specification. (OCPBUGS-50904)
- There is a known issue with RHEL 8 worker nodes that use cgroupv1 Linux Control Groups (cgroup). The following is an example of the error message displayed for impacted nodes:
  UDN are not supported on the node ip-10-0-51-120.us-east-2.compute.internal as it uses cgroup v1.
  As a workaround, users should migrate worker nodes from cgroupv1 to cgroupv2. (OCPBUGS-49933)
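Cluster-wide cgroup mode is controlled through the nodes.config.openshift.io/cluster resource. As a hedged sketch, the following patch requests cgroup v2, which the Machine Config Operator then rolls out with node reboots; verify this against the cgroup configuration documentation for your version before applying it:
$ oc patch nodes.config.openshift.io cluster --type merge -p '{"spec":{"cgroupMode":"v2"}}'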
- The current PTP grandmaster clock (T-GM) implementation has a single National Marine Electronics Association (NMEA) sentence generator sourced from the GNSS without a backup NMEA sentence generator. If NMEA sentences are lost before reaching the e810 NIC, the T-GM cannot synchronize the devices in the network synchronization chain and the PTP Operator reports an error. A proposed fix is to report a FREERUN event when the NMEA string is lost. Until this limitation is addressed, T-GM does not support PTP clock holdover state. (OCPBUGS-19838)
- There is a known issue with a Layer 2 network topology on clusters running on Google Cloud Platform (GCP). At this time, the egress IP addresses used in the Layer 2 network that is created by a UserDefinedNetwork (UDN) resource use the wrong source IP address. Consequently, UDN is not supported on Layer 2 on GCP. Currently, there is no fix for this issue. (OCPBUGS-48301)
- There is a known issue with user-defined networks (UDN) that causes OVN-Kubernetes to delete any routing table ID equal to or higher than 1000 that it does not manage. Consequently, any Virtual Routing and Forwarding (VRF) instance created outside OVN-Kubernetes is deleted. This issue impacts users who have created user-defined VRFs with a table ID greater than or equal to 1000. As a workaround, users must change their VRFs to a table ID lower than 1000, as these IDs are reserved for OpenShift Container Platform. (OCPBUGS-50855)
- If you attempted to log in to an OpenShift Container Platform 4.17 server by using the OpenShift CLI (oc) that you installed as part of OpenShift Container Platform 4.18, you would see the following warning message in your terminal:
  Warning: unknown field "metadata" You don't have any projects. You can try to create a new project, by running oc new-project <projectname>
  This warning message is a known issue but does not indicate any functionality issues with OpenShift Container Platform. You can safely ignore the warning message and continue to use OpenShift Container Platform as intended. (OCPBUGS-44833)
- There is a known issue in OpenShift Container Platform 4.18 that causes the cluster's masquerade subnet to be set to 169.254.169.0/29 if the ovnkube-node daemon set is deleted. When the masquerade subnet is set to 169.254.169.0/29, UserDefinedNetwork custom resources (CRs) cannot be created.
  Note:
  - If your masquerade subnet has been configured at Day 2 by making changes to the network.operator CR, it will not be reverted to 169.254.169.0/29.
  - If a cluster has been upgraded from OpenShift Container Platform 4.16, the masquerade subnet remains 169.254.169.0/29 for backward compatibility. The masquerade subnet should be changed to a subnet with more IPs, for example, 169.254.0.0/17, to use the user-defined networks feature.
  This known issue occurs after performing one of the following actions:
  Action | Consequence |
  ---|---|
  You have restarted the ovnkube-node DaemonSet object. | The masquerade subnet is set to 169.254.169.0/29, which does not support UserDefinedNetwork CRs. |
  You have deleted the ovnkube-node DaemonSet object. | The masquerade subnet is set to 169.254.169.0/29, which does not support UserDefinedNetwork CRs. Additionally, ovnkube-node pods crash and remain in a CrashLoopBackOff state. |
  As a temporary workaround, you can delete the UserDefinedNetwork CR and then restart all ovnkube-node pods by running the following command:
  $ oc delete pod -l app=ovnkube-node -n openshift-ovn-kubernetes
  The ovnkube-node pods automatically restart, which re-stabilizes the cluster. Then, you can set the masquerade subnet to a larger IP address range, for example, 169.254.0.0/17 for IPv4. As a result, NetworkAttachmentDefinition or UserDefinedNetwork CRs can be created.
  Important: Do not delete the ovnkube-node DaemonSet object when deleting ovnkube-node pods. Doing so sets the masquerade subnet to 169.254.169.0/29.
  For more information, see Configuring the OVN-Kubernetes masquerade subnet as a Day 2 operation.
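  As a hedged sketch of that Day 2 change, the following patch sets a larger IPv4 masquerade subnet through the network.operator CR; the internalMasqueradeSubnet field path is based on the linked procedure, so confirm it against your cluster's network.operator schema before applying, and expect the network Operator to roll the change out:
  $ oc patch network.operator cluster --type=merge \
      -p '{"spec":{"defaultNetwork":{"ovnKubernetesConfig":{"gatewayConfig":{"ipv4":{"internalMasqueradeSubnet":"169.254.0.0/17"}}}}}}'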
- Adding or removing nodes from the cluster can cause ownership contention over the node status. This can cause new nodes to take an extended period of time to appear. As a workaround, you can restart the kube-apiserver-operator pod in the openshift-kube-apiserver-operator namespace to expedite the process. (OCPBUGS-50587)
- For dual-stack networking clusters that run on RHOSP, when a Virtual IP (VIP) that is attached to a Floating IP (FIP) moves between master nodes, the association between VIP and FIP might stop working if the new master is on a different compute node. This issue occurs because OVN assumes that both IPv4 and IPv6 addresses on a shared Neutron port belong to the same node. (OCPBUGS-50599)
- When you run Cloud-native Network Functions (CNF) latency tests on an OpenShift Container Platform cluster, the test can sometimes return results greater than the latency threshold for the test; for example, 20 microseconds for cyclictest testing. This results in a test failure. (OCPBUGS-42328)
- There is a known issue when the grandmaster clock (T-GM) transitions to the Locked state too soon. This happens before the Digital Phase-Locked Loop (DPLL) completes its transition to the Locked-HO-Acquired state, and after the Global Navigation Satellite Systems (GNSS) time source is restored. (OCPBUGS-49826)
- Due to an issue with Kubernetes, the CPU Manager is unable to return CPU resources from the last pod admitted to a node to the pool of available CPU resources. These resources are allocatable if a subsequent pod is admitted to the node. However, this pod then becomes the last pod, and again, the CPU Manager cannot return this pod's resources to the available pool.
  This issue affects CPU load-balancing features, which depend on the CPU Manager releasing CPUs to the available pool. Consequently, non-guaranteed pods might run with a reduced number of CPUs. As a workaround, schedule a pod with a best-effort CPU Manager policy on the affected node. This pod will be the last admitted pod, which ensures that the resources are correctly released to the available pool. (OCPBUGS-46428)
- When a pod uses the CNI plugin for DHCP address assignment in conjunction with other CNI plugins, the network interface for the pod might be unexpectedly deleted. As a result, when the DHCP lease for the pod expires, the DHCP proxy enters a loop when trying to re-create a new lease, leading to the node becoming unresponsive. There is currently no workaround. (OCPBUGS-45272)
- When using PXE boot to add a worker node to an on-premise cluster, sometimes the host fails to reboot from the disk properly, preventing the installation from completing. As a workaround, you must manually reboot the failed host from the disk. (OCPBUGS-45116)
- The GCP PD CSI driver does not support hyperdisk-balanced volumes with RWX mode. Attempting to provision hyperdisk-balanced volumes with RWX mode using the GCP PD CSI driver produces errors and does not mount the volumes with the desired access mode. (OCPBUGS-44769)
- Currently, a GCP PD cluster with c3-standard-2, c3-standard-4, n4-standard-2, and n4-standard-4 nodes can erroneously exceed the maximum attachable disk number, which should be 16. This issue may prevent you from successfully creating or attaching volumes to your pods. (OCPBUGS-39258)
1.9. Asynchronous errata updates
Security, bug fix, and enhancement updates for OpenShift Container Platform 4.18 are released as asynchronous errata through the Red Hat Network. All OpenShift Container Platform 4.18 errata is available on the Red Hat Customer Portal. See the OpenShift Container Platform Life Cycle for more information about asynchronous errata.
Red Hat Customer Portal users can enable errata notifications in the account settings for Red Hat Subscription Management (RHSM). When errata notifications are enabled, users are notified through email whenever new errata relevant to their registered systems are released.
Red Hat Customer Portal user accounts must have systems registered and consuming OpenShift Container Platform entitlements for OpenShift Container Platform errata notification emails to generate.
This section will continue to be updated over time to provide notes on enhancements and bug fixes for future asynchronous errata releases of OpenShift Container Platform 4.18. Versioned asynchronous releases, for example with the form OpenShift Container Platform 4.18.z, will be detailed in subsections. In addition, releases in which the errata text cannot fit in the space provided by the advisory will be detailed in subsections that follow.
For any OpenShift Container Platform release, always review the instructions on updating your cluster properly.
1.9.1. RHSA-2024:6122 - OpenShift Container Platform 4.18.1 image release, bug fix, and security update advisory
Issued: 25 February 2025
OpenShift Container Platform release 4.18.1, which includes security updates, is now available. The list of bug fixes that are included in the update is documented in the RHSA-2024:6122 advisory. The RPM packages that are included in the update are provided by the RHEA-2024:6126 advisory.
Space precluded documenting all of the container images for this release in the advisory.
You can view the container images in this release by running the following command:
$ oc adm release info 4.18.1 --pullspecs
1.9.1.1. Updating
To update an OpenShift Container Platform 4.17 cluster to this latest release, see Updating a cluster using the CLI.