Chapter 1. Hosted control planes release notes
Release notes contain information about new and deprecated features, changes, and known issues.
1.1. Hosted control planes release notes for OpenShift Container Platform 4.20
With this release, hosted control planes for OpenShift Container Platform 4.20 is available. Hosted control planes for OpenShift Container Platform 4.20 supports multicluster engine for Kubernetes Operator version 2.10.
1.1.1. New features and enhancements
1.1.1.1. Scaling up workloads in a hosted cluster
You can now scale up workloads in your hosted cluster, without scaling them down, by using the ScaleUpOnly behavior. For more information, see Scaling up workloads in a hosted cluster.
1.1.1.2. Scaling up and down workloads in a hosted cluster
You can now scale workloads up and down in your hosted cluster by using the ScaleUpAndScaleDown behavior. For more information, see Scaling up and down workloads in a hosted cluster.
1.1.1.3. Balancing ignored labels in a hosted cluster
After scaling up your node pools, you can now set balancingIgnoredLabels to evenly distribute the machines across node pools. For more information, see Balancing ignored labels in a hosted cluster.
1.1.1.4. Setting the priority expander in a hosted cluster
You can now create high priority machines before low priority machines by configuring the priority expander in your hosted cluster. For more information, see Setting the priority expander in a hosted cluster.
1.1.1.5. Hosted control planes on IBM Z in a disconnected environment is Generally Available
As of this release, hosted control planes on IBM Z in a disconnected environment is a General Availability feature. For more information, see Deploying hosted control planes on IBM Z in a disconnected environment.
1.1.2. Bug fixes
- Before this update, the SAN validation for custom certificates in hc.spec.configuration.apiServer.servingCerts.namedCertificates did not properly handle wildcard DNS patterns, such as *.example.com. As a consequence, the wildcard DNS patterns in custom certificates could conflict with internal Kubernetes API server certificate SANs without being detected, leading to certificate validation failures and potential deployment issues. This release provides enhanced DNS SAN conflict detection to include RFC-compliant wildcard support, implementing bidirectional conflict validation that properly handles wildcard patterns such as *.example.com matching sub.example.com. As a result, wildcard DNS patterns are now properly validated, preventing certificate conflicts and ensuring more reliable hosted cluster deployments with wildcard certificate support. (OCPBUGS-60381)
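  To review the custom named certificates that are configured on a hosted cluster, you can run a command similar to the following, where the cluster name and namespace are placeholders:
  $ oc get hostedcluster <hosted_cluster_name> -n <hosted_cluster_namespace> -o jsonpath='{.spec.configuration.apiServer.servingCerts.namedCertificates}'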
- Before this update, the Azure cloud provider did not set the default ping target, HTTP:10256/healthz, for the Azure load balancer. Instead, services of the LoadBalancer type that ran on Azure had a ping target of TCP:30810. As a consequence, the health probes for cluster-wide services were non-functional, and during upgrades, they experienced downtime. With this release, the ClusterServiceLoadBalancerHealthProbeMode property of the cloud configuration is set to shared. As a result, load balancers in Azure have the correct health check ping target, HTTP:10256/healthz, which points to kube-proxy health endpoints that run on nodes. (OCPBUGS-58031)
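  As an illustrative check, you can query the kube-proxy health endpoint on port 10256 directly from a node, where the node IP address is a placeholder:
  $ curl http://<node_ip_address>:10256/healthz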
- Before this update, the HyperShift Operator failed to clear the user-ca-bundle config map after the removal of the additionalTrustBundle parameter from the HostedCluster resource. As a consequence, the user-ca-bundle config map was not updated, resulting in failure to generate ignition payloads. With this release, the HyperShift Operator actively removes the user-ca-bundle config map from the control plane namespace when it is removed from the HostedCluster resource. As a result, the user-ca-bundle config map is now correctly cleared, enabling the generation of ignition payloads. (OCPBUGS-57336)
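  After you remove the additionalTrustBundle parameter, you can verify that the config map is gone with a command similar to the following, where the namespace placeholder is the hosted control plane namespace:
  $ oc get configmap user-ca-bundle -n <hosted_control_plane_namespace>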
- Before this update, if you tried to create a hosted cluster on AWS when the Kubernetes API server service publishing strategy was LoadBalancer with PublicAndPrivate endpoint access, a private router admitted the OAuth route even though the External DNS Operator did not register a DNS record. As a consequence, the private router did not properly resolve the route URL and the OAuth server was inaccessible. The Console Cluster Operator also failed to start, and the hosted cluster installation failed. With this release, a private router admits the OAuth route only when the external DNS is defined. Otherwise, the router admits the route in the management cluster. As a result, the OAuth route is accessible, the Console Cluster Operator properly starts, and the hosted cluster installation succeeds. (OCPBUGS-56914)
- Before this release, when an IDMS or ICSP in the management OpenShift cluster defined a source that pointed to registry.redhat.io or registry.redhat.io/redhat, and the mirror registry did not contain the required OLM catalog images, provisioning for the HostedCluster resource stalled due to unauthorized image pulls. As a consequence, the HostedCluster resource was not deployed, and it remained blocked, where it could not pull essential catalog images from the mirrored registry. With this release, if a required image cannot be pulled due to authorization errors, the provisioning now explicitly fails. The logic for registry override is improved to allow matches on the root of the registry, such as registry.redhat.io, for OLM CatalogSource image resolution. A fallback mechanism is also introduced to use the original ImageReference if the registry override does not yield a working image. As a result, the HostedCluster resource can be deployed successfully, even in scenarios where the mirror registry lacks the required OLM catalog images, as the system correctly falls back to pulling from the original source when appropriate. (OCPBUGS-56492)
- Before this update, the AWS Cloud Provider did not set the default ping target, HTTP:10256/healthz, for the AWS load balancer. For services of the LoadBalancer type that run on AWS, the load balancer object created in AWS had a ping target of TCP:32518. As a consequence, the health probes for cluster-wide services were non-functional, and during upgrades, those services were down. With this release, the ClusterServiceLoadBalancerHealthProbeMode property of the cloud configuration is set to Shared. This cloud configuration is passed to the AWS Cloud Provider. As a result, the load balancers in AWS have the correct health check ping target, HTTP:10256/healthz, which points to the kube-proxy health endpoints that are running on nodes. (OCPBUGS-56011)
- Before this update, when you disabled the image registry capability by using the --disable-cluster-capabilities option, hosted control planes still required you to configure a managed identity for the image registry. In this release, when the image registry is disabled, the image registry managed identity configuration is optional. (OCPBUGS-55892)
- Before this update, the ImageDigestMirrorSet (IDMS) and ImageContentSourcePolicy (ICSP) resources from the management cluster were processed without considering that someone might specify only the root registry name as a mirror or source for image replacement. As a consequence, the IDMS and ICSP entries that used only the root registry name did not work as expected. In this release, the mirror replacement logic now correctly handles cases where only the root registry name is provided. As a result, the issue no longer occurs, and root registry mirror replacements are now supported. (OCPBUGS-54483)
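  To review the mirror configuration that the management cluster provides, you can list the IDMS and ICSP resources, for example:
  $ oc get imagedigestmirrorset,imagecontentsourcepolicy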
- Before this update, hosted control planes did not correctly persist registry metadata and release image provider caches in the HostedCluster resource. As a consequence, caches for release and image metadata reset on HostedCluster controller reconciliation. This release introduces a common registry provider which is used by the HostedCluster resource to fix cache loss. This reduces the number of image pulls and network traffic, thus improving overall performance. (OCPBUGS-53259)
- Before this update, when you configured an OIDC provider for a HostedCluster resource with an OIDC client that did not specify a client secret, the system automatically generated a default secret name. As a consequence, you could not configure OIDC public clients, which are not supposed to use secrets. This release fixes the issue. If no client secret is provided, no default secret name is generated, enabling proper support for public clients. (OCPBUGS-58149)
- Before this update, multiple mirror images caused a hosted control plane payload error due to failed image lookup. As a consequence, users could not create hosted clusters. With this release, the hosted control plane payload now supports multiple mirrors, avoiding errors when a primary mirror is unavailable. As a result, users can create hosted clusters. (OCPBUGS-54720)
- Before this update, when a hosted cluster was upgraded to multiple versions over time, the version history in the HostedCluster resource sometimes exceeded 10 entries. However, the API had a strict validation limit of 10 items maximum for the version history field. As a consequence, users could not edit or update their HostedCluster resources when the version history exceeded 10 entries. Operations such as adding annotations (for example, for cluster size overrides) or performing maintenance tasks like resizing request serving nodes failed with a validation error: "status.version.history: Too many: 11: must have at most 10 items". This error prevented ROSA SREs from performing critical maintenance operations that might impact customer API access. With this release, the maximum items validation constraint has been removed from the version history field in the HostedCluster API, allowing the history to grow beyond 10 entries without triggering validation errors. As a result, HostedCluster resources can now be edited and updated regardless of how many entries exist in the version history, so that administrators can perform necessary maintenance operations on clusters that have undergone multiple version upgrades. (OCPBUGS-58200)
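  To check how many entries the version history of a hosted cluster contains, you can run a command similar to the following, where the cluster name and namespace are placeholders:
  $ oc get hostedcluster <hosted_cluster_name> -n <hosted_cluster_namespace> -o jsonpath='{.status.version.history[*].version}'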
- Before this update, following a CLI refactoring, the MarkPersistentFlagRequired function stopped working correctly. The --name and --pull-secret flags, which are critical for cluster creation, were marked as required, but the validation was not being enforced. As a consequence, users could run the hypershift create cluster command without providing the required --name or --pull-secret flags, and the CLI would not immediately alert them that these required flags were missing. This could lead to misconfigured deployments and confusing error messages later in the process. This release adds an explicit validation in the RawCreateOptions.Validate() function to check for the presence of the --name and --pull-secret flags, returning clear error messages when either flag is missing. Additionally, the default "example" value is removed from the name field to ensure proper validation. As a result, when users attempt to create a cluster without the required --name or --pull-secret flags, they now receive immediate, clear error messages indicating which required flag is missing (for example, "Error: --name is required" or "Error: --pull-secret is required"), preventing misconfigured deployments and improving the user experience. (OCPBUGS-37323)
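  For example, an invocation that omits the --name flag now fails immediately; in the following sketch, the platform subcommand and pull secret path are placeholders, and the output is based on the error messages described above:
  $ hypershift create cluster aws --pull-secret <path_to_pull_secret>
  Error: --name is required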
- Before this update, a variable shadowing bug in the GetSupportedOCPVersions() function caused the supportedVersions variable to be incorrectly assigned using := instead of =, creating a local variable that was immediately discarded rather than updating the intended outer scope variable. As a consequence, when users ran the hypershift version command with the HyperShift Operator deployed, the CLI would either display <unknown> for the Server Version or panic with a "nil pointer dereference" error, preventing users from verifying the deployed HyperShift Operator version. This release corrects the variable assignment from supportedVersions := to supportedVersions = in the GetSupportedOCPVersions() function to properly assign the config map to the outer scope variable, ensuring the supported versions data is correctly populated. As a result, the hypershift version command now correctly displays the Server Version (for example, "Server Version: f001510b35842df352d1ab55d961be3fdc2dae32") when the HyperShift Operator is deployed, so that users can verify the running operator version and supported OpenShift Container Platform versions. (OCPBUGS-57316)
- Before this update, the HyperShift Operator validated the Kubernetes API Server subject alternative names (SANs) in all cases. As a consequence, users sometimes experienced invalid API Server SANs during public key infrastructure (PKI) reconciliation. With this release, the Kubernetes API Server SANs are validated only if PKI reconciliation is not disabled. (OCPBUGS-56457)
- Before this update, the shared ingress controller did not handle the HostedCluster.Spec.KubeAPIServerDNSName field, so custom kube-apiserver DNS names were not added to the router configuration. As a consequence, traffic destined for the kube-apiserver on a hosted control plane that used a custom DNS name (via HostedCluster.Spec.KubeAPIServerDNSName) was not routed correctly, preventing the KubeAPIExternalName feature from working with platforms that use shared ingress. This release adds handling for HostedCluster.Spec.KubeAPIServerDNSName in the shared ingress controller. When a hosted cluster specifies a custom kube-apiserver DNS name, the controller now automatically creates a route that directs traffic to the kube-apiserver service. As a result, traffic destined for custom kube-apiserver DNS names is now correctly routed by the shared ingress controller, enabling the KubeAPIExternalName feature to work on platforms that use shared ingress. (OCPBUGS-57790)
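  To confirm that a route was created for a custom kube-apiserver DNS name, you can list the routes in the hosted control plane namespace; the namespace in the following example is a placeholder:
  $ oc get routes -n <hosted_control_plane_namespace>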
1.1.3. Known issues
- If the annotation and the ManagedCluster resource name do not match, the multicluster engine for Kubernetes Operator console displays the cluster as Pending import. The cluster cannot be used by the multicluster engine Operator. The same issue happens when there is no annotation and the ManagedCluster name does not match the Infra-ID value of the HostedCluster resource.
- When you use the multicluster engine for Kubernetes Operator console to add a new node pool to an existing hosted cluster, the same version of OpenShift Container Platform might appear more than once in the list of options. You can select any instance in the list for the version that you want.
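  To investigate the Pending import issue, you can compare the ManagedCluster name with the infrastructure ID of the hosted cluster by using commands similar to the following, where the cluster name and namespace are placeholders:
  $ oc get managedcluster
  $ oc get hostedcluster <hosted_cluster_name> -n <hosted_cluster_namespace> -o jsonpath='{.spec.infraID}'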
- When a node pool is scaled down to 0 workers, the list of hosts in the console still shows nodes in a Ready state. You can verify the number of nodes in two ways:
  - In the console, go to the node pool and verify that it has 0 nodes.
  - On the command-line interface, run the following commands:
    - Verify that 0 nodes are in the node pool by running the following command:
      $ oc get nodepool -A
    - Verify that 0 nodes are in the cluster by running the following command:
      $ oc get nodes --kubeconfig
    - Verify that 0 agents are reported as bound to the cluster by running the following command:
      $ oc get agents -A
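  For reference, you can scale a node pool down to 0 workers with a command similar to the following, where the node pool name and namespace are placeholders:
  $ oc scale nodepool/<nodepool_name> -n <hosted_cluster_namespace> --replicas=0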
- When you create a hosted cluster in an environment that uses the dual-stack network, you might encounter pods stuck in the ContainerCreating state. This issue occurs because the openshift-service-ca-operator resource cannot generate the metrics-tls secret that the DNS pods need for DNS resolution. As a result, the pods cannot resolve the Kubernetes API server. To resolve this issue, configure the DNS server settings for a dual stack network.
- If you created a hosted cluster in the same namespace as its managed cluster, detaching the managed hosted cluster deletes everything in the managed cluster namespace including the hosted cluster. The following situations can create a hosted cluster in the same namespace as its managed cluster:
- You created a hosted cluster on the Agent platform through the multicluster engine for Kubernetes Operator console by using the default hosted cluster namespace.
- You created a hosted cluster through the command-line interface or API by specifying the hosted cluster namespace to be the same as the hosted cluster name.
- When you use the console or API to specify an IPv6 address for the spec.services.servicePublishingStrategy.nodePort.address field of a hosted cluster, a full IPv6 address with 8 hextets is required. For example, instead of specifying 2620:52:0:1306::30, you need to specify 2620:52:0:1306:0:0:0:30.
1.1.4. General Availability and Technology Preview features
Some features in this release are currently in Technology Preview. These experimental features are not intended for production use. For more information about the scope of support for these features, see Technology Preview Features Support Scope on the Red Hat Customer Portal.
For IBM Power and IBM Z, the following exceptions apply:
- For version 4.20 and later, you must run the control plane on machine types that are based on 64-bit x86 architecture or s390x architecture, and node pools on IBM Power or IBM Z.
- For version 4.19 and earlier, you must run the control plane on machine types that are based on 64-bit x86 architecture, and node pools on IBM Power or IBM Z.
| Feature | 4.18 | 4.19 | 4.20 |
|---|---|---|---|
| Hosted control planes for OpenShift Container Platform using non-bare-metal agent machines | Technology Preview | Technology Preview | Technology Preview |
| Hosted control planes for OpenShift Container Platform on RHOSP | Developer Preview | Technology Preview | Technology Preview |
| Custom taints and tolerations | Technology Preview | Technology Preview | Technology Preview |
| NVIDIA GPU devices on hosted control planes for OpenShift Virtualization | Technology Preview | Technology Preview | Technology Preview |
| Hosted control planes on IBM Z in a disconnected environment | Technology Preview | Technology Preview | Generally Available |