Chapter 1. Hosted control planes release notes
With this release, hosted control planes for OpenShift Container Platform 4.21 is available. Hosted control planes for OpenShift Container Platform 4.21 supports multicluster engine for Kubernetes Operator version 2.11.
1.1. New features and enhancements
This release adds improvements related to the following components and concepts:
- ARM64 compute nodes now supported with 64-bit x86 control plane

  In this release, ARM64 compute nodes are supported with a 64-bit x86 control plane on bare-metal deployments of hosted control planes. For more information about multi-architecture support for hosted control planes, see the Support matrix for hosted control planes.
- Monitor connectivity between the control plane and the data plane

  In this release, cluster service providers can monitor network activity between a hosted control plane and the compute nodes in a data plane by using the `DataPlaneConnectionAvailable` condition. For more information, see Connectivity monitoring from the control plane to the data plane.

- Ingress endpoint configuration now supported

  In this release, you can configure the ingress endpoint for the hosted cluster, including the type, ports, and protocols. This option is available for hosted control planes on AWS, on bare metal, and on IBM Power. For more information, see the instructions to create a hosted cluster on your preferred platform.
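To attach ARM64 compute nodes to a hosted cluster whose control plane runs on 64-bit x86, you set the architecture on the node pool. The following is a minimal sketch rather than a verified manifest: the `arch` field follows the `NodePool` API, and the names, namespace, and replica count are placeholders:

```yaml
# Hypothetical NodePool for ARM64 compute nodes on a bare-metal (Agent)
# hosted cluster; all names are placeholders.
apiVersion: hypershift.openshift.io/v1beta1
kind: NodePool
metadata:
  name: example-arm64-pool
  namespace: clusters
spec:
  clusterName: example-hosted-cluster
  arch: arm64            # architecture of the compute nodes in this pool
  replicas: 2
  management:
    upgradeType: InPlace # Agent platform node pools typically upgrade in place
  platform:
    type: Agent
```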
1.2. Notable technical changes
Review the following notable technical changes introduced in this release.
- Updated hosted cluster and node pool version skew policy

  With this release, the version skew policy for compatibility between a hosted cluster and its node pools has been updated:

  - Node pool versions up to 3 minor versions behind the hosted cluster version are supported. For example, a 4.21 hosted cluster supports node pools at versions 4.18 through 4.21.
  - Node pool versions cannot be higher than the hosted cluster version.

  For more information, see Hosted cluster and node pool version skew policy.
1.3. Fixed issues
The following issues are fixed for this release:
- Before this update, deploying a hosted control plane on OpenShift Virtualization with IPv4/IPv6 dual-stack networking failed because the Cluster Network Operator (CNO) did not recognize KubeVirt as a supported platform for dual-stack networking. As a consequence, hosted clusters could not be deployed on OpenShift Virtualization with dual-stack networking. With this release, the CNO recognizes KubeVirt as a supported platform for dual-stack networking, which enables the successful deployment of hosted control planes with IPv4/IPv6 dual-stack networking. (OCPBUGS-69941)
- Before this update, the `GenerateNodePools()` function of the CLI incorrectly set `AzureMarketplace` to `nil` when you specified the `--image-generation` flag without additional marketplace flags, which discarded your preference. Also, the `nodepool` controller failed to set `ImageGeneration` when creating images from the release payload, which caused them to default to Gen2. As a consequence, when users attempted to create Azure hosted clusters by using `--image-generation Gen1`, the `NodePool` resources were incorrectly provisioned with Gen2 images, ignoring the explicit configuration. With this release, the CLI preserves your preference by creating a proper `AzureMarketplaceImage` structure, and the `nodepool` controller explicitly sets the generation field based on the release payload, mapping Gen1 for HyperVGen1 and Gen2 for HyperVGen2. As a result, the `--image-generation` flag is now fully respected, so you can deploy `NodePool` objects with the chosen image generation without it being overwritten by system defaults. (OCPBUGS-63613)

- Before this update, when a hosted cluster used an external DNS and the `PublicAndPrivate` endpoint access type, the `allowedCIDRBlocks` parameter was applied to the `kube-apiserver` service instead of the external router `LoadBalancer` service. Because external traffic to the `kube-apiserver` service flows through the router when the external DNS is configured, the CIDR restrictions were not enforced and external access was unrestricted. With this update, the `LoadBalancerSourceRanges` configuration is applied to the external router `LoadBalancer` service. As a result, external `kube-apiserver` access is properly restricted to the specified `allowedCIDRBlocks` values. (OCPBUGS-61941)

- Before this update, deploying hosted control planes 4.20 with user-supplied `ignition-server-serving-cert` and `ignition-server-ca-cert` secrets, along with the `disable-pki-reconciliation` annotation, caused the system to remove the user-supplied ignition secrets, and the `ignition-server` pods failed. With this release, the delete action is removed for the `disable-pki-reconciliation` annotation so that the `ignition-server` secrets are preserved during reconciliation, ensuring that the `ignition-server` pods start up completely. (OCPBUGS-61776)

- Before this update, the hosted control plane (`hcp`) CLI and control plane operator instantiated Azure SDK clients without passing cloud configuration options, which caused all clients to default to Azure Public Cloud. As a consequence, creating or managing hosted clusters in Azure Government Cloud or Azure China Cloud failed because the SDK clients could not connect to the correct cloud endpoints. With this update, all Azure SDK client instantiations use the cloud configuration specified in the hosted cluster platform settings. As a result, the `hcp` CLI and control plane operator correctly support Azure Government Cloud and Azure China Cloud in addition to Azure Public Cloud. (OCPBUGS-33372)

- Before this update, the following test failed more often than expected:

  ```
  TestExternalOIDCTechPreview/Main/[OCPFeatureGate:ExternalOIDCWithUIDAndExtraClaimMappings]_Test_external_OIDC_userInfo_Extra
  ```

  As a consequence, the user experience was disrupted by a test failure in the external OIDC feature. With this release, the `ExternalOIDCWithUIDAndExtraClaimMappings` test passes in version 4.20. As a result, the test failures in the external OIDC feature are fixed, improving user authentication in 4.20 and later versions. (OCPBUGS-63622)
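The `allowedCIDRBlocks` behavior fixed in OCPBUGS-61941 is configured on the `HostedCluster` resource. The following excerpt is a sketch under the assumption that the field lives at `spec.networking.apiServer.allowedCIDRBlocks`; the names and CIDR ranges are placeholder values:

```yaml
# Hypothetical HostedCluster excerpt: restrict external kube-apiserver
# access to two example CIDR ranges.
apiVersion: hypershift.openshift.io/v1beta1
kind: HostedCluster
metadata:
  name: example-hosted-cluster
  namespace: clusters
spec:
  networking:
    apiServer:
      allowedCIDRBlocks:
      - 192.0.2.0/24     # example corporate network range
      - 198.51.100.0/24  # example VPN range
```

With the fix, these ranges are enforced on the external router `LoadBalancer` service when an external DNS and the `PublicAndPrivate` endpoint access type are used.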
1.4. Technology Preview features status
Some features in this release are currently in Technology Preview. These experimental features are not intended for production use. Note the following scope of support on the Red Hat Customer Portal for these features:
Technology Preview Features Support Scope
In the following table, features are marked with the following statuses:
- Not Available
- Technology Preview
- General Availability
- Deprecated
- Removed
For IBM Power and IBM Z, the following exceptions apply:
- For version 4.20 and later, you must run the control plane on machine types that are based on 64-bit x86 architecture or s390x architecture, and node pools on IBM Power or IBM Z.
- For version 4.19 and earlier, you must run the control plane on machine types that are based on 64-bit x86 architecture, and node pools on IBM Power or IBM Z.
| Feature | 4.19 | 4.20 | 4.21 |
|---|---|---|---|
| Hosted control planes for OpenShift Container Platform using non-bare-metal agent machines | Technology Preview | Technology Preview | Technology Preview |
| Hosted control planes for OpenShift Container Platform on RHOSP | Technology Preview | Technology Preview | Technology Preview |
| Custom taints and tolerations | Technology Preview | Technology Preview | Technology Preview |
| NVIDIA GPU devices on hosted control planes for OpenShift Virtualization | Technology Preview | Technology Preview | Technology Preview |
| Hosted control planes for OpenShift Virtualization on IBM Z | - | - | Technology Preview |
| Hosted control planes on IBM Z in a disconnected environment | Technology Preview | General Availability | General Availability |
Hosted control planes for OpenShift Virtualization on IBM Z is supported as Technology Preview starting with OpenShift Container Platform 4.21, multicluster engine for Kubernetes Operator 2.11, and Red Hat Advanced Cluster Management (RHACM) 2.16. Currently, only the default pod network is supported. Cluster upgrades are supported. The following features are not supported in this release: FIPS mode, disconnected environments, and autoscaling.
1.5. Known issues
This section includes several known issues for OpenShift Container Platform 4.21.
- If the annotation and the `ManagedCluster` resource name do not match, the multicluster engine for Kubernetes Operator console displays the cluster as `Pending import`. The cluster cannot be used by the multicluster engine Operator. The same issue happens when there is no annotation and the `ManagedCluster` name does not match the `Infra-ID` value of the `HostedCluster` resource.

- When you use the multicluster engine for Kubernetes Operator console to add a new node pool to an existing hosted cluster, the same version of OpenShift Container Platform might appear more than once in the list of options. You can select any instance in the list for the version that you want.
- When a node pool is scaled down to 0 workers, the list of hosts in the console still shows nodes in a `Ready` state. You can verify the number of nodes in two ways:

  - In the console, go to the node pool and verify that it has 0 nodes.
  - On the command-line interface, run the following commands:

    - Verify that 0 nodes are in the node pool by running the following command:

      ```
      $ oc get nodepool -A
      ```

    - Verify that 0 nodes are in the cluster by running the following command:

      ```
      $ oc get nodes --kubeconfig
      ```

    - Verify that 0 agents are reported as bound to the cluster by running the following command:

      ```
      $ oc get agents -A
      ```
- When you create a hosted cluster in an environment that uses the dual-stack network, you might encounter pods stuck in the `ContainerCreating` state. This issue occurs because the `openshift-service-ca-operator` resource cannot generate the `metrics-tls` secret that the DNS pods need for DNS resolution. As a result, the pods cannot resolve the Kubernetes API server. To resolve this issue, configure the DNS server settings for a dual-stack network.

- If you created a hosted cluster in the same namespace as its managed cluster, detaching the managed hosted cluster deletes everything in the managed cluster namespace, including the hosted cluster. The following situations can create a hosted cluster in the same namespace as its managed cluster:

  - You created a hosted cluster on the Agent platform through the multicluster engine for Kubernetes Operator console by using the default hosted cluster namespace.
  - You created a hosted cluster through the command-line interface or API by specifying the hosted cluster namespace to be the same as the hosted cluster name.
- When you use the console or API to specify an IPv6 address for the `spec.services.servicePublishingStrategy.nodePort.address` field of a hosted cluster, a full IPv6 address with 8 hextets is required. For example, instead of specifying `2620:52:0:1306::30`, you need to specify `2620:52:0:1306:0:0:0:30`.

- In hosted control planes on OpenShift Virtualization, if you store all hosted cluster information in a shared namespace and then back up and restore a hosted cluster, you might unintentionally change other hosted clusters. To avoid this issue, back up and restore only hosted clusters that use labels, or avoid storing all hosted cluster information in a shared namespace.
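As an illustration of the full-hextet requirement for IPv6 `nodePort` addresses, the following `HostedCluster` services excerpt is a sketch; the service entry, address, and port are placeholder values:

```yaml
# Hypothetical services excerpt: write the IPv6 address with all 8 hextets
# rather than the shortened :: form.
spec:
  services:
  - service: APIServer
    servicePublishingStrategy:
      type: NodePort
      nodePort:
        address: 2620:52:0:1306:0:0:0:30  # not 2620:52:0:1306::30
        port: 30000
```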
- For version 4.21, hosted control planes pins all Cluster API images to the `4.20.10-multi` release image for compatibility reasons. Hosted control planes pins the images when Cluster API deployments are generated. The `4.20.10-multi` image must always be mirrored and available for the Cluster API to work with hosted control planes version 4.21.