Chapter 1. Hosted control planes release notes
Release notes contain information about new and deprecated features, changes, and known issues.
With this release, hosted control planes for OpenShift Container Platform 4.17 is available. Hosted control planes for OpenShift Container Platform 4.17 supports the multicluster engine for Kubernetes Operator version 2.7.
1.1. New features and enhancements
This release adds improvements related to the following concepts:
1.1.1. Custom taints and tolerations (Technology Preview)
For hosted control planes on OpenShift Virtualization, you can now apply tolerations to hosted control plane pods by using the hcp CLI --tolerations argument or by using the hc.Spec.Tolerations API field. This feature is available as a Technology Preview feature. For more information, see Custom taints and tolerations.
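For example, the following is a minimal sketch of both approaches. The cluster values are placeholders, and the --tolerations key/value string is assumed to mirror the standard Kubernetes toleration fields; check the linked procedure for the exact syntax.
Apply a toleration when creating the cluster by running a command similar to the following:
$ hcp create cluster kubevirt \
  --name <hosted_cluster_name> \
  --node-pool-replicas 2 \
  --tolerations "key=key1,value=value1,operator=Equal,effect=NoSchedule"
Alternatively, set the equivalent field in the HostedCluster specification:
spec:
  tolerations:
  - key: "key1"
    operator: "Equal"
    value: "value1"
    effect: "NoSchedule"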
1.1.2. Support for NVIDIA GPU devices on OpenShift Virtualization (Technology Preview)
For hosted control planes on OpenShift Virtualization, you can attach one or more NVIDIA graphics processing unit (GPU) devices to node pools. This feature is available as a Technology Preview feature. For more information, see Attaching NVIDIA GPU devices by using the hcp CLI and Attaching NVIDIA GPU devices by using the NodePool resource.
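As a rough sketch of the resource-based approach, the following NodePool excerpt attaches one GPU host device. The device name and count are placeholders, and the hostDevices field layout is an assumption about the KubeVirt node pool platform API; see the linked procedures for the exact fields and the matching hcp CLI flag.
spec:
  platform:
    kubevirt:
      hostDevices:
      - deviceName: <gpu_device_name>
        count: 1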
1.1.3. Support for tenancy on AWS
When you create a hosted cluster on AWS, you can indicate whether the EC2 instance should run on shared or single-tenant hardware. For more information, see Creating a hosted cluster on AWS.
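As a rough sketch only, the tenancy choice is expressed in the node pool AWS platform settings. The field path below is an assumption, and the values are assumed to map to the EC2 tenancy options (default, dedicated, host); confirm the exact syntax in the linked procedure.
spec:
  platform:
    aws:
      placement:
        tenancy: "dedicated"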
1.1.4. Support for OpenShift Container Platform versions in hosted clusters
You can deploy a range of supported OpenShift Container Platform versions in a hosted cluster. For more information, see Supported OpenShift Container Platform versions in a hosted cluster.
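For example, the HyperShift Operator publishes the versions that it can deploy in a config map. Assuming the default hypershift namespace and config map name, you can list them by running the following command:
$ oc get configmap supported-versions -n hypershift -o yaml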
1.1.5. Hosted control planes on OpenShift Virtualization in a disconnected environment is Generally Available
In this release, hosted control planes on OpenShift Virtualization in a disconnected environment is Generally Available. For more information, see Deploying hosted control planes on OpenShift Virtualization in a disconnected environment.
1.1.6. Hosted control planes for an ARM64 OpenShift Container Platform cluster on AWS is Generally Available
In this release, hosted control planes for an ARM64 OpenShift Container Platform cluster on AWS is Generally Available. For more information, see Running hosted clusters on an ARM64 architecture.
1.1.7. Hosted control planes on IBM Z is Generally Available
In this release, hosted control planes on IBM Z is Generally Available. For more information, see Deploying hosted control planes on IBM Z.
1.1.8. Hosted control planes on IBM Power is Generally Available
In this release, hosted control planes on IBM Power is Generally Available. For more information, see Deploying hosted control planes on IBM Power.
1.2. Bug fixes
- Previously, when a hosted cluster proxy was configured and it used an identity provider (IDP) that had an HTTP or HTTPS endpoint, the hostname of the IDP was unresolved before sending it through the proxy. Consequently, hostnames that could only be resolved by the data plane failed to resolve for IDPs. With this update, a DNS lookup is performed before sending IDP traffic through the konnectivity tunnel. As a result, IDPs with hostnames that can only be resolved by the data plane can be verified by the Control Plane Operator. (OCPBUGS-41371)
- Previously, when the hosted cluster controllerAvailabilityPolicy was set to SingleReplica, podAntiAffinity on networking components blocked the availability of the components. With this release, the issue is resolved. (OCPBUGS-39313)
- Previously, the AdditionalTrustedCA that was specified in the hosted cluster image configuration was not reconciled into the openshift-config namespace, as expected by the image-registry-operator, and the component did not become available. With this release, the issue is resolved. (OCPBUGS-39225)
- Previously, Red Hat HyperShift periodic conformance jobs failed because of changes to the core operating system. These failed jobs caused the OpenShift API deployment to fail. With this release, an update recursively copies individual trusted certificate authority (CA) certificates instead of copying a single file, so that the periodic conformance jobs succeed and the OpenShift API runs as expected. (OCPBUGS-38941)
- Previously, the Konnectivity proxy agent in a hosted cluster always sent all TCP traffic through an HTTP/S proxy. It also ignored host names in the NO_PROXY configuration because it only received resolved IP addresses in its traffic. As a consequence, traffic that was not meant to be proxied, such as LDAP traffic, was proxied regardless of configuration. With this release, proxying is completed at the source (control plane) and the Konnectivity agent proxying configuration is removed. As a result, traffic that is not meant to be proxied, such as LDAP traffic, is not proxied anymore. The NO_PROXY configuration that includes host names is honored. (OCPBUGS-38637)
- Previously, the azure-disk-csi-driver-controller image was not getting appropriate override values when using registryOverride. This was intentional so as to avoid propagating the values to the azure-disk-csi-driver data plane images. With this update, the issue is resolved by adding a separate image override value. As a result, the azure-disk-csi-driver-controller can be used with registryOverride and no longer affects azure-disk-csi-driver data plane images. (OCPBUGS-38183)
- Previously, the AWS cloud controller manager within a hosted control plane that was running on a proxied management cluster would not use the proxy for cloud API communication. With this release, the issue is fixed. (OCPBUGS-37832)
- Previously, proxying for Operators that run in the control plane of a hosted cluster was performed through proxy settings on the Konnectivity agent pod that runs in the data plane. It was not possible to distinguish if proxying was needed based on application protocol. For parity with OpenShift Container Platform, IDP communication through HTTPS or HTTP should be proxied, but LDAP communication should not be proxied. This type of proxying also ignores NO_PROXY entries that rely on host names because, by the time traffic reaches the Konnectivity agent, only the destination IP address is available. With this release, in hosted clusters, the proxy is invoked in the control plane through konnectivity-https-proxy and konnectivity-socks5-proxy, and proxying traffic is stopped from the Konnectivity agent. As a result, traffic that is destined for LDAP servers is no longer proxied. Other HTTP or HTTPS traffic is proxied correctly. The NO_PROXY setting is honored when you specify hostnames. (OCPBUGS-37052)
- Previously, proxying for IDP communication occurred in the Konnectivity agent. By the time traffic reached Konnectivity, its protocol and hostname were no longer available. As a consequence, proxying was not done correctly for the OAuth server pod. It did not distinguish between protocols that require proxying (http/s) and protocols that do not (ldap://). In addition, it did not honor the no_proxy variable that is configured in the HostedCluster.spec.configuration.proxy spec. With this release, you can configure the proxy on the Konnectivity sidecar of the OAuth server so that traffic is routed appropriately, honoring your no_proxy settings. As a result, the OAuth server can communicate properly with identity providers when a proxy is configured for the hosted cluster. (OCPBUGS-36932)
- Previously, the Hosted Cluster Config Operator (HCCO) did not delete the ImageDigestMirrorSet CR (IDMS) after you removed the ImageContentSources field from the HostedCluster object. As a consequence, the IDMS persisted in the HostedCluster object when it should not. With this release, the HCCO manages the deletion of IDMS resources from the HostedCluster object. (OCPBUGS-34820)
- Previously, deploying a hostedCluster in a disconnected environment required setting the hypershift.openshift.io/control-plane-operator-image annotation. With this update, the annotation is no longer needed. Additionally, the metadata inspector works as expected during the hosted Operator reconciliation, and OverrideImages is populated as expected. (OCPBUGS-34734)
- Previously, hosted clusters on AWS leveraged their VPC's primary CIDR range to generate security group rules on the data plane. As a consequence, if you installed a hosted cluster into an AWS VPC with multiple CIDR ranges, the generated security group rules could be insufficient. With this update, security group rules are generated based on the provided machine CIDR range to resolve this issue. (OCPBUGS-34274)
- Previously, the OpenShift Cluster Manager container did not have the right TLS certificates. As a consequence, you could not use image streams in disconnected deployments. With this release, the TLS certificates are added as projected volumes to resolve this issue. (OCPBUGS-31446)
- Previously, the bulk destroy option in the multicluster engine for Kubernetes Operator console for OpenShift Virtualization did not destroy a hosted cluster. With this release, this issue is resolved. (ACM-10165)
1.3. Known issues
- If the annotation and the ManagedCluster resource name do not match, the multicluster engine for Kubernetes Operator console displays the cluster as Pending import. The cluster cannot be used by the multicluster engine Operator. The same issue happens when there is no annotation and the ManagedCluster name does not match the Infra-ID value of the HostedCluster resource.
- When you use the multicluster engine for Kubernetes Operator console to add a new node pool to an existing hosted cluster, the same version of OpenShift Container Platform might appear more than once in the list of options. You can select any instance in the list for the version that you want.
- When a node pool is scaled down to 0 workers, the list of hosts in the console still shows nodes in a Ready state. You can verify the number of nodes in two ways:
  - In the console, go to the node pool and verify that it has 0 nodes.
  - On the command-line interface, run the following commands:
    - Verify that 0 nodes are in the node pool by running the following command:
      $ oc get nodepool -A
    - Verify that 0 nodes are in the cluster by running the following command:
      $ oc get nodes --kubeconfig
    - Verify that 0 agents are reported as bound to the cluster by running the following command:
      $ oc get agents -A
- When you create a hosted cluster in an environment that uses the dual-stack network, you might encounter the following DNS-related issues:
  - CrashLoopBackOff state in the service-ca-operator pod: When the pod tries to reach the Kubernetes API server through the hosted control plane, the pod cannot reach the server because the data plane proxy in the kube-system namespace cannot resolve the request. This issue occurs because in the HAProxy setup, the front end uses an IP address and the back end uses a DNS name that the pod cannot resolve.
  - Pods stuck in ContainerCreating state: This issue occurs because the openshift-service-ca-operator cannot generate the metrics-tls secret that the DNS pods need for DNS resolution. As a result, the pods cannot resolve the Kubernetes API server.
  To resolve these issues, configure the DNS server settings for a dual stack network.
- On the Agent platform, the hosted control planes feature periodically rotates the token that the Agent uses to pull ignition. As a result, if you have an Agent resource that was created some time ago, it might fail to pull ignition. As a workaround, in the Agent specification, delete the secret of the IgnitionEndpointTokenReference property, then add or modify any label on the Agent resource. The system re-creates the secret with the new token. A sketch of this workaround appears after this list.
- If you created a hosted cluster in the same namespace as its managed cluster, detaching the managed hosted cluster deletes everything in the managed cluster namespace, including the hosted cluster. The following situations can create a hosted cluster in the same namespace as its managed cluster:
  - You created a hosted cluster on the Agent platform through the multicluster engine for Kubernetes Operator console by using the default hosted cluster namespace.
  - You created a hosted cluster through the command-line interface or API by specifying the hosted cluster namespace to be the same as the hosted cluster name.
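The following commands are a minimal sketch of the Agent ignition token workaround mentioned in the first item, not an exact procedure: the agent name, namespace, and label key are placeholders, and the jsonpath assumes that the Agent spec exposes the secret name under ignitionEndpointTokenReference.name.
Find the name of the ignition token secret by running the following command:
$ oc get agent <agent_name> -n <agent_namespace> -o jsonpath='{.spec.ignitionEndpointTokenReference.name}'
Delete that secret by running the following command:
$ oc delete secret <token_secret_name> -n <agent_namespace>
Add or modify any label on the Agent resource so that the secret is re-created with a new token, for example by running the following command:
$ oc label agent <agent_name> -n <agent_namespace> token-refresh=manual --overwrite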
1.4. Generally Available and Technology Preview features
Features that are Generally Available (GA) are fully supported and are suitable for production use. Technology Preview (TP) features are experimental features and are not intended for production use. For more information about TP features, see the Technology Preview scope of support on the Red Hat Customer Portal.
The following table shows which hosted control planes features are GA and which are TP:
Feature | 4.15 | 4.16 | 4.17 |
---|---|---|---|
Hosted control planes for OpenShift Container Platform on Amazon Web Services (AWS) | Technology Preview | Generally Available | Generally Available |
Hosted control planes for OpenShift Container Platform on bare metal | Generally Available | Generally Available | Generally Available |
Hosted control planes for OpenShift Container Platform on OpenShift Virtualization | Generally Available | Generally Available | Generally Available |
Hosted control planes for OpenShift Container Platform using non-bare-metal agent machines | Technology Preview | Technology Preview | Technology Preview |
Hosted control planes for an ARM64 OpenShift Container Platform cluster on Amazon Web Services | Technology Preview | Technology Preview | Generally Available |
Hosted control planes for OpenShift Container Platform on IBM Power | Technology Preview | Technology Preview | Generally Available |
Hosted control planes for OpenShift Container Platform on IBM Z | Technology Preview | Technology Preview | Generally Available |
Hosted control planes for OpenShift Container Platform on RHOSP | Not Available | Not Available | Developer Preview |