Chapter 14. Networking for hosted control planes
14.1. Ingress and egress requirements for hosted control planes
Specific network ports must be open for communication between the management cluster, the hosted control planes components, and the compute nodes. The ports are categorized into ingress ports, which involve incoming traffic to hosted control planes, and egress ports, which involve outgoing traffic from hosted control planes.
14.1.1. Ingress requirements for hosted control planes
Ingress ports involve incoming traffic to hosted control planes. Ensure the correct ports are open for communication between the management cluster, the hosted control planes components, and the compute nodes.
The following table details the ports for incoming traffic to hosted control planes across all platforms:

| Port | Protocol | Service | Description |
|---|---|---|---|
| 6443 | TCP | Kubernetes API server | Primary port for the Kubernetes API server |
| 9090 | TCP | Ignition server | Used by compute nodes during the bootstrap process |
The service publishing strategy determines additional ports. The Ignition Proxy and Konnectivity services are exposed through one of the following service publishing strategies:
- Route: This setting is the default on OpenShift Container Platform. Traffic flows through the OpenShift router on port 443. No additional firewall rules are needed beyond standard HTTPS.
- NodePort: Direct access is required to port 8091 (Konnectivity) and port 8443 (Ignition Proxy).
- LoadBalancer: Direct access is required to port 8091 (Konnectivity) through the cloud load balancer.
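The strategy-to-ports mapping above can be sketched as a small lookup, for example in a pre-flight check script. This is illustrative only; the function and dictionary names are not part of any OpenShift tooling:

```python
# Illustrative only: direct-access TCP ports required per service
# publishing strategy, as described in this section.
REQUIRED_INGRESS_PORTS = {
    "Route": [443],            # traffic flows through the OpenShift router
    "NodePort": [8091, 8443],  # Konnectivity and Ignition Proxy, directly
    "LoadBalancer": [8091],    # Konnectivity through the cloud load balancer
}

def ports_to_open(strategy: str) -> list[int]:
    """Return the TCP ports that must be reachable for a publishing strategy."""
    if strategy not in REQUIRED_INGRESS_PORTS:
        raise ValueError(f"unknown service publishing strategy: {strategy}")
    return REQUIRED_INGRESS_PORTS[strategy]
```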
The following table details the ingress port configurations that are specific to each platform:

| Platform | Port | Service | Description |
|---|---|---|---|
| Agent | 8443 | Ignition Proxy | HTTPS proxy for ignition content delivery |
| Agent | - | Agent CAPI health probe | Health check endpoint for the Agent platform CAPI provider |
| Agent | - | Agent CAPI metrics | Metrics endpoint for the Agent platform CAPI provider (binds to localhost only) |
| AWS | - | CAPI health check | Health and readiness probe endpoint for the AWS CAPI provider |
| Bare metal without the Agent platform | 8443 | Ignition Proxy | HTTPS proxy for ignition content delivery |
| KubeVirt | - | CAPI health check | Health and readiness probe endpoint |
| RHOSP (Technology Preview) | - | CAPI health check | Health and readiness probe endpoint |
| RHOSP (Technology Preview) | - | ORC health check | Health and readiness probe endpoint for the OpenStack Resource Controller |
The following table details the ingress port configurations for private clusters, such as those on AWS:

| Port | Service | Description |
|---|---|---|
| 80 | Private router HTTP | HTTP traffic through the private router |
| 443 | Private router HTTPS | HTTPS traffic through the private router |
14.1.2. Egress requirements for hosted control planes
Egress ports involve outgoing traffic from hosted control planes. Ensure the correct ports are open for communication between the management cluster, the hosted control planes components, and the compute nodes.
The following table details the ports that must be accessible for outgoing traffic from hosted control planes across all platforms:

| Port | Protocol | Service | Purpose |
|---|---|---|---|
| 443 | TCP | HTTPS | OLM images, container registries, routes, and external services |
| 6443 | TCP | Kubernetes API server | Communication with the management cluster API |
| 53 | TCP and UDP | DNS | Standard DNS queries |
Compute nodes require outbound network access to several hosted control planes services. The following table details the egress requirements for compute nodes:

| Port | Protocol | Service | Purpose | When required |
|---|---|---|---|---|
| 443 | TCP | HTTPS | Container registries, routes, and external services | Always |
| 6443 | TCP | Kubernetes API server | Cluster management and kubelet communication | Always |
| 8091 | TCP | Konnectivity server | Establishes a reverse tunnel for control plane access | NodePort or LoadBalancer publishing strategy only |
| 8443 | TCP | Ignition Proxy | Retrieves bootstrap configuration | During the bootstrap process, NodePort publishing strategy only |
| 53 | TCP and UDP | DNS | Name resolution | Always |
14.1.3. Example firewall configuration
Review an example of what the firewall configuration looks like for a typical deployment of hosted control planes on AWS that uses Route service publishing.
Ingress rules:

- Port 6443/TCP: Kubernetes API server, from compute nodes and external clients
- Port 443/TCP: OpenShift Router for Ignition or Konnectivity routes, from compute nodes

Egress rules:

- Port 443/TCP: HTTPS, to container registries, routes, and external services
- Port 6443/TCP: Management cluster API, to the management cluster
- Port 53/TCP and UDP: DNS, to DNS servers

If you use NodePort or LoadBalancer service publishing instead of Route service publishing, the following rules apply:

- Port 8091/TCP: Konnectivity server, from compute nodes
- Port 8443/TCP: Ignition Proxy, from compute nodes during the bootstrap process, NodePort publishing strategy only
- Port 9090/TCP: Ignition server, from compute nodes during the bootstrap process, NodePort publishing strategy only
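To verify rules like these from a compute node or a bastion host, a plain TCP reachability probe is often enough. The following is a generic sketch, not an official tool, and the host name in the usage comment is a placeholder:

```python
import socket

def tcp_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: probe the hosted Kubernetes API server (placeholder host name).
# tcp_reachable("api.hosted.example.com", 6443)
```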
14.2. Proxy support for hosted control planes
To ensure that control-plane workloads, compute nodes, management clusters, and hosted clusters have the access they need for optimal performance, you can configure proxy support.
In standalone OpenShift Container Platform, the primary purposes of proxy support are ensuring that workloads in the cluster are configured to use the HTTP or HTTPS proxy to access external services, honoring the NO_PROXY setting if one is configured, and accepting any trust bundle that is configured for the proxy.
In hosted control planes, proxy support includes use cases beyond those in standalone OpenShift Container Platform.
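The NO_PROXY handling mentioned above can be illustrated with a simplified matcher. This sketch covers only exact-host and domain-suffix entries; real proxy clients also match IP addresses and CIDR blocks:

```python
def bypasses_proxy(host: str, no_proxy: str) -> bool:
    """Simplified NO_PROXY check: True if host matches an entry exactly
    or falls under a domain-suffix entry such as 'example.org'."""
    for entry in (e.strip().lstrip(".") for e in no_proxy.split(",")):
        if entry and (host == entry or host.endswith("." + entry)):
            return True
    return False
```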
14.2.1. Control plane workloads that need to access external services
Operators that run in the control plane need to access external services through the proxy that is configured for the hosted cluster. The proxy is usually accessible only through the data plane. The control plane workloads are as follows:
- The Control Plane Operator needs to validate and obtain endpoints from certain identity providers when it creates the OAuth server configuration.
- The OAuth server needs non-LDAP identity provider access.
- The OpenShift API server handles image registry metadata import.
- The Ingress Operator needs access to validate external canary routes.
You must open firewall port 53 on Transmission Control Protocol (TCP) and User Datagram Protocol (UDP) to allow the Domain Name Service (DNS) protocol to work as expected.
In a hosted cluster, you must send traffic that originates from the Control Plane Operator, Ingress Operator, OAuth server, and OpenShift API server pods through the data plane to the configured proxy and then to its final destination.
Some operations are not possible when a hosted cluster is reduced to zero compute nodes; for example, when you import OpenShift image streams from a registry that requires proxy access.
14.2.2. Compute nodes that need to access an ignition endpoint
When compute nodes need a proxy to access the ignition endpoint, the proxy configuration is included in the user-data stub that is configured on the compute node when it is created.
The stub resembles the following example:
```json
{
  "ignition": {
    "config": {
      "merge": [
        {
          "httpHeaders": [
            {"name": "Authorization", "value": "Bearer ..."},
            {"name": "TargetConfigVersionHash", "value": "a4c1b0dd"}
          ],
          "source": "https://ignition.controlplanehost.example.com/ignition",
          "verification": {}
        }
      ],
      "replace": {"verification": {}}
    },
    "proxy": {
      "httpProxy": "http://proxy.example.org:3128",
      "httpsProxy": "https://proxy.example.org:3129",
      "noProxy": "host.example.org"
    },
    "security": {
      "tls": {
        "certificateAuthorities": [{"source": "...", "verification": {}}]
      }
    },
    "timeouts": {},
    "version": "3.2.0"
  },
  "passwd": {},
  "storage": {},
  "systemd": {}
}
```
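Because the stub is plain Ignition JSON, its proxy settings can be read back with any JSON parser. The following sketch uses a trimmed copy of the example stub, keeping only the proxy section:

```python
import json

# Trimmed copy of the user-data stub above, keeping only the proxy section.
user_data_stub = """
{"ignition": {"proxy": {"httpProxy": "http://proxy.example.org:3128",
                        "httpsProxy": "https://proxy.example.org:3129",
                        "noProxy": "host.example.org"},
              "version": "3.2.0"}}
"""

proxy = json.loads(user_data_stub)["ignition"]["proxy"]
print(proxy["httpProxy"], proxy["noProxy"])
```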
14.2.3. Compute nodes that need to access the API server
This use case is relevant to self-managed hosted control planes, not to Red Hat OpenShift Service on AWS with hosted control planes.
For communication with the control plane, hosted control planes uses a local proxy in every compute node that listens on IP address 172.20.0.1 and forwards traffic to the API server. If an external proxy is required to access the API server, that local proxy needs to use the external proxy to send traffic out. When a proxy is not needed, hosted control planes uses haproxy for the local proxy, which only forwards packets via TCP. When a proxy is needed, hosted control planes uses a custom proxy, control-plane-operator-kubernetes-default-proxy, to send traffic through the external proxy.
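The TCP-only pass-through behavior of the haproxy-based local proxy can be illustrated with a minimal relay: bytes are copied between the client and the API server unchanged, with no awareness of HTTP or TLS. This is a teaching sketch, not the actual implementation:

```python
import socket
import threading

def _pump(src: socket.socket, dst: socket.socket) -> None:
    # Copy bytes until the source side closes, then half-close the destination.
    try:
        while chunk := src.recv(4096):
            dst.sendall(chunk)
    except OSError:
        pass
    finally:
        try:
            dst.shutdown(socket.SHUT_WR)
        except OSError:
            pass

def relay_one(listener: socket.socket, target: tuple) -> None:
    """Accept a single client and relay its TCP stream to target, unmodified."""
    client, _ = listener.accept()
    upstream = socket.create_connection(target)
    t = threading.Thread(target=_pump, args=(client, upstream))
    t.start()
    _pump(upstream, client)
    t.join()
    client.close()
    upstream.close()
```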
14.2.4. Management clusters that need external access
The HyperShift Operator has a controller that monitors the OpenShift global proxy configuration of the management cluster and sets the proxy environment variables on its own deployment. Control plane deployments that need external access are configured with the proxy environment variables of the management cluster.
If a management cluster uses a proxy configuration and you are configuring a hosted cluster with a secondary network but are not attaching the default pod network, add the CIDR of the secondary network to the proxy configuration. Specifically, you need to add the CIDR of the secondary network to the noProxy section of the proxy configuration for the management cluster. Otherwise, the Kubernetes API server will route some API requests through the proxy. In the hosted cluster configuration, the CIDR of the secondary network is automatically added to the noProxy section.
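The noProxy edit described above amounts to appending the secondary network's CIDR to a comma-separated list if it is not already present. A minimal sketch, with example CIDR values:

```python
def add_no_proxy_entry(no_proxy: str, cidr: str) -> str:
    """Return no_proxy with cidr appended, unless it is already listed."""
    entries = [e.strip() for e in no_proxy.split(",") if e.strip()]
    if cidr not in entries:
        entries.append(cidr)
    return ",".join(entries)
```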