
Chapter 14. Networking for hosted control planes



14.1. Ingress and egress requirements for hosted control planes

Specific network ports must be open for communication between the management cluster, the hosted control planes components, and the compute nodes. The ports are categorized into ingress ports, which involve incoming traffic to hosted control planes, and egress ports, which involve outgoing traffic from hosted control planes.

14.1.1. Ingress requirements for hosted control planes

Ingress ports involve incoming traffic to hosted control planes. Ensure the correct ports are open for communication between the management cluster, the hosted control planes components, and the compute nodes.

The following table details the ports for incoming traffic to hosted control planes across all platforms:

Table 14.1. Common ingress ports

| Port | Protocol | Service | Description | Code reference |
| --- | --- | --- | --- | --- |
| 6443 | TCP | Kubernetes API server | Primary API server port for kubectl and cluster communication | support/config/constants.go:35 - KASSVCPort = 6443 |
| 9090 | TCP | Ignition server | Port from compute nodes during the bootstrap process, NodePort or Route service publishing strategy | - |

The service publishing strategy determines additional ports. The Ignition Proxy and Konnectivity services are exposed through one of the following service publishing strategies:

Route
This setting is the default on OpenShift Container Platform. Traffic flows through the OpenShift router on port 443. No additional firewall rules are needed beyond standard HTTPS.
NodePort
Direct access is required to port 8091 (Konnectivity) and port 8443 (Ignition Proxy).
LoadBalancer
Direct access is required to port 8091 (Konnectivity) through the cloud load balancer.
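The publishing strategy for each service is declared in the HostedCluster specification. The following fragment is a minimal sketch, assuming a hypothetical cluster named example in the clusters namespace; the set of services and strategies shown here is illustrative, not a complete or recommended configuration:

```yaml
# Excerpt from a HostedCluster spec (hypothetical cluster name and namespace).
apiVersion: hypershift.openshift.io/v1beta1
kind: HostedCluster
metadata:
  name: example
  namespace: clusters
spec:
  services:
  - service: APIServer
    servicePublishingStrategy:
      type: LoadBalancer
  - service: Ignition
    servicePublishingStrategy:
      type: Route
  - service: Konnectivity
    servicePublishingStrategy:
      type: Route
```

With this configuration, Konnectivity and Ignition traffic flows through the OpenShift router on port 443, so no firewall rules beyond standard HTTPS are needed for those services.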

The following table details the ingress port configurations that are specific to each platform:

Table 14.2. Platform-specific ingress port configurations

| Platform | Port | Service | Description | Code reference |
| --- | --- | --- | --- | --- |
| Agent | 8443 | Ignition Proxy | HTTPS proxy for ignition content delivery (NodePort publishing) | hypershift-operator/controllers/hostedcluster/network_policies.go:390 |
| Agent | 8081 | Agent CAPI health probe | Health check endpoint for Agent platform CAPI provider | hypershift-operator/controllers/hostedcluster/internal/platform/agent.go:96,105,115 |
| Agent | 8080 | Agent CAPI metrics | Metrics endpoint for Agent platform CAPI provider (binds to localhost only) | hypershift-operator/controllers/hostedcluster/internal/platform/agent/agent.go:97 |
| AWS | 9440 | CAPI health check | Health and readiness probe endpoint for AWS CAPI provider | hypershift-operator/controllers/hostedcluster/internal/platform/aws/aws.go:222-223 |
| Bare metal without the Agent platform | 8443 | Ignition Proxy | HTTPS proxy for ignition content delivery (NodePort publishing) | - |
| KubeVirt | 9440 | CAPI health check | Health and readiness probe endpoint | hypershift-operator/controllers/hostedcluster/internal/platform/kubevirt/kubevirt.go:140 |
| RHOSP (Technology Preview) | 9440 | CAPI health check | Health and readiness probe endpoint | hypershift-operator/controllers/hostedcluster/internal/platform/openstack/openstack.go:238 |
| RHOSP (Technology Preview) | 8081 | ORC health check | Health and readiness probe endpoint for OpenStack Resource Controller | hypershift-operator/controllers/hostedcluster/internal/platform/openstack/openstack.go:294,311 |

The following table details the ingress port configurations for private clusters, such as those on AWS:

Table 14.3. Ingress port configurations for private clusters

| Port | Service | Description | Code reference |
| --- | --- | --- | --- |
| 8080 | Private router HTTP | HTTP traffic through the private router | hypershift-operator/controllers/hostedcluster/network_policies.go:244 |
| 8443 | Private router HTTPS | HTTPS traffic through the private router | hypershift-operator/controllers/hostedcluster/network_policies.go:245 |

14.1.2. Egress requirements for hosted control planes

Egress ports involve outgoing traffic from hosted control planes. Ensure the correct ports are open for communication between the management cluster, the hosted control planes components, and the compute nodes.

The following table details the ports that must be accessible for outgoing traffic from hosted control planes, across all platforms.

Table 14.4. Common egress ports

| Port | Protocol | Service | Purpose |
| --- | --- | --- | --- |
| 443 | TCP | HTTPS | OLM images, Ignition content, external HTTPS services |
| 6443 | TCP | Kubernetes API server | Communication with management cluster API |
| 53 | TCP and UDP | DNS | Standard DNS queries |

Compute nodes require outbound network access to several hosted control planes services. The following table details the egress requirements for compute nodes.

Table 14.5. Compute node egress requirements

| Port | Protocol | Service | Purpose | When required |
| --- | --- | --- | --- | --- |
| 443 | TCP | HTTPS | Container registries, Ignition or Konnectivity service via Route service publishing strategy, external HTTPS services | Always |
| 6443 | TCP | Kubernetes API server | Cluster management and kubelet communication | Always |
| 8091 | TCP | Konnectivity server | Establishes a reverse tunnel for control plane access | NodePort or LoadBalancer publishing only |
| 8443 | TCP | Ignition Proxy | Retrieves bootstrap configuration | NodePort publishing only for Agent platform or bare metal |
| 53 | TCP and UDP | DNS | Name resolution | Always |

14.1.3. Example firewall configuration

Review an example of what the firewall configuration looks like for a typical hosted control planes deployment on AWS that uses the Route service publishing strategy.

Ingress rules
  • Port 6443/TCP: Kubernetes API server, from compute nodes and external clients
  • Port 443/TCP: OpenShift Router for Ignition or Konnectivity routes, from compute nodes
Egress rules
  • Port 443/TCP: HTTPS, to container registries, routes, and external services
  • Port 6443/TCP: Management cluster API, to management cluster
  • Port 53/TCP and UDP: DNS, to DNS servers

If you use NodePort or LoadBalancer service publishing instead of Route service publishing, the following rules apply:

  • Port 8091/TCP: Konnectivity server, from compute nodes
  • Port 8443/TCP: Ignition Proxy, from compute nodes during the bootstrap process, NodePort publishing strategy only
  • Port 9090/TCP: Ignition server, from compute nodes during the bootstrap process, NodePort publishing strategy only
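The strategy-dependent rules above can be sketched programmatically. The following helper is illustrative only (the function name and structure are this example's own, not part of HyperShift); the port numbers and conditions come from the tables in this section:

```python
# Illustrative helper: derive the compute-node egress ports required for a
# given service publishing strategy, following the rules in this section.

BASE_PORTS = {
    443: "HTTPS (registries, routes, external services)",
    6443: "Kubernetes API server",
    53: "DNS (TCP and UDP)",
}

def required_egress_ports(strategy: str, platform: str = "aws") -> dict[int, str]:
    """Return a port -> purpose mapping for compute-node egress traffic."""
    ports = dict(BASE_PORTS)
    if strategy in ("NodePort", "LoadBalancer"):
        # Konnectivity is reached directly instead of through the router.
        ports[8091] = "Konnectivity server (reverse tunnel)"
    if strategy == "NodePort" and platform in ("agent", "baremetal"):
        ports[8443] = "Ignition Proxy (bootstrap configuration)"
    return ports

if __name__ == "__main__":
    print(sorted(required_egress_ports("Route")))               # [53, 443, 6443]
    print(sorted(required_egress_ports("NodePort", "agent")))   # [53, 443, 6443, 8091, 8443]
```

Note how the Route strategy requires no ports beyond the common set, because Ignition and Konnectivity traffic rides the router on port 443.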

14.2. Proxy support for hosted control planes

To ensure that control-plane workloads, compute nodes, management clusters, and hosted clusters have the access they need for optimal performance, you can configure proxy support.

In standalone OpenShift Container Platform, the primary purposes of proxy support are ensuring that workloads in the cluster are configured to use the HTTP or HTTPS proxy to access external services, honoring the NO_PROXY setting if one is configured, and accepting any trust bundle that is configured for the proxy.

In hosted control planes, proxy support includes use cases beyond those in standalone OpenShift Container Platform.

14.2.1. Control plane workloads that need to access external services

Operators that run in the control plane need to access external services through the proxy that is configured for the hosted cluster. The proxy is usually accessible only through the data plane. The control plane workloads are as follows:

  • The Control Plane Operator needs to validate and obtain endpoints from certain identity providers when it creates the OAuth server configuration.
  • The OAuth server needs non-LDAP identity provider access.
  • The OpenShift API server handles image registry metadata import.
  • The Ingress Operator needs access to validate external canary routes.
  • You must open the firewall port 53 on Transmission Control Protocol (TCP) and User Datagram Protocol (UDP) to allow the Domain Name Service (DNS) protocol to work as expected.

In a hosted cluster, you must send traffic that originates from the Control Plane Operator, Ingress Operator, OAuth server, and OpenShift API server pods through the data plane to the configured proxy and then to its final destination.

Note

Some operations are not possible when a hosted cluster is reduced to zero compute nodes; for example, when you import OpenShift image streams from a registry that requires proxy access.

14.2.2. Compute nodes that need to access an ignition endpoint

When compute nodes need a proxy to access the ignition endpoint, you must configure the proxy in the user-data stub that is configured on the compute node when it is created. The proxy configuration is included in that stub.

The stub resembles the following example:

---
{
  "ignition": {
    "config": {
      "merge": [
        {
          "httpHeaders": [
            {"name": "Authorization", "value": "Bearer ..."},
            {"name": "TargetConfigVersionHash", "value": "a4c1b0dd"}
          ],
          "source": "https://ignition.controlplanehost.example.com/ignition",
          "verification": {}
        }
      ],
      "replace": {"verification": {}}
    },
    "proxy": {
      "httpProxy": "http://proxy.example.org:3128",
      "httpsProxy": "https://proxy.example.org:3129",
      "noProxy": "host.example.org"
    },
    "security": {
      "tls": {
        "certificateAuthorities": [
          {"source": "...", "verification": {}}
        ]
      }
    },
    "timeouts": {},
    "version": "3.2.0"
  },
  "passwd": {},
  "storage": {},
  "systemd": {}
}
---
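The proxy settings live in the top-level ignition.proxy object of the stub, alongside the config.merge entry that points at the ignition server. The following sketch parses a trimmed copy of the stub (same hostnames as the example above) to show where the fields sit:

```python
import json

# A trimmed copy of the user-data stub shown above; only the proxy-related
# fields are kept. Ignition applies ignition.proxy when it fetches the
# config referenced in ignition.config.merge[].source.
stub = '''
{"ignition":{"config":{"merge":[{"source":"https://ignition.controlplanehost.example.com/ignition"}]},
 "proxy":{"httpProxy":"http://proxy.example.org:3128",
          "httpsProxy":"https://proxy.example.org:3129",
          "noProxy":"host.example.org"},
 "version":"3.2.0"}}
'''

proxy = json.loads(stub)["ignition"]["proxy"]
print(proxy["httpsProxy"])  # https://proxy.example.org:3129
```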

14.2.3. Compute nodes that need to access the API server

This use case is relevant to self-managed hosted control planes, not to Red Hat OpenShift Service on AWS with hosted control planes.

For communication with the control plane, hosted control planes uses a local proxy in every compute node that listens on IP address 172.20.0.1 and forwards traffic to the API server. If an external proxy is required to access the API server, that local proxy needs to use the external proxy to send traffic out. When a proxy is not needed, hosted control planes uses haproxy for the local proxy, which only forwards packets via TCP. When a proxy is needed, hosted control planes uses a custom proxy, control-plane-operator-kubernetes-default-proxy, to send traffic through the external proxy.
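The no-proxy case is plain TCP splicing: bytes are copied verbatim in both directions with no awareness of the protocol. The following toy sketch illustrates that behavior; it is not the actual haproxy configuration, and it uses ephemeral localhost ports rather than the real 172.20.0.1 listener:

```python
import socket
import threading

def pipe(src: socket.socket, dst: socket.socket) -> None:
    """Copy bytes one way until the source reaches EOF, then propagate it."""
    while data := src.recv(4096):
        dst.sendall(data)
    try:
        dst.shutdown(socket.SHUT_WR)
    except OSError:
        pass  # peer already gone

def serve_once_forwarder(server: socket.socket, target) -> None:
    """Accept one client and blindly splice its TCP stream to the target."""
    client, _ = server.accept()
    upstream = socket.create_connection(target)
    threading.Thread(target=pipe, args=(upstream, client), daemon=True).start()
    pipe(client, upstream)

def echo_once(server: socket.socket) -> None:
    """Stand-in for the API server: echo whatever one connection sends."""
    conn, _ = server.accept()
    while data := conn.recv(4096):
        conn.sendall(data)
    conn.close()

def demo_roundtrip(payload: bytes) -> bytes:
    """Send payload through the forwarder to the echo server and back."""
    echo_srv = socket.create_server(("127.0.0.1", 0))
    fwd_srv = socket.create_server(("127.0.0.1", 0))
    threading.Thread(target=echo_once, args=(echo_srv,), daemon=True).start()
    threading.Thread(target=serve_once_forwarder,
                     args=(fwd_srv, echo_srv.getsockname()), daemon=True).start()
    with socket.create_connection(fwd_srv.getsockname()) as c:
        c.sendall(payload)
        c.shutdown(socket.SHUT_WR)
        reply = b""
        while chunk := c.recv(4096):
            reply += chunk
    return reply

if __name__ == "__main__":
    print(demo_roundtrip(b"GET /healthz"))
```

When an external proxy is required, this blind splice no longer suffices, which is why hosted control planes swaps in the control-plane-operator-kubernetes-default-proxy component instead.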

14.2.4. Management clusters that need external access

The HyperShift Operator has a controller that monitors the OpenShift global proxy configuration of the management cluster and sets the proxy environment variables on its own deployment. Control plane deployments that need external access are configured with the proxy environment variables of the management cluster.

If a management cluster uses a proxy configuration and you are configuring a hosted cluster with a secondary network but are not attaching the default pod network, add the CIDR of the secondary network to the proxy configuration. Specifically, you need to add the CIDR of the secondary network to the noProxy section of the proxy configuration for the management cluster. Otherwise, the Kubernetes API server will route some API requests through the proxy. In the hosted cluster configuration, the CIDR of the secondary network is automatically added to the noProxy section.
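For the management cluster side, the addition goes in the noProxy field of the cluster-wide Proxy resource. The following fragment is a sketch; the proxy URLs and the 192.0.2.0/24 secondary-network CIDR are placeholder values that you replace with your own:

```yaml
# Cluster-wide proxy configuration on the management cluster.
# 192.0.2.0/24 stands in for the CIDR of the secondary network.
apiVersion: config.openshift.io/v1
kind: Proxy
metadata:
  name: cluster
spec:
  httpProxy: http://proxy.example.org:3128
  httpsProxy: https://proxy.example.org:3129
  noProxy: .cluster.local,.svc,192.0.2.0/24
```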
