Chapter 14. Networking for hosted control planes


Ensure optimal performance with hosted control planes by configuring network settings. Those settings include internal subnets and proxy support for control-plane workloads, compute nodes, management clusters, and hosted clusters.

14.1. Ingress and egress requirements for hosted control planes

Specific network ports must be open for communication between the management cluster, the hosted control planes components, and the compute nodes. The ports are categorized into ingress ports, which carry incoming traffic to hosted control planes, and egress ports, which carry outgoing traffic from hosted control planes.

14.1.1. Ingress requirements for hosted control planes

Ingress ports involve incoming traffic to hosted control planes. Ensure the correct ports are open for communication between the management cluster, the hosted control planes components, and the compute nodes.

The following table details the ports for incoming traffic to hosted control planes across all platforms:

Table 14.1. Common ingress ports

| Port | Protocol | Service | Description | Code reference |
| --- | --- | --- | --- | --- |
| 6443 | TCP | Kubernetes API server | Primary API server port for kubectl and cluster communication | support/config/constants.go:35 - KASSVCPort = 6443 |
| 9090 | TCP | Ignition server | Port used by compute nodes during the bootstrap process; NodePort or Route service publishing strategy | - |

The service publishing strategy determines additional ports. The Ignition Proxy and Konnectivity services are exposed through one of the following service publishing strategies:

Route
This setting is the default on OpenShift Container Platform. Traffic flows through the OpenShift router on port 443. No additional firewall rules are needed beyond standard HTTPS.
NodePort
Direct access is required to port 8091 (Konnectivity) and port 8443 (Ignition Proxy).
LoadBalancer
Direct access is required to port 8091 (Konnectivity) through the cloud load balancer.
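The publishing strategy is set per service in the HostedCluster specification. The following fragment is a sketch of the general shape only; the exact set of services and the strategy types supported depend on your platform and version, so treat the values as illustrative:

```yaml
spec:
  services:
  - service: APIServer
    servicePublishingStrategy:
      type: LoadBalancer
  - service: Ignition
    servicePublishingStrategy:
      type: Route
  - service: Konnectivity
    servicePublishingStrategy:
      type: Route
```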

The following table details the ingress port configurations that are specific to each platform:

Table 14.2. Platform-specific ingress port configurations

| Platform | Port | Service | Description | Code reference |
| --- | --- | --- | --- | --- |
| Agent | 8443 | Ignition Proxy | HTTPS proxy for ignition content delivery (NodePort publishing) | hypershift-operator/controllers/hostedcluster/network_policies.go:390 |
| Agent | 8081 | Agent CAPI health probe | Health check endpoint for Agent platform CAPI provider | hypershift-operator/controllers/hostedcluster/internal/platform/agent.go:96,105,115 |
| Agent | 8080 | Agent CAPI metrics | Metrics endpoint for Agent platform CAPI provider (binds to localhost only) | hypershift-operator/controllers/hostedcluster/internal/platform/agent/agent.go:97 |
| AWS | 9440 | CAPI health check | Health and readiness probe endpoint for AWS CAPI provider | hypershift-operator/controllers/hostedcluster/internal/platform/aws/aws.go:222-223 |
| Bare metal without the Agent platform | 8443 | Ignition Proxy | HTTPS proxy for ignition content delivery (NodePort publishing) | - |
| KubeVirt | 9440 | CAPI health check | Health and readiness probe endpoint | hypershift-operator/controllers/hostedcluster/internal/platform/kubevirt/kubevirt.go:140 |
| RHOSP (Technology Preview) | 9440 | CAPI health check | Health and readiness probe endpoint | hypershift-operator/controllers/hostedcluster/internal/platform/openstack/openstack.go:238 |
| RHOSP (Technology Preview) | 8081 | ORC health check | Health and readiness probe endpoint for OpenStack Resource Controller | hypershift-operator/controllers/hostedcluster/internal/platform/openstack/openstack.go:294,311 |

The following table details the ingress port configurations for private clusters, such as those on AWS:

Table 14.3. Ingress port configurations for private clusters

| Port | Service | Description | Code reference |
| --- | --- | --- | --- |
| 8080 | Private router HTTP | HTTP traffic through the private router | hypershift-operator/controllers/hostedcluster/network_policies.go:244 |
| 8443 | Private router HTTPS | HTTPS traffic through the private router | hypershift-operator/controllers/hostedcluster/network_policies.go:245 |

14.1.2. Egress requirements for hosted control planes

Egress ports involve outgoing traffic from hosted control planes. Ensure the correct ports are open for communication between the management cluster, the hosted control planes components, and the compute nodes.

The following table details the ports that must be accessible for outgoing traffic from hosted control planes, across all platforms.

Table 14.4. Common egress ports

| Port | Protocol | Service | Purpose |
| --- | --- | --- | --- |
| 443 | TCP | HTTPS | OLM images, Ignition content, external HTTPS services |
| 6443 | TCP | Kubernetes API server | Communication with the management cluster API |
| 53 | TCP and UDP | DNS | Standard DNS queries |

Compute nodes require outbound network access to several hosted control planes services. The following table details the egress requirements for compute nodes.

Table 14.5. Compute node egress requirements

| Port | Protocol | Service | Purpose | When required |
| --- | --- | --- | --- | --- |
| 443 | TCP | HTTPS | Container registries, Ignition or Konnectivity service through the Route service publishing strategy, external HTTPS services | Always |
| 6443 | TCP | Kubernetes API server | Cluster management and kubelet communication | Always |
| 8091 | TCP | Konnectivity server | Establishes a reverse tunnel for control plane access | NodePort or LoadBalancer publishing only |
| 8443 | TCP | Ignition Proxy | Retrieves bootstrap configuration | NodePort publishing only, for the Agent platform or bare metal |
| 53 | TCP and UDP | DNS | Name resolution | Always |

14.1.3. Example firewall configuration

Review an example of what the firewall configuration looks like for a typical hosted control planes deployment on AWS that uses the Route service publishing strategy.

Ingress rules
  • Port 6443/TCP: Kubernetes API server, from compute nodes and external clients
  • Port 443/TCP: OpenShift Router for Ignition or Konnectivity routes, from compute nodes
Egress rules
  • Port 443/TCP: HTTPS, to container registries, routes, and external services
  • Port 6443/TCP: Management cluster API, to management cluster
  • Port 53/TCP and UDP: DNS, to DNS servers

If you use NodePort or LoadBalancer service publishing instead of Route service publishing, the following rules apply:

  • Port 8091/TCP: Konnectivity server, from compute nodes
  • Port 8443/TCP: Ignition Proxy, from compute nodes during the bootstrap process, NodePort publishing strategy only
  • Port 9090/TCP: Ignition server, from compute nodes during the bootstrap process, NodePort publishing strategy only
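On a host where firewalld manages packet filtering, the common ingress side of this example might be opened as follows. This is an illustrative sketch only; zone selection and your actual firewall tooling will differ:

```shell
# Illustrative sketch: open the common ingress ports for Route service publishing
firewall-cmd --permanent --add-port=6443/tcp   # Kubernetes API server
firewall-cmd --permanent --add-port=443/tcp    # OpenShift router (Ignition/Konnectivity routes)
firewall-cmd --reload
```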

14.2. Configuring internal OVN IPv4 subnets for hosted clusters

In hosted clusters, you can configure internal OVN subnets to avoid routing conflicts, customize network architecture, or enable virtual private cloud (VPC) peering.

Avoid CIDR conflicts
Connect VPCs that host Red Hat OpenShift Service on AWS clusters to other VPCs whose address ranges overlap the default OVN internal subnets of 100.88.0.0/16 and 100.64.0.0/16.
Customize network architecture
Configure internal OVN subnets to align with your corporate network policies.
Enable VPC peering
Deploy hosted clusters in environments where default subnets conflict with peered networks.

Hosted control planes expose two OVN-Kubernetes internal subnet configuration options:

internalJoinSubnet
Internal subnet used by OVN-Kubernetes for the join network (default: 100.64.0.0/16)
internalTransitSwitchSubnet
Internal subnet used for the distributed transit switch in OVN Interconnect architecture (default: 100.88.0.0/16)

You can configure internal OVN subnets in an existing hosted cluster or configure the subnets while you create a hosted cluster.

Prerequisites

  • Your hosted cluster version must be OpenShift Container Platform 4.20 or later.
  • For the network type, your hosted cluster must use networkType: OVNKubernetes.
  • Custom subnets must not overlap with the following subnets:

    • Machine CIDRs
    • Service CIDRs
    • Cluster network CIDRs
    • Any other networks in your infrastructure
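You can check the overlap prerequisite locally before you apply a configuration. The following POSIX shell sketch is not part of the product tooling; the CIDR values come from the example in this section, and you substitute your own. It reports any overlap between a candidate subnet and a list of existing CIDRs:

```shell
# Sketch: verify that a candidate OVN internal subnet does not overlap
# the machine, service, or cluster network CIDRs.
ip_to_int() {
  # split the dotted quad into positional parameters, then pack into one integer
  set -- $(printf '%s' "$1" | tr '.' ' ')
  echo $(( ($1 << 24) | ($2 << 16) | ($3 << 8) | $4 ))
}

cidr_overlap() {
  # returns 0 (true) if the two CIDRs overlap
  n1=$(ip_to_int "${1%/*}"); p1="${1#*/}"
  n2=$(ip_to_int "${2%/*}"); p2="${2#*/}"
  p=$(( p1 < p2 ? p1 : p2 ))          # compare at the shorter prefix
  mask=$(( p == 0 ? 0 : (0xFFFFFFFF << (32 - p)) & 0xFFFFFFFF ))
  [ $(( n1 & mask )) -eq $(( n2 & mask )) ]
}

# CIDRs from the example HostedCluster spec in this section
candidate="100.99.0.0/16"
for existing in 10.0.0.0/16 172.30.0.0/16 10.128.0.0/14; do
  if cidr_overlap "$candidate" "$existing"; then
    echo "conflict: $candidate overlaps $existing"
  fi
done
echo "check complete"
```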

Procedure

  • To configure internal OVN subnets while you create a hosted cluster, in the configuration file for the hosted cluster, include the following section:

    apiVersion: hypershift.openshift.io/v1beta1
    kind: HostedCluster
    metadata:
      name: <hosted_cluster_name>
      namespace: <hosted_control_plane_namespace>
    spec:
      networking:
        networkType: OVNKubernetes
        machineCIDR: 10.0.0.0/16
        serviceCIDR: 172.30.0.0/16
        clusterNetwork:
        - cidr: 10.128.0.0/14
      operatorConfiguration:
        clusterNetworkOperator:
          ovnKubernetesConfig:
            ipv4:
              internalJoinSubnet: "100.99.0.0/16"
              internalTransitSwitchSubnet: "100.69.0.0/16"

    where:

    metadata
    Specifies the name of the hosted cluster and the name of the hosted control plane namespace.
    spec.operatorConfiguration.clusterNetworkOperator.ovnKubernetesConfig.ipv4
    Specifies the subnets to use. Both subnet fields in this section must use valid IPv4 CIDR notation, such as 192.168.1.0/24. The prefix range is /0 to /30, inclusive. The first octet cannot be 0, and the string length must be 9 to 18 characters. The two subnet fields cannot use the same value. Each subnet must be large enough to provide one IP address per node in the cluster, so consider future cluster growth when you plan subnet sizes. If you omit these fields, the internalJoinSubnet field defaults to 100.64.0.0/16 and the internalTransitSwitchSubnet field defaults to 100.88.0.0/16.

    For full details about creating a hosted cluster, see "Creating a hosted cluster by using the CLI".

  • To configure internal OVN subnets in an existing hosted cluster, enter the following command:

    Important

    When you make this change to an existing hosted cluster, the ovnkube-node DaemonSet is rolled out and the OVN components on compute nodes are restarted. During this process, you might experience brief network disruptions.

    $ oc patch hostedcluster <hosted_cluster_name> \
      -n <hosted_control_plane_namespace> \
      --type=merge \
      -p '{
        "spec": {
          "operatorConfiguration": {
            "clusterNetworkOperator": {
              "ovnKubernetesConfig": {
                "ipv4": {
                  "internalJoinSubnet": "100.99.0.0/16",
                  "internalTransitSwitchSubnet": "100.69.0.0/16"
                }
              }
            }
          }
        }
      }'

    where:

    <hosted_cluster_name>
    Specifies the name of the hosted cluster.
    <hosted_control_plane_namespace>
    Specifies the name of the hosted control plane namespace.
    spec.operatorConfiguration.clusterNetworkOperator.ovnKubernetesConfig.ipv4
    Specifies the subnets to use. Both subnet fields in this section must use valid IPv4 CIDR notation, such as 192.168.1.0/24. The prefix range is /0 to /30, inclusive. The first octet cannot be 0, and the string length must be 9 to 18 characters. The two subnet fields cannot use the same value. Each subnet must be large enough to provide one IP address per node in the cluster, so consider future cluster growth when you plan subnet sizes. If you omit these fields, the internalJoinSubnet field defaults to 100.64.0.0/16 and the internalTransitSwitchSubnet field defaults to 100.88.0.0/16.
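As a convenience, the documented format constraints can be sanity-checked locally before you patch the cluster. The following shell function is a hypothetical helper, not product tooling, and covers only the format rules listed above (CIDR notation, /0 to /30 prefix, nonzero first octet, 9 to 18 characters):

```shell
# Sketch: sanity-check a candidate subnet string against the documented rules
check_subnet() {
  s="$1"
  case "$s" in
    */*) ;;
    *) echo "invalid: no prefix"; return 1 ;;
  esac
  prefix="${s#*/}"
  len=${#s}
  case "$s" in
    0.*) echo "invalid: first octet is 0"; return 1 ;;
  esac
  if [ "$len" -lt 9 ] || [ "$len" -gt 18 ]; then
    echo "invalid: length $len outside 9-18"; return 1
  fi
  if [ "$prefix" -lt 0 ] || [ "$prefix" -gt 30 ]; then
    echo "invalid: prefix /$prefix outside /0-/30"; return 1
  fi
  echo "ok"
}

check_subnet "100.99.0.0/16"   # prints "ok"
```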

Verification

  1. Verify that the hosted cluster configuration is correct by entering the following command:

    $ oc get hostedcluster <hosted_cluster_name> -n <hosted_control_plane_namespace> \
      -o jsonpath='{.spec.operatorConfiguration.clusterNetworkOperator.ovnKubernetesConfig}' | jq .

    Example output

    {
      "ipv4": {
        "internalJoinSubnet": "100.99.0.0/16",
        "internalTransitSwitchSubnet": "100.69.0.0/16"
      }
    }

  2. Check the Network Operator configuration in the hosted cluster:

    1. Extract the hosted cluster kubeconfig file by entering the following command:

      $ oc extract secret/<hosted_cluster_name>-admin-kubeconfig \
        -n <hosted_control_plane_namespace> --to=- > <hosted_cluster_kubeconfig_file>
    2. Verify the Network Operator configuration by entering the following command:

      $ oc get network.operator.openshift.io cluster \
        --kubeconfig=<hosted_cluster_kubeconfig_file> \
        -o jsonpath='{.spec.defaultNetwork.ovnKubernetesConfig.ipv4}' | jq .

      Example output

      {
        "internalJoinSubnet": "100.99.0.0/16",
        "internalTransitSwitchSubnet": "100.69.0.0/16"
      }

  3. Create two test pods by completing the following steps:

    1. On node 1, create pod 1, as shown in the following example:

      kind: Pod
      apiVersion: v1
      metadata:
        name: "<pod_1>"
        namespace: "<hosted_control_plane_namespace>"
        labels:
          name: <pod_name>
      spec:
        securityContext:
          runAsNonRoot: true
          seccompProfile:
            type: RuntimeDefault
        containers:
        - image: "<image_url>"
          name: <pod_name>
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop: ["ALL"]
        nodeName: "${NODE1}"
    2. On node 2, create pod 2, as shown in the following example:

      kind: Pod
      apiVersion: v1
      metadata:
        name: "<pod_2>"
        namespace: "<hosted_control_plane_namespace>"
        labels:
          name: <pod_name>
      spec:
        securityContext:
          runAsNonRoot: true
          seccompProfile:
            type: RuntimeDefault
        containers:
        - image: "<image_url>"
          name: <pod_name>
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop: ["ALL"]
        nodeName: "${NODE2}"
  4. Create a test service that is backed by both pods, as shown in the following example:

    kind: Service
    apiVersion: v1
    metadata:
      name: "<test_service_name>"
      namespace: "<hosted_control_plane_namespace>"
      labels:
        name: test-service
    spec:
      internalTrafficPolicy: "Cluster"
      externalTrafficPolicy: ""
      ipFamilyPolicy: "SingleStack"
      ports:
      - name: http
        port: <service_test_port_number>
        protocol: "TCP"
        targetPort: 8080
      selector:
        name: "<pod_name>"
      type: "ClusterIP"
  5. Verify that the OVN pods are running:

    1. Enter the following command:

      $ oc rollout status daemonset/ovnkube-node \
        -n openshift-ovn-kubernetes \
        --kubeconfig=<hosted_cluster_kubeconfig_file> \
        --timeout=5m
    2. Enter the following command:

      $ oc get pods -n openshift-ovn-kubernetes --kubeconfig=<hosted_cluster_kubeconfig_file>

      All ovnkube-node pods should be in Running state with all containers ready.

  6. Make sure that the changes are synchronized to the Network Operator by entering the following command:

    $ oc get network.operator.openshift.io/cluster \
      -o jsonpath='{.spec.defaultNetwork.ovnKubernetesConfig.ipv4}' \
      --kubeconfig=<hosted_cluster_kubeconfig_file> | jq .
  7. Get the IP address of pod 2 and send a request to it from pod 1:

    1. Enter the following command:

      $ pod2_ip=$(oc get pod <pod_2> -n <hosted_control_plane_namespace> \
        -o jsonpath='{.status.podIPs[0].ip}')
    2. Enter the following command:

      $ oc exec <pod_1> -n <hosted_control_plane_namespace> -- /bin/sh -x -c \
        "curl --connect-timeout 5 -s ${pod2_ip}:8080"
  8. Get the service IP address and verify that the pods can be reached through the service:

    1. Enter the following command:

      $ SERVICE_IP=$(oc get service <test_service_name> -n <hosted_control_plane_namespace> \
        -o jsonpath='{.spec.clusterIPs[0]}')
    2. Enter the following command:

      $ oc exec <pod_1> -n <hosted_control_plane_namespace> -- /bin/sh -x -c \
        "curl --connect-timeout 5 -s ${SERVICE_IP}:<service_test_port_number>"

14.3. Proxy support for hosted control planes

To ensure that control-plane workloads, compute nodes, management clusters, and hosted clusters have the access they need for optimal performance, you can configure proxy support.

In standalone OpenShift Container Platform, the primary purposes of proxy support are ensuring that workloads in the cluster are configured to use the HTTP or HTTPS proxy to access external services, honoring the NO_PROXY setting if one is configured, and accepting any trust bundle that is configured for the proxy.

In hosted control planes, proxy support includes use cases beyond those in standalone OpenShift Container Platform.

14.3.1. Control plane workloads that need to access external services

Operators that run in the control plane need to access external services through the proxy that is configured for the hosted cluster. The proxy is usually accessible only through the data plane. The control plane workloads are as follows:

  • The Control Plane Operator needs to validate and obtain endpoints from certain identity providers when it creates the OAuth server configuration.
  • The OAuth server needs non-LDAP identity provider access.
  • The OpenShift API server handles image registry metadata import.
  • The Ingress Operator needs access to validate external canary routes.
  • You must open firewall port 53 over Transmission Control Protocol (TCP) and User Datagram Protocol (UDP) so that the Domain Name System (DNS) protocol works as expected.

In a hosted cluster, you must send traffic that originates from the Control Plane Operator, Ingress Operator, OAuth server, and OpenShift API server pods through the data plane to the configured proxy and then to its final destination.

Note

Some operations are not possible when a hosted cluster is reduced to zero compute nodes; for example, when you import OpenShift image streams from a registry that requires proxy access.

14.3.2. Compute nodes that need to access an ignition endpoint

When compute nodes need a proxy to access the ignition endpoint, you must include the proxy configuration in the user-data stub that is configured on the compute node when the node is created.

The stub resembles the following example:

{
  "ignition": {
    "config": {
      "merge": [
        {
          "httpHeaders": [
            {"name": "Authorization", "value": "Bearer ..."},
            {"name": "TargetConfigVersionHash", "value": "a4c1b0dd"}
          ],
          "source": "https://ignition.controlplanehost.example.com/ignition",
          "verification": {}
        }
      ],
      "replace": {"verification": {}}
    },
    "proxy": {
      "httpProxy": "http://proxy.example.org:3128",
      "httpsProxy": "https://proxy.example.org:3129",
      "noProxy": "host.example.org"
    },
    "security": {
      "tls": {
        "certificateAuthorities": [
          {"source": "...", "verification": {}}
        ]
      }
    },
    "timeouts": {},
    "version": "3.2.0"
  },
  "passwd": {},
  "storage": {},
  "systemd": {}
}

14.3.3. Compute nodes that need to access the API server

This use case is relevant to self-managed hosted control planes, not to Red Hat OpenShift Service on AWS with hosted control planes.

For communication with the control plane, hosted control planes uses a local proxy in every compute node that listens on IP address 172.20.0.1 and forwards traffic to the API server. If an external proxy is required to access the API server, that local proxy needs to use the external proxy to send traffic out. When a proxy is not needed, hosted control planes uses haproxy for the local proxy, which only forwards packets via TCP. When a proxy is needed, hosted control planes uses a custom proxy, control-plane-operator-kubernetes-default-proxy, to send traffic through the external proxy.
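Conceptually, the haproxy instance performs plain TCP pass-through from the well-known local address to the API server endpoint. The following fragment is a hypothetical illustration of that forwarding; the address placeholders are not real values, and the actual configuration is generated and managed by hosted control planes:

```
frontend local-apiserver
    bind 172.20.0.1:6443
    mode tcp
    default_backend kube-apiserver

backend kube-apiserver
    mode tcp
    server apiserver <api_server_address>:<api_server_port>
```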

14.3.4. Management clusters that need external access

The HyperShift Operator has a controller that monitors the OpenShift global proxy configuration of the management cluster and sets the proxy environment variables on its own deployment. Control plane deployments that need external access are configured with the proxy environment variables of the management cluster.

If a management cluster uses a proxy configuration and you are configuring a hosted cluster with a secondary network but are not attaching the default pod network, add the CIDR of the secondary network to the proxy configuration. Specifically, you need to add the CIDR of the secondary network to the noProxy section of the proxy configuration for the management cluster. Otherwise, the Kubernetes API server will route some API requests through the proxy. In the hosted cluster configuration, the CIDR of the secondary network is automatically added to the noProxy section.
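For example, you might add the secondary network CIDR to the management cluster proxy configuration with a patch like the following. This is a sketch: 192.0.2.0/24 is a placeholder CIDR, and because the patch replaces the whole string, the value must also include any existing noProxy entries:

```shell
$ oc patch proxy/cluster --type=merge \
    -p '{"spec":{"noProxy":"<existing_no_proxy_entries>,192.0.2.0/24"}}'
```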
