Chapter 14. Networking for hosted control planes


Ensure optimal performance with hosted control planes by configuring network settings. Those settings include internal subnets and proxy support for control-plane workloads, compute nodes, management clusters, and hosted clusters.

14.1. Configuring internal OVN subnets
In hosted clusters, you can configure internal OVN subnets to avoid routing conflicts, customize network architecture, or enable virtual private cloud (VPC) peering.

Avoid CIDR conflicts
Connect VPCs that host Red Hat OpenShift Service on AWS clusters with other VPCs that use the default OVN internal subnets of 100.88.0.0/16 and 100.64.0.0/16.
Customize network architecture
Configure internal OVN subnets to align with your corporate network policies.
Enable VPC peering
Deploy hosted clusters in environments where default subnets conflict with peered networks.

To configure internal OVN subnets, hosted control planes expose the following two OVN-Kubernetes configuration options:

internalJoinSubnet
Internal subnet used by OVN-Kubernetes for the join network (default: 100.64.0.0/16)
internalTransitSwitchSubnet
Internal subnet used for the distributed transit switch in OVN Interconnect architecture (default: 100.88.0.0/16)

You can configure internal OVN subnets in an existing hosted cluster or configure the subnets while you create a hosted cluster.

Prerequisites

  • Your hosted cluster version must be OpenShift Container Platform 4.20 or later.
  • For the network type, your hosted cluster must use networkType: OVNKubernetes.
  • Custom subnets must not overlap with the following subnets:

    • Machine CIDRs
    • Service CIDRs
    • Cluster network CIDRs
    • Any other networks in your infrastructure
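As a quick local sanity check before you apply a configuration, you can test whether a candidate subnet overlaps any of these reserved ranges. The following sketch uses Python's ipaddress module; the reserved CIDR values are taken from the example hosted cluster configuration in this procedure, so substitute your own cluster's ranges:

```python
import ipaddress

# Reserved ranges from the example hosted cluster configuration in this
# chapter; replace them with the CIDRs of your own cluster.
reserved = [
    ipaddress.ip_network("10.0.0.0/16"),    # machine CIDR
    ipaddress.ip_network("172.30.0.0/16"),  # service CIDR
    ipaddress.ip_network("10.128.0.0/14"),  # cluster network CIDR
]

def conflicts(candidate):
    """Return the reserved networks that overlap the candidate subnet."""
    net = ipaddress.ip_network(candidate)
    return [str(r) for r in reserved if net.overlaps(r)]
```

For example, `conflicts("100.99.0.0/16")` returns an empty list, while `conflicts("10.0.1.0/24")` reports the machine CIDR.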

Procedure

  • To configure internal OVN subnets while you create a hosted cluster, in the configuration file for the hosted cluster, include the following section:

    apiVersion: hypershift.openshift.io/v1beta1
    kind: HostedCluster
    metadata:
      name: <hosted_cluster_name>
      namespace: <hosted_control_plane_namespace>
    spec:
      networking:
        networkType: OVNKubernetes
        machineCIDR: 10.0.0.0/16
        serviceCIDR: 172.30.0.0/16
        clusterNetwork:
        - cidr: 10.128.0.0/14
      operatorConfiguration:
        clusterNetworkOperator:
          ovnKubernetesConfig:
            ipv4:
              internalJoinSubnet: "100.99.0.0/16"
              internalTransitSwitchSubnet: "100.69.0.0/16"

    where:

    metadata
    Specifies the name of the hosted cluster and the name of the hosted control plane namespace.
    spec.operatorConfiguration.clusterNetworkOperator.ovnKubernetesConfig.ipv4
    Specifies the subnets to use. Both subnet fields in this section must use valid IPv4 CIDR notation, such as 192.168.1.0/24. The prefix range is /0 to /30, inclusive, the first octet cannot be 0, and the string length must be 9 to 18 characters. The two subnet fields cannot use the same value, and each subnet must be large enough to provide one IP address per node in the cluster. When you plan subnet sizes, consider future cluster growth. If you omit these fields, the internalJoinSubnet field defaults to 100.64.0.0/16 and the internalTransitSwitchSubnet field defaults to 100.88.0.0/16.

    For full details about creating a hosted cluster, see "Creating a hosted cluster by using the CLI".
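The field constraints described above can also be checked locally before you apply the manifest. The following is a minimal sketch using Python's ipaddress module; the function name and error messages are illustrative, not part of any OpenShift API:

```python
import ipaddress

def validate_ovn_subnet_pair(join_subnet, transit_switch_subnet, node_count):
    """Check two candidate OVN internal subnets against the documented rules."""
    for value in (join_subnet, transit_switch_subnet):
        if not 9 <= len(value) <= 18:
            raise ValueError(f"{value}: string length must be 9 to 18 characters")
        if value.split(".", 1)[0] == "0":
            raise ValueError(f"{value}: first octet cannot be 0")
        # Raises ValueError for anything that is not valid IPv4 CIDR notation.
        net = ipaddress.IPv4Network(value)
        if net.prefixlen > 30:
            raise ValueError(f"{value}: prefix must be in the /0 to /30 range")
        if net.num_addresses < node_count:
            raise ValueError(f"{value}: not large enough for {node_count} nodes")
    if join_subnet == transit_switch_subnet:
        raise ValueError("the two subnet fields cannot use the same value")

# The values from the example configuration pass the checks.
validate_ovn_subnet_pair("100.99.0.0/16", "100.69.0.0/16", node_count=500)
```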

  • To configure internal OVN subnets in an existing hosted cluster, enter the following command:

    Important

    When you make this change to an existing hosted cluster, the ovnkube-node DaemonSet is rolled out and the OVN components on compute nodes are restarted. During this process, you might experience brief network disruptions.

    $ oc patch hostedcluster <hosted_cluster_name> \
      -n <hosted_control_plane_namespace> \
      --type=merge \
      -p '{
        "spec": {
          "operatorConfiguration": {
            "clusterNetworkOperator": {
              "ovnKubernetesConfig": {
                "ipv4": {
                  "internalJoinSubnet": "100.99.0.0/16",
                  "internalTransitSwitchSubnet": "100.69.0.0/16"
                }
              }
            }
          }
        }
      }'

    where:

    <hosted_cluster_name> and <hosted_control_plane_namespace>
    Specify the name of the hosted cluster and the name of the hosted control plane namespace, respectively.
    spec.operatorConfiguration.clusterNetworkOperator.ovnKubernetesConfig.ipv4
    Specifies the subnets to use. Both subnet fields in this section must use valid IPv4 CIDR notation, such as 192.168.1.0/24. The prefix range is /0 to /30, inclusive, the first octet cannot be 0, and the string length must be 9 to 18 characters. The two subnet fields cannot use the same value, and each subnet must be large enough to provide one IP address per node in the cluster. When you plan subnet sizes, consider future cluster growth. If you omit these fields, the internalJoinSubnet field defaults to 100.64.0.0/16 and the internalTransitSwitchSubnet field defaults to 100.88.0.0/16.

Verification

  1. Verify that the hosted cluster configuration is correct by entering the following command:

    $ oc get hostedcluster <hosted_cluster_name> -n <hosted_control_plane_namespace> \
      -o jsonpath='{.spec.operatorConfiguration.clusterNetworkOperator.ovnKubernetesConfig}' | jq .

    Example output

    {
      "ipv4": {
        "internalJoinSubnet": "100.99.0.0/16",
        "internalTransitSwitchSubnet": "100.69.0.0/16"
      }
    }

  2. Check the Network Operator configuration in the hosted cluster:

    1. Extract the hosted cluster kubeconfig file by entering the following command:

      $ oc extract secret/<hosted_cluster_name>-admin-kubeconfig \
        -n <hosted_control_plane_namespace> --to=- > <hosted_cluster_kubeconfig_file>
    2. Verify the Network Operator configuration by entering the following command:

      $ oc get network.operator.openshift.io cluster \
        --kubeconfig=<hosted_cluster_kubeconfig_file> \
        -o jsonpath='{.spec.defaultNetwork.ovnKubernetesConfig.ipv4}' | jq .

      Example output

      {
        "internalJoinSubnet": "100.99.0.0/16",
        "internalTransitSwitchSubnet": "100.69.0.0/16"
      }

  3. Create two test pods by completing the following steps:

    1. On node 1, create pod 1, as shown in the following example:

      kind: Pod
      apiVersion: v1
      metadata:
        name: "<pod_1>"
        namespace: "<hosted_control_plane_namespace>"
        labels:
          name: <pod_name>
      spec:
        securityContext:
          runAsNonRoot: true
          seccompProfile:
            type: RuntimeDefault
        containers:
        - image: "<image_url>"
          name: <pod_name>
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop: ["ALL"]
        nodeName: "${NODE1}"
    2. On node 2, create pod 2, as shown in the following example:

      kind: Pod
      apiVersion: v1
      metadata:
        name: "<pod_2>"
        namespace: "<hosted_control_plane_namespace>"
        labels:
          name: <pod_name>
      spec:
        securityContext:
          runAsNonRoot: true
          seccompProfile:
            type: RuntimeDefault
        containers:
        - image: "<image_url>"
          name: <pod_name>
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop: ["ALL"]
        nodeName: "${NODE2}"
  4. Create a test service that is backed by both pods, as shown in the following example:

    kind: Service
    apiVersion: v1
    metadata:
      name: "<test_service_name>"
      namespace: "<hosted_control_plane_namespace>"
      labels:
        name: test-service
    spec:
      internalTrafficPolicy: "Cluster"
      externalTrafficPolicy: ""
      ipFamilyPolicy: "SingleStack"
      ports:
      - name: http
        port: <service_test_port_number>
        protocol: "TCP"
        targetPort: 8080
      selector:
        name: "<pod_name>"
      type: "ClusterIP"
  5. Verify that the OVN pods are running:

    1. Enter the following command:

      $ oc rollout status daemonset/ovnkube-node \
        -n openshift-ovn-kubernetes \
        --kubeconfig=<hosted_cluster_kubeconfig_file> \
        --timeout=5m
    2. Enter the following command:

      $ oc get pods -n openshift-ovn-kubernetes --kubeconfig=<hosted_cluster_kubeconfig_file>

      All ovnkube-node pods should be in Running state with all containers ready.

  6. Verify that the changes were synchronized to the Network Operator by entering the following command:

    $ oc get network.operator.openshift.io/cluster \
      -o jsonpath='{.spec.defaultNetwork.ovnKubernetesConfig.ipv4}' \
      --kubeconfig=<hosted_cluster_kubeconfig_file> | jq .
  7. Get the IP address of pod 2 and verify connectivity to it from pod 1:

    1. Get the IP address of pod 2 by entering the following command:

      $ pod2_ip=$(oc get pod <pod_2> -n <hosted_control_plane_namespace> \
        -o jsonpath='{.status.podIPs[0].ip}')
    2. Connect to pod 2 from pod 1 by entering the following command:

      $ oc exec <pod_1> -- /bin/sh -x -c "curl --connect-timeout 5 -s ${pod2_ip}:8080"
  8. Get the service IP address and verify that the pods are reachable through the service:

    1. Get the service IP address by entering the following command:

      $ SERVICE_IP=$(oc get service <test_service_name> -o jsonpath='{.spec.clusterIPs[0]}')
    2. Connect to the service from pod 1 by entering the following command:

      $ oc exec <pod_1> -- /bin/sh -x -c "curl --connect-timeout 5 -s ${SERVICE_IP}:<service_test_port_number>"

14.2. Proxy support for hosted control planes

To ensure that control-plane workloads, compute nodes, management clusters, and hosted clusters have the access they need for optimal performance, you can configure proxy support.

In standalone OpenShift Container Platform, the primary purposes of proxy support are ensuring that workloads in the cluster are configured to use the HTTP or HTTPS proxy to access external services, honoring the NO_PROXY setting if one is configured, and accepting any trust bundle that is configured for the proxy.

In hosted control planes, proxy support includes use cases beyond those in standalone OpenShift Container Platform.

Operators that run in the control plane need to access external services through the proxy that is configured for the hosted cluster. The proxy is usually accessible only through the data plane. The control plane workloads are as follows:

  • The Control Plane Operator needs to validate and obtain endpoints from certain identity providers when it creates the OAuth server configuration.
  • The OAuth server needs non-LDAP identity provider access.
  • The OpenShift API server handles image registry metadata import.
  • The Ingress Operator needs access to validate external canary routes.
  • You must open the firewall port 53 on Transmission Control Protocol (TCP) and User Datagram Protocol (UDP) to allow the Domain Name Service (DNS) protocol to work as expected.

In a hosted cluster, you must send traffic that originates from the Control Plane Operator, Ingress Operator, OAuth server, and OpenShift API server pods through the data plane to the configured proxy and then to its final destination.

Note

Some operations are not possible when a hosted cluster is reduced to zero compute nodes; for example, when you import OpenShift image streams from a registry that requires proxy access.

When compute nodes need a proxy to access the ignition endpoint, the proxy configuration is included in the user-data stub that is configured on the compute node when it is created.

The stub resembles the following example:

{
  "ignition": {
    "config": {
      "merge": [
        {
          "httpHeaders": [
            { "name": "Authorization", "value": "Bearer ..." },
            { "name": "TargetConfigVersionHash", "value": "a4c1b0dd" }
          ],
          "source": "https://ignition.controlplanehost.example.com/ignition",
          "verification": {}
        }
      ],
      "replace": { "verification": {} }
    },
    "proxy": {
      "httpProxy": "http://proxy.example.org:3128",
      "httpsProxy": "https://proxy.example.org:3129",
      "noProxy": "host.example.org"
    },
    "security": {
      "tls": {
        "certificateAuthorities": [
          { "source": "...", "verification": {} }
        ]
      }
    },
    "timeouts": {},
    "version": "3.2.0"
  },
  "passwd": {},
  "storage": {},
  "systemd": {}
}

This use case is relevant to self-managed hosted control planes, not to Red Hat OpenShift Service on AWS with hosted control planes.
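For illustration, the proxy settings can be read back out of such a stub with a few lines of Python. The stub below is a reduced copy of the example above; the authorization header, version-hash header, and CA bundle are omitted for brevity:

```python
import json

# Reduced copy of the user-data stub shown above; only the proxy section
# and the ignition version are kept.
stub = (
    '{"ignition":{"proxy":{"httpProxy":"http://proxy.example.org:3128",'
    '"httpsProxy":"https://proxy.example.org:3129",'
    '"noProxy":"host.example.org"},"version":"3.2.0"}}'
)

proxy = json.loads(stub)["ignition"]["proxy"]
print(proxy["httpProxy"])  # http://proxy.example.org:3128
```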

For communication with the control plane, hosted control planes uses a local proxy on every compute node that listens on IP address 172.20.0.1 and forwards traffic to the API server. If an external proxy is required to access the API server, the local proxy uses the external proxy to send traffic out. When no proxy is needed, hosted control planes uses haproxy as the local proxy, which only forwards packets over TCP. When a proxy is needed, hosted control planes uses a custom proxy, control-plane-operator-kubernetes-default-proxy, to send traffic through the external proxy.
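The no-proxy case is a plain TCP pass-through: the local proxy relays bytes in both directions without inspecting them. The following Python sketch illustrates only that idea; it is not the haproxy configuration that hosted control planes actually deploys, and the function names are invented for this example:

```python
import socket
import threading

def _pipe(src, dst):
    # Relay bytes one way until the sending side closes its half of the stream.
    try:
        while chunk := src.recv(4096):
            dst.sendall(chunk)
    except OSError:
        pass
    finally:
        try:
            dst.shutdown(socket.SHUT_WR)
        except OSError:
            pass

def forward_once(listen_port, target_host, target_port):
    # Accept a single loopback client and relay traffic in both directions
    # to the target, without ever inspecting the payload.
    with socket.socket() as server:
        server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        server.bind(("127.0.0.1", listen_port))
        server.listen(1)
        client, _ = server.accept()
    upstream = socket.create_connection((target_host, target_port))
    back = threading.Thread(target=_pipe, args=(upstream, client))
    back.start()
    _pipe(client, upstream)
    back.join()
    client.close()
    upstream.close()
```

A real deployment would accept many concurrent clients and listen on 172.20.0.1 rather than loopback; the single-connection form is kept deliberately small.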

The HyperShift Operator has a controller that monitors the OpenShift global proxy configuration of the management cluster and sets the proxy environment variables on its own deployment. Control plane deployments that need external access are configured with the proxy environment variables of the management cluster.

If a management cluster uses a proxy configuration and you are configuring a hosted cluster with a secondary network but are not attaching the default pod network, add the CIDR of the secondary network to the proxy configuration. Specifically, you need to add the CIDR of the secondary network to the noProxy section of the proxy configuration for the management cluster. Otherwise, the Kubernetes API server will route some API requests through the proxy. In the hosted cluster configuration, the CIDR of the secondary network is automatically added to the noProxy section.
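For example, a management cluster proxy configuration that excludes a secondary network might resemble the following fragment. The proxy URLs and the 192.0.2.0/24 CIDR are placeholder values; substitute the actual CIDR of your secondary network:

```yaml
apiVersion: config.openshift.io/v1
kind: Proxy
metadata:
  name: cluster
spec:
  httpProxy: http://proxy.example.org:3128
  httpsProxy: https://proxy.example.org:3129
  # Add the secondary network CIDR so that API requests to it bypass the proxy.
  noProxy: 192.0.2.0/24
```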
