Chapter 10. Network configuration


The following sections describe the basics of network configuration with the Assisted Installer.

10.1. Cluster networking requirements

OpenShift Container Platform uses the network types and addresses listed in the following table.

Type: clusterNetwork
Description: The IP address pools from which pod IP addresses are allocated.

Type: serviceNetwork
Description: The IP address pool for services.

Type: machineNetwork
Description: The IP address blocks for machines that form the cluster. For a dual-stack configuration, specify both a primary and a secondary machineNetwork. The first network listed determines the primary stack, which can be either IPv4 or IPv6.

Type: apiVIPs
DNS: api.<clustername.clusterdomain>
Description: The VIPs to use for API communication. You must provide this setting or preconfigure the address in the DNS so that the default name resolves correctly. For dual-stack networking, specify one IPv4 address and one IPv6 address, with the primary address listed first.

Type: ingressVIPs
DNS: *.apps.<clustername.clusterdomain>
Description: The VIPs to use for ingress traffic. For dual-stack networking, specify one IPv4 address and one IPv6 address, with the primary address listed first.

Currently, the Assisted Service can deploy OpenShift Container Platform clusters by using one of the following configurations:

  • Single-stack IPv4. IPv6 is not currently supported in a single-stack configuration.
  • Dual-stack IPv4 and IPv6, with either address family as primary.

Open Virtual Network (OVN) is the default Container Network Interface (CNI) in OpenShift Container Platform 4.12 and later releases. Software-Defined Networking (SDN) is supported up to OpenShift Container Platform 4.14 only.

Important

Support for IPv6 as the primary stack in a dual-stack configuration is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/.

10.1.1. Networking limitations

Cluster networking has the following limitations.

Software-Defined Networking (SDN)
  • The SDN controller is not supported with single-node OpenShift.
  • The SDN controller does not support dual-stack networking.
  • The SDN controller is not supported for OpenShift Container Platform 4.15 and later releases. For more information, see Deprecation of the OpenShift SDN network plugin in the OpenShift Container Platform release notes.
OVN-Kubernetes
For more information, see About the OVN-Kubernetes network plugin.

10.1.2. Cluster network

The cluster network is the network from which every pod deployed in the cluster gets its IP address. Because workloads can run on any of the nodes that form the cluster, the network provider must be able to locate an individual node from a pod’s IP address. To achieve this, clusterNetwork.cidr is split into subnets of the size defined in clusterNetwork.hostPrefix.

The host prefix specifies the length of the subnet assigned to each individual node in the cluster. The following is an example of how a cluster might assign addresses in a multi-node cluster:

  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23

Creating a 3-node cluster by using this snippet might create the following network topology:

  • Pods scheduled in node #1 get IPs from 10.128.0.0/23
  • Pods scheduled in node #2 get IPs from 10.128.2.0/23
  • Pods scheduled in node #3 get IPs from 10.128.4.0/23
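
You can reproduce this split with Python's ipaddress module. The following sketch performs the same /14-into-/23 subnetting described above; it is an illustration of the addressing pattern, not Assisted Installer code:

```python
from ipaddress import ip_network

# Split the clusterNetwork CIDR into per-node subnets of hostPrefix size,
# mirroring the /23-per-node allocation shown in the bullet list above.
cluster_cidr = ip_network("10.128.0.0/14")
host_prefix = 23

node_subnets = list(cluster_cidr.subnets(new_prefix=host_prefix))

# A /14 yields 512 subnets of /23; the first three match the example topology.
for node, subnet in enumerate(node_subnets[:3], start=1):
    print(f"Pods scheduled in node #{node} get IPs from {subnet}")
```

Running this prints the three /23 subnets (10.128.0.0/23, 10.128.2.0/23, and 10.128.4.0/23) assigned to the example nodes.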

Explaining OVN-Kubernetes internals is beyond the scope of this document. However, the pattern described earlier provides a way to route pod-to-pod traffic between different nodes without maintaining a large mapping between pods and their corresponding nodes.

10.1.3. Machine network

Machine networks are IP networks that connect all the cluster nodes within OpenShift Container Platform.

The Assisted Installer supports a single machine network for most cluster installations. In such cases, the Assisted Installer automatically determines the appropriate machine network based on the API and Ingress virtual IPs (VIPs) that you specify.
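
As an illustration of that automatic selection, the sketch below picks the host network that contains the API VIP. The logic, network CIDRs, and VIP are hypothetical example values, not Assisted Installer internals:

```python
from ipaddress import ip_address, ip_network

# Hypothetical illustration: derive the machine network from the API VIP
# by selecting the host network that contains it.
host_networks = [ip_network("10.0.0.0/24"), ip_network("192.168.127.0/24")]
api_vip = ip_address("192.168.127.100")

machine_network = next(net for net in host_networks if api_vip in net)
print(machine_network)
```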

The Assisted Installer supports two machine networks in the following scenarios:

  • For dual-stack configurations, the Assisted Installer automatically allocates two machine networks, based on the IPv4 and IPv6 subnets and the API and Ingress VIPs that you specify.
  • For iSCSI boot volumes, the hosts are automatically connected over two machine networks: one designated for the OpenShift Container Platform installation and the other for iSCSI traffic. During the installation process, ensure that you select the OpenShift Container Platform network. Using the iSCSI network will result in an error for the host.

The Assisted Installer supports multiple machine networks for the "cluster-managed networking with a user-managed load balancer" network management type. When installing this network management type, you must manually define the machine networks in the API cluster definitions, with the following conditions:

  • Each node must have at least one network interface in at least one machine network.
  • The load balancer IPs (VIPs) should be included in at least one of the machine networks.

Currently, you can install cluster-managed networking with a user-managed load balancer using the Assisted Installer API only.
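
The two conditions above can be checked programmatically. The sketch below is a minimal illustration under assumed inputs; the node names, addresses, and the in_any helper are all hypothetical:

```python
from ipaddress import ip_address, ip_network

# Hypothetical check of the two machine-network conditions; all names and
# addresses here are illustrative, not Assisted Installer output.
machine_networks = [ip_network("192.168.127.0/24"), ip_network("192.168.145.0/24")]
node_interface_ips = {
    "master-0": [ip_address("192.168.127.10")],
    "master-1": [ip_address("192.168.145.11")],
    "master-2": [ip_address("192.168.127.12")],
}
load_balancer_vips = [ip_address("192.168.127.100")]

def in_any(addr, networks):
    """Return True if addr belongs to at least one of the given networks."""
    return any(addr in net for net in networks)

# Each node must have an interface in at least one machine network.
assert all(in_any(ip, machine_networks) for ips in node_interface_ips.values() for ip in ips)
# The load balancer VIPs should be included in at least one machine network.
assert all(in_any(vip, machine_networks) for vip in load_balancer_vips)
```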

Depending on whether you are deploying single-node OpenShift or a multi-node OpenShift Container Platform cluster, different values are mandatory. The following table explains this in more detail.

Parameter        Single-node OpenShift      Multi-node, DHCP mode      Multi-node, no DHCP mode
clusterNetwork   Required                   Required                   Required
serviceNetwork   Required                   Required                   Required
machineNetwork   Auto-assign possible (*)   Auto-assign possible (*)   Auto-assign possible (*)
apiVIP           Forbidden                  Forbidden                  Required
apiVIPs          Forbidden                  Forbidden                  Required in 4.12 and later releases
ingressVIP       Forbidden                  Forbidden                  Required
ingressVIPs      Forbidden                  Forbidden                  Required in 4.12 and later releases

(*) Auto-assignment of the machine network CIDR happens if there is only a single host network. Otherwise, you must specify it explicitly.

10.1.5. Air-gapped environments

The workflow for deploying a cluster without Internet access has some prerequisites that are outside the scope of this document. For some insights, consult the Zero Touch Provisioning the hard way Git repository.

10.2. Network management types

The Assisted Installer supports the following network management types.

For details on changing the network management type, see the relevant sections in this chapter.

10.2.1. Cluster-managed networking

Cluster-managed networking is the default option for deploying OpenShift Container Platform clusters. It minimizes user intervention by automatically provisioning and managing key network components.

The main characteristics of cluster-managed networking are the following:

  • Integrates automated load balancing and virtual routing for managing the Virtual IP (VIP) addresses to ensure redundancy.
  • Automatically supports an extensive internal DNS (CoreDNS) for service discovery.
  • Hosts all control plane nodes within a single, contiguous subnet, simplifying routing and connectivity within the cluster.
  • Supports the installation of platform-specific features such as the Bare Metal Operator for bare metal.
  • Available for clusters with three or more control plane nodes; not available for single-node OpenShift.

You can configure cluster-managed networking through either the web console or the API. If you do not define a network management type, the Assisted Installer applies cluster-managed networking automatically for highly available clusters.

10.2.2. User-managed networking

User-managed networking allows customers with custom or non-standard network topologies to deploy OpenShift Container Platform clusters. It provides control and flexibility, allowing you to integrate OpenShift Container Platform with existing and complex network infrastructures.

The main characteristics of user-managed networking are the following:

  • Allows users to configure one or more external load balancers for handling API and Ingress IP addresses.
  • Enables control plane nodes to span multiple subnets.
  • Can be deployed on both single-node OpenShift and high-availability clusters.

You can configure user-managed networking through either the Assisted Installer web console or the API.

10.2.3. Cluster-managed networking with a user-managed load balancer

Cluster-managed networking with a user-managed load balancer is a hybrid network management type designed for scenarios that require automated cluster networking with external control over load balancing.

This approach combines elements from both cluster-managed and user-managed networking. The main characteristics of this network management type are as follows:

  • Allows users to configure one or more external load balancers for handling API and Ingress IP addresses.
  • Automatically supports an extensive internal DNS (CoreDNS) for service discovery.
  • Enables control plane nodes to span multiple subnets.
  • Supports the installation of platform-specific features such as the Bare Metal Operator for bare metal.
  • Provides high fault tolerance and disaster recovery for the control plane nodes.

The Assisted Installer supports cluster-managed networking with a user-managed load balancer on a bare-metal or vSphere platform. Currently you can configure this network management type through the API only.

Important

Cluster-managed networking with a user-managed load balancer is a Developer Preview feature only. Developer Preview features are not supported by Red Hat in any way and are not functionally complete or production-ready. Do not use Developer Preview features for production or business-critical workloads. Developer Preview features provide early access to upcoming product features in advance of their possible inclusion in a Red Hat product offering, enabling customers to test functionality and provide feedback during the development process. These features might not have any documentation, are subject to change or removal at any time, and testing is limited. Red Hat might provide ways to submit feedback on Developer Preview features without an associated SLA.

10.3. VIP DHCP allocation

VIP DHCP allocation is a feature that allows users to skip manually providing virtual IPs for API and Ingress; instead, the service automatically assigns those IP addresses from the DHCP server.

If you enable VIP DHCP allocation, the service does not use the api_vips and ingress_vips defined in the cluster configuration. Instead, it requests IP addresses from the DHCP server on the machine network and uses the assigned VIPs accordingly.

Important

VIP DHCP allocation is currently limited to the OpenShift Container Platform SDN network type. SDN is not supported from OpenShift Container Platform version 4.15 and later. Therefore, support for VIP DHCP allocation is also ending from OpenShift Container Platform 4.15 and later.

10.3.1. Enabling VIP DHCP allocation

You can enable automatic VIP allocation through DHCP.

This is not an OpenShift Container Platform feature; it is implemented in the Assisted Service to simplify configuration. For a more detailed explanation of the syntax for the VIP addresses, see Installer-provisioned infrastructure for a bare-metal installation.

Procedure

  1. Follow the instructions for registering a new cluster by using the API. For details, see Registering a new cluster.
  2. Add the following payload settings to the cluster configuration:

    1. Set vip_dhcp_allocation to true.
    2. Set network_type to OpenShiftSDN.
    3. Include your network configurations for cluster_networks, service_networks, and machine_networks.

    Example payload to enable auto-allocation:

    $ cat << EOF > payload.json
    
    {
      "vip_dhcp_allocation": true,
      "network_type": "OpenShiftSDN",
      "user_managed_networking": false,
      "cluster_networks": [
        {
          "cidr": "10.128.0.0/14",
          "host_prefix": 23
        }
      ],
      "service_networks": [
        {
          "cidr": "172.30.0.0/16"
        }
      ],
      "machine_networks": [
        {
          "cidr": "192.168.127.0/24"
        }
      ]
    }
    
    EOF
  3. Submit the payload to the Assisted Service API to apply the configuration, by running the following command:

    $ curl -s -X PATCH "https://api.openshift.com/api/assisted-install/v2/clusters/<cluster-id>" \
      -d @./payload.json \
      -H "Content-Type: application/json" \
      -H "Authorization: Bearer $API_TOKEN" \
      | jq '.id'

10.3.2. Disabling VIP DHCP allocation

If you want to manually control your VIP assignments, you can disable VIP DHCP allocation.

Procedure

  1. Follow the instructions for registering a new cluster by using the API. For details, see Registering a new cluster.
  2. Add the following payload settings to the cluster configuration:

    1. Set vip_dhcp_allocation to false.
    2. Specify the IP addresses for api_vips and ingress_vips. You can take these IPs from your machine_networks configuration.
    3. Set network_type to OVNKubernetes, OpenShiftSDN, or another supported network type, if applicable.
    4. Include your network configurations for cluster_networks and service_networks.

    Example payload to disable auto-allocation:

    $ cat << EOF > payload.json
    
    {
      "api_vips": [
        {
            "ip": "192.168.127.100"
        }
      ],
      "ingress_vips": [
        {
            "ip": "192.168.127.101"
        }
      ],
      "vip_dhcp_allocation": false,
      "network_type": "OVNKubernetes",
      "user_managed_networking": false,
      "cluster_networks": [
        {
          "cidr": "10.128.0.0/14",
          "host_prefix": 23
        }
      ],
      "service_networks": [
        {
          "cidr": "172.30.0.0/16"
        }
      ]
    }
    
    EOF
  3. Submit the payload to the Assisted Service API to apply the configuration, by running the following command:

    $ curl -s -X PATCH "https://api.openshift.com/api/assisted-install/v2/clusters/<cluster-id>" \
      -d @./payload.json \
      -H "Content-Type: application/json" \
      -H "Authorization: Bearer $API_TOKEN" \
      | jq '.id'

10.4. Static network configuration

You can define static network configurations for each host through either the Assisted Installer web console or API. The Assisted Installer applies the settings to the Discovery ISO when you create a new ISO or update an existing one.

When using the API or the YAML view in the web console, create one or more NMState YAML files and map each host MAC address to its corresponding network interface name.

The Form view in the web console does not require these steps.

10.4.1. NMState configuration examples

The NMState YAML file specifies the required static network configuration for the host, including interface details, IP addresses, routes, and DNS settings. The Assisted Installer replaces the logical interface names (for example, eth0) with the actual names during host discovery.

The following examples show NMState YAML configurations that you can copy and adapt. For more examples, see the NMState documentation.

For details on applying the static networking configurations in the Assisted Installer, see Configuring static networks (web console) or Applying a static network configuration (API).

Standard NMState configuration example

This example shows a standard static network configuration with a default route and DNS server.

dns-resolver:
  config:
    server:
    - 192.168.126.1
interfaces:
- ipv4:
    address:
    - ip: 192.168.126.30
      prefix-length: 24
    dhcp: false
    enabled: true
  name: eth0
  state: up
  type: ethernet
- ipv4:
    address:
    - ip: 192.168.141.30
      prefix-length: 24
    dhcp: false
    enabled: true
  name: eth1
  state: up
  type: ethernet
routes:
  config:
  - destination: 0.0.0.0/0
    next-hop-address: 192.168.126.1
    next-hop-interface: eth0
    table-id: 254
Tagged VLAN example

Replace the relevant section of the standard YAML as follows to define a tagged VLAN interface on top of a physical network interface (NIC).

Important

This example and the next one show part of the YAML file only and are not meant for use as-is. Using them incorrectly can cause your machines to lose network connectivity.

interfaces:
  - ipv4:
      address:
      - ip: 192.168.143.15
        prefix-length: 24
      dhcp: false
      enabled: true
    ipv6:
      enabled: false
    name: eth0.404
    state: up
    type: vlan
    vlan:
      base-iface: eth0
      id: 404
      reorder-headers: true
Network bond example

Replace the relevant section of the standard YAML as follows to configure a network bond for redundancy by using the active-backup mode.

interfaces:
- ipv4:
    address:
    - ip: 192.168.138.15
      prefix-length: 24
    dhcp: false
    enabled: true
  ipv6:
    enabled: false
  link-aggregation:
    mode: active-backup
    options:
      miimon: "140"
    port:
    - eth0
    - eth1
  name: bond0
  state: up
  type: bond

10.4.2. MAC-to-NIC mapping examples

Each host requires a mapping between its MAC addresses and corresponding network interface cards (NICs). This mapping serves two main purposes:

  • To identify the correct node on which to apply the YAML file.
  • To replace logical/temporary interface names, such as eth0 or ens3, in cases where the YAML file does not already use physical network interface names or identifier: mac-address.

You define the MAC-to-NIC mapping configurations in the NMState YAML file when using the Assisted Installer API for the installation.

If you are using the YAML view of the web console for the installation, this mapping is not required. Instead, you specify the mapping manually in the MAC to interface name mapping fields. For details, see Configuring static networks using YAML view (web console).
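
The replacement step can be illustrated with a short sketch. Everything here is hypothetical: the discovered dictionary stands in for the NICs found on the host, and the actual interface names are example values, not Assisted Installer output:

```python
# Hypothetical illustration of applying a MAC-to-NIC mapping: replace the
# logical interface names from the NMState file with the kernel names of
# the NICs discovered on the host.
mac_interface_map = [
    {"mac_address": "02:00:00:2c:23:a5", "logical_nic_name": "eth0"},
    {"mac_address": "02:00:00:68:73:dc", "logical_nic_name": "eth1"},
]

# Actual kernel interface names keyed by the MAC addresses found on the host.
discovered = {"02:00:00:2c:23:a5": "ens3", "02:00:00:68:73:dc": "ens4"}

logical_to_actual = {
    entry["logical_nic_name"]: discovered[entry["mac_address"]]
    for entry in mac_interface_map
}
print(logical_to_actual)
```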

Example of MAC interface mapping with logical interface names

In this example, the mapping identifies the node and replaces the temporary interface name.

  • YAML file:

    dns-resolver:
      config:
        server:
        - 192.168.126.1
    interfaces:
    - ipv4:
        address:
        - ip: 192.168.126.30
          prefix-length: 24
        dhcp: false
        enabled: true
      name: eth0
      state: up
      type: ethernet
    - ipv4:
        address:
        - ip: 192.168.141.30
          prefix-length: 24
        dhcp: false
        enabled: true
      name: eth1
      state: up
      type: ethernet
    routes:
      config:
      - destination: 0.0.0.0/0
        next-hop-address: 192.168.126.1
        next-hop-interface: eth0
        table-id: 254
  • MAC mapping:

    mac_interface_map: [
        {
          mac_address: 02:00:00:2c:23:a5,
          logical_nic_name: eth0
        },
        {
          mac_address: 02:00:00:68:73:dc,
          logical_nic_name: eth1
        }
    ]
Example of MAC interface mapping with identifier: mac-address interface names

In this example, the NMState YAML configuration contains identifier: mac-address. This means the mapping only needs to specify a single MAC address to identify one node.

  • YAML file:

    dns-resolver:
      config:
        server:
        - 192.168.126.1
    interfaces:
      - name: eth0
        type: ethernet
        state: up
        identifier: mac-address
        mac-address: 1e:bd:23:e9:fb:94
        ipv4:
          enabled: true
          dhcp: true
        ipv6:
          enabled: true
          dhcp: true
          autoconf: true
    routes:
      config:
      - destination: 0.0.0.0/0
        next-hop-address: 192.168.126.1
        next-hop-interface: eth0
        table-id: 254
  • MAC mapping:

    mac_interface_map: [
        {
          mac_address: 1e:bd:23:e9:fb:95,
          logical_nic_name: test
        }
    ]

10.5. Converting to dual-stack networking

A dual-stack configuration enables clusters to host pods across both IPv4 and IPv6 subnets.

You configure dual-stack by specifying both IPv4 and IPv6 network address families in the configuration file. The order in which you list the IPv4 and IPv6 values determines the primary and secondary stack. The order must remain consistent across all networking parameters, including the machine network, cluster network, service network, API VIP, and Ingress VIP.

In the examples, listing the IPv4 network first makes it the primary stack, with IPv6 as the secondary stack. To set IPv6 as the primary stack, reverse the IPv4 and IPv6 values.
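
A quick way to verify that ordering is to compare the address family of the first entry of each parameter. The sketch below is a hypothetical consistency check over the example CIDRs, not part of the Assisted Installer:

```python
from ipaddress import ip_network

# Hypothetical consistency check: the first (primary) entry of every
# networking parameter must belong to the same address family.
config = {
    "cluster_networks": ["10.128.0.0/14", "fd01::/48"],
    "service_networks": ["172.30.0.0/16", "fd02::/112"],
    "machine_networks": ["192.168.127.0/24", "1001:db8::/120"],
}

primary_versions = {name: ip_network(cidrs[0]).version for name, cidrs in config.items()}
assert len(set(primary_versions.values())) == 1, "primary stack order is inconsistent"
print(primary_versions)
```

Here every parameter lists an IPv4 CIDR first, so IPv4 is the primary stack; reversing the entries would make IPv6 primary.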

Important

Support for IPv6 as the primary stack in a dual-stack configuration is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/.

Before starting, ensure that you are familiar with Converting to IPv4/IPv6 dual stack networking.

Example payload for a single-node OpenShift cluster
{
  "network_type": "OVNKubernetes",
  "user_managed_networking": false,
  "cluster_networks": [
    {
      "cidr": "10.128.0.0/14",
      "host_prefix": 23
    },
    {
      "cidr": "fd01::/48",
      "host_prefix": 64
    }
  ],
  "service_networks": [
    {"cidr": "172.30.0.0/16"}, {"cidr": "fd02::/112"}
  ],
  "machine_networks": [
    {"cidr": "192.168.127.0/24"},{"cidr": "1001:db8::/120"}
  ]
}
Example payload for a multi-node OpenShift cluster
{
  "vip_dhcp_allocation": false,
  "network_type": "OVNKubernetes",
  "user_managed_networking": false,
  "api_vips": [
     {
        "ip": "192.168.127.100"
     },
     {
        "ip": "2001:0db8:85a3:0000:0000:8a2e:0370:7334"
     }
  ],
  "ingress_vips": [
     {
        "ip": "192.168.127.101"
     },
     {
        "ip": "2001:0db8:85a3:0000:0000:8a2e:0370:7335"
     }
  ],
  "cluster_networks": [
    {
      "cidr": "10.128.0.0/14",
      "host_prefix": 23
    },
    {
      "cidr": "fd01::/48",
      "host_prefix": 64
    }
  ],
  "service_networks": [
    {"cidr": "172.30.0.0/16"}, {"cidr": "fd02::/112"}
  ],
  "machine_networks": [
    {"cidr": "192.168.127.0/24"},{"cidr": "1001:db8::/120"}
  ]
}