
Chapter 10. Network configuration


The following sections describe the basics of network configuration with the Assisted Installer.

10.1. Cluster networking

OpenShift Container Platform uses the following network types and addresses.

Important

IPv6 is not currently supported in the following configurations:

  • Single stack
  • Primary within dual stack
  • clusterNetwork: The IP address pools from which pod IP addresses are allocated.
  • serviceNetwork: The IP address pool for services.
  • machineNetwork: The IP address blocks for machines forming the cluster.
  • apiVIP (DNS: api.<clustername.clusterdomain>): The VIP to use for API communication. You must provide this setting or preconfigure the address in the DNS so that the default name resolves correctly. If you are deploying with dual-stack networking, this must be the IPv4 address.
  • apiVIPs (DNS: api.<clustername.clusterdomain>): The VIPs to use for API communication. You must provide this setting or preconfigure the address in the DNS so that the default name resolves correctly. If using dual-stack networking, the first address must be the IPv4 address and the second address must be the IPv6 address. You must also set the apiVIP setting.
  • ingressVIP (DNS: *.apps.<clustername.clusterdomain>): The VIP to use for ingress traffic. If you are deploying with dual-stack networking, this must be the IPv4 address.
  • ingressVIPs (DNS: *.apps.<clustername.clusterdomain>): The VIPs to use for ingress traffic. If you are deploying with dual-stack networking, the first address must be the IPv4 address and the second address must be the IPv6 address. You must also set the ingressVIP setting.

Note

OpenShift Container Platform 4.12 introduces the new apiVIPs and ingressVIPs settings to accept multiple IP addresses for dual-stack networking. When using dual-stack networking, the first IP address must be the IPv4 address and the second IP address must be the IPv6 address. The new settings replace apiVIP and ingressVIP, but you must set both the new and old settings when modifying the configuration by using the API.
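
For example, a payload fragment for updating a dual-stack cluster through the API might set both the older singular settings and the newer plural settings. This is only a sketch: the addresses are illustrative, the plural field names api_vips and ingress_vips appear in later sections of this chapter, and the singular snake_case forms shown here are assumed to follow the same API convention.

{
  "api_vip": "192.168.127.100",
  "api_vips": [
    { "ip": "192.168.127.100" },
    { "ip": "fd2e:6f44:5dd8::100" }
  ],
  "ingress_vip": "192.168.127.101",
  "ingress_vips": [
    { "ip": "192.168.127.101" },
    { "ip": "fd2e:6f44:5dd8::101" }
  ]
}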

Currently, the Assisted Service can deploy OpenShift Container Platform clusters by using one of the following configurations:

  • IPv4
  • Dual-stack (IPv4 + IPv6 with IPv4 as primary)
Note

OVN-Kubernetes is the default Container Network Interface (CNI) network plugin in OpenShift Container Platform 4.12 and later releases. OpenShift SDN is supported up to OpenShift Container Platform 4.14, but is not supported in OpenShift Container Platform 4.15 and later releases.

10.1.1. Limitations

Cluster networking has the following limitations.

SDN
  • The SDN controller is not supported with single-node OpenShift.
  • The SDN controller does not support dual-stack networking.
  • The SDN controller is not supported for OpenShift Container Platform 4.15 and later releases. For more information, see Deprecation of the OpenShift SDN network plugin in the OpenShift Container Platform release notes.
OVN-Kubernetes
For more information, see About the OVN-Kubernetes network plugin.

10.1.2. Cluster network

The cluster network is the network from which every pod deployed in the cluster gets its IP address. Because workloads can run on any of the nodes that form the cluster, the network plugin must be able to easily find an individual node based on a pod's IP address. To do this, clusterNetwork.cidr is further split into subnets of the size defined in clusterNetwork.hostPrefix.

The host prefix specifies the length of the subnet assigned to each individual node in the cluster. The following example shows how a cluster might assign addresses in a multi-node cluster:

  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23

Creating a 3-node cluster by using this snippet might create the following network topology:

  • Pods scheduled on node #1 get IPs from 10.128.0.0/23
  • Pods scheduled on node #2 get IPs from 10.128.2.0/23
  • Pods scheduled on node #3 get IPs from 10.128.4.0/23

Explaining OVN-Kubernetes internals is beyond the scope of this document, but the pattern described above provides a way to route pod-to-pod traffic between different nodes without maintaining a large mapping of pods to their corresponding nodes.
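
As a quick capacity sketch for the snippet above (plain subnet arithmetic, not output from the Assisted Installer):

  clusterNetwork:
  - cidr: 10.128.0.0/14    # 2^(23-14) = 512 possible /23 subnets, one per node
    hostPrefix: 23         # each /23 provides 2^(32-23) = 512 addresses,
                           # so roughly 510 usable pod IP addresses per node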

10.1.3. Machine network

Machine networks are IP networks that connect all the cluster nodes within OpenShift Container Platform.

The Assisted Installer supports a single machine network for most cluster installations. In such cases, the Assisted Installer automatically determines the appropriate machine network based on the API and Ingress virtual IPs (VIPs) that you specify.

The Assisted Installer supports two machine networks in the following scenarios:

  • For dual-stack configurations, the Assisted Installer automatically allocates two machine networks, based on the IPv4 and IPv6 subnets and the API and Ingress VIPs that you specify.
  • For iSCSI boot volumes, the hosts are automatically connected over two machine networks: one designated for the OpenShift Container Platform installation and the other for iSCSI traffic. During the installation process, ensure that you select the OpenShift Container Platform network. Using the iSCSI network will result in an error for the host.

The Assisted Installer supports multiple machine networks for the "cluster-managed networking with a user-managed load balancer" network management type. When installing this network management type, you must manually define the machine networks in the API cluster definitions, with the following conditions:

  • Each node must have at least one network interface in at least one machine network.
  • The load balancer IPs (VIPs) should be included in at least one of the machine networks.

Currently, you can install cluster-managed networking with a user-managed load balancer by using the Assisted Installer API only.
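
For illustration, a cluster definition fragment for this scenario might define two machine networks and place the load balancer VIPs inside one of them, satisfying both conditions above. The subnets and addresses below are examples only:

{
  "machine_networks": [
    { "cidr": "192.168.127.0/24" },
    { "cidr": "192.168.145.0/24" }
  ],
  "api_vips": [
    { "ip": "192.168.127.100" }
  ],
  "ingress_vips": [
    { "ip": "192.168.127.101" }
  ]
}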

10.1.4. Single-node OpenShift compared to multi-node cluster

Depending on whether you are deploying single-node OpenShift or a multi-node cluster, different values are mandatory. The following table explains this in more detail.

Parameter         Single-node OpenShift       Multi-node cluster with DHCP mode    Multi-node cluster without DHCP mode
clusterNetwork    Required                    Required                             Required
serviceNetwork    Required                    Required                             Required
machineNetwork    Auto-assign possible (*)    Auto-assign possible (*)             Auto-assign possible (*)
apiVIP            Forbidden                   Forbidden                            Required
apiVIPs           Forbidden                   Forbidden                            Required in 4.12 and later releases
ingressVIP        Forbidden                   Forbidden                            Required
ingressVIPs       Forbidden                   Forbidden                            Required in 4.12 and later releases

(*) The machine network CIDR is assigned automatically only if there is a single host network. Otherwise, you must specify it explicitly.
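
For example, a single-node OpenShift cluster definition omits the VIP settings entirely and, when there is only a single host network, can omit the machine network as well. The following fragment is illustrative only:

{
  "network_type": "OVNKubernetes",
  "cluster_networks": [
    { "cidr": "10.128.0.0/14", "host_prefix": 23 }
  ],
  "service_networks": [
    { "cidr": "172.30.0.0/16" }
  ]
}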

10.1.5. Air-gapped environments

The workflow for deploying a cluster without Internet access has some prerequisites, which are outside the scope of this document. You can consult the Zero Touch Provisioning the hard way Git repository for some insights.

10.2. VIP DHCP allocation

VIP DHCP allocation is a feature that allows users to skip manually providing virtual IPs for the API and Ingress by leveraging the ability of the service to automatically assign those IP addresses from the DHCP server.

If you enable the VIP DHCP allocation feature, the service will not use the api_vips and ingress_vips defined in the cluster configuration. Instead, it will request IP addresses from the DHCP server on the machine network and use the assigned VIPs accordingly.

Note that this is not an OpenShift Container Platform feature; it has been implemented in the Assisted Service to simplify configuration. For a more detailed explanation of the syntax for the VIP addresses, see "Additional resources".

Important

VIP DHCP allocation is currently limited to the OpenShift Container Platform SDN network type. SDN is not supported from OpenShift Container Platform version 4.15 and later. Therefore, support for VIP DHCP allocation is also ending from OpenShift Container Platform 4.15 and later.

10.2.1. Enabling VIP DHCP allocation

You can enable automatic VIP allocation through DHCP.

Procedure

  1. Follow the instructions for registering a new cluster by using the API. For details, see Registering a new cluster.
  2. Add the following payload settings to the cluster configuration:

    1. Set vip_dhcp_allocation to true.
    2. Set network_type to OpenShiftSDN.
    3. Include your network configurations for cluster_networks, service_networks, and machine_networks.

    Example payload to enable autoallocation

    $ cat << EOF > payload.json
    
    {
      "vip_dhcp_allocation": true,
      "network_type": "OpenShiftSDN",
      "user_managed_networking": false,
      "cluster_networks": [
        {
          "cidr": "10.128.0.0/14",
          "host_prefix": 23
        }
      ],
      "service_networks": [
        {
          "cidr": "172.30.0.0/16"
        }
      ],
      "machine_networks": [
        {
          "cidr": "192.168.127.0/24"
        }
      ]
    }
    
    EOF

  3. Submit the payload to the Assisted Service API to apply the configuration by running the following command:

    $ curl -s -X PATCH "https://api.openshift.com/api/assisted-install/v2/clusters/<cluster-id>" \
      -d @./payload.json \
      -H "Content-Type: application/json" \
      -H "Authorization: Bearer $API_TOKEN" \
      | jq '.id'

10.2.2. Disabling VIP DHCP allocation

If you want to manually control your VIP assignments, you can disable VIP DHCP allocation.

Procedure

  1. Follow the instructions for registering a new cluster by using the API. For details, see Registering a new cluster.
  2. Add the following payload settings to the cluster configuration:

    1. Set vip_dhcp_allocation to false.
    2. Specify the IP addresses for api_vips and ingress_vips. You can take these IPs from your machine_networks configuration.
    3. Set network_type to OVNKubernetes or, for OpenShift Container Platform versions that support it, OpenShiftSDN.
    4. Include your network configurations for cluster_networks and service_networks.

    Example payload to disable autoallocation

    $ cat << EOF > payload.json
    
    {
      "api_vips": [
        {
            "ip": "192.168.127.100"
        }
      ],
      "ingress_vips": [
        {
            "ip": "192.168.127.101"
        }
      ],
      "vip_dhcp_allocation": false,
      "network_type": "OVNKubernetes",
      "user_managed_networking": false,
      "cluster_networks": [
        {
          "cidr": "10.128.0.0/14",
          "host_prefix": 23
        }
      ],
      "service_networks": [
        {
          "cidr": "172.30.0.0/16"
        }
      ]
    }
    
    EOF

  3. Submit the payload to the Assisted Service API to apply the configuration by running the following command:

    $ curl -s -X PATCH "https://api.openshift.com/api/assisted-install/v2/clusters/<cluster-id>" \
      -d @./payload.json \
      -H "Content-Type: application/json" \
      -H "Authorization: Bearer $API_TOKEN" \
      | jq '.id'

10.3. Network management types

The Assisted Installer supports the following network management types:

Cluster-managed networking

Cluster-managed networking is the default option for deploying OpenShift Container Platform clusters. It minimizes user intervention by automatically provisioning and managing key network components.

The main characteristics of cluster-managed networking are the following:

  • Integrates automated load balancing and virtual routing for managing the Virtual IP (VIP) addresses to ensure redundancy.
  • Automatically supports an extensive internal DNS (CoreDNS) for service discovery.
  • Hosts all control plane nodes within a single, contiguous subnet, simplifying routing and connectivity within the cluster.
  • Supports the installation of platform-specific features such as the Bare Metal Operator for bare metal.
  • Available for clusters with three or more control plane nodes; not available for single-node OpenShift.

You can configure cluster-managed networking by using either the web console or the API. If you do not define a network management type, the Assisted Installer applies cluster-managed networking automatically for highly available clusters.

User-managed networking

User-managed networking allows customers with custom or non-standard network topologies to deploy OpenShift Container Platform clusters. It provides control and flexibility, allowing you to integrate OpenShift Container Platform with existing and complex network infrastructures.

The main characteristics of user-managed networking are the following:

  • Allows users to configure one or more external load balancers for handling API and Ingress IP addresses.
  • Enables control plane nodes to span multiple subnets.
  • Can be deployed on both single-node OpenShift and high-availability clusters.

You can configure user-managed networking by using either the Assisted Installer web console or the API.

Cluster-managed networking with a user-managed load balancer

Cluster-managed networking with a user-managed load balancer is a hybrid network management type designed for scenarios that require automated cluster networking with external control over load balancing.

This approach combines elements from both cluster-managed and user-managed networking. The main characteristics of this network management type are as follows:

  • Allows users to configure one or more external load balancers for handling API and Ingress IP addresses.
  • Automatically supports an extensive internal DNS (CoreDNS) for service discovery.
  • Enables control plane nodes to span multiple subnets.
  • Supports the installation of platform-specific features such as the Bare Metal Operator for bare metal.
  • Provides high fault tolerance and disaster recovery for the control plane nodes.

The Assisted Installer supports cluster-managed networking with a user-managed load balancer on a bare-metal or vSphere platform. Currently, you can configure this network management type through the API only.
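
The following fragment is a minimal sketch of what the API cluster definition might include for this network management type. The load_balancer field name and value are assumptions based on the Assisted Service API and might not match your API version, so verify them against the Assisted Service API reference before use; the remaining fields are described earlier in this chapter, and all addresses are illustrative:

{
  "load_balancer": { "type": "user-managed" },
  "api_vips": [
    { "ip": "192.168.127.100" }
  ],
  "ingress_vips": [
    { "ip": "192.168.127.101" }
  ],
  "machine_networks": [
    { "cidr": "192.168.127.0/24" },
    { "cidr": "192.168.145.0/24" }
  ]
}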

Important

Cluster-managed networking with a user-managed load balancer is a Developer Preview feature only. Developer Preview features are not supported by Red Hat in any way and are not functionally complete or production-ready. Do not use Developer Preview features for production or business-critical workloads. Developer Preview features provide early access to upcoming product features in advance of their possible inclusion in a Red Hat product offering, enabling customers to test functionality and provide feedback during the development process. These features might not have any documentation, are subject to change or removal at any time, and testing is limited. Red Hat might provide ways to submit feedback on Developer Preview features without an associated SLA.

10.4. Static network configuration

You can use static network configurations when generating or updating the discovery ISO.

Prerequisites

10.4.1. NMState configuration

The NMState file, in YAML format, specifies the desired network configuration for the host. It contains the logical names of the interfaces, which are replaced with the actual interface names at discovery time.

Example of NMState configuration

dns-resolver:
  config:
    server:
    - 192.168.126.1
interfaces:
- ipv4:
    address:
    - ip: 192.168.126.30
      prefix-length: 24
    dhcp: false
    enabled: true
  name: eth0
  state: up
  type: ethernet
- ipv4:
    address:
    - ip: 192.168.141.30
      prefix-length: 24
    dhcp: false
    enabled: true
  name: eth1
  state: up
  type: ethernet
routes:
  config:
  - destination: 0.0.0.0/0
    next-hop-address: 192.168.126.1
    next-hop-interface: eth0
    table-id: 254

10.4.2. MAC interface mapping

The MAC interface map is an attribute that maps the logical interfaces defined in the NMState configuration to the actual interfaces present on the host.

The mapping should always use physical interfaces present on the host. For example, when the NMState configuration defines a bond or VLAN, the mapping should contain only entries for the parent interfaces.

Example of MAC interface mapping

mac_interface_map: [
    {
      mac_address: 02:00:00:2c:23:a5,
      logical_nic_name: eth0
    },
    {
      mac_address: 02:00:00:68:73:dc,
      logical_nic_name: eth1
    }
]
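
When you generate the discovery ISO through the API, the NMState YAML and the MAC interface map for each host are combined in the infra-env definition. The following fragment is a sketch of that structure, assuming the static_network_config, network_yaml, and mac_interface_map field names from the Assisted Service API; the network_yaml value is the NMState document serialized as a single string:

{
  "static_network_config": [
    {
      "network_yaml": "<NMState YAML for this host, serialized as a string>",
      "mac_interface_map": [
        { "mac_address": "02:00:00:2c:23:a5", "logical_nic_name": "eth0" },
        { "mac_address": "02:00:00:68:73:dc", "logical_nic_name": "eth1" }
      ]
    }
  ]
}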

10.4.3. Additional NMState configuration examples

The following examples show only partial configurations. They are not meant for use as-is; always adjust them to the environment where they will be used. If used incorrectly, they can leave your machines with no network connectivity.

Tagged VLAN example

interfaces:
  - ipv4:
      address:
      - ip: 192.168.143.15
        prefix-length: 24
      dhcp: false
      enabled: true
    ipv6:
      enabled: false
    name: eth0.404
    state: up
    type: vlan
    vlan:
      base-iface: eth0
      id: 404
      reorder-headers: true

Network bond example

interfaces:
- ipv4:
    address:
    - ip: 192.168.138.15
      prefix-length: 24
    dhcp: false
    enabled: true
  ipv6:
    enabled: false
  link-aggregation:
    mode: active-backup
    options:
      miimon: "140"
    port:
    - eth0
    - eth1
  name: bond0
  state: up
  type: bond

10.5. Converting to dual-stack networking

Dual-stack IPv4/IPv6 configuration allows deployment of a cluster with pods residing in both IPv4 and IPv6 subnets.

10.5.1. Prerequisites

10.5.2. Example payload for single-node OpenShift

{
  "network_type": "OVNKubernetes",
  "user_managed_networking": false,
  "cluster_networks": [
    {
      "cidr": "10.128.0.0/14",
      "host_prefix": 23
    },
    {
      "cidr": "fd01::/48",
      "host_prefix": 64
    }
  ],
  "service_networks": [
    {"cidr": "172.30.0.0/16"}, {"cidr": "fd02::/112"}
  ],
  "machine_networks": [
    {"cidr": "192.168.127.0/24"},{"cidr": "1001:db8::/120"}
  ]
}

10.5.3. Example payload for an OpenShift Container Platform cluster consisting of many nodes

{
  "vip_dhcp_allocation": false,
  "network_type": "OVNKubernetes",
  "user_managed_networking": false,
  "api_vips": [
     {
        "ip": "192.168.127.100"
     },
     {
        "ip": "2001:0db8:85a3:0000:0000:8a2e:0370:7334"
     }
  ],
  "ingress_vips": [
     {
        "ip": "192.168.127.101"
     },
     {
        "ip": "2001:0db8:85a3:0000:0000:8a2e:0370:7335"
     }
  ],
  "cluster_networks": [
    {
      "cidr": "10.128.0.0/14",
      "host_prefix": 23
    },
    {
      "cidr": "fd01::/48",
      "host_prefix": 64
    }
  ],
  "service_networks": [
    {"cidr": "172.30.0.0/16"}, {"cidr": "fd02::/112"}
  ],
  "machine_networks": [
    {"cidr": "192.168.127.0/24"},{"cidr": "1001:db8::/120"}
  ]
}

10.5.4. Limitations

When using dual-stack networking, the api_vips and ingress_vips settings must use the primary IP address family, which must be IPv4. Currently, Red Hat does not support dual-stack VIPs or dual-stack networking with IPv6 as the primary IP address family. Red Hat supports dual-stack networking with IPv4 as the primary IP address family and IPv6 as the secondary IP address family. Therefore, you must place the IPv4 entries before the IPv6 entries when entering the IP address values.
