Chapter 11. Network configuration

This section describes the basics of network configuration using the Assisted Installer.

11.1. Cluster networking

OpenShift Container Platform uses several network types and addresses, summarized in the table below.

| Type | DNS | Description |
|---|---|---|
| clusterNetwork | | The IP address pools from which Pod IP addresses are allocated. |
| serviceNetwork | | The IP address pool for services. |
| machineNetwork | | The IP address blocks for machines forming the cluster. |
| apiVIP | api.<clustername.clusterdomain> | The VIP to use for API communication. This setting must either be provided or preconfigured in the DNS so that the default name resolves correctly. If you are deploying with dual-stack networking, this must be the IPv4 address. |
| apiVIPs | api.<clustername.clusterdomain> | The VIPs to use for API communication. This setting must either be provided or preconfigured in the DNS so that the default name resolves correctly. If you are deploying with dual-stack networking, the first address must be the IPv4 address and the second address must be the IPv6 address. You must also set the apiVIP setting. |
| ingressVIP | *.apps.<clustername.clusterdomain> | The VIP to use for ingress traffic. If you are deploying with dual-stack networking, this must be the IPv4 address. |
| ingressVIPs | *.apps.<clustername.clusterdomain> | The VIPs to use for ingress traffic. If you are deploying with dual-stack networking, the first address must be the IPv4 address and the second address must be the IPv6 address. You must also set the ingressVIP setting. |

Note

OpenShift Container Platform 4.12 introduces the new apiVIPs and ingressVIPs settings to accept multiple IP addresses for dual-stack networking. When using dual-stack networking, the first IP address must be the IPv4 address and the second IP address must be the IPv6 address. The new settings replace apiVIP and ingressVIP, but you must set both the new and old settings when modifying the configuration using the API.
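
For example, when modifying a dual-stack cluster through the API, a request body might set both the deprecated singular settings and their new plural counterparts. This is a minimal sketch based on the plural field names used in the payloads later in this chapter; the singular api_vip and ingress_vip REST field names and the IPv6 values are illustrative assumptions, not values to use as-is:

---
{
  "api_vip": "192.168.127.100",
  "api_vips": [
    {"ip": "192.168.127.100"},
    {"ip": "2001:db8::100"}
  ],
  "ingress_vip": "192.168.127.101",
  "ingress_vips": [
    {"ip": "192.168.127.101"},
    {"ip": "2001:db8::101"}
  ]
}
---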

Depending on the desired network stack, you can choose different network controllers. Currently, the Assisted Service can deploy OpenShift Container Platform clusters using one of the following configurations:

  • IPv4
  • IPv6
  • Dual-stack (IPv4 + IPv6)

Supported network controllers depend on the selected stack and are summarized in the table below. For a detailed Container Network Interface (CNI) network provider feature comparison, refer to the OCP Networking documentation.

| Stack | SDN | OVN |
|---|---|---|
| IPv4 | Yes | Yes |
| IPv6 | No | Yes |
| Dual-stack | No | Yes |

Note

OVN is the default Container Network Interface (CNI) in OpenShift Container Platform 4.12 and later releases. SDN is supported up to OpenShift Container Platform 4.14, but not for OpenShift Container Platform 4.15 and later releases.

11.1.1. Limitations

11.1.1.1. SDN

  • The SDN controller is not supported with single-node OpenShift.
  • The SDN controller does not support IPv6.
  • The SDN controller is not supported for OpenShift Container Platform 4.15 and later releases. For more information, see Deprecation of the OpenShift SDN network plugin in the OpenShift Container Platform release notes.

11.1.1.2. OVN-Kubernetes

See the OVN-Kubernetes limitations section in the OCP documentation.

11.1.2. Cluster network

The cluster network is a network from which every Pod deployed in the cluster gets its IP address. Given that workloads may run across many nodes forming the cluster, it is important for the network provider to be able to easily find an individual node based on a Pod's IP address. To do this, clusterNetwork.cidr is further split into subnets of the size defined in clusterNetwork.hostPrefix.

The host prefix specifies the length of the subnet assigned to each individual node in the cluster. The following example shows how a multi-node cluster might assign addresses:

---
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
---

Creating a 3-node cluster using the snippet above may create the following network topology:

  • Pods scheduled in node #1 get IPs from 10.128.0.0/23
  • Pods scheduled in node #2 get IPs from 10.128.2.0/23
  • Pods scheduled in node #3 get IPs from 10.128.4.0/23

Each /23 subnet gives a node roughly 510 usable Pod addresses, and the /14 cluster network splits into 2^(23-14) = 512 such subnets. Explaining OVN-Kubernetes internals is out of scope for this document, but the pattern described above provides a way to route Pod-to-Pod traffic between different nodes without keeping a large mapping between Pods and their corresponding nodes.

11.1.3. Machine network

The machine network is a network used by all the hosts forming the cluster to communicate with each other. This is also the subnet that must include the API and Ingress VIPs.
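
For example, in the payloads used later in this chapter, both VIPs fall inside the machine network CIDR:

---
{
  "machine_networks": [
    {"cidr": "192.168.127.0/24"}
  ],
  "api_vips": [
    {"ip": "192.168.127.100"}
  ],
  "ingress_vips": [
    {"ip": "192.168.127.101"}
  ]
}
---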

11.1.4. SNO compared to multi-node cluster

Depending on whether you are deploying a Single Node OpenShift or a multi-node cluster, different values are mandatory. The table below explains this in more detail.

| Parameter | SNO | Multi-Node Cluster with DHCP mode | Multi-Node Cluster without DHCP mode |
|---|---|---|---|
| clusterNetwork | Required | Required | Required |
| serviceNetwork | Required | Required | Required |
| machineNetwork | Auto-assign possible (*) | Auto-assign possible (*) | Auto-assign possible (*) |
| apiVIP | Forbidden | Forbidden | Required |
| apiVIPs | Forbidden | Forbidden | Required in 4.12 and later releases |
| ingressVIP | Forbidden | Forbidden | Required |
| ingressVIPs | Forbidden | Forbidden | Required in 4.12 and later releases |

(*) The machine network CIDR is auto-assigned only if there is a single host network. Otherwise, you must specify it explicitly.
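
For illustration, a minimal SNO network configuration consistent with this table might look like the following sketch. The api_vips and ingress_vips settings are omitted because they are forbidden for SNO, and machine_networks could also be left out when auto-assignment applies:

---
{
  "network_type": "OVNKubernetes",
  "user_managed_networking": false,
  "cluster_networks": [
    {"cidr": "10.128.0.0/14", "host_prefix": 23}
  ],
  "service_networks": [
    {"cidr": "172.30.0.0/16"}
  ],
  "machine_networks": [
    {"cidr": "192.168.127.0/24"}
  ]
}
---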

11.1.5. Air-gapped environments

The workflow for deploying a cluster without Internet access has some prerequisites, which are out of scope for this document. You may consult the Zero Touch Provisioning the hard way Git repository for some insights.

11.2. VIP DHCP allocation

VIP DHCP allocation is a feature that allows users to skip the manual provisioning of virtual IPs for API and Ingress by leveraging the service's ability to obtain those IP addresses from the DHCP server.

If you enable the feature, instead of using api_vips and ingress_vips from the cluster configuration, the service sends a lease allocation request and uses the VIPs from the reply. The service allocates the IP addresses from the machine network.

Note that this is not an OpenShift Container Platform feature; it is implemented in the Assisted Service to make the configuration easier.

Important

VIP DHCP allocation is currently limited to the OpenShift Container Platform SDN network type. Because SDN is not supported in OpenShift Container Platform 4.15 and later releases, VIP DHCP allocation is also unsupported from OpenShift Container Platform 4.15 onward.

11.2.1. Example payload to enable autoallocation

---
{
  "vip_dhcp_allocation": true,
  "network_type": "OVNKubernetes",
  "user_managed_networking": false,
  "cluster_networks": [
    {
      "cidr": "10.128.0.0/14",
      "host_prefix": 23
    }
  ],
  "service_networks": [
    {
      "cidr": "172.30.0.0/16"
    }
  ],
  "machine_networks": [
    {
      "cidr": "192.168.127.0/24"
    }
  ]
}
---

11.2.2. Example payload to disable autoallocation

---
{
  "api_vips": [
    {
        "ip": "192.168.127.100"
    }
  ],
  "ingress_vips": [
    {
        "ip": "192.168.127.101"
    }
  ],
  "vip_dhcp_allocation": false,
  "network_type": "OVNKubernetes",
  "user_managed_networking": false,
  "cluster_networks": [
    {
      "cidr": "10.128.0.0/14",
      "host_prefix": 23
    }
  ],
  "service_networks": [
    {
      "cidr": "172.30.0.0/16"
    }
  ]
}
---

11.4. Understanding differences between user- and cluster-managed networking

User-managed networking is a feature in the Assisted Installer that allows customers with non-standard network topologies to deploy OpenShift Container Platform clusters. Examples include:

  • Customers with an external load balancer who do not want to use keepalived and VRRP for handling VIP addresses.
  • Deployments with cluster nodes distributed across many distinct L2 network segments.

11.4.1. Validations

The Assisted Installer runs various network validations before allowing the installation to start. When you enable user-managed networking, the following validations change:

  • An L3 connectivity check (ICMP) is performed instead of the L2 check (ARP)
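
To turn the feature on through the API, set the user_managed_networking flag in the cluster definition. A minimal sketch, reusing the flag that appears (disabled) in the example payloads elsewhere in this chapter:

---
{
  "user_managed_networking": true,
  "network_type": "OVNKubernetes"
}
---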

11.5. Static network configuration

You may use static network configurations when generating or updating the discovery ISO.

11.5.2. NMState configuration

The NMState file, in YAML format, specifies the desired network configuration for the host. It uses logical names for the interfaces, which are replaced at discovery time with the actual names of the interfaces on the host.

11.5.2.1. Example of NMState configuration

---
dns-resolver:
  config:
    server:
    - 192.168.126.1
interfaces:
- ipv4:
    address:
    - ip: 192.168.126.30
      prefix-length: 24
    dhcp: false
    enabled: true
  name: eth0
  state: up
  type: ethernet
- ipv4:
    address:
    - ip: 192.168.141.30
      prefix-length: 24
    dhcp: false
    enabled: true
  name: eth1
  state: up
  type: ethernet
routes:
  config:
  - destination: 0.0.0.0/0
    next-hop-address: 192.168.126.1
    next-hop-interface: eth0
    table-id: 254
---

11.5.3. MAC interface mapping

The MAC interface map is an attribute that maps the logical interfaces defined in the NMState configuration to the actual interfaces present on the host.

The mapping should always use physical interfaces present on the host. For example, when the NMState configuration defines a bond or VLAN, the mapping should contain entries only for the parent interfaces.

11.5.3.1. Example of MAC interface mapping

---
"mac_interface_map": [
  {
    "mac_address": "02:00:00:2c:23:a5",
    "logical_nic_name": "eth0"
  },
  {
    "mac_address": "02:00:00:68:73:dc",
    "logical_nic_name": "eth1"
  }
]
---

11.5.4. Additional NMState configuration examples

The examples below show only partial configurations. They are not meant to be used as-is; always adjust them to the environment where they will be used. If used incorrectly, they may leave your machines with no network connectivity.

11.5.4.1. Tagged VLAN

---
interfaces:
- ipv4:
    address:
    - ip: 192.168.143.15
      prefix-length: 24
    dhcp: false
    enabled: true
  ipv6:
    enabled: false
  name: eth0.404
  state: up
  type: vlan
  vlan:
    base-iface: eth0
    id: 404
    reorder-headers: true
---

11.5.4.2. Network bond

---
interfaces:
- ipv4:
    address:
    - ip: 192.168.138.15
      prefix-length: 24
    dhcp: false
    enabled: true
  ipv6:
    enabled: false
  link-aggregation:
    mode: active-backup
    options:
      all_slaves_active: delivered
      miimon: "140"
    slaves:
    - eth0
    - eth1
  name: bond0
  state: up
  type: bond
---
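
When a bond like this one is used with a static network configuration, the MAC interface map lists only the physical parents eth0 and eth1; the logical bond0 interface gets no entry. A sketch of the corresponding fragment, reusing the example MAC addresses from section 11.5.3.1:

---
"mac_interface_map": [
  {"mac_address": "02:00:00:2c:23:a5", "logical_nic_name": "eth0"},
  {"mac_address": "02:00:00:68:73:dc", "logical_nic_name": "eth1"}
]
---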

11.6. Applying a static network configuration with the API

You can apply a static network configuration using the Assisted Installer API.

Prerequisites

  1. You have created an infrastructure environment using the API or have created a cluster using the web console.
  2. You have your infrastructure environment ID exported in your shell as $INFRA_ENV_ID.
  3. You have credentials to use when accessing the API and have exported a token as $API_TOKEN in your shell.
  4. You have YAML files with a static network configuration available as server-a.yaml and server-b.yaml.

Procedure

  1. Create a temporary file /tmp/request-body.txt with the API request:

    ---
    jq -n --arg NMSTATE_YAML1 "$(cat server-a.yaml)" --arg NMSTATE_YAML2 "$(cat server-b.yaml)" \
    '{
      "static_network_config": [
        {
          "network_yaml": $NMSTATE_YAML1,
          "mac_interface_map": [{"mac_address": "02:00:00:2c:23:a5", "logical_nic_name": "eth0"}, {"mac_address": "02:00:00:68:73:dc", "logical_nic_name": "eth1"}]
        },
        {
          "network_yaml": $NMSTATE_YAML2,
          "mac_interface_map": [{"mac_address": "02:00:00:9f:85:eb", "logical_nic_name": "eth1"}, {"mac_address": "02:00:00:c8:be:9b", "logical_nic_name": "eth0"}]
        }
      ]
    }' > /tmp/request-body.txt
    ---
  2. Refresh the API token:

    $ source refresh-token
  3. Send the request to the Assisted Service API endpoint:

    ---
    $ curl -H "Content-Type: application/json" \
    -X PATCH -d @/tmp/request-body.txt \
    -H "Authorization: Bearer ${API_TOKEN}" \
    https://api.openshift.com/api/assisted-install/v2/infra-envs/$INFRA_ENV_ID
    ---

11.8. Converting to dual-stack networking

Dual-stack IPv4/IPv6 configuration allows deployment of a cluster with pods residing in both IPv4 and IPv6 subnets.

11.8.2. Example payload for Single Node OpenShift

---
{
  "network_type": "OVNKubernetes",
  "user_managed_networking": false,
  "cluster_networks": [
    {
      "cidr": "10.128.0.0/14",
      "host_prefix": 23
    },
    {
      "cidr": "fd01::/48",
      "host_prefix": 64
    }
  ],
  "service_networks": [
    {"cidr": "172.30.0.0/16"}, {"cidr": "fd02::/112"}
  ],
  "machine_networks": [
    {"cidr": "192.168.127.0/24"},{"cidr": "1001:db8::/120"}
  ]
}
---

11.8.3. Example payload for an OpenShift Container Platform cluster consisting of many nodes

---
{
  "vip_dhcp_allocation": false,
  "network_type": "OVNKubernetes",
  "user_managed_networking": false,
  "api_vips": [
     {
        "ip": "192.168.127.100"
     },
     {
        "ip": "2001:0db8:85a3:0000:0000:8a2e:0370:7334"
     }
  ],
  "ingress_vips": [
     {
        "ip": "192.168.127.101"
     },
     {
        "ip": "2001:0db8:85a3:0000:0000:8a2e:0370:7335"
     }
  ],
  "cluster_networks": [
    {
      "cidr": "10.128.0.0/14",
      "host_prefix": 23
    },
    {
      "cidr": "fd01::/48",
      "host_prefix": 64
    }
  ],
  "service_networks": [
    {"cidr": "172.30.0.0/16"}, {"cidr": "fd02::/112"}
  ],
  "machine_networks": [
    {"cidr": "192.168.127.0/24"},{"cidr": "1001:db8::/120"}
  ]
}
---

11.8.4. Limitations

When using dual-stack networking, the api_vips and ingress_vips settings must begin with an address of the primary IP address family, which must be IPv4. Currently, Red Hat does not support dual-stack VIPs or dual-stack networking with IPv6 as the primary IP address family; only IPv4 as the primary and IPv6 as the secondary IP address family is supported. Therefore, you must place the IPv4 entries before the IPv6 entries when entering the IP address values.
