Chapter 11. Network configuration
The following sections describe the basics of network configuration with the Assisted Installer.
11.1. Cluster networking
OpenShift Container Platform uses various network types and addresses, which are listed in the following table.
IPv6 is not currently supported in the following configurations:
- Single stack
- Primary within dual stack
Type | DNS | Description |
---|---|---|
clusterNetwork | | The IP address pools from which pod IP addresses are allocated. |
serviceNetwork | | The IP address pool for services. |
machineNetwork | | The IP address blocks for machines forming the cluster. |
apiVIP | api.<clustername.clusterdomain> | The VIP to use for API communication. You must provide this setting or preconfigure the address in the DNS so that the default name resolves correctly. If you are deploying with dual-stack networking, this must be the IPv4 address. |
apiVIPs | api.<clustername.clusterdomain> | The VIPs to use for API communication. You must provide this setting or preconfigure the address in the DNS so that the default name resolves correctly. If you are deploying with dual-stack networking, the first address must be the IPv4 address and the second address must be the IPv6 address. You must also set the apiVIP setting. |
ingressVIP | *.apps.<clustername.clusterdomain> | The VIP to use for ingress traffic. If you are deploying with dual-stack networking, this must be the IPv4 address. |
ingressVIPs | *.apps.<clustername.clusterdomain> | The VIPs to use for ingress traffic. If you are deploying with dual-stack networking, the first address must be the IPv4 address and the second address must be the IPv6 address. You must also set the ingressVIP setting. |
OpenShift Container Platform 4.12 introduces the new apiVIPs and ingressVIPs settings to accept multiple IP addresses for dual-stack networking. When using dual-stack networking, the first IP address must be the IPv4 address and the second IP address must be the IPv6 address. The new settings replace apiVIP and ingressVIP, but you must set both the new and old settings when modifying the configuration by using the API.
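For example, a payload that keeps the old and new settings consistent might look like the following minimal sketch. The singular api_vip and ingress_vip field names and the sample addresses are shown for illustration only; verify them against the API reference for your version.

{
  "api_vip": "192.168.127.100",
  "api_vips": [{"ip": "192.168.127.100"}],
  "ingress_vip": "192.168.127.101",
  "ingress_vips": [{"ip": "192.168.127.101"}]
}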
Currently, the Assisted Service can deploy OpenShift Container Platform clusters by using one of the following configurations:
- IPv4
- Dual-stack (IPv4 + IPv6 with IPv4 as primary)
OVN-Kubernetes is the default Container Network Interface (CNI) network plugin in OpenShift Container Platform 4.12 and later releases. SDN is supported up to OpenShift Container Platform 4.14, but it is not supported in OpenShift Container Platform 4.15 and later releases.
11.1.1. Limitations
11.1.1.1. SDN
- The SDN controller is not supported with single-node OpenShift.
- The SDN controller does not support dual-stack networking.
- The SDN controller is not supported for OpenShift Container Platform 4.15 and later releases. For more information, see Deprecation of the OpenShift SDN network plugin in the OpenShift Container Platform release notes.
11.1.1.2. OVN-Kubernetes
For more information, see About the OVN-Kubernetes network plugin.
11.1.2. Cluster network
The cluster network is a network from which every pod deployed in the cluster gets its IP address. Given that the workload might live across many nodes forming the cluster, it is important for the network provider to be able to easily find an individual node based on the pod's IP address. To do this, clusterNetwork.cidr is further split into subnets of the size defined in clusterNetwork.hostPrefix.
The host prefix specifies the length of the subnet assigned to each individual node in the cluster. The following example shows how a cluster might assign addresses for a multi-node cluster:
clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
Creating a 3-node cluster by using this snippet might create the following network topology:
- Pods scheduled in node #1 get IPs from 10.128.0.0/23
- Pods scheduled in node #2 get IPs from 10.128.2.0/23
- Pods scheduled in node #3 get IPs from 10.128.4.0/23

Explaining OVN-Kubernetes internals is out of scope for this document, but the pattern described here provides a way to route pod-to-pod traffic between different nodes without keeping a large mapping table between pods and their corresponding nodes.
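As a back-of-the-envelope check of the snippet above, you can work out how many node subnets and per-node pod addresses the cidr and hostPrefix values yield. This is a minimal shell sketch of the arithmetic only, not an installer command:

# Node subnets carved out of the /14 cluster network: 2^(23 - 14) = 512
$ echo $(( 2 ** (23 - 14) ))
# Pod addresses usable in each /23 node subnet, minus network and broadcast addresses: 2^(32 - 23) - 2 = 510
$ echo $(( 2 ** (32 - 23) - 2 ))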
11.1.3. Machine network
The machine network is a network used by all the hosts forming the cluster to communicate with each other. This is also the subnet that must include the API and Ingress VIPs.
For iSCSI boot volumes, the hosts are connected over two machine networks: one designated for the OpenShift Container Platform installation and the other for iSCSI traffic. During the installation process, ensure that you specify the OpenShift Container Platform network. Using the iSCSI network will result in an error for the host.
11.1.4. Single-node OpenShift compared to multi-node cluster
Depending on whether you are deploying single-node OpenShift or a multi-node cluster, different values are mandatory. The following table explains this in more detail.
Parameter | Single-node OpenShift | Multi-node cluster with DHCP mode | Multi-node cluster without DHCP mode |
---|---|---|---|
clusterNetwork | Required | Required | Required |
serviceNetwork | Required | Required | Required |
machineNetwork | Auto-assign possible (*) | Auto-assign possible (*) | Auto-assign possible (*) |
apiVIP | Forbidden | Forbidden | Required |
apiVIPs | Forbidden | Forbidden | Required in 4.12 and later releases |
ingressVIP | Forbidden | Forbidden | Required |
ingressVIPs | Forbidden | Forbidden | Required in 4.12 and later releases |
(*) Auto-assignment of the machine network CIDR happens if there is only a single host network. Otherwise, you must specify it explicitly.
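For example, when the hosts have more than one network, you might specify the machine network explicitly in the cluster configuration. The following is a minimal sketch that reuses the machine_networks field shown later in this chapter; the CIDR value is only an illustration:

{
  "machine_networks": [
    {"cidr": "192.168.127.0/24"}
  ]
}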
11.1.5. Air-gapped environments
The workflow for deploying a cluster without Internet access has some prerequisites, which are outside the scope of this document. You can consult the Zero Touch Provisioning the hard way Git repository for some insights.
11.2. VIP DHCP allocation
VIP DHCP allocation is a feature that allows you to skip manually providing virtual IPs for API and Ingress by leveraging the ability of the service to automatically assign those IP addresses from the DHCP server.
If you enable the feature, instead of using api_vips and ingress_vips from the cluster configuration, the service sends a lease allocation request and, based on the reply, uses the resulting VIPs. The service allocates the IP addresses from the machine network.
Note that this is not an OpenShift Container Platform feature; it is implemented in the Assisted Service to make the configuration easier.
VIP DHCP allocation is currently limited to the OpenShift Container Platform SDN network type. SDN is not supported in OpenShift Container Platform 4.15 and later releases. Therefore, support for VIP DHCP allocation also ends with OpenShift Container Platform 4.15 and later releases.
11.2.1. Example payload to enable autoallocation
{ "vip_dhcp_allocation": true, "network_type": "OVNKubernetes", "user_managed_networking": false, "cluster_networks": [ { "cidr": "10.128.0.0/14", "host_prefix": 23 } ], "service_networks": [ { "cidr": "172.30.0.0/16" } ], "machine_networks": [ { "cidr": "192.168.127.0/24" } ] }
{
"vip_dhcp_allocation": true,
"network_type": "OVNKubernetes",
"user_managed_networking": false,
"cluster_networks": [
{
"cidr": "10.128.0.0/14",
"host_prefix": 23
}
],
"service_networks": [
{
"cidr": "172.30.0.0/16"
}
],
"machine_networks": [
{
"cidr": "192.168.127.0/24"
}
]
}
11.2.2. Example payload to disable autoallocation
{ "api_vips": [ { "ip": "192.168.127.100" } ], "ingress_vips": [ { "ip": "192.168.127.101" } ], "vip_dhcp_allocation": false, "network_type": "OVNKubernetes", "user_managed_networking": false, "cluster_networks": [ { "cidr": "10.128.0.0/14", "host_prefix": 23 } ], "service_networks": [ { "cidr": "172.30.0.0/16" } ] }
{
"api_vips": [
{
"ip": "192.168.127.100"
}
],
"ingress_vips": [
{
"ip": "192.168.127.101"
}
],
"vip_dhcp_allocation": false,
"network_type": "OVNKubernetes",
"user_managed_networking": false,
"cluster_networks": [
{
"cidr": "10.128.0.0/14",
"host_prefix": 23
}
],
"service_networks": [
{
"cidr": "172.30.0.0/16"
}
]
}
11.3. Additional resources
- Bare metal IPI documentation provides additional explanation of the syntax for the VIP addresses.
11.4. Understanding differences between user- and cluster-managed networking
User managed networking is a feature in the Assisted Installer that allows customers with non-standard network topologies to deploy OpenShift Container Platform clusters. Examples include:
- Customers with an external load balancer who do not want to use keepalived and VRRP for handling VIP addresses.
- Deployments with cluster nodes distributed across many distinct L2 network segments.
11.4.1. Validations
The Assisted Installer performs various network validations before allowing the installation to start. When you enable user-managed networking, the following validations change:
- The L3 connectivity check (ICMP) is performed instead of the L2 check (ARP).
- The MTU validation verifies the maximum transmission unit (MTU) value for all interfaces and not only for the machine network.
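As a sketch, enabling user-managed networking in the cluster configuration might look like the following; with an external load balancer the API and Ingress VIPs are typically omitted, but verify the exact requirements for your topology:

{
  "user_managed_networking": true,
  "network_type": "OVNKubernetes"
}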
11.5. Static network configuration
You can apply static network configurations when generating or updating the discovery ISO.
11.5.1. Prerequisites
- You are familiar with NMState.
11.5.2. NMState configuration
The NMState file, in YAML format, specifies the desired network configuration for the host. It uses the logical names of the interfaces, which are replaced with the actual interface names at discovery time.
11.5.2.1. Example of NMState configuration
dns-resolver:
  config:
    server:
      - 192.168.126.1
interfaces:
  - ipv4:
      address:
        - ip: 192.168.126.30
          prefix-length: 24
      dhcp: false
      enabled: true
    name: eth0
    state: up
    type: ethernet
  - ipv4:
      address:
        - ip: 192.168.141.30
          prefix-length: 24
      dhcp: false
      enabled: true
    name: eth1
    state: up
    type: ethernet
routes:
  config:
    - destination: 0.0.0.0/0
      next-hop-address: 192.168.126.1
      next-hop-interface: eth0
      table-id: 254
11.5.3. MAC interface mapping
The MAC interface map is an attribute that maps the logical interfaces defined in the NMState configuration to the actual interfaces present on the host.
The mapping should always use physical interfaces present on the host. For example, when the NMState configuration defines a bond or VLAN, the mapping should only contain entries for the parent interfaces.
11.5.3.1. Example of MAC interface mapping
mac_interface_map: [
{
mac_address: 02:00:00:2c:23:a5,
logical_nic_name: eth0
},
{
mac_address: 02:00:00:68:73:dc,
logical_nic_name: eth1
}
]
11.5.4. Additional NMState configuration examples
The following examples are only meant to show a partial configuration. They are not meant for use as-is; you should always adjust them to the environment where they will be used. If used incorrectly, they can leave your machines with no network connectivity.
11.5.4.1. Tagged VLAN
interfaces:
  - ipv4:
      address:
        - ip: 192.168.143.15
          prefix-length: 24
      dhcp: false
      enabled: true
    ipv6:
      enabled: false
    name: eth0.404
    state: up
    type: vlan
    vlan:
      base-iface: eth0
      id: 404
      reorder-headers: true
11.5.4.2. Network bond
interfaces:
  - ipv4:
      address:
        - ip: 192.168.138.15
          prefix-length: 24
      dhcp: false
      enabled: true
    ipv6:
      enabled: false
    link-aggregation:
      mode: active-backup
      options:
        miimon: "140"
      port:
        - eth0
        - eth1
    name: bond0
    state: up
    type: bond
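As noted in the MAC interface mapping section, the mapping for this bond would list only the physical parent interfaces, not bond0. The following sketch reuses the hypothetical MAC addresses from the earlier mapping example:

mac_interface_map: [
  {
    mac_address: 02:00:00:2c:23:a5,
    logical_nic_name: eth0
  },
  {
    mac_address: 02:00:00:68:73:dc,
    logical_nic_name: eth1
  }
]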
11.6. Applying a static network configuration with the API
You can apply a static network configuration by using the Assisted Installer API.
A static IP configuration is not supported in the following scenarios:
- OpenShift Container Platform installations on Oracle Cloud Infrastructure.
- OpenShift Container Platform installations on iSCSI boot volumes.
Prerequisites
- You have created an infrastructure environment using the API or have created a cluster using the web console.
- You have your infrastructure environment ID exported in your shell as $INFRA_ENV_ID.
- You have credentials to use when accessing the API and have exported a token as $API_TOKEN in your shell.
- You have YAML files with a static network configuration available as server-a.yaml and server-b.yaml.
Procedure
Create a temporary file /tmp/request-body.txt with the API request:

$ jq -n --arg NMSTATE_YAML1 "$(cat server-a.yaml)" --arg NMSTATE_YAML2 "$(cat server-b.yaml)" \
'{
  "static_network_config": [
    {
      "network_yaml": $NMSTATE_YAML1,
      "mac_interface_map": [{"mac_address": "02:00:00:2c:23:a5", "logical_nic_name": "eth0"}, {"mac_address": "02:00:00:68:73:dc", "logical_nic_name": "eth1"}]
    },
    {
      "network_yaml": $NMSTATE_YAML2,
      "mac_interface_map": [{"mac_address": "02:00:00:9f:85:eb", "logical_nic_name": "eth1"}, {"mac_address": "02:00:00:c8:be:9b", "logical_nic_name": "eth0"}]
    }
  ]
}' >> /tmp/request-body.txt
Refresh the API token:

$ source refresh-token
Send the request to the Assisted Service API endpoint:

$ curl -H "Content-Type: application/json" \
  -X PATCH -d @/tmp/request-body.txt \
  -H "Authorization: Bearer ${API_TOKEN}" \
  https://api.openshift.com/api/assisted-install/v2/infra-envs/$INFRA_ENV_ID
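Optionally, you can verify that the configuration was stored by querying the infrastructure environment. This is a sketch that assumes the static_network_config field is returned in the GET response for the infra-env:

$ curl -s -H "Authorization: Bearer ${API_TOKEN}" \
  https://api.openshift.com/api/assisted-install/v2/infra-envs/$INFRA_ENV_ID \
  | jq '.static_network_config'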
11.7. Additional resources
11.8. Converting to dual-stack networking
Dual-stack IPv4/IPv6 configuration allows deployment of a cluster with pods residing in both IPv4 and IPv6 subnets.
11.8.1. Prerequisites
- You are familiar with the OVN-Kubernetes documentation.
11.8.2. Example payload for single-node OpenShift
{ "network_type": "OVNKubernetes", "user_managed_networking": false, "cluster_networks": [ { "cidr": "10.128.0.0/14", "host_prefix": 23 }, { "cidr": "fd01::/48", "host_prefix": 64 } ], "service_networks": [ {"cidr": "172.30.0.0/16"}, {"cidr": "fd02::/112"} ], "machine_networks": [ {"cidr": "192.168.127.0/24"},{"cidr": "1001:db8::/120"} ] }
{
"network_type": "OVNKubernetes",
"user_managed_networking": false,
"cluster_networks": [
{
"cidr": "10.128.0.0/14",
"host_prefix": 23
},
{
"cidr": "fd01::/48",
"host_prefix": 64
}
],
"service_networks": [
{"cidr": "172.30.0.0/16"}, {"cidr": "fd02::/112"}
],
"machine_networks": [
{"cidr": "192.168.127.0/24"},{"cidr": "1001:db8::/120"}
]
}
11.8.3. Example payload for an OpenShift Container Platform cluster consisting of many nodes
{ "vip_dhcp_allocation": false, "network_type": "OVNKubernetes", "user_managed_networking": false, "api_vips": [ { "ip": "192.168.127.100" }, { "ip": "2001:0db8:85a3:0000:0000:8a2e:0370:7334" } ], "ingress_vips": [ { "ip": "192.168.127.101" }, { "ip": "2001:0db8:85a3:0000:0000:8a2e:0370:7335" } ], "cluster_networks": [ { "cidr": "10.128.0.0/14", "host_prefix": 23 }, { "cidr": "fd01::/48", "host_prefix": 64 } ], "service_networks": [ {"cidr": "172.30.0.0/16"}, {"cidr": "fd02::/112"} ], "machine_networks": [ {"cidr": "192.168.127.0/24"},{"cidr": "1001:db8::/120"} ] }
{
"vip_dhcp_allocation": false,
"network_type": "OVNKubernetes",
"user_managed_networking": false,
"api_vips": [
{
"ip": "192.168.127.100"
},
{
"ip": "2001:0db8:85a3:0000:0000:8a2e:0370:7334"
}
],
"ingress_vips": [
{
"ip": "192.168.127.101"
},
{
"ip": "2001:0db8:85a3:0000:0000:8a2e:0370:7335"
}
],
"cluster_networks": [
{
"cidr": "10.128.0.0/14",
"host_prefix": 23
},
{
"cidr": "fd01::/48",
"host_prefix": 64
}
],
"service_networks": [
{"cidr": "172.30.0.0/16"}, {"cidr": "fd02::/112"}
],
"machine_networks": [
{"cidr": "192.168.127.0/24"},{"cidr": "1001:db8::/120"}
]
}
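To apply either payload to an existing cluster, you can send it to the Assisted Service API with a PATCH request. This is a sketch that assumes the payload was saved as /tmp/dual-stack.json and that your cluster ID is exported in your shell as $CLUSTER_ID:

$ curl -H "Content-Type: application/json" \
  -X PATCH -d @/tmp/dual-stack.json \
  -H "Authorization: Bearer ${API_TOKEN}" \
  https://api.openshift.com/api/assisted-install/v2/clusters/$CLUSTER_ID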
11.8.4. Limitations
When using dual-stack networking, the api_vips and ingress_vips settings must use the primary IP address family, which must be IPv4. Currently, Red Hat does not support dual-stack VIPs or dual-stack networking with IPv6 as the primary IP address family. Red Hat supports dual-stack networking with IPv4 as the primary IP address family and IPv6 as the secondary IP address family. Therefore, you must place the IPv4 entries before the IPv6 entries when entering the IP address values.