Chapter 10. Network configuration
The following sections describe the basics of network configuration with the Assisted Installer.
10.1. Cluster networking
OpenShift Container Platform uses various network types and addresses, which are listed in the following table.
IPv6 is not currently supported in the following configurations:
- Single stack
- Primary within dual stack
Type | DNS | Description |
---|---|---|
clusterNetwork | | The IP address pools from which pod IP addresses are allocated. |
serviceNetwork | | The IP address pool for services. |
machineNetwork | | The IP address blocks for machines forming the cluster. |
apiVIP | api.&lt;cluster-name&gt;.&lt;base-domain&gt; | The VIP to use for API communication. You must provide this setting or preconfigure the address in the DNS so that the default name resolves correctly. If you are deploying with dual-stack networking, this must be the IPv4 address. |
apiVIPs | api.&lt;cluster-name&gt;.&lt;base-domain&gt; | The VIPs to use for API communication. You must provide this setting or preconfigure the address in the DNS so that the default name resolves correctly. If you are deploying with dual-stack networking, the first address must be the IPv4 address and the second address must be the IPv6 address. You must also set the apiVIP setting. |
ingressVIP | *.apps.&lt;cluster-name&gt;.&lt;base-domain&gt; | The VIP to use for ingress traffic. If you are deploying with dual-stack networking, this must be the IPv4 address. |
ingressVIPs | *.apps.&lt;cluster-name&gt;.&lt;base-domain&gt; | The VIPs to use for ingress traffic. If you are deploying with dual-stack networking, the first address must be the IPv4 address and the second address must be the IPv6 address. You must also set the ingressVIP setting. |
OpenShift Container Platform 4.12 introduces the new apiVIPs and ingressVIPs settings, which accept multiple IP addresses for dual-stack networking. When using dual-stack networking, the first IP address must be the IPv4 address and the second IP address must be the IPv6 address. The new settings replace apiVIP and ingressVIP, but you must set both the new and the old settings when modifying the configuration by using the API.
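For example, when modifying a cluster by using the API, a payload fragment that sets both the old and the new fields might look like the following minimal sketch. The field names follow the API's snake_case convention, and the addresses are placeholders:

{
  "api_vip": "192.168.127.100",
  "api_vips": [
    { "ip": "192.168.127.100" }
  ],
  "ingress_vip": "192.168.127.101",
  "ingress_vips": [
    { "ip": "192.168.127.101" }
  ]
}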
Currently, the Assisted Service can deploy OpenShift Container Platform clusters by using one of the following configurations:
- IPv4
- Dual-stack (IPv4 + IPv6 with IPv4 as primary)
OVN-Kubernetes is the default Container Network Interface (CNI) in OpenShift Container Platform 4.12 and later releases. SDN is supported up to OpenShift Container Platform 4.14, but not for OpenShift Container Platform 4.15 and later releases.
10.1.1. Limitations
Cluster networking has the following limitations.
- SDN
- The SDN controller is not supported with single-node OpenShift.
- The SDN controller does not support dual-stack networking.
- The SDN controller is not supported for OpenShift Container Platform 4.15 and later releases. For more information, see Deprecation of the OpenShift SDN network plugin in the OpenShift Container Platform release notes.
- OVN-Kubernetes
- For more information, see About the OVN-Kubernetes network plugin.
10.1.2. Cluster network
The cluster network is a network from which every pod deployed in the cluster gets its IP address. Given that the workload might live across many nodes forming the cluster, it is important for the network provider to be able to easily find an individual node based on the pod’s IP address. To do this, clusterNetwork.cidr is further split into subnets of the size defined in clusterNetwork.hostPrefix.
The host prefix specifies the length of the subnet assigned to each individual node in the cluster. The following example shows how a cluster might assign addresses for a multi-node cluster:
clusterNetwork:
- cidr: 10.128.0.0/14
hostPrefix: 23
Creating a 3-node cluster by using this snippet might create the following network topology:

- Pods scheduled in node #1 get IPs from 10.128.0.0/23
- Pods scheduled in node #2 get IPs from 10.128.2.0/23
- Pods scheduled in node #3 get IPs from 10.128.4.0/23

Each /23 subnet provides 2^(32-23) = 512 pod IP addresses per node, and the /14 cluster network can be split into as many as 2^(23-14) = 512 such node subnets.
Explaining OVN-Kubernetes internals is out of scope for this document, but the pattern described previously provides a way to route pod-to-pod traffic between different nodes without keeping a large mapping between pods and their corresponding nodes.
10.1.3. Machine network
Machine networks are IP networks that connect all the cluster nodes within OpenShift Container Platform.
The Assisted Installer supports a single machine network for most cluster installations. In such cases, the Assisted Installer automatically determines the appropriate machine network based on the API and Ingress virtual IPs (VIPs) that you specify.
The Assisted Installer supports two machine networks in the following scenarios:
- For dual stack configurations, the Assisted Installer automatically allocates two machine networks, based on the IPv4 and IPv6 subnets and the API and Ingress VIPs that you specify.
- For iSCSI boot volumes, the hosts are automatically connected over two machine networks: one designated for the OpenShift Container Platform installation and the other for iSCSI traffic. During the installation process, ensure that you select the OpenShift Container Platform network. Using the iSCSI network will result in an error for the host.
The Assisted Installer supports multiple machine networks for the "cluster-managed networking with a user-managed load balancer" network management type. When installing this network management type, you must manually define the machine networks in the API cluster definitions, with the following conditions:
- Each node must have at least one network interface in at least one machine network.
- The load balancer IPs (VIPs) should be included in at least one of the machine networks.
Currently, you can install cluster-managed networking with a user-managed load balancer using the Assisted Installer API only.
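The following minimal sketch shows how such a cluster definition might look in an API payload. The subnets and VIP addresses are placeholders, and the load_balancer field shown here is an assumption about how the API designates this network management type; the load balancer IPs fall inside the first machine network, satisfying the conditions above:

{
  "load_balancer": {
    "type": "user-managed"
  },
  "api_vips": [
    { "ip": "192.168.127.100" }
  ],
  "ingress_vips": [
    { "ip": "192.168.127.101" }
  ],
  "machine_networks": [
    { "cidr": "192.168.127.0/24" },
    { "cidr": "192.168.128.0/24" }
  ]
}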
10.1.4. Single-node OpenShift compared to multi-node cluster
Depending on whether you are deploying single-node OpenShift or a multi-node cluster, different values are mandatory. The following table explains this in more detail.
Parameter | Single-node OpenShift | Multi-node cluster with DHCP mode | Multi-node cluster without DHCP mode |
---|---|---|---|
clusterNetwork | Required | Required | Required |
serviceNetwork | Required | Required | Required |
machineNetwork | Auto-assign possible (*) | Auto-assign possible (*) | Auto-assign possible (*) |
apiVIP | Forbidden | Forbidden | Required |
apiVIPs | Forbidden | Forbidden | Required in 4.12 and later releases |
ingressVIP | Forbidden | Forbidden | Required |
ingressVIPs | Forbidden | Forbidden | Required in 4.12 and later releases |
(*) Auto-assignment of the machine network CIDR happens if there is only a single host network. Otherwise, you must specify it explicitly.
10.1.5. Air-gapped environments
The workflow for deploying a cluster without Internet access has some prerequisites, which are beyond the scope of this document. You can consult the Zero Touch Provisioning the hard way Git repository for some insights.
10.2. VIP DHCP allocation
VIP DHCP allocation is a feature that allows users to skip manually providing virtual IPs for API and Ingress by leveraging the ability of the service to automatically assign those IP addresses from the DHCP server.

If you enable the feature, the service does not use the api_vips and ingress_vips defined in the cluster configuration. Instead, it requests IP addresses from the DHCP server on the machine network and uses the assigned VIPs accordingly.

Note that this is not an OpenShift Container Platform feature; it is implemented in the Assisted Service to make the configuration easier. For a more detailed explanation of the syntax for the VIP addresses, see "Additional resources".

VIP DHCP allocation is limited to the OpenShift Container Platform SDN network type. Because SDN is not supported in OpenShift Container Platform 4.15 and later releases, support for VIP DHCP allocation also ends with OpenShift Container Platform 4.15.
10.2.1. Enabling VIP DHCP allocation
You can enable automatic VIP allocation through DHCP.
Procedure
- Follow the instructions for registering a new cluster by using the API. For details, see Registering a new cluster.
Add the following payload settings to the cluster configuration:
- Set vip_dhcp_allocation to true.
- Set network_type to OpenShiftSDN.
- Include your network configurations for cluster_networks, service_networks, and machine_networks.
Example payload to enable autoallocation
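The following is a minimal sketch of such a payload; the CIDR values are placeholders that you must adapt to your environment:

{
  "vip_dhcp_allocation": true,
  "network_type": "OpenShiftSDN",
  "cluster_networks": [
    {
      "cidr": "10.128.0.0/14",
      "host_prefix": 23
    }
  ],
  "service_networks": [
    { "cidr": "172.30.0.0/16" }
  ],
  "machine_networks": [
    { "cidr": "192.168.127.0/24" }
  ]
}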
Submit the payload to the Assisted Service API to apply the configuration by running the following command:
$ curl -s -X PATCH "https://api.openshift.com/api/assisted-install/v2/clusters/<cluster-id>" \
  -d @./payload.json \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $API_TOKEN" \
  | jq '.id'
10.2.2. Disabling VIP DHCP allocation
If you want to manually control your VIP assignments, you can disable VIP DHCP allocation.
Procedure
- Follow the instructions for registering a new cluster by using the API. For details, see Registering a new cluster.
Add the following payload settings to the cluster configuration:
- Set vip_dhcp_allocation to false.
- Specify the IP addresses for api_vips and ingress_vips. You can take these IPs from your machine_networks configuration.
- Set network_type to OVNKubernetes, OpenShiftSDN, or another supported SDN type, if applicable.
- Include your network configurations for cluster_networks and service_networks.
Example payload to disable autoallocation
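The following is a minimal sketch of such a payload; the VIP addresses are placeholders taken from an assumed 192.168.127.0/24 machine network:

{
  "vip_dhcp_allocation": false,
  "network_type": "OVNKubernetes",
  "api_vips": [
    { "ip": "192.168.127.100" }
  ],
  "ingress_vips": [
    { "ip": "192.168.127.101" }
  ],
  "cluster_networks": [
    {
      "cidr": "10.128.0.0/14",
      "host_prefix": 23
    }
  ],
  "service_networks": [
    { "cidr": "172.30.0.0/16" }
  ]
}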
Submit the payload to the Assisted Service API to apply the configuration by running the following command:
$ curl -s -X PATCH "https://api.openshift.com/api/assisted-install/v2/clusters/<cluster-id>" \
  -d @./payload.json \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $API_TOKEN" \
  | jq '.id'
10.3. Network management types
The Assisted Installer supports the following network management types:
- Cluster-managed networking
Cluster-managed networking is the default option for deploying OpenShift Container Platform clusters. It minimizes user intervention by automatically provisioning and managing key network components.
The main characteristics of cluster-managed networking are the following:
- Integrates automated load balancing and virtual routing for managing the Virtual IP (VIP) addresses to ensure redundancy.
- Automatically supports an extensive internal DNS (CoreDNS) for service discovery.
- Hosts all control plane nodes within a single, contiguous subnet, simplifying routing and connectivity within the cluster.
- Supports the installation of platform-specific features such as the Bare Metal Operator for bare metal.
- Available for clusters with three or more control plane nodes; not available for single-node OpenShift.
You can configure cluster-managed networking in both the web console and the API. If you do not define a network management type, the Assisted Installer applies cluster-managed networking automatically for highly available clusters.
- User-managed networking
User-managed networking allows customers with custom or non-standard network topologies to deploy OpenShift Container Platform clusters. It provides control and flexibility, allowing you to integrate OpenShift Container Platform with existing and complex network infrastructures.
The main characteristics of user-managed networking are the following:
- Allows users to configure one or more external load balancers for handling API and Ingress IP addresses.
- Enables control plane nodes to span multiple subnets.
- Can be deployed on both single-node OpenShift and high-availability clusters.
You can configure user-managed networking in both the Assisted Installer web console and the API.
- Cluster-managed networking with a user-managed load balancer
Cluster-managed networking with a user-managed load balancer is a hybrid network management type designed for scenarios that require automated cluster networking with external control over load balancing.
This approach combines elements from both cluster-managed and user-managed networking. The main characteristics of this network management type are as follows:
- Allows users to configure one or more external load balancers for handling API and Ingress IP addresses.
- Automatically supports an extensive internal DNS (CoreDNS) for service discovery.
- Enables control plane nodes to span multiple subnets.
- Supports the installation of platform-specific features such as the Bare Metal Operator for bare metal.
- Provides high fault tolerance and disaster recovery for the control plane nodes.
The Assisted Installer supports cluster-managed networking with a user-managed load balancer on a bare-metal or vSphere platform. Currently, you can configure this network management type through the API only.
Cluster-managed networking with a user-managed load balancer is a Developer Preview feature only. Developer Preview features are not supported by Red Hat in any way and are not functionally complete or production-ready. Do not use Developer Preview features for production or business-critical workloads. Developer Preview features provide early access to upcoming product features in advance of their possible inclusion in a Red Hat product offering, enabling customers to test functionality and provide feedback during the development process. These features might not have any documentation, are subject to change or removal at any time, and testing is limited. Red Hat might provide ways to submit feedback on Developer Preview features without an associated SLA.
10.4. Static network configuration
You can use static network configurations when generating or updating the discovery ISO.
Prerequisites
- You are familiar with NMState.
10.4.1. NMState configuration
The NMState file in YAML format specifies the desired network configuration for the host. It contains the logical names of the interfaces, which are replaced with the actual interface names at discovery time.
Example of NMState configuration
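The following is a minimal sketch of an NMState file that statically configures a single Ethernet interface; the addresses and the logical interface name eth0 are placeholders:

dns-resolver:
  config:
    server:
    - 192.168.126.1
interfaces:
- name: eth0
  type: ethernet
  state: up
  ipv4:
    enabled: true
    dhcp: false
    address:
    - ip: 192.168.126.30
      prefix-length: 24
routes:
  config:
  - destination: 0.0.0.0/0
    next-hop-address: 192.168.126.1
    next-hop-interface: eth0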
10.4.2. MAC interface mapping
The MAC interface map is an attribute that maps the logical interfaces defined in the NMState configuration to the physical interfaces present on the host.

The mapping should always use physical interfaces present on the host. For example, when the NMState configuration defines a bond or VLAN, the mapping should contain entries only for the parent interfaces.
Example of MAC interface mapping
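Such a mapping might look like the following sketch; the MAC addresses are placeholders, and the logical names correspond to the interfaces used in the NMState file:

mac_interface_map: [
  {
    mac_address: 02:00:00:2c:23:a5,
    logical_nic_name: eth0
  },
  {
    mac_address: 02:00:00:68:73:dc,
    logical_nic_name: eth1
  }
]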
10.4.3. Additional NMState configuration examples
The following examples are only meant to show partial configurations. They are not meant for use as-is; always adjust them to the environment where they will be used. If used incorrectly, they can leave your machines with no network connectivity.
Tagged VLAN example
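The following sketch defines a tagged VLAN on top of a logical parent interface eth0; the VLAN ID and addresses are placeholders:

interfaces:
- name: eth0.404
  type: vlan
  state: up
  vlan:
    base-iface: eth0
    id: 404
  ipv4:
    enabled: true
    dhcp: false
    address:
    - ip: 192.168.143.15
      prefix-length: 24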
Network bond example
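The following sketch aggregates two logical interfaces into an active-backup bond; the interface names, bond options, and addresses are placeholders:

interfaces:
- name: bond0
  type: bond
  state: up
  link-aggregation:
    mode: active-backup
    options:
      miimon: "140"
    port:
    - eth0
    - eth1
  ipv4:
    enabled: true
    dhcp: false
    address:
    - ip: 192.168.138.15
      prefix-length: 24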
10.5. Converting to dual-stack networking
Dual-stack IPv4/IPv6 configuration allows deployment of a cluster with pods residing in both IPv4 and IPv6 subnets.
10.5.1. Prerequisites
- You are familiar with the OVN-Kubernetes documentation.
10.5.2. Example payload for single-node OpenShift
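A conversion payload for single-node OpenShift might look like the following sketch. It adds an IPv6 subnet alongside each IPv4 subnet; all CIDR values are placeholders that you must adapt to your environment:

{
  "network_type": "OVNKubernetes",
  "cluster_networks": [
    {
      "cidr": "10.128.0.0/14",
      "host_prefix": 23
    },
    {
      "cidr": "fd01::/48",
      "host_prefix": 64
    }
  ],
  "service_networks": [
    { "cidr": "172.30.0.0/16" },
    { "cidr": "fd02::/112" }
  ],
  "machine_networks": [
    { "cidr": "192.168.127.0/24" },
    { "cidr": "2001:db8::/120" }
  ]
}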
10.5.3. Example payload for an OpenShift Container Platform cluster consisting of many nodes
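A conversion payload for a multi-node cluster might look like the following sketch. In line with the limitations described in the next section, the VIPs remain IPv4-only; all addresses are placeholders:

{
  "vip_dhcp_allocation": false,
  "network_type": "OVNKubernetes",
  "api_vips": [
    { "ip": "192.168.127.100" }
  ],
  "ingress_vips": [
    { "ip": "192.168.127.101" }
  ],
  "cluster_networks": [
    {
      "cidr": "10.128.0.0/14",
      "host_prefix": 23
    },
    {
      "cidr": "fd01::/48",
      "host_prefix": 64
    }
  ],
  "service_networks": [
    { "cidr": "172.30.0.0/16" },
    { "cidr": "fd02::/112" }
  ],
  "machine_networks": [
    { "cidr": "192.168.127.0/24" },
    { "cidr": "2001:db8::/120" }
  ]
}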
10.5.4. Limitations
When using dual-stack networking, the api_vips and ingress_vips settings must use the primary IP address family, which must be IPv4. Currently, Red Hat does not support dual-stack VIPs or dual-stack networking with IPv6 as the primary IP address family. Red Hat supports dual-stack networking with IPv4 as the primary IP address family and IPv6 as the secondary IP address family. Therefore, you must place the IPv4 entries before the IPv6 entries when entering the IP address values.