Chapter 5. Planning your networks
Before you deploy RHOSO, take inventory of your networking requirements and the overall environment to inform your network design decisions.
5.1. Default physical networks
Your Red Hat OpenStack Services on OpenShift (RHOSO) deployment uses a variety of networks on the control and data planes to provision, connect, manage, and operate OpenStack services, connect to external networks, and isolate project resources.
A RHOSO deployment includes the following networks:
- Control plane network
- Designate network (optional)
- Designate external network (optional)
- External network (optional)
- Internal API network
- Octavia network (optional)
- Storage management network (optional)
- Storage network
- Tenant (project) network
5.2. RHOSO network isolation
Red Hat OpenStack Services on OpenShift (RHOSO) maps OpenStack services to isolated networks, so you must plan how your deployment hosts specific types of network traffic in isolation. This includes: planning IP ranges, subnets, and virtual IPs; deciding which IP protocol version to use; and configuring your NIC layout.
The RHOSO control plane services run as a Red Hat OpenShift Container Platform (RHOCP) workload. On the control plane, you use the NMState Operator to connect the worker nodes to the required isolated networks. You create a NetworkAttachmentDefinition (nad) custom resource (CR) for each isolated network to attach service pods to the isolated networks, where needed. You use the MetalLB Operator to expose internal service endpoints on the isolated networks. By default, the public service endpoints are exposed as RHOCP routes.
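As a sketch of this pattern, the following NetworkAttachmentDefinition attaches service pods to an isolated Internal API network by using the macvlan and whereabouts plugins listed in Additional resources. The namespace, interface name (enp6s0.20), and IP ranges are illustrative assumptions, not required values:

```yaml
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: internalapi
  namespace: openstack        # example namespace
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "name": "internalapi",
      "type": "macvlan",
      "master": "enp6s0.20",  # example VLAN interface on the worker node
      "ipam": {
        "type": "whereabouts",
        "range": "172.17.0.0/24",
        "range_start": "172.17.0.30",
        "range_end": "172.17.0.70"
      }
    }
```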
You must also create an IPAddressPool resource to configure which IP addresses can be used as VIPs, and an L2Advertisement resource to define how the VIPs are announced. In layer 2 mode, one node assumes the responsibility of advertising a service to the local network.
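A minimal sketch of the two MetalLB resources for one isolated network follows; the pool name, address range, and interface name are illustrative assumptions:

```yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: internalapi
  namespace: metallb-system
spec:
  addresses:                       # example VIP range for internal endpoints
    - 172.17.0.80-172.17.0.90
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: internalapi
  namespace: metallb-system
spec:
  ipAddressPools:                  # announce VIPs from the pool above
    - internalapi
  interfaces:                      # example interface to announce from
    - enp6s0.20
```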
To create the data plane network, you define a NetConfig custom resource (CR) and specify all the subnets for the data plane networks. You must define at least one control plane network for your data plane. You can also define VLAN networks to create network isolation for composable networks, such as Internal API, Storage, and External. Each network definition must include the IP address assignment.
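The following NetConfig sketch defines the required control plane network plus one VLAN-isolated composable network, each with an IP address assignment. The network names, CIDRs, allocation ranges, and VLAN ID are example values:

```yaml
apiVersion: network.openstack.org/v1beta1
kind: NetConfig
metadata:
  name: netconfig
  namespace: openstack
spec:
  networks:
    - name: ctlplane                 # required control plane network
      dnsDomain: ctlplane.example.com
      subnets:
        - name: subnet1
          cidr: 192.168.122.0/24
          gateway: 192.168.122.1
          allocationRanges:
            - start: 192.168.122.100
              end: 192.168.122.120
    - name: internalapi              # example VLAN-isolated composable network
      dnsDomain: internalapi.example.com
      subnets:
        - name: subnet1
          cidr: 172.17.0.0/24
          vlan: 20                   # example VLAN ID
          allocationRanges:
            - start: 172.17.0.100
              end: 172.17.0.250
```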
5.3. IP version support for RHOSO networks
You can use either IPv4 or IPv6 on almost any Red Hat OpenStack Services on OpenShift (RHOSO) physical network. For certain use cases, dual-stack (IPv4 and IPv6) is also supported.
There is one exception: after RHOSO deployment, external networks that use floating IPs must use IPv4 or dual-stack (IPv4 and IPv6). IPv6 uses Global Unicast Addresses (GUAs) instead of NAT and floating IP addresses. The Networking service (neutron) expects project networks that use IPv6 to use GUAs that do not overlap across project networks, so that traffic can be routed without NAT. With dual-stack, you can use floating IP addresses only to reach IP addresses on IPv4 subnets.
In addition to supporting both IPv4 and IPv6, Provider, Provisioning, and Tenant (Project) networks also support dual-stack (IPv4 and IPv6) for the use cases described in the following table:
| Network | Use case | Supported IP versions |
|---|---|---|
| Provider | Networking | Dual-stack, IPv6, IPv4 |
| Provisioning | Bare Metal Provisioning service (ironic) on the data plane | IPv6 |
| Provisioning | PXE and DHCP | Dual-stack, IPv6, IPv4 |
| Provisioning | IPMI, other BMC interfaces [1] | Dual-stack, IPv6, IPv4 |
| Tenant (Project) [2] | Networking | Dual-stack, IPv6, IPv4 |
| Tenant (Project) | Network endpoints [3] | Dual-stack, IPv6 [4], IPv4 |
[1] RHOSO communicates with baseboard management controller (BMC) interfaces over the Provisioning network. If BMC interfaces support IPv4, IPv6, or dual-stack, tools that are not part of RHOSO can use IPv6 to communicate with the BMCs.
[2] You configure tenant (project) networks after RHOSO deployment.
[3] Endpoints refer to the IP address of the network hosting the project network tunnels, not the project networks themselves.
[4] IPv6 for network endpoints supports only VxLAN and Geneve.
5.4. NICs
A compact RHOSO deployment requires at least two NICs on each RHOSO control plane worker node.
One NIC on each worker node serves OpenShift. It provides connectivity between OpenShift components on the OpenShift cluster network.
The other NIC serves OpenStack. It connects the OpenStack services running on the worker nodes to the isolated networks on the RHOSO data plane.
5.4.1. NICs and OVN gateways
A deployment can have OVN gateways for external network traffic. You can deploy OVN gateways on the control plane or the data plane.
If you deploy gateways on the control plane, each gateway requires a dedicated NIC on a control plane node. For more information, see Configuring OVN gateways for a Red Hat OpenStack Services on OpenShift deployment.
You can also deploy OVN gateways on dedicated Networker nodes on the data plane. Each Networker node has at least one NIC to connect the Networker node to the control plane and the external network. For more information, see Configuring Networker nodes.
When you run gateway services on the control plane, OVN version updates can cause some disruption of gateway traffic, because OpenShift requires service restarts for OVN updates. Data plane nodes, which run RHEL, do not require a service restart after an OVN version update.
5.4.2. NICs and scaling considerations
Network requirements vary based on environment and business requirements. For example, you may require the following networking capabilities:
- Dedicated NICs on control plane nodes for particular RHOSO isolated networks.
- Switch ports configured with VLANs for the required isolated networks.
Consult with your RHOCP and network administrators about whether these are requirements in your deployment. Each Compute node requires at least one NIC. You can scale up to provide connections to the isolated networks.
5.5. The nmstate provider for os-net-config
RHOSO uses os-net-config to configure network properties on your data plane nodes. You can choose between two os-net-config providers: the legacy ifcfg provider and its replacement, the nmstate provider. Support for the ifcfg provider will eventually be deprecated and removed.
In most cases, use the nmstate provider. If you rely on features that are available in the ifcfg provider but not yet available in the nmstate provider, contact your Red Hat support representative. In this RHOSO release, the default os-net-config provider is nmstate. To use the ifcfg provider instead, set edpm_network_config_nmstate: false in your OpenStackDataPlaneNodeSet CR.
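As a sketch, the setting is an Ansible variable in the node set's nodeTemplate; the node set name below is a placeholder:

```yaml
apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneNodeSet
metadata:
  name: openstack-edpm           # placeholder node set name
  namespace: openstack
spec:
  nodeTemplate:
    ansible:
      ansibleVars:
        edpm_network_config_nmstate: false   # fall back to the legacy ifcfg provider
```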
5.6. Network functions virtualization (NFV)
Network functions virtualization (NFV) is a software-based solution that helps communication service providers (CSPs) to move beyond the traditional, proprietary hardware to achieve greater efficiency and agility and to reduce operational costs.
Using NFV in a Red Hat OpenStack Services on OpenShift (RHOSO) environment allows for IT and network convergence by providing a virtualized infrastructure that uses standard virtualization technologies to virtualize network functions (VNFs) that traditionally run on dedicated hardware devices such as switches, routers, and storage. An NFV environment takes advantage of Data Plane Development Kit (DPDK) and Single Root I/O Virtualization (SR-IOV) technologies to improve packet-processing speeds.
If you choose an NFV deployment, you must use Deploying a Network Functions Virtualization environment as your deployment guide instead of Deploying Red Hat OpenStack Services on OpenShift.
5.7. Additional resources
- Kubernetes NMState Operator
- The Kubernetes NMState project
- Load balancing with MetalLB
- MetalLB documentation
- MetalLB in layer 2 mode
- Specify network interfaces that LB IP can be announced from
- Multiple networks
- Using the Multus CNI in OpenShift
- macvlan plugin
- whereabouts IPAM CNI plugin - Extended configuration