Chapter 5. Preparing networks for Red Hat OpenStack Services on OpenShift
To prepare for configuring and deploying your Red Hat OpenStack Services on OpenShift (RHOSO) environment, you must configure the Red Hat OpenShift Container Platform (RHOCP) networks on your RHOCP cluster. The following procedures are for BGP networks. For information about configuring L2 networks, see Preparing RHOCP for RHOSO networks.
5.1. Default Red Hat OpenStack Services on OpenShift networks for BGP
The following physical data center networks are typically implemented for a Red Hat OpenStack Services on OpenShift (RHOSO) deployment:
- Control plane network: used by the OpenStack Operator for Ansible SSH access to deploy and connect to the data plane nodes from the Red Hat OpenShift Container Platform (RHOCP) environment. This network is also used by data plane nodes for live migration of instances.
- External network: (optional) used when required for your environment. For example, you might create an external network for any of the following purposes:
- To provide virtual machine instances with Internet access.
- To create flat provider networks that are separate from the control plane.
- To configure VLAN provider networks on a separate bridge from the control plane.
- To provide access to virtual machine instances with floating IPs on a network other than the control plane network.
- Internal API network: used for internal communication between RHOSO components.
- Storage network: used for block storage, RBD, NFS, FC, and iSCSI.
- Tenant (project) network: used for data communication between virtual machine instances within the cloud deployment.
- Octavia controller network: (optional) used to connect Load-balancing service (octavia) controllers running in the control plane.
- Storage Management network: (optional) used by storage components. For example, Red Hat Ceph Storage uses the Storage Management network in a hyperconverged infrastructure (HCI) environment as the cluster_network to replicate data.
Note: For more information about Red Hat Ceph Storage network configuration, see Ceph network configuration in the Red Hat Ceph Storage Configuration Guide.
The following table details the default networks used in a RHOSO deployment. If required, you can update the networks for your environment.
To ensure that the ctlplane network can be accessed externally, the MetalLB IPAddressPool and NetworkAttachmentDefinition ipam ranges for the ctlplane network should be on a network that is advertised by BGP. The network from which the data plane nodes can be reached is the network that you use in the OpenStackDataPlaneNodeSet custom resources (CRs).
Network name | CIDR | NetConfig allocationRange | MetalLB IPAddressPool range | net-attach-def ipam range
---|---|---|---|---
ctlplane | 192.168.122.0/24 | 192.168.122.100 - 192.168.122.250 | 192.168.122.80 - 192.168.122.90 | 192.168.122.30 - 192.168.122.70
external | 10.0.0.0/24 | 10.0.0.100 - 10.0.0.250 | n/a | n/a
internalapi | 172.17.0.0/24 | 172.17.0.100 - 172.17.0.250 | 172.17.0.80 - 172.17.0.90 | 172.17.0.30 - 172.17.0.70
storage | 172.18.0.0/24 | 172.18.0.100 - 172.18.0.250 | n/a | 172.18.0.30 - 172.18.0.70
tenant | 172.19.0.0/24 | 172.19.0.100 - 172.19.0.250 | n/a | 172.19.0.30 - 172.19.0.70
octavia | 172.23.0.0/24 | n/a | n/a | 172.23.0.30 - 172.23.0.70
storagemgmt | 172.20.0.0/24 | 172.20.0.100 - 172.20.0.250 | n/a | 172.20.0.30 - 172.20.0.70
5.2. Dynamic routing environment for distributed zones
The topology of a distributed control plane environment includes three Red Hat OpenShift Container Platform (RHOCP) zones. Each zone has at least one worker node that hosts the control plane services and one Compute node. Each node has two network interfaces, eth2 and eth3, that are configured with the IP addresses for the subnets of the zone in which the node is located.
An additional IP address from the bgpmainnet network is configured on the loopback interface of each node. This is the IP address that the nodes use to communicate with each other. The BGP networks on NIC 1 and NIC 2 exist only to establish L2 connectivity within the boundaries of a zone. The bgpmainnet network is defined as 99.99.0.0/16 but has a subnet for each zone.
The following table specifies the networks that establish connectivity to the fabric through eth2 and eth3, with different IP addresses for each zone and rack, and the global bgpmainnet network that is used as the source address for traffic.
Network name | Zone 0 | Zone 1 | Zone 2
---|---|---|---
BGP Net1 (eth2) | 100.64.0.0/24 | 100.64.1.0/24 | 100.64.2.0/24
BGP Net2 (eth3) | 100.65.0.0/24 | 100.65.1.0/24 | 100.65.2.0/24
Bgpmainnet (loopback) | 99.99.0.0/24 | 99.99.1.0/24 | 99.99.2.0/24
5.3. Preparing RHOCP for BGP networks on RHOSO
The Red Hat OpenStack Services on OpenShift (RHOSO) services run as a Red Hat OpenShift Container Platform (RHOCP) workload. You use the NMState Operator to connect the worker nodes to the required isolated networks. You use the MetalLB Operator to expose internal service endpoints on the isolated networks. By default, the public service endpoints are exposed as RHOCP routes.
The examples in the following procedures use IPv4 addresses. You can use IPv6 addresses instead of IPv4 addresses. Dual-stack IPv4/IPv6 is not available. For information about how to configure IPv6 addresses, see the RHOCP Networking guide.
5.3.1. Disabling rp_filter on the BGP interfaces of the RHOCP nodes
For Red Hat OpenShift Container Platform (RHOCP) worker nodes to forward traffic based on the BGP advertisements they receive, you must disable reverse path filtering on the BGP interfaces of the worker nodes that run RHOSO services.
Procedure
1. Create a manifest file named tuned.yaml with content similar to the following example.
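A minimal sketch that disables rp_filter through the Node Tuning Operator; the profile name and the interface names enp6s0 and enp7s0 are assumptions, so substitute the BGP interface names on your worker nodes:

apiVersion: tuned.openshift.io/v1
kind: Tuned
metadata:
  name: osp-bgp-rp-filter
  namespace: openshift-cluster-node-tuning-operator
spec:
  profile:
  - name: osp-bgp-rp-filter
    data: |
      [main]
      summary=Disable reverse path filtering on the BGP interfaces
      include=openshift-node
      [sysctl]
      net.ipv4.conf.enp6s0.rp_filter=0
      net.ipv4.conf.enp7s0.rp_filter=0
  recommend:
  - match:
    - label: node-role.kubernetes.io/worker
    priority: 20
    profile: osp-bgp-rp-filter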
2. Save the file and create the Tuned resource:

$ oc create -f tuned.yaml
5.3.2. Preparing RHOCP with isolated network interfaces for BGP
Create a NodeNetworkConfigurationPolicy (nncp) custom resource (CR) to configure the interfaces for each isolated network on each worker node in the Red Hat OpenShift Container Platform (RHOCP) cluster.
Procedure
1. Create a NodeNetworkConfigurationPolicy (nncp) CR file on your workstation, for example, openstack-nncp-bgp.yaml.
2. Retrieve the names of the worker nodes in the RHOCP cluster:

$ oc get nodes -l node-role.kubernetes.io/worker -o jsonpath="{.items[*].metadata.name}"

3. Discover the network configuration:

$ oc get nns/<worker_node> -o yaml | more

Replace <worker_node> with the name of a worker node retrieved in step 2, for example, worker-1. Repeat this step for each worker node.
4. In the nncp CR file, configure the interfaces for each isolated network on each worker node in the RHOCP cluster. For information about the default physical data center networks that must be configured with network isolation, see Default Red Hat OpenStack Services on OpenShift networks for BGP. In the following example, the nncp CR configures multiple unconnected bridges that you use to map the Red Hat OpenStack Services on OpenShift (RHOSO) networks. You use BGP interfaces to peer with the network fabric and establish connectivity. The loopback interface is configured with the BGP network source address, 99.99.0.x. You can optionally dedicate a NIC to the ctlplane network.
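A condensed sketch for a single worker node; the bridge name (ospbr), the interface names (eth2, eth3), and all addresses are illustrative assumptions, and you repeat the bridge pattern for each isolated network and the whole policy for each worker node:

apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: osp-enp6s0-worker-1
spec:
  nodeSelector:
    kubernetes.io/hostname: worker-1
  desiredState:
    interfaces:
    # Unconnected bridge that maps a RHOSO network; repeat per network
    - name: ospbr
      type: linux-bridge
      state: up
      bridge:
        options:
          stp:
            enabled: false
        port: []   # no ports; the bridge is intentionally unconnected
      ipv4:
        enabled: true
        address:
        - ip: 192.168.122.10
          prefix-length: 24
      ipv6:
        enabled: false
    # BGP interfaces that peer with the fabric (leaf-0 and leaf-1)
    - name: eth2
      type: ethernet
      state: up
      ipv4:
        enabled: true
        address:
        - ip: 100.64.0.10
          prefix-length: 24
      ipv6:
        enabled: false
    - name: eth3
      type: ethernet
      state: up
      ipv4:
        enabled: true
        address:
        - ip: 100.65.0.10
          prefix-length: 24
      ipv6:
        enabled: false
    # Loopback configured with the bgpmainnet source address
    - name: lo
      type: loopback
      state: up
      ipv4:
        enabled: true
        address:
        - ip: 99.99.0.1
          prefix-length: 32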
5. Create the nncp CR in the cluster:

$ oc apply -f openstack-nncp-bgp.yaml

6. Verify that the nncp CR is created:

$ oc get nncp -w
NAME                  STATUS        REASON
osp-enp6s0-worker-1   Progressing   ConfigurationProgressing
osp-enp6s0-worker-1   Progressing   ConfigurationProgressing
osp-enp6s0-worker-1   Available     SuccessfullyConfigured
5.3.3. Attaching service pods to the isolated networks for BGP
Create a NetworkAttachmentDefinition (net-attach-def) custom resource (CR) for each isolated network to attach the service pods to the networks.
Procedure
1. Create a NetworkAttachmentDefinition (net-attach-def) CR file on your workstation, for example, openstack-net-attach-def.yaml.
2. In the NetworkAttachmentDefinition CR file, configure a NetworkAttachmentDefinition resource for each isolated network to attach a service deployment pod to the network. The following example creates a NetworkAttachmentDefinition resource that uses a bridge-type interface with a gateway configuration and additional options.
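A minimal sketch for the ctlplane network; the bridge name ospbr, the label, and the address values are assumptions that must match your nncp CR and the ranges in Default Red Hat OpenStack Services on OpenShift networks for BGP:

apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: ctlplane
  namespace: openstack 1
  labels:
    osp/net: ctlplane
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "name": "ctlplane",
      "type": "bridge",
      "bridge": "ospbr", 2
      "ipMasq": true, 3
      "ipam": { 4
        "type": "whereabouts",
        "range": "192.168.122.0/24",
        "range_start": "192.168.122.30", 5
        "range_end": "192.168.122.70",
        "gateway": "192.168.122.1"
      }
    }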
1. The namespace where the services are deployed.
2. The node interface name associated with the network, as defined in the nncp CR.
3. Optional: Set to true to enable IP masquerading. If the gateway does not have an IP address, ipMasq has no effect. The default value is false.
4. The whereabouts CNI IPAM plugin to assign IPs to the created pods from the range .30 - .70.
5. The IP address pool range must not overlap with the MetalLB IPAddressPool range and the NetConfig allocationRange.
Note: Set "ipMasq": true when the data plane nodes do not have the necessary routes, or when the data plane nodes do not have a connection to the control plane network before Free Range Routing (FRR) is configured.
3. Create the NetworkAttachmentDefinition CR in the cluster:

$ oc apply -f openstack-net-attach-def.yaml

4. Verify that the NetworkAttachmentDefinition CR is created:

$ oc get net-attach-def -n openstack
5.3.4. Preparing RHOCP for RHOSO network VIPs for BGP
You must create pairs of BGPPeer custom resources (CRs) to define which leaf switch connects to the eth2 and eth3 interfaces on each node. For example, worker-0 has two BGPPeer CRs, one for leaf-0 and one for leaf-1. For information about BGP peers, see Configuring a BGP peer: https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/ingress_and_load_balancing/load-balancing-with-metallb#nw-metallb-configure-bgppeer_configure-metallb-bgp-peers
You must also create IPAddressPool CRs to define the network ranges to be advertised, and a BGPAdvertisement CR that defines how the BGPPeer CRs are announced and links the IPAddressPool CRs to the BGPPeer CRs that receive the advertisements.
Procedure
1. Create a BGPPeer CR file on your workstation, for example, bgppeers.yaml.
2. Configure the pairs of BGPPeer CRs for each node to peer with. The following example configures two BGPPeer CRs for the worker-0 node, one for leaf-0 and one for leaf-1.
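A sketch of the pair; the AS numbers (64999) and the leaf addresses (100.64.0.1 and 100.65.0.1 on the zone 0 BGP networks) are assumptions for illustration:

apiVersion: metallb.io/v1beta2
kind: BGPPeer
metadata:
  name: worker-0-leaf-0
  namespace: metallb-system
spec:
  myASN: 64999
  peerASN: 64999
  peerAddress: 100.64.0.1
  nodeSelectors:
  - matchLabels:
      kubernetes.io/hostname: worker-0
---
apiVersion: metallb.io/v1beta2
kind: BGPPeer
metadata:
  name: worker-0-leaf-1
  namespace: metallb-system
spec:
  myASN: 64999
  peerASN: 64999
  peerAddress: 100.65.0.1
  nodeSelectors:
  - matchLabels:
      kubernetes.io/hostname: worker-0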
3. Create the BGPPeer CRs:

$ oc create -f bgppeers.yaml
4. Create an IPAddressPool CR file on your workstation, for example, ipaddresspools-bgp.yaml.
5. In the IPAddressPool CR file, configure an IPAddressPool resource on each isolated network to specify the IP address ranges over which MetalLB has authority.
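A sketch for two of the pools, reusing the MetalLB ranges from the planning table in Section 5.1; extend it with a pool for each network that exposes service endpoints:

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: ctlplane
  namespace: metallb-system
spec:
  addresses:
  - 192.168.122.80-192.168.122.90 1
  autoAssign: true
---
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: internalapi
  namespace: metallb-system
spec:
  addresses:
  - 172.17.0.80-172.17.0.90
  autoAssign: true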
1. The IPAddressPool range must not overlap with the whereabouts IPAM range and the NetConfig allocationRange.
For information about how to configure the other IPAddressPool resource parameters, see Configuring MetalLB BGP peers in RHOCP Networking overview.
6. Create the IPAddressPool CR in the cluster:

$ oc apply -f ipaddresspools-bgp.yaml

7. Verify that the IPAddressPool CR is created:

$ oc describe -n metallb-system IPAddressPool

8. Create a BGPAdvertisement CR file on your workstation, for example, bgpadvert.yaml.
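A sketch that links the pools to the peers; the resource name and the pool and peer names repeat the earlier illustrative examples:

apiVersion: metallb.io/v1beta1
kind: BGPAdvertisement
metadata:
  name: rhoso-bgpadvertisement
  namespace: metallb-system
spec:
  ipAddressPools:
  - ctlplane
  - internalapi
  peers: 1
  - worker-0-leaf-0
  - worker-0-leaf-1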
1. Lists all the BGPPeer CRs you defined for the peer IP addresses that each RHOCP node needs to communicate with.
9. Create the BGPAdvertisement CR in the cluster:

$ oc apply -f bgpadvert.yaml

10. If your cluster has OVNKubernetes as the network back end, you must enable global forwarding so that MetalLB can work on a secondary network interface.
Check the network back end used by your cluster:

$ oc get network.operator cluster --output=jsonpath='{.spec.defaultNetwork.type}'

If the back end is OVNKubernetes, run the following command to enable global IP forwarding:

$ oc patch network.operator cluster -p '{"spec":{"defaultNetwork":{"ovnKubernetesConfig":{"gatewayConfig":{"ipForwarding": "Global"}}}}}' --type=merge
5.4. Creating the data plane network for BGP
To create the data plane network, you define a NetConfig custom resource (CR) and specify all the subnets for the data plane networks. You must define at least one control plane network for your data plane. You can also define VLAN networks to create network isolation for composable networks, such as InternalAPI, Storage, and External. Each network definition must include the IP address assignment.
Use the following commands to view the NetConfig CRD definition and specification schema:

$ oc describe crd netconfig
$ oc explain netconfig.spec
Procedure
1. Create a file named netconfig_bgp.yaml on your workstation.
2. Add the following configuration to netconfig_bgp.yaml to create the NetConfig CR:

apiVersion: network.openstack.org/v1beta1
kind: NetConfig
metadata:
  name: bgp-netconfig
  namespace: openstack
3. In the netconfig_bgp.yaml file, define the topology for each data plane network. To use the default Red Hat OpenStack Services on OpenShift (RHOSO) networks, you must define a specification for each network. For information about the default RHOSO networks, see Default Red Hat OpenStack Services on OpenShift networks for BGP. The following example creates isolated networks for the data plane.
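A sketch of the spec for two of the networks, continuing the CR from the previous step; the DNS domains are illustrative, and the ranges repeat the defaults from Section 5.1:

spec:
  networks:
  - name: CtlPlane 1
    dnsDomain: ctlplane.example.com
    subnets: 2
    - name: subnet1 3
      allocationRanges: 4
      - start: 192.168.122.100
        end: 192.168.122.250
      cidr: 192.168.122.0/24
      gateway: 192.168.122.1
  - name: InternalApi
    dnsDomain: internalapi.example.com
    subnets:
    - name: subnet1
      allocationRanges:
      - start: 172.17.0.100
        end: 172.17.0.250
      cidr: 172.17.0.0/24
      vlan: 20 5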
1. The name of the network, for example, CtlPlane.
2. The IPv4 subnet specification.
3. The name of the subnet, for example, subnet1.
4. The NetConfig allocationRange. The allocationRange must not overlap with the MetalLB IPAddressPool range and the IP address pool range.
5. The network VLAN. For information about the default RHOSO networks, see Default Red Hat OpenStack Services on OpenShift networks for BGP.
4. In the netconfig_bgp.yaml file, define the network interfaces that establish connectivity within each zone. The following example defines two network interfaces, bgpnet0 for eth2 and bgpnet1 for eth3, with a subnet for each zone.
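A sketch of the two networks, appended to the networks list from the previous steps; the allocation ranges are illustrative assumptions within the zone subnets from Section 5.2:

  - name: bgpnet0
    subnets:
    - name: subnet0
      allocationRanges:
      - start: 100.64.0.100
        end: 100.64.0.250
      cidr: 100.64.0.0/24
    - name: subnet1
      allocationRanges:
      - start: 100.64.1.100
        end: 100.64.1.250
      cidr: 100.64.1.0/24
    - name: subnet2
      allocationRanges:
      - start: 100.64.2.100
        end: 100.64.2.250
      cidr: 100.64.2.0/24
  - name: bgpnet1
    subnets:
    - name: subnet0
      allocationRanges:
      - start: 100.65.0.100
        end: 100.65.0.250
      cidr: 100.65.0.0/24
    - name: subnet1
      allocationRanges:
      - start: 100.65.1.100
        end: 100.65.1.250
      cidr: 100.65.1.0/24
    - name: subnet2
      allocationRanges:
      - start: 100.65.2.100
        end: 100.65.2.250
      cidr: 100.65.2.0/24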
netconfig_bgp.yaml
file, configure the IP address of the loopback interface,bgpmainnet
, used by each node to communicate with each other:Copy to Clipboard Copied! Toggle word wrap Toggle overflow - Save the ` netconfig_bgp.yaml` definition file.
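A sketch, again with illustrative allocation ranges inside the per-zone bgpmainnet subnets:

  - name: bgpmainnet
    subnets:
    - name: subnet0
      allocationRanges:
      - start: 99.99.0.10
        end: 99.99.0.250
      cidr: 99.99.0.0/24
    - name: subnet1
      allocationRanges:
      - start: 99.99.1.10
        end: 99.99.1.250
      cidr: 99.99.1.0/24
    - name: subnet2
      allocationRanges:
      - start: 99.99.2.10
        end: 99.99.2.250
      cidr: 99.99.2.0/24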
6. Save the netconfig_bgp.yaml definition file.
7. Create the data plane network:

$ oc create -f netconfig_bgp.yaml -n openstack
8. Create a BGPConfiguration CR file named bgpconfig.yml to announce the IP addresses of the pods over BGP.
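A minimal sketch; the resource name is illustrative, and any additional spec fields depend on your environment:

apiVersion: network.openstack.org/v1beta1
kind: BGPConfiguration
metadata:
  name: bgpconfiguration
  namespace: openstack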
9. Create the BGPConfiguration CR to create the required FRR configurations for each pod:

$ oc create -f bgpconfig.yml
Verification
Verify that the data plane network is created:

$ oc get netconfig/bgp-netconfig -n openstack

If you see errors, check the underlying network attachment definitions and node network configuration policies:

$ oc get network-attachment-definitions -n openstack
$ oc get nncp