Chapter 3. Preparing networks for Red Hat OpenStack Services on OpenShift
To prepare for configuring and deploying your Red Hat OpenStack Services on OpenShift (RHOSO) environment, you must configure the Red Hat OpenShift Container Platform (RHOCP) networks on your RHOCP cluster.
For information about how to add routes to your RHOCP network configuration, see Adding routes to the RHOCP networks in Customizing the Red Hat OpenStack Services on OpenShift deployment.
If you need a centralized gateway for connection to external networks, you can add OVN gateways to the control plane or to dedicated Networker nodes on the data plane.
For information on adding optional OVN gateways to the control plane, see Configuring OVN gateways for a Red Hat OpenStack Services on OpenShift deployment.
For information on adding Networker nodes to the data plane, see Configuring Networker nodes.
3.1. Default Red Hat OpenStack Services on OpenShift networks
The following physical data center networks are typically implemented for a Red Hat OpenStack Services on OpenShift (RHOSO) deployment:
- Control plane network: used by the OpenStack Operator for Ansible SSH access to deploy and connect to the data plane nodes from the Red Hat OpenShift Container Platform (RHOCP) environment. This network is also used by data plane nodes for live migration of instances.
- External network: (optional) used when required for your environment. For example, you might create an external network for any of the following purposes:
- To provide virtual machine instances with Internet access.
- To create flat provider networks that are separate from the control plane.
- To configure VLAN provider networks on a separate bridge from the control plane.
- To provide access to virtual machine instances with floating IPs on a network other than the control plane network.
- Internal API network: used for internal communication between RHOSO components.
- Storage network: used for block storage, RBD, NFS, FC, and iSCSI.
- Tenant (project) network: used for data communication between virtual machine instances within the cloud deployment.
- Octavia controller network: used to connect Load-balancing service (octavia) controllers running in the control plane.
- Designate network: used internally by designate to manage the DNS servers.
- Designateext network: used to provide external access to the DNS service resolver and the DNS servers.
- Storage Management network: (optional) used by storage components. For example, Red Hat Ceph Storage uses the Storage Management network in a hyperconverged infrastructure (HCI) environment as the cluster_network to replicate data.

Note: For more information on Red Hat Ceph Storage network configuration, see Ceph network configuration in the Red Hat Ceph Storage Configuration Guide.
The following table details the default networks used in a RHOSO deployment. If required, you can update the networks for your environment.
By default, the control plane and external networks do not use VLANs. Networks that do not use VLANs must be placed on separate NICs. You can use a VLAN for the control plane network on new RHOSO deployments. You can also use the Native VLAN on a trunked interface as the non-VLAN network. For example, you can have the control plane and the internal API on one NIC, and the external network with no VLAN on a separate NIC.
| Network name | CIDR | NetConfig allocationRange | MetalLB IPAddressPool range | net-attach-def ipam range | OCP worker nncp range |
|---|---|---|---|---|---|
| ctlplane | 192.168.122.0/24 | 192.168.122.100 - 192.168.122.250 | 192.168.122.80 - 192.168.122.90 | 192.168.122.30 - 192.168.122.70 | 192.168.122.10 - 192.168.122.20 |
| external | 10.0.0.0/24 | 10.0.0.100 - 10.0.0.250 | n/a | n/a | n/a |
| internalapi | 172.17.0.0/24 | 172.17.0.100 - 172.17.0.250 | 172.17.0.80 - 172.17.0.90 | 172.17.0.30 - 172.17.0.70 | 172.17.0.10 - 172.17.0.20 |
| storage | 172.18.0.0/24 | 172.18.0.100 - 172.18.0.250 | n/a | 172.18.0.30 - 172.18.0.70 | 172.18.0.10 - 172.18.0.20 |
| tenant | 172.19.0.0/24 | 172.19.0.100 - 172.19.0.250 | n/a | 172.19.0.30 - 172.19.0.70 | 172.19.0.10 - 172.19.0.20 |
| octavia | 172.23.0.0/24 | n/a | n/a | 172.23.0.30 - 172.23.0.70 | n/a |
| designate | 172.26.0.0/24 | n/a | n/a | 172.26.0.30 - 172.26.0.70 | 172.26.0.10 - 172.26.0.20 |
| designateext | 172.34.0.0/24 | n/a | 172.34.0.80 - 172.34.0.120 | 172.34.0.30 - 172.34.0.70 | 172.34.0.10 - 172.34.0.20 |
| storagemgmt | 172.20.0.0/24 | 172.20.0.100 - 172.20.0.250 | n/a | 172.20.0.30 - 172.20.0.70 | 172.20.0.10 - 172.20.0.20 |
The following table specifies the networks that establish connectivity to the fabric using eth2 and eth3, with different IP addresses per zone and rack, and also a global bgpmainnet that is used as a source for the traffic:
| Network name | Zone 0 | Zone 1 | Zone 2 |
|---|---|---|---|
| BGP Net1 (eth2) | 100.64.0.0/24 | 100.64.1.0/24 | 100.64.2.0/24 |
| BGP Net2 (eth3) | 100.65.0.0/24 | 100.65.1.0/24 | 100.65.2.0/24 |
| Bgpmainnet (loopback) | 99.99.0.0/24 | 99.99.1.0/24 | 99.99.2.0/24 |
3.2. Preparing RHOCP for RHOSO networks
The Red Hat OpenStack Services on OpenShift (RHOSO) services run as a Red Hat OpenShift Container Platform (RHOCP) workload. You use the NMState Operator to connect the worker nodes to the required isolated networks. You use the MetalLB Operator to expose internal service endpoints on the isolated networks. The public service endpoints are exposed as RHOCP routes by default, because only routes are supported for public endpoints.
The control plane interface name must be consistent across all nodes because network manifests reference the control plane interface name directly. If the control plane interface names are inconsistent, then the RHOSO environment fails to deploy. If the physical interface names are inconsistent on the nodes, you must create a Linux bond that configures a consistent alternative name for the physical interfaces that can be referenced by the other network manifests.
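For example, a bond can be declared in the desiredState section of the NodeNetworkConfigurationPolicy (nncp) CR described in the next section. The following is a minimal sketch only; the policy name, the node name, the physical interface names enp7s0 and enp8s0, and the active-backup mode are placeholders that must be adapted to your environment:

```yaml
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: osp-bond0-worker-1   # placeholder name
spec:
  nodeSelector:
    kubernetes.io/hostname: worker-1
    node-role.kubernetes.io/worker: ""
  desiredState:
    interfaces:
    # bond0 provides a consistent interface name that the other network
    # manifests can reference, for example as the base interface of VLANs.
    - name: bond0
      type: bond
      state: up
      link-aggregation:
        mode: active-backup
        port:
        - enp7s0
        - enp8s0
```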
The examples in the following procedures use IPv4 addresses. You can use IPv6 addresses instead of IPv4 addresses. Dual-stack IPv4/IPv6 is not available. For information about how to configure IPv6 addresses, see the IPv6 configuration documentation in the RHOCP Networking guide.
3.2.1. Preparing RHOCP with isolated network interfaces
Create a NodeNetworkConfigurationPolicy (nncp) CR to configure the interfaces for each isolated network on each worker node in the RHOCP cluster.
Procedure

1. Create a NodeNetworkConfigurationPolicy (nncp) CR file on your workstation, for example, openstack-nncp.yaml.
2. Retrieve the names of the worker nodes in the RHOCP cluster:

   $ oc get nodes -l node-role.kubernetes.io/worker -o jsonpath="{.items[*].metadata.name}"

3. Discover the network configuration:

   $ oc get nns/<worker_node> -o yaml | more

   Replace <worker_node> with the name of a worker node retrieved in step 2, for example, worker-1. Repeat this step for each worker node.
4. In the nncp CR file, configure the interfaces for each isolated network on each worker node in the RHOCP cluster. For information about the default physical data center networks that must be configured with network isolation, see Default Red Hat OpenStack Services on OpenShift networks.

   In the following example, the nncp CR configures the enp6s0 interface for worker node 1, osp-enp6s0-worker-1, to use VLAN interfaces with IPv4 addresses for network isolation.
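   The following is a minimal sketch of such an nncp CR. The VLAN IDs (20 for internalapi, 21 for storage) are assumptions and must match your data center configuration; the host addresses are taken from the OCP worker nncp range column of the default networks table:

   ```yaml
   apiVersion: nmstate.io/v1
   kind: NodeNetworkConfigurationPolicy
   metadata:
     name: osp-enp6s0-worker-1
   spec:
     # Apply this policy to worker-1 only; create one nncp CR per worker node.
     nodeSelector:
       kubernetes.io/hostname: worker-1
       node-role.kubernetes.io/worker: ""
     desiredState:
       interfaces:
       # ctlplane on the untagged base interface
       - name: enp6s0
         type: ethernet
         state: up
         ipv4:
           enabled: true
           dhcp: false
           address:
           - ip: 192.168.122.10
             prefix-length: 24
         ipv6:
           enabled: false
       # internalapi VLAN interface (VLAN ID 20 is an assumption)
       - name: enp6s0.20
         type: vlan
         state: up
         vlan:
           base-iface: enp6s0
           id: 20
         ipv4:
           enabled: true
           dhcp: false
           address:
           - ip: 172.17.0.10
             prefix-length: 24
         ipv6:
           enabled: false
       # storage VLAN interface (VLAN ID 21 is an assumption)
       - name: enp6s0.21
         type: vlan
         state: up
         vlan:
           base-iface: enp6s0
           id: 21
         ipv4:
           enabled: true
           dhcp: false
           address:
           - ip: 172.18.0.10
             prefix-length: 24
         ipv6:
           enabled: false
       # The tenant network VLAN interface follows the same pattern.
   ```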
5. Create the nncp CR in the cluster:

   $ oc apply -f openstack-nncp.yaml

6. Verify that the nncp CR is created:

   $ oc get nncp -w
   NAME                  STATUS        REASON
   osp-enp6s0-worker-1   Progressing   ConfigurationProgressing
   osp-enp6s0-worker-1   Progressing   ConfigurationProgressing
   osp-enp6s0-worker-1   Available     SuccessfullyConfigured
3.2.2. Attaching service pods to the isolated networks
Create a NetworkAttachmentDefinition (net-attach-def) custom resource (CR) for each isolated network to attach the service pods to the networks.
Procedure

1. Create a NetworkAttachmentDefinition (net-attach-def) CR file on your workstation, for example, openstack-net-attach-def.yaml.
2. In the NetworkAttachmentDefinition CR file, configure a NetworkAttachmentDefinition resource for each isolated network to attach a service deployment pod to the network. The following examples create a NetworkAttachmentDefinition resource for each of the following networks:
   - internalapi, storage, ctlplane, and tenant networks of type macvlan.
   - octavia, the load-balancing management network, of type bridge. This network attachment connects pods that manage load balancer virtual machines (amphorae) and the Open vSwitch pods that are managed by the OVN operator.
   - designate, the network used internally by the DNS service (designate) to manage the DNS servers.
   - designateext, the network used to provide external access to the DNS service resolver and the DNS servers.
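   A minimal sketch of one of these manifests, for the internalapi network of type macvlan, follows. The master interface name enp6s0.20 is an assumption carried over from the earlier nncp sketch, and the IPAM range comes from the net-attach-def ipam range column of the default networks table; the other networks follow the same pattern with their own master interface and ranges:

   ```yaml
   apiVersion: k8s.cni.cncf.io/v1
   kind: NetworkAttachmentDefinition
   metadata:
     name: internalapi
     namespace: openstack
   spec:
     config: |
       {
         "cniVersion": "0.3.1",
         "name": "internalapi",
         "type": "macvlan",
         "master": "enp6s0.20",
         "ipam": {
           "type": "whereabouts",
           "range": "172.17.0.0/24",
           "range_start": "172.17.0.30",
           "range_end": "172.17.0.70"
         }
       }
   ```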
   - metadata.namespace: The namespace where the services are deployed.
   - "master": The node interface name associated with the network, as defined in the nncp CR.
   - "ipam": The whereabouts CNI IPAM plug-in assigns IPs to the created pods from the range .30 - .70.
   - "range_start" - "range_end": The IP address pool range must not overlap with the MetalLB IPAddressPool range and the NetConfig allocationRange.
3. Create the NetworkAttachmentDefinition CR in the cluster:

   $ oc apply -f openstack-net-attach-def.yaml

4. Verify that the NetworkAttachmentDefinition CR is created:

   $ oc get net-attach-def -n openstack
3.2.3. Preparing RHOCP for RHOSO network VIPs
The Red Hat OpenStack Services on OpenShift (RHOSO) services run as a Red Hat OpenShift Container Platform (RHOCP) workload. You must create an L2Advertisement resource to define how the Virtual IPs (VIPs) are announced, and an IPAddressPool resource to configure which IPs can be used as VIPs. In layer 2 mode, one node assumes the responsibility of advertising a service to the local network.
Procedure

1. Create an IPAddressPool CR file on your workstation, for example, openstack-ipaddresspools.yaml.
2. In the IPAddressPool CR file, configure an IPAddressPool resource on the isolated network to specify the IP address ranges over which MetalLB has authority.
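   A minimal sketch of one such pool, for the internalapi network, follows. It assumes the MetalLB operator namespace metallb-system and uses the MetalLB IPAddressPool range from the default networks table; define one pool for each isolated network that carries service VIPs:

   ```yaml
   apiVersion: metallb.io/v1beta1
   kind: IPAddressPool
   metadata:
     name: internalapi
     namespace: metallb-system
   spec:
     # VIPs for internal service endpoints are allocated from this range.
     addresses:
     - 172.17.0.80-172.17.0.90
     autoAssign: true
   ```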
   - spec.addresses: The IPAddressPool range must not overlap with the whereabouts IPAM range and the NetConfig allocationRange.

   For information about how to configure the other IPAddressPool resource parameters, see Configuring MetalLB address pools in the RHOCP Networking guide.
3. Create the IPAddressPool CR in the cluster:

   $ oc apply -f openstack-ipaddresspools.yaml

4. Verify that the IPAddressPool CR is created:

   $ oc describe -n metallb-system IPAddressPool
5. Create an L2Advertisement CR file on your workstation, for example, openstack-l2advertisement.yaml.
6. In the L2Advertisement CR file, configure L2Advertisement CRs to define which node advertises a service to the local network. Create one L2Advertisement resource for each network.

   In the following example, each L2Advertisement CR specifies that the VIPs requested from the network address pools are announced on the interface that is attached to the VLAN.
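   A minimal sketch for the internalapi network follows, assuming the internalapi pool from the previous step and the enp6s0.20 VLAN interface used in the earlier nncp sketch:

   ```yaml
   apiVersion: metallb.io/v1beta1
   kind: L2Advertisement
   metadata:
     name: internalapi
     namespace: metallb-system
   spec:
     # Announce VIPs from the internalapi pool on the internalapi VLAN interface.
     ipAddressPools:
     - internalapi
     interfaces:
     - enp6s0.20
   ```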
   - spec.interfaces: The interface where the VIPs requested from the VLAN address pool are announced.

   For information about how to configure the other L2Advertisement resource parameters, see Configuring MetalLB with a L2 advertisement and label in the RHOCP Networking guide.
7. Create the L2Advertisement CRs in the cluster:

   $ oc apply -f openstack-l2advertisement.yaml

8. Verify that the L2Advertisement CRs are created.
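   A minimal check, assuming the MetalLB operator runs in the metallb-system namespace as in the earlier verification step:

   $ oc get l2advertisement -n metallb-system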
9. If your cluster has OVNKubernetes as the network back end, then you must enable global forwarding so that MetalLB can work on a secondary network interface. Check the network back end used by your cluster:

   $ oc get network.operator cluster --output=jsonpath='{.spec.defaultNetwork.type}'

10. If the back end is OVNKubernetes, then run the following command to enable global IP forwarding:

    $ oc patch network.operator cluster -p '{"spec":{"defaultNetwork":{"ovnKubernetesConfig":{"gatewayConfig":{"ipForwarding": "Global"}}}}}' --type=merge
3.3. Creating the data plane network
To create the data plane network, you define a NetConfig custom resource (CR) and specify all the subnets for the data plane networks. You must define at least one control plane network for your data plane. You can also define VLAN networks to create network isolation for composable networks, such as InternalAPI, Storage, and External. Each network definition must include the IP address assignment.
Use the following commands to view the NetConfig CRD definition and specification schema:

$ oc describe crd netconfig
$ oc explain netconfig.spec
Procedure

1. Create a file named openstack_netconfig.yaml on your workstation.
2. Add the following configuration to openstack_netconfig.yaml to create the NetConfig CR:

   apiVersion: network.openstack.org/v1beta1
   kind: NetConfig
   metadata:
     name: openstacknetconfig
     namespace: openstack

3. In the openstack_netconfig.yaml file, define the topology for each data plane network. To use the default Red Hat OpenStack Services on OpenShift (RHOSO) networks, you must define a specification for each network. For information about the default RHOSO networks, see Default Red Hat OpenStack Services on OpenShift networks. The following example creates isolated networks for the data plane.
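   A minimal sketch of such a topology follows, extending the NetConfig CR from step 2 with two of the default networks. The VLAN ID and the excluded addresses are assumptions; the CIDRs and allocation ranges come from the default networks table:

   ```yaml
   apiVersion: network.openstack.org/v1beta1
   kind: NetConfig
   metadata:
     name: openstacknetconfig
     namespace: openstack
   spec:
     networks:
     - name: CtlPlane
       subnets:
       - name: subnet1
         cidr: 192.168.122.0/24
         allocationRanges:
         - start: 192.168.122.100
           end: 192.168.122.250
         excludeAddresses:
         - 192.168.122.200   # example address reserved for other use
     - name: InternalApi
       subnets:
       - name: subnet1
         cidr: 172.17.0.0/24
         vlan: 20            # VLAN ID is an assumption
         allocationRanges:
         - start: 172.17.0.100
           end: 172.17.0.250
     # Storage, Tenant, and External networks follow the same pattern.
   ```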
   - spec.networks.name: The name of the network, for example, CtlPlane.
   - spec.networks.subnets: The IPv4 subnet specification.
   - spec.networks.subnets.name: The name of the subnet, for example, subnet1.
   - spec.networks.subnets.allocationRanges: The NetConfig allocationRange. The allocationRange must not overlap with the MetalLB IPAddressPool range and the net-attach-def IP address pool range.
   - spec.networks.subnets.excludeAddresses: Optional: a list of IP addresses from the allocation range that must not be used by data plane nodes.
   - spec.networks.subnets.vlan: The network VLAN. For information about the default RHOSO networks, see Default Red Hat OpenStack Services on OpenShift networks.
4. Save the openstack_netconfig.yaml definition file.
5. Create the data plane network:

   $ oc create -f openstack_netconfig.yaml -n openstack

6. To verify that the data plane network is created, view the openstacknetconfig resource:

   $ oc get netconfig/openstacknetconfig -n openstack

   If you see errors, check the underlying network-attach-definition and node network configuration policies:

   $ oc get network-attachment-definitions -n openstack
   $ oc get nncp