Chapter 3. Preparing networks for Red Hat OpenStack Services on OpenShift
To prepare for configuring and deploying your Red Hat OpenStack Services on OpenShift (RHOSO) environment, you must configure the Red Hat OpenShift Container Platform (RHOCP) networks on your RHOCP cluster.
If you need a centralized gateway for connection to external networks, you can add OVN gateways to the control plane or to dedicated Networker nodes on the data plane. For information about adding optional OVN gateways to the control plane, see Configuring OVN gateways for a Red Hat OpenStack Services on OpenShift deployment.
3.1. Networks for Red Hat OpenStack Services on OpenShift
Red Hat OpenStack Services on OpenShift (RHOSO) requires the following physical data center networks.
- Control plane network
- Used by the OpenStack Operator for Ansible SSH access to deploy and connect to the data plane nodes from the Red Hat OpenShift Container Platform (RHOCP) environment. This network is also used by data plane nodes for live migration of instances.
- Designate network
- Used internally by the RHOSO DNS service (designate) to manage the DNS servers. For more information, see Designate networks in Configuring DNS as a service.
- Designateext network
- Used to provide external access to the DNS service resolver and the DNS servers.
- External network
- An optional network that is used when required for your environment. For example, you might create an external network for any of the following purposes:
  - To provide virtual machine instances with Internet access.
  - To create flat provider networks that are separate from the control plane.
  - To configure VLAN provider networks on a separate bridge from the control plane.
  - To provide access to virtual machine instances with floating IPs on a network other than the control plane network.

  Note: When an external network is used for workloads, an OVN gateway is required in some use cases. For more information about use cases and available options, see Configuring a control plane OVN gateway with a dedicated NIC in Configuring networking services.
- Internal API network
- Used for internal communication between RHOSO components.
- Octavia network
- Used to connect Load-balancing service (octavia) controllers running in the control plane. For more information, see Octavia network in Configuring load balancing as a service.
- Storage network
- Used for block storage, RBD, NFS, FC, and iSCSI.
- Storage Management network
- An optional network that is used by storage components. For example, Red Hat Ceph Storage uses the Storage Management network in a hyperconverged infrastructure (HCI) environment as the cluster_network to replicate data.

  Note: For more information about Red Hat Ceph Storage network configuration, see "Ceph network configuration" in the Red Hat Ceph Storage Configuration Guide.
- Tenant (project) network
- Used for data communication between virtual machine instances within the cloud deployment.
Figure 3.1. Physical networks for RHOSO
The following table details the default networks used in a RHOSO deployment.
By default, the control plane and external networks do not use VLANs. Networks that do not use VLANs must be placed on separate NICs. You can use a VLAN for the control plane network on new RHOSO deployments. You can also use the native VLAN on a trunked interface as the non-VLAN network. For example, you can have the control plane and the internal API on one NIC, and the external network with no VLAN on a separate NIC.
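As a minimal sketch of that NIC layout, the following nmstate-style fragment shows a control plane carried untagged (native VLAN) on a trunked NIC that also carries a tagged VLAN, with the external network on a second NIC. The interface names (enp6s0, enp7s0) and addresses are illustrative assumptions, not RHOSO defaults:

```yaml
# Illustrative fragment only; interface names and IPs are placeholders
# for your hardware, not values mandated by RHOSO.
interfaces:
- name: enp6s0            # NIC 1: control plane, untagged (native VLAN)
  type: ethernet
  state: up
  ipv4:
    address:
    - ip: 192.168.122.10
      prefix-length: 24
    enabled: true
- name: enp6s0.20         # NIC 1: internal API as a tagged VLAN on the same trunk
  type: vlan
  state: up
  vlan:
    base-iface: enp6s0
    id: 20
- name: enp7s0            # NIC 2: external network, no VLAN
  type: ethernet
  state: up
```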
| Network name | CIDR | NetConfig allocationRange | MetalLB IPAddressPool range | net-attach-def ipam range | OCP worker nncp range |
|---|---|---|---|---|---|
| ctlplane | 192.168.122.0/24 | 192.168.122.100 - 192.168.122.250 | 192.168.122.80 - 192.168.122.90 | 192.168.122.30 - 192.168.122.70 | 192.168.122.10 - 192.168.122.20 |
| designate | 172.26.0.0/24 | n/a | n/a | 172.26.0.30 - 172.26.0.70 | 172.26.0.10 - 172.26.0.20 |
| designateext | 172.34.0.0/24 | n/a | 172.34.0.80 - 172.34.0.120 | 172.34.0.30 - 172.34.0.70 | 172.34.0.10 - 172.34.0.20 |
| external | 10.0.0.0/24 | 10.0.0.100 - 10.0.0.250 | n/a | n/a | n/a |
| internalapi | 172.17.0.0/24 | 172.17.0.100 - 172.17.0.250 | 172.17.0.80 - 172.17.0.90 | 172.17.0.30 - 172.17.0.70 | 172.17.0.10 - 172.17.0.20 |
| octavia | 172.23.0.0/24 | n/a | n/a | 172.23.0.30 - 172.23.0.70 | n/a |
| storage | 172.18.0.0/24 | 172.18.0.100 - 172.18.0.250 | n/a | 172.18.0.30 - 172.18.0.70 | 172.18.0.10 - 172.18.0.20 |
| storageMgmt | 172.20.0.0/24 | 172.20.0.100 - 172.20.0.250 | n/a | 172.20.0.30 - 172.20.0.70 | 172.20.0.10 - 172.20.0.20 |
| tenant | 172.19.0.0/24 | 172.19.0.100 - 172.19.0.250 | n/a | 172.19.0.30 - 172.19.0.70 | 172.19.0.10 - 172.19.0.20 |
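The four per-network ranges in the table must never overlap: the nncp worker addresses, the net-attach-def ipam range, the MetalLB IPAddressPool range, and the NetConfig allocationRange each carve out a disjoint slice of the same CIDR. As a sketch, the following Python check (helper names are illustrative, not part of RHOSO) verifies this for the ctlplane row:

```python
# Sketch: confirm the ctlplane ranges from the table above are disjoint.
# Only the standard library ipaddress module is used.
from ipaddress import ip_address

def expand(start, end):
    """Return the set of integer IPs in an inclusive start-end range."""
    a, b = int(ip_address(start)), int(ip_address(end))
    return set(range(a, b + 1))

# ctlplane row: worker nncp, net-attach-def ipam, MetalLB pool, NetConfig
ctlplane_ranges = {
    "ocp_worker_nncp": ("192.168.122.10", "192.168.122.20"),
    "net_attach_def_ipam": ("192.168.122.30", "192.168.122.70"),
    "metallb_ipaddresspool": ("192.168.122.80", "192.168.122.90"),
    "netconfig_allocation": ("192.168.122.100", "192.168.122.250"),
}

def find_overlaps(ranges):
    """Return pairs of range names whose address sets intersect."""
    names = list(ranges)
    expanded = {n: expand(*ranges[n]) for n in names}
    return [
        (a, b)
        for i, a in enumerate(names)
        for b in names[i + 1:]
        if expanded[a] & expanded[b]
    ]

print(find_overlaps(ctlplane_ranges))  # → [] (no overlaps)
```

Running the same check against a proposed custom addressing plan before deployment catches overlap mistakes that otherwise surface as hard-to-diagnose address conflicts.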
3.2. Preparing RHOCP for RHOSO networks
The Red Hat OpenStack Services on OpenShift (RHOSO) services run as a Red Hat OpenShift Container Platform (RHOCP) workload. A RHOSO environment uses isolated networks to separate different types of network traffic, which improves security, performance, and management. You must connect the RHOCP worker nodes to your isolated networks and expose the internal service endpoints on the isolated networks. The public service endpoints are exposed as RHOCP routes by default, because only routes are supported for public endpoints.
The control plane interface name must be consistent across all nodes because network manifests reference the control plane interface name directly. If the control plane interface names are inconsistent, then the RHOSO environment fails to deploy. If the physical interface names are inconsistent on the nodes, you must create a Linux bond that configures a consistent alternative name for the physical interfaces that can be referenced by the other network manifests.
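A minimal sketch of such a bond, assuming a physical NIC named eno1 and a bond name of ctlplane-bond (both placeholders, not documented RHOSO names), looks like the following nmstate policy fragment; the other network manifests can then reference ctlplane-bond on every node regardless of the underlying NIC name:

```yaml
# Hedged sketch only: bond and NIC names are assumptions for illustration.
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: bond-worker-1
spec:
  desiredState:
    interfaces:
    - name: ctlplane-bond      # consistent name referenced by other manifests
      type: bond
      state: up
      link-aggregation:
        mode: active-backup
        port:
        - eno1                 # physical NIC name; may differ on each node
  nodeSelector:
    kubernetes.io/hostname: worker-1
```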
The examples in the following procedures use IPv4 addresses. You can use IPv6 addresses instead of IPv4 addresses. Dual stack (IPv4 and IPv6) is available only on project (tenant) networks. For information about how to configure IPv6 addresses, see the RHOCP Networking guide.
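As an illustration of the IPv6 variant, the following fragment rewrites the internalapi VLAN interface from the nncp example in the next section with an IPv6 address. The fd00:aaaa::/64 prefix is an assumed ULA for illustration, not a documented RHOSO default:

```yaml
# Hedged sketch: IPv6 address and prefix are illustrative assumptions.
- description: internalapi vlan interface
  ipv4:
    enabled: false
  ipv6:
    address:
    - ip: fd00:aaaa::10
      prefix-length: 64
    enabled: true
    dhcp: false
  name: internalapi
  state: up
  type: vlan
  vlan:
    base-iface: enp6s0
    id: 20
```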
3.2.1. Preparing RHOCP with isolated network interfaces
You use the NMState Operator to connect the RHOCP worker nodes to your isolated networks. Create a NodeNetworkConfigurationPolicy (nncp) CR to configure the interfaces for each isolated network on each worker node in the RHOCP cluster.
Procedure

1. Create a NodeNetworkConfigurationPolicy (nncp) CR file on your workstation, for example, openstack-nncp.yaml.

2. Retrieve the names of the worker nodes in the RHOCP cluster:

   ```
   $ oc get nodes -l node-role.kubernetes.io/worker -o jsonpath="{.items[*].metadata.name}"
   ```

3. Discover the network configuration:

   ```
   $ oc get nns/<worker_node> -o yaml | more
   ```

   Replace <worker_node> with the name of a worker node retrieved in step 2, for example, worker-1. Repeat this step for each worker node.

4. In the nncp CR file, configure the interfaces for each isolated network on each worker node in the RHOCP cluster. For information about the default physical data center networks that must be configured with network isolation, see Networks for Red Hat OpenStack Services on OpenShift.

   In the following example, the nncp CR configures the enp6s0 interface for worker node 1, osp-enp6s0-worker-1, to use VLAN interfaces with IPv4 addresses for network isolation:

   ```yaml
   apiVersion: nmstate.io/v1
   kind: NodeNetworkConfigurationPolicy
   metadata:
     name: osp-enp6s0-worker-1
   spec:
     desiredState:
       interfaces:
       - description: internalapi vlan interface
         ipv4:
           address:
           - ip: 172.17.0.10
             prefix-length: 24
           enabled: true
           dhcp: false
         ipv6:
           enabled: false
         name: internalapi
         state: up
         type: vlan
         vlan:
           base-iface: enp6s0
           id: 20
           reorder-headers: true
       - description: storage vlan interface
         ipv4:
           address:
           - ip: 172.18.0.10
             prefix-length: 24
           enabled: true
           dhcp: false
         ipv6:
           enabled: false
         name: storage
         state: up
         type: vlan
         vlan:
           base-iface: enp6s0
           id: 21
           reorder-headers: true
       - description: tenant vlan interface
         ipv4:
           address:
           - ip: 172.19.0.10
             prefix-length: 24
           enabled: true
           dhcp: false
         ipv6:
           enabled: false
         name: tenant
         state: up
         type: vlan
         vlan:
           base-iface: enp6s0
           id: 22
           reorder-headers: true
       - description: Configuring enp6s0
         ipv4:
           address:
           - ip: 192.168.122.10
             prefix-length: 24
           enabled: true
           dhcp: false
         ipv6:
           enabled: false
         mtu: 1500
         name: enp6s0
         state: up
         type: ethernet
       - description: octavia vlan interface
         name: octavia
         state: up
         type: vlan
         vlan:
           base-iface: enp6s0
           id: 24
           reorder-headers: true
       - bridge:
           options:
             stp:
               enabled: false
           port:
           - name: enp6s0.24
         description: Configuring bridge octbr
         mtu: 1500
         name: octbr
         state: up
         type: linux-bridge
       - description: designate vlan interface
         ipv4:
           address:
           - ip: 172.26.0.10
             prefix-length: 24
           dhcp: false
           enabled: true
         ipv6:
           enabled: false
         mtu: 1500
         name: designate
         state: up
         type: vlan
         vlan:
           base-iface: enp7s0
           id: 25
           reorder-headers: true
       - description: designate external vlan interface
         ipv4:
           address:
           - ip: 172.34.0.10
             prefix-length: 24
           dhcp: false
           enabled: true
         ipv6:
           enabled: false
         mtu: 1500
         name: designateext
         state: up
         type: vlan
         vlan:
           base-iface: enp7s0
           id: 26
           reorder-headers: true
     nodeSelector:
       kubernetes.io/hostname: worker-1
       node-role.kubernetes.io/worker: ""
   ```

5. Create the nncp CR in the cluster:

   ```
   $ oc apply -f openstack-nncp.yaml
   ```

6. Verify that the nncp CR is created:

   ```
   $ oc get nncp -w
   NAME                  STATUS        REASON
   osp-enp6s0-worker-1   Progressing   ConfigurationProgressing
   osp-enp6s0-worker-1   Progressing   ConfigurationProgressing
   osp-enp6s0-worker-1   Available     SuccessfullyConfigured
   ```
3.2.2. Attaching service pods to the isolated networks
Create a NetworkAttachmentDefinition (net-attach-def) custom resource (CR) for each isolated network to attach the service pods to the networks.
If you frequently recreate pods in your environment, use the Whereabouts reconciler to manage dynamic IP address assignments for the pods. For more information, see Creating a whereabouts-reconciler daemon set in the RHOCP Multiple networks guide.
Procedure

1. Create a NetworkAttachmentDefinition (net-attach-def) CR file on your workstation, for example, openstack-net-attach-def.yaml.

2. In the NetworkAttachmentDefinition CR file, configure a NetworkAttachmentDefinition resource for each isolated network to attach a service deployment pod to the network. The following examples create a NetworkAttachmentDefinition resource for each of the following networks:

   - internalapi, storage, ctlplane, and tenant networks of type macvlan.
   - octavia, the load-balancing management network, of type bridge. This network attachment connects pods that manage load balancer virtual machines (amphorae) and the Open vSwitch pods that are managed by the OVN operator.
   - designate, the network used internally by the DNS service (designate) to manage the DNS servers.
   - designateext, the network used to provide external access to the DNS service resolver and the DNS servers.

   ```yaml
   apiVersion: k8s.cni.cncf.io/v1
   kind: NetworkAttachmentDefinition
   metadata:
     name: internalapi
     namespace: openstack
   spec:
     config: |
       {
         "cniVersion": "0.3.1",
         "name": "internalapi",
         "type": "macvlan",
         "master": "internalapi",
         "ipam": {
           "type": "whereabouts",
           "range": "172.17.0.0/24",
           "range_start": "172.17.0.30",
           "range_end": "172.17.0.70"
         }
       }
   ---
   apiVersion: k8s.cni.cncf.io/v1
   kind: NetworkAttachmentDefinition
   metadata:
     name: ctlplane
     namespace: openstack
   spec:
     config: |
       {
         "cniVersion": "0.3.1",
         "name": "ctlplane",
         "type": "macvlan",
         "master": "enp6s0",
         "ipam": {
           "type": "whereabouts",
           "range": "192.168.122.0/24",
           "range_start": "192.168.122.30",
           "range_end": "192.168.122.70"
         }
       }
   ---
   apiVersion: k8s.cni.cncf.io/v1
   kind: NetworkAttachmentDefinition
   metadata:
     name: storage
     namespace: openstack
   spec:
     config: |
       {
         "cniVersion": "0.3.1",
         "name": "storage",
         "type": "macvlan",
         "master": "storage",
         "ipam": {
           "type": "whereabouts",
           "range": "172.18.0.0/24",
           "range_start": "172.18.0.30",
           "range_end": "172.18.0.70"
         }
       }
   ---
   apiVersion: k8s.cni.cncf.io/v1
   kind: NetworkAttachmentDefinition
   metadata:
     name: tenant
     namespace: openstack
   spec:
     config: |
       {
         "cniVersion": "0.3.1",
         "name": "tenant",
         "type": "macvlan",
         "master": "tenant",
         "ipam": {
           "type": "whereabouts",
           "range": "172.19.0.0/24",
           "range_start": "172.19.0.30",
           "range_end": "172.19.0.70"
         }
       }
   ---
   apiVersion: k8s.cni.cncf.io/v1
   kind: NetworkAttachmentDefinition
   metadata:
     labels:
       osp/net: octavia
     name: octavia
     namespace: openstack
   spec:
     config: |
       {
         "cniVersion": "0.3.1",
         "name": "octavia",
         "type": "bridge",
         "bridge": "octbr",
         "ipam": {
           "type": "whereabouts",
           "range": "172.23.0.0/24",
           "range_start": "172.23.0.30",
           "range_end": "172.23.0.70",
           "routes": [
             {
               "dst": "172.24.0.0/16",
               "gw": "172.23.0.150"
             }
           ]
         }
       }
   ---
   apiVersion: k8s.cni.cncf.io/v1
   kind: NetworkAttachmentDefinition
   metadata:
     name: designate
     namespace: openstack
   spec:
     config: |
       {
         "cniVersion": "0.3.1",
         "name": "designate",
         "type": "macvlan",
         "master": "designate",
         "ipam": {
           "type": "whereabouts",
           "range": "172.26.0.0/24",
           "range_start": "172.26.0.30",
           "range_end": "172.26.0.70"
         }
       }
   ---
   apiVersion: k8s.cni.cncf.io/v1
   kind: NetworkAttachmentDefinition
   metadata:
     name: designateext
     namespace: openstack
   spec:
     config: |
       {
         "cniVersion": "0.3.1",
         "name": "designateext",
         "type": "macvlan",
         "master": "designateext",
         "ipam": {
           "type": "whereabouts",
           "range": "172.34.0.0/24",
           "range_start": "172.34.0.30",
           "range_end": "172.34.0.70"
         }
       }
   ```
   - metadata.namespace: The namespace where the services are deployed.
   - "master": The node interface name associated with the network, as defined in the nncp CR.
   - "ipam": The whereabouts CNI IPAM plug-in assigns IPs to the created pods from the range .30 - .70.
   - "range_start" - "range_end": The IP address pool range must not overlap with the MetalLB IPAddressPool range and the NetConfig allocationRange.
3. Create the NetworkAttachmentDefinition CR in the cluster:

   ```
   $ oc apply -f openstack-net-attach-def.yaml
   ```

4. Verify that the NetworkAttachmentDefinition CR is created:

   ```
   $ oc get net-attach-def -n openstack
   ```
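The whereabouts range_start/range_end window is inclusive at both ends and must sit inside the declared range CIDR. A small sketch (helper names are illustrative, not part of whereabouts) makes the arithmetic concrete for the internalapi values used above:

```python
# Sketch: size and bounds checks for a whereabouts range window,
# using only the standard library ipaddress module.
from ipaddress import ip_address, ip_network

def whereabouts_capacity(range_start, range_end):
    """Inclusive count of addresses whereabouts can assign from the window."""
    return int(ip_address(range_end)) - int(ip_address(range_start)) + 1

def window_in_cidr(cidr, range_start, range_end):
    """True if the whole start-end window falls inside the declared CIDR."""
    net = ip_network(cidr)
    return ip_address(range_start) in net and ip_address(range_end) in net

# internalapi values from the net-attach-def example above
print(whereabouts_capacity("172.17.0.30", "172.17.0.70"))             # → 41
print(window_in_cidr("172.17.0.0/24", "172.17.0.30", "172.17.0.70"))  # → True
```

The 41-address window bounds how many service pods can attach to that network at once, which is worth checking before shrinking a range.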
3.2.3. Preparing RHOCP for RHOSO network VIPs
You use the MetalLB Operator to expose internal service endpoints on the isolated networks. You must create an L2Advertisement resource to define how the Virtual IPs (VIPs) are announced, and an IPAddressPool resource to configure which IPs can be used as VIPs. In layer 2 mode, one node assumes the responsibility of advertising a service to the local network.
Procedure

1. Create an IPAddressPool CR file on your workstation, for example, openstack-ipaddresspools.yaml.

2. In the IPAddressPool CR file, configure an IPAddressPool resource on each isolated network to specify the IP address ranges over which MetalLB has authority:

   ```yaml
   apiVersion: metallb.io/v1beta1
   kind: IPAddressPool
   metadata:
     namespace: metallb-system
     name: ctlplane
   spec:
     addresses:
     - 192.168.122.80-192.168.122.90
     autoAssign: true
     avoidBuggyIPs: false
     serviceAllocation:
       namespaces:
       - openstack
   ---
   apiVersion: metallb.io/v1beta1
   kind: IPAddressPool
   metadata:
     name: internalapi
     namespace: metallb-system
   spec:
     addresses:
     - 172.17.0.80-172.17.0.90
     autoAssign: true
     avoidBuggyIPs: false
   ---
   apiVersion: metallb.io/v1beta1
   kind: IPAddressPool
   metadata:
     namespace: metallb-system
     name: designateext
   spec:
     addresses:
     - 172.34.0.80-172.34.0.120
     autoAssign: true
     avoidBuggyIPs: false
   ```

   - spec.addresses: The IPAddressPool range must not overlap with the whereabouts IPAM range and the NetConfig allocationRange.
   - spec.serviceAllocation: Specify the namespaces that can consume IP addresses from the IPAddressPool range. This is an optional field that you can configure to prevent non-RHOSO services hosted on your RHOCP cluster from consuming IP addresses from the IPAddressPool range.

   For information about how to configure the other IPAddressPool resource parameters, see Configuring MetalLB address pools in the RHOCP Networking guide.

3. Create the IPAddressPool CR in the cluster:

   ```
   $ oc apply -f openstack-ipaddresspools.yaml
   ```

4. Verify that the IPAddressPool CR is created:

   ```
   $ oc describe -n metallb-system IPAddressPool
   ```
5. Create an L2Advertisement CR file on your workstation, for example, openstack-l2advertisement.yaml.

6. In the L2Advertisement CR file, configure L2Advertisement CRs to define which node advertises a service to the local network. Create one L2Advertisement resource for each network.

   In the following example, each L2Advertisement CR specifies that the VIPs requested from the network address pools are announced on the interface that is attached to the VLAN:

   ```yaml
   apiVersion: metallb.io/v1beta1
   kind: L2Advertisement
   metadata:
     name: ctlplane
     namespace: metallb-system
   spec:
     ipAddressPools:
     - ctlplane
     interfaces:
     - enp6s0
     nodeSelectors:
     - matchLabels:
         node-role.kubernetes.io/worker: ""
   ---
   apiVersion: metallb.io/v1beta1
   kind: L2Advertisement
   metadata:
     name: internalapi
     namespace: metallb-system
   spec:
     ipAddressPools:
     - internalapi
     interfaces:
     - internalapi
     nodeSelectors:
     - matchLabels:
         node-role.kubernetes.io/worker: ""
   ---
   apiVersion: metallb.io/v1beta1
   kind: L2Advertisement
   metadata:
     name: designateext
     namespace: metallb-system
   spec:
     ipAddressPools:
     - designateext
     interfaces:
     - designateext
     nodeSelectors:
     - matchLabels:
         node-role.kubernetes.io/worker: ""
   ```

   - spec.interfaces: The interface where the VIPs requested from the VLAN address pool are announced.

   For information about how to configure the other L2Advertisement resource parameters, see Configuring MetalLB with a L2 advertisement and label in the RHOCP Networking guide.

7. Create the L2Advertisement CRs in the cluster:

   ```
   $ oc apply -f openstack-l2advertisement.yaml
   ```

8. Verify that the L2Advertisement CRs are created:

   ```
   $ oc get -n metallb-system L2Advertisement
   NAME           IPADDRESSPOOLS     IPADDRESSPOOL SELECTORS   INTERFACES
   ctlplane       ["ctlplane"]                                 ["enp6s0"]
   designateext   ["designateext"]                             ["designateext"]
   internalapi    ["internalapi"]                              ["internalapi"]
   storage        ["storage"]                                  ["storage"]
   tenant         ["tenant"]                                   ["tenant"]
   ```

9. If your cluster has OVNKubernetes as the network back end, you must enable global forwarding so that MetalLB can work on a secondary network interface.

   1. Check the network back end used by your cluster:

      ```
      $ oc get network.operator cluster --output=jsonpath='{.spec.defaultNetwork.type}'
      ```

   2. If the back end is OVNKubernetes, run the following command to enable global IP forwarding:

      ```
      $ oc patch network.operator cluster -p '{"spec":{"defaultNetwork":{"ovnKubernetesConfig":{"gatewayConfig":{"ipForwarding": "Global"}}}}}' --type=merge
      ```
3.3. Creating the data plane network
To create the data plane network, you define a NetConfig custom resource (CR) and specify all the subnets for the data plane networks. You must define at least one control plane network for your data plane. You can also define VLAN networks to create network isolation for composable networks, such as internalapi, storage, and external. Each network definition must include the IP address assignment.
Use the following commands to view the NetConfig CRD definition and specification schema:

```
$ oc describe crd netconfig
$ oc explain netconfig.spec
```
Procedure

1. Create a file named openstack_netconfig.yaml on your workstation.

2. Add the following configuration to openstack_netconfig.yaml to create the NetConfig CR:

   ```yaml
   apiVersion: network.openstack.org/v1beta1
   kind: NetConfig
   metadata:
     name: openstacknetconfig
     namespace: openstack
   ```

3. In the openstack_netconfig.yaml file, define the topology for each data plane network. To use the default Red Hat OpenStack Services on OpenShift (RHOSO) networks, you must define a specification for each network. For information about the default RHOSO networks, see Networks for Red Hat OpenStack Services on OpenShift.

   Note: If you are using pre-provisioned data plane nodes, the control plane network and IP address must match the pre-provisioned data plane nodes. If the ctlplane network uses tagged VLANs, the VLAN ID must also match the VLAN ID on the pre-provisioned data plane node.

   The following example creates isolated networks for the data plane:

   ```yaml
   spec:
     networks:
     - name: ctlplane
       dnsDomain: ctlplane.example.com
       subnets:
       - name: subnet1
         allocationRanges:
         - end: 192.168.122.120
           start: 192.168.122.100
         - end: 192.168.122.200
           start: 192.168.122.150
         cidr: 192.168.122.0/24
         gateway: 192.168.122.1
     - name: internalapi
       dnsDomain: internalapi.example.com
       subnets:
       - name: subnet1
         allocationRanges:
         - end: 172.17.0.250
           start: 172.17.0.100
         excludeAddresses:
         - 172.17.0.10
         - 172.17.0.12
         cidr: 172.17.0.0/24
         vlan: 20
     - name: external
       dnsDomain: external.example.com
       subnets:
       - name: subnet1
         allocationRanges:
         - end: 10.0.0.250
           start: 10.0.0.100
         cidr: 10.0.0.0/24
         gateway: 10.0.0.1
     - name: storage
       dnsDomain: storage.example.com
       subnets:
       - name: subnet1
         allocationRanges:
         - end: 172.18.0.250
           start: 172.18.0.100
         cidr: 172.18.0.0/24
         vlan: 21
     - name: tenant
       dnsDomain: tenant.example.com
       subnets:
       - name: subnet1
         allocationRanges:
         - end: 172.19.0.250
           start: 172.19.0.100
         cidr: 172.19.0.0/24
         vlan: 22
   ```

   - spec.networks.name: The name of the network, for example, ctlplane.
   - spec.networks.subnets: The IPv4 subnet specification.
   - spec.networks.subnets.name: The name of the subnet, for example, subnet1.
   - spec.networks.subnets.allocationRanges: The NetConfig allocationRange. The allocationRange must not overlap with the MetalLB IPAddressPool range and the net-attach-def IP address pool range.
   - spec.networks.subnets.excludeAddresses: Optional: List of IP addresses that must not be used by data plane nodes.
   - spec.networks.subnets.vlan: The network VLAN. For information about the default RHOSO networks, see Networks for Red Hat OpenStack Services on OpenShift.
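The pool of addresses a subnet can actually hand to data plane nodes is the union of its allocationRanges minus any excludeAddresses. A short sketch (helper names are illustrative, not part of the NetConfig API) computes this for the example subnets:

```python
# Sketch: assignable data-plane IPs for a NetConfig subnet, using only
# the standard library ipaddress module.
from ipaddress import ip_address

def usable_ips(allocation_ranges, exclude_addresses=()):
    """Union of inclusive allocationRanges minus excludeAddresses."""
    ips = set()
    for start, end in allocation_ranges:
        ips.update(range(int(ip_address(start)), int(ip_address(end)) + 1))
    return ips - {int(ip_address(x)) for x in exclude_addresses}

# internalapi subnet from the example: one range, two excluded addresses
internalapi = usable_ips([("172.17.0.100", "172.17.0.250")],
                         ["172.17.0.10", "172.17.0.12"])
print(len(internalapi))  # → 151

# ctlplane subnet from the example: two disjoint allocation ranges
ctlplane = usable_ips([("192.168.122.100", "192.168.122.120"),
                       ("192.168.122.150", "192.168.122.200")])
print(len(ctlplane))     # → 72
```

Note that in the example the excluded internalapi addresses (.10 and .12, the worker nncp IPs) already fall outside the allocation range, so the count is unchanged; the exclusion guards against future range edits.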
4. Save the openstack_netconfig.yaml definition file.

5. Create the data plane network:

   ```
   $ oc create -f openstack_netconfig.yaml -n openstack
   ```

6. To verify that the data plane network is created, view the openstacknetconfig resource:

   ```
   $ oc get netconfig/openstacknetconfig -n openstack
   ```

   If you see errors, check the underlying network-attachment-definition and node network configuration policies:

   ```
   $ oc get network-attachment-definitions -n openstack
   $ oc get nncp
   ```