Chapter 5. Preparing networks for Red Hat OpenStack Services on OpenShift


To prepare for configuring and deploying your Red Hat OpenStack Services on OpenShift (RHOSO) environment, you must configure the Red Hat OpenShift Container Platform (RHOCP) networks on your RHOCP cluster. The following procedures are for BGP networks. For information about configuring L2 networks, see Preparing RHOCP for RHOSO networks.

The following physical data center networks are typically implemented for a Red Hat OpenStack Services on OpenShift (RHOSO) deployment:

  • Control plane network: used by the OpenStack Operator for Ansible SSH access to deploy and connect to the data plane nodes from the Red Hat OpenShift Container Platform (RHOCP) environment. This network is also used by data plane nodes for live migration of instances.
  • External network: (optional) used when required for your environment. For example, you might create an external network for any of the following purposes:

    • To provide virtual machine instances with Internet access.
    • To create flat provider networks that are separate from the control plane.
    • To configure VLAN provider networks on a separate bridge from the control plane.
    • To provide access to virtual machine instances with floating IPs on a network other than the control plane network.
  • Internal API network: used for internal communication between RHOSO components.
  • Storage network: used for block storage, RBD, NFS, FC, and iSCSI.
  • Tenant (project) network: used for data communication between virtual machine instances within the cloud deployment.
  • Octavia controller network: (optional) used to connect Load-balancing service (octavia) controllers running in the control plane.
  • Storage Management network: (optional) used by storage components. For example, Red Hat Ceph Storage uses the Storage Management network in a hyperconverged infrastructure (HCI) environment as the cluster_network to replicate data.

    Note

    For more information on Red Hat Ceph Storage network configuration, see Ceph network configuration in the Red Hat Ceph Storage Configuration Guide.

The following table details the default networks used in a RHOSO deployment. If required, you can update the networks for your environment.

Note

To ensure that the ctlplane network can be accessed externally, the MetalLB IPAddressPool and NetworkAttachmentDefinition ipam ranges for the ctlplane network should be on a network that is advertised by BGP. In the OpenStackDataPlaneNodeSet custom resources (CRs), use the network from which the data plane nodes can be reached.

Table 5.1. Default RHOSO networks for BGP

| Network name | CIDR             | NetConfig allocationRange         | MetalLB IPAddressPool range     | net-attach-def ipam range       |
|--------------|------------------|-----------------------------------|---------------------------------|---------------------------------|
| ctlplane     | 192.168.122.0/24 | 192.168.122.100 - 192.168.122.250 | 192.168.122.80 - 192.168.122.90 | 192.168.122.30 - 192.168.122.70 |
| external     | 10.0.0.0/24      | 10.0.0.100 - 10.0.0.250           | n/a                             | n/a                             |
| internalapi  | 172.17.0.0/24    | 172.17.0.100 - 172.17.0.250       | 172.17.0.80 - 172.17.0.90       | 172.17.0.30 - 172.17.0.70       |
| storage      | 172.18.0.0/24    | 172.18.0.100 - 172.18.0.250       | n/a                             | 172.18.0.30 - 172.18.0.70       |
| tenant       | 172.19.0.0/24    | 172.19.0.100 - 172.19.0.250       | n/a                             | 172.19.0.30 - 172.19.0.70       |
| octavia      | 172.23.0.0/24    | n/a                               | n/a                             | 172.23.0.30 - 172.23.0.70       |
| storageMgmt  | 172.20.0.0/24    | 172.20.0.100 - 172.20.0.250       | n/a                             | 172.20.0.30 - 172.20.0.70       |

The topology of a distributed control plane environment includes three Red Hat OpenShift Container Platform (RHOCP) zones. Each zone has at least one worker node that hosts the control plane services and one Compute node. Each node has two network interfaces, eth2 and eth3, that are configured with the IP addresses for the subnets of the zone in which the node is located.

An additional IP address from the bgpmainnet network is configured on the loopback interface of each node. This is the IP address that the nodes use to communicate with each other. The BGP networks, Net1 on eth2 and Net2 on eth3, exist only to establish L2 connectivity within the boundaries of a zone. The bgpmainnet network is defined as 99.99.0.0/16 and has a subnet for each zone.

The following table lists the networks that establish connectivity to the fabric over eth2 and eth3, with different IP addresses for each zone and rack, and the global bgpmainnet network that is used as the source for traffic.

Table 5.2. Zone connectivity

| Network name           | Zone 0        | Zone 1        | Zone 2        |
|------------------------|---------------|---------------|---------------|
| BGP Net1 (eth2)        | 100.64.0.0/24 | 100.64.1.0/24 | 100.64.2.0/24 |
| BGP Net2 (eth3)        | 100.65.0.0/24 | 100.65.1.0/24 | 100.65.2.0/24 |
| Bgpmainnet (loopback)  | 99.99.0.0/24  | 99.99.1.0/24  | 99.99.2.0/24  |

5.3. Preparing RHOCP for BGP networks on RHOSO

The Red Hat OpenStack Services on OpenShift (RHOSO) services run as a Red Hat OpenShift Container Platform (RHOCP) workload. You use the NMState Operator to connect the worker nodes to the required isolated networks. You use the MetalLB Operator to expose internal service endpoints on the isolated networks. By default, the public service endpoints are exposed as RHOCP routes.

Note

The examples in the following procedures use IPv4 addresses. You can use IPv6 addresses instead of IPv4 addresses. Dual-stack IPv4/IPv6 is not available. For information about how to configure IPv6 addresses, see the RHOCP Networking guide.
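
Before you configure the networks, you can optionally confirm that the NMState Operator and the MetalLB Operator are installed and running. The following commands are a minimal check; the namespaces shown, openshift-nmstate and metallb-system, are the typical installation namespaces and might differ in your environment:

$ oc get csv,pods -n openshift-nmstate

$ oc get csv,pods -n metallb-system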

In order for Red Hat OpenShift Container Platform (RHOCP) worker nodes to forward traffic based on the BGP advertisements they receive, you must disable the reverse path filters on the BGP interfaces of the RHOCP worker nodes that run RHOSO services.

Procedure

  1. Create a manifest file named tuned.yaml with content similar to the following:

    apiVersion: tuned.openshift.io/v1
    kind: Tuned
    metadata:
      name: default
      namespace: openshift-cluster-node-tuning-operator
    spec:
      profile:
      - data: |
          [main]
          summary=Optimize systems running OpenShift (provider specific parent profile)
          include=-provider-${f:exec:cat:/var/lib/ocp-tuned/provider},openshift
    
          [sysctl]
          net.ipv4.conf.enp8s0.rp_filter=0
          net.ipv4.conf.enp9s0.rp_filter=0
        name: openshift
      recommend:
      - match:
        - label: kubernetes.io/hostname
          value: worker-0
        - label: kubernetes.io/hostname
          value: worker-1
        - label: kubernetes.io/hostname
          value: worker-2
        - label: node-role.kubernetes.io/master
        operand:
          tunedConfig:
            reapply_sysctl: false
        priority: 15
        profile: openshift-no-reapply-sysctl
    status: {}
  2. Save the file and create the Tuned resource:

    $ oc create -f tuned.yaml
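
After the Tuned resource is applied, you can optionally verify that reverse path filtering is disabled on the BGP interfaces. The following command is a minimal sketch that assumes the node name worker-0 and the interface names enp8s0 and enp9s0 from the example profile; the expected value for both parameters is 0:

$ oc debug node/worker-0 -- chroot /host sysctl net.ipv4.conf.enp8s0.rp_filter net.ipv4.conf.enp9s0.rp_filter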

Create a NodeNetworkConfigurationPolicy (nncp) custom resource (CR) to configure the interfaces for each isolated network on each worker node in the Red Hat OpenShift Container Platform (RHOCP) cluster.

Procedure

  1. Create a NodeNetworkConfigurationPolicy (nncp) CR file on your workstation, for example, openstack-nncp-bgp.yaml.
  2. Retrieve the names of the worker nodes in the RHOCP cluster:

    $ oc get nodes -l node-role.kubernetes.io/worker -o jsonpath="{.items[*].metadata.name}"
  3. Discover the network configuration:

    $ oc get nns/<worker_node> -o yaml | more
    • Replace <worker_node> with the name of a worker node retrieved in step 2, for example, worker-1. Repeat this step for each worker node.
  4. In the nncp CR file, configure the interfaces for each isolated network on each worker node in the RHOCP cluster. For information about the default physical data center networks that must be configured with network isolation, see Default Red Hat OpenStack Services on OpenShift networks for BGP.

    In the following example, the nncp CR configures multiple unconnected bridges that map to the Red Hat OpenStack Services on OpenShift (RHOSO) networks. The BGP interfaces peer with the network fabric to establish connectivity, and the loopback interface is configured with the BGP network source address, 99.99.0.x. You can optionally dedicate a NIC to the ctlplane network.

    apiVersion: nmstate.io/v1
    kind: NodeNetworkConfigurationPolicy
    metadata:
      labels:
        osp/nncm-config-type: standard
      name: worker-0
      namespace: openstack
    spec:
      desiredState:
        dns-resolver:
          config:
            search: []
            server:
            - 192.168.122.1
        interfaces:
        - description: internalapi bridge
          mtu: 1500
          name: internalapi
          state: up
          type: linux-bridge
        - description: storage bridge
          mtu: 1500
          name: storage
          state: up
          type: linux-bridge
        - description: tenant bridge
          mtu: 1500
          name: tenant
          state: up
          type: linux-bridge
        - description: ctlplane bridge
          mtu: 1500
          name: ospbr
          state: up
          type: linux-bridge
        - description: BGP interface 1
          ipv4:
            address:
            - ip: 100.64.0.14
              prefix-length: "30"
            dhcp: false
            enabled: true
          ipv6:
            enabled: false
          mtu: 1500
          name: enp8s0
          state: up
          type: ethernet
        - description: BGP interface 2
          ipv4:
            address:
            - ip: 100.65.0.14
              prefix-length: "30"
            dhcp: false
            enabled: true
          ipv6:
            enabled: false
          mtu: 1500
          name: enp9s0
          state: up
          type: ethernet
        - description: loopback interface
          ipv4:
            address:
            - ip: 99.99.0.3
              prefix-length: "32"
            dhcp: false
            enabled: true
          mtu: 65536
          name: lo
          state: up
        route-rules:
          config: []
        routes:
          config:
          - destination: 99.99.0.0/16
            next-hop-address: 100.64.0.13
            next-hop-interface: enp8s0
          - destination: 99.99.0.0/16
            next-hop-address: 100.65.0.13
            next-hop-interface: enp9s0
      nodeSelector:
        kubernetes.io/hostname: worker-0
        node-role.kubernetes.io/worker: ""
  5. Create the nncp CR in the cluster:

    $ oc apply -f openstack-nncp-bgp.yaml
  6. Verify that the nncp CR is created:

    $ oc get nncp -w
    NAME       STATUS        REASON
    worker-0   Progressing   ConfigurationProgressing
    worker-0   Progressing   ConfigurationProgressing
    worker-0   Available     SuccessfullyConfigured
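
When the nncp CR reports Available, you can optionally confirm that the BGP interface addresses and the routes to the bgpmainnet network are configured on the node. The following commands are a minimal sketch that assumes the worker-0 example values from the nncp CR in this procedure:

$ oc debug node/worker-0 -- chroot /host ip -br addr show enp8s0

$ oc debug node/worker-0 -- chroot /host ip route show 99.99.0.0/16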

Create a NetworkAttachmentDefinition (net-attach-def) custom resource (CR) for each isolated network to attach the service pods to the networks.

Procedure

  1. Create a NetworkAttachmentDefinition (net-attach-def) CR file on your workstation, for example, openstack-net-attach-def.yaml.
  2. In the NetworkAttachmentDefinition CR file, configure a NetworkAttachmentDefinition resource for each isolated network to attach a service deployment pod to the network. The following example creates a NetworkAttachmentDefinition resource that uses a type bridge interface with specific gateway configurations and additional options:

    apiVersion: k8s.cni.cncf.io/v1
    kind: NetworkAttachmentDefinition
    metadata:
      labels:
        osp/net: internalapi
        osp/net-attach-def-type: standard
      name: internalapi
      namespace: openstack 1
    spec:
      config: |
        {
          "cniVersion": "0.3.1",
          "name": "internalapi", 2
          "type": "bridge",
          "isDefaultGateway": true,
          "isGateway": true,
          "forceAddress": false,
          "ipMasq": true, 3
          "hairpinMode": true,
          "bridge": "internalapi",
          "ipam": { 4
            "type": "whereabouts",
            "range": "172.17.0.0/24",
            "range_start": "172.17.0.30", 5
            "range_end": "172.17.0.70",
            "gateway": "172.17.0.1"
          }
        }
    1
    The namespace where the services are deployed.
    2
    The node interface name associated with the network, as defined in the nncp CR.
    3
    Optional: Set to true to enable IP masquerading. If the gateway does not have an IP address, ipMasq has no effect. The default value is false.
    4
    The whereabouts CNI IPAM plugin to assign IPs to the created pods from the range .30 - .70.
    5
    The IP address pool range must not overlap with the MetalLB IPAddressPool range and the NetConfig allocationRange.
    Note

    Set "ipMasq": true when the data plane nodes do not have the necessary routes, or when the data plane nodes do not have connection to the control plane network before Free Range Routing (FRR) is configured.

  3. Create the NetworkAttachmentDefinition CR in the cluster:

    $ oc apply -f openstack-net-attach-def.yaml
  4. Verify that the NetworkAttachmentDefinition CR is created:

    $ oc get net-attach-def -n openstack
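
Optionally, inspect the rendered CNI configuration of a net-attach-def to confirm the whereabouts range, for example for the internalapi network:

$ oc get net-attach-def internalapi -n openstack -o jsonpath='{.spec.config}'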

You must create pairs of BGPPeer custom resources (CRs) to define which leaf switch connects to the eth2 and eth3 interfaces on each node. For example, worker-0 has two BGPPeer CRs, one for leaf-0 and one for leaf-1. For information about BGP peers, see Configuring a BGP peer (https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/ingress_and_load_balancing/load-balancing-with-metallb#nw-metallb-configure-bgppeer_configure-metallb-bgp-peers).

You must also create IPAddressPool CRs to define the network ranges to be advertised, and a BGPAdvertisement CR that defines how the BGPPeer CRs are announced and links the IPAddressPool CRs to the BGPPeer CRs that receive the advertisements.

Procedure

  1. Create a BGPPeer CR file on your workstation, for example, bgppeers.yaml.
  2. Configure the pairs of BGPPeer CRs for each node to peer with. The following example configures two BGPPeer CRs for the worker-0 node, one for leaf-0 and one for leaf-1:

    apiVersion: metallb.io/v1beta2
    kind: BGPPeer
    metadata:
      name: bgp-peer-node-0-0
      namespace: metallb-system
    spec:
      myASN: 64999
      nodeSelectors:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - worker-0
      password: r3dh4t1234
      peerASN: 64999
      peerAddress: 100.64.0.13
    ---
    apiVersion: metallb.io/v1beta2
    kind: BGPPeer
    metadata:
      name: bgp-peer-node-0-1
      namespace: metallb-system
    spec:
      myASN: 64999
      nodeSelectors:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - worker-0
      password: r3dh4t1234
      peerASN: 64999
      peerAddress: 100.65.0.13
  3. Create the BGPPeer CRs:

    $ oc create -f bgppeers.yaml
  4. Create an IPAddressPool CR file on your workstation, for example, ipaddresspools-bgp.yaml.
  5. In the IPAddressPool CR file, configure an IPAddressPool resource on each isolated network to specify the IP address ranges over which MetalLB has authority:

    apiVersion: metallb.io/v1beta1
    kind: IPAddressPool
    metadata:
      name: ctlplane
      namespace: metallb-system
    spec:
      addresses:
      - 192.168.125.80-192.168.125.90 1
    ---
    apiVersion: metallb.io/v1beta1
    kind: IPAddressPool
    metadata:
      name: internalapi
      namespace: metallb-system
    spec:
      addresses:
      - 172.17.0.80-172.17.0.90
    1
    The IPAddressPool range must not overlap with the whereabouts IPAM range and the NetConfig allocationRange.

    For information about how to configure the other IPAddressPool resource parameters, see Configuring MetalLB BGP peers in RHOCP Networking overview.

  6. Create the IPAddressPool CR in the cluster:

    $ oc apply -f ipaddresspools-bgp.yaml
  7. Verify that the IPAddressPool CR is created:

    $ oc describe -n metallb-system IPAddressPool
  8. Create a BGPAdvertisement CR file on your workstation, for example, bgpadvert.yaml.

    apiVersion: metallb.io/v1beta1
    kind: BGPAdvertisement
    metadata:
      name: bgpadvertisement
      namespace: metallb-system
    spec:
      ipAddressPools:
      - ctlplane
      - internalapi
      - storage
      - tenant
      peers: 1
      - bgp-peer-node-0-0
      - bgp-peer-node-0-1
      - bgp-peer-node-1-0
      - bgp-peer-node-1-1
      - bgp-peer-node-2-0
      - bgp-peer-node-2-1
      ...
    1
    Lists all the BGPPeer CRs you defined for the peer IP addresses that each RHOCP node needs to communicate with.
  9. Create the BGPAdvertisement CR in the cluster:

    $ oc apply -f bgpadvert.yaml
  10. If your cluster has OVNKubernetes as the network back end, then you must enable global forwarding so that MetalLB can work on a secondary network interface.

    1. Check the network back end used by your cluster:

      $ oc get network.operator cluster --output=jsonpath='{.spec.defaultNetwork.type}'
    2. If the back end is OVNKubernetes, then run the following command to enable global IP forwarding:

      $ oc patch network.operator cluster -p '{"spec":{"defaultNetwork":{"ovnKubernetesConfig":{"gatewayConfig":{"ipForwarding": "Global"}}}}}' --type=merge
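
After you complete this procedure, you can optionally verify the MetalLB BGP configuration. The following commands are a minimal sketch: <speaker_pod_name> is a placeholder, the frr container is present only when MetalLB uses the FRR-based BGP implementation, and the final query mirrors the ipForwarding patch from the previous step:

$ oc get bgppeer,ipaddresspool,bgpadvertisement -n metallb-system

$ oc get pods -n metallb-system -l component=speaker

$ oc exec -n metallb-system <speaker_pod_name> -c frr -- vtysh -c "show bgp summary"

$ oc get network.operator cluster -o jsonpath='{.spec.defaultNetwork.ovnKubernetesConfig.gatewayConfig.ipForwarding}'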

5.4. Creating the data plane network for BGP

To create the data plane network, you define a NetConfig custom resource (CR) and specify all the subnets for the data plane networks. You must define at least one control plane network for your data plane. You can also define VLAN networks to create network isolation for composable networks, such as InternalAPI, Storage, and External. Each network definition must include the IP address assignment.

Tip

Use the following commands to view the NetConfig CRD definition and specification schema:

$ oc describe crd netconfig

$ oc explain netconfig.spec
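
You can also use oc explain to drill into nested fields of the specification, for example:

$ oc explain netconfig.spec.networks.subnets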

Procedure

  1. Create a file named netconfig_bgp.yaml on your workstation.
  2. Add the following configuration to netconfig_bgp.yaml to create the NetConfig CR:

    apiVersion: network.openstack.org/v1beta1
    kind: NetConfig
    metadata:
      name: bgp-netconfig
      namespace: openstack
  3. In the netconfig_bgp.yaml file, define the topology for each data plane network. To use the default Red Hat OpenStack Services on OpenShift (RHOSO) networks, you must define a specification for each network. For information about the default RHOSO networks, see Default Red Hat OpenStack Services on OpenShift networks for BGP. The following example creates isolated networks for the data plane:

    apiVersion: network.openstack.org/v1beta1
    kind: NetConfig
    metadata:
      name: bgp-netconfig
      namespace: openstack
    spec:
      networks:
      - name: ctlplane 1
        dnsDomain: ctlplane.example.com
        serviceNetwork: ctlplane
        mtu: 1500
        subnets: 2
        - name: subnet1 3
          allocationRanges: 4
          - end: 192.168.122.120
            start: 192.168.122.100
          - end: 192.168.122.200
            start: 192.168.122.150
          cidr: 192.168.122.0/24
          gateway: 192.168.122.1
        - name: subnet2
          allocationRanges:
          - end: 192.168.123.120
            start: 192.168.123.100
          - end: 192.168.123.200
            start: 192.168.123.150
          cidr: 192.168.123.0/24
          gateway: 192.168.123.1
        - name: subnet3
          allocationRanges:
          - end: 192.168.124.120
            start: 192.168.124.100
          - end: 192.168.124.200
            start: 192.168.124.150
          cidr: 192.168.124.0/24
          gateway: 192.168.124.1
      - name: internalapi
        dnsDomain: internalapi.example.com
        serviceNetwork: internalapi
        mtu: 1500
        subnets:
        - name: subnet1
          allocationRanges:
          - end: 172.17.0.250
            start: 172.17.0.100
          cidr: 172.17.0.0/24
          vlan: 20 5
      - name: external
        dnsDomain: external.example.com
        mtu: 1500
        subnets:
        - name: subnet1
          allocationRanges:
          - end: 192.168.32.250
            start: 192.168.32.130
          cidr: 192.168.32.0/20
          vlan: 99
      - name: storage
        dnsDomain: storage.example.com
        mtu: 1500
        subnets:
        - name: subnet1
          allocationRanges:
          - end: 172.18.0.250
            start: 172.18.0.100
          cidr: 172.18.0.0/24
          vlan: 21
      - name: tenant
        dnsDomain: tenant.example.com
        mtu: 1500
        subnets:
        - name: subnet1
          allocationRanges:
          - end: 172.19.0.250
            start: 172.19.0.100
          cidr: 172.19.0.0/24
          vlan: 22
    1
    The name of the network, for example, CtlPlane.
    2
    The IPv4 subnet specification.
    3
    The name of the subnet, for example, subnet1.
    4
    The NetConfig allocationRange. The allocationRange must not overlap with the MetalLB IPAddressPool range or the net-attach-def ipam range.
    5
    The network VLAN. For information about the default RHOSO networks, see Default Red Hat OpenStack Services on OpenShift networks for BGP.
  4. In the netconfig_bgp.yaml file, define the network interfaces that establish connectivity within each zone. The following example defines two network interfaces, bgpnet0 for eth2 and bgpnet1 for eth3, with a subnet for each zone:

      - name: bgpnet0 1
        dnsDomain: bgpnet0.example.com
        serviceNetwork: bgpnet0
        mtu: 1500
        subnets:
        - name: subnet0
          allocationRanges:
          - end: 100.64.0.36
            start: 100.64.0.1
          cidr: 100.64.0.0/24
          gateway: 100.64.0.1
          routes:
          - destination: 0.0.0.0/0
            nexthop: 100.64.0.1
        - name: subnet1
          allocationRanges:
          - end: 100.64.1.36
            start: 100.64.1.1
          cidr: 100.64.1.0/24
          gateway: 100.64.1.1
          routes:
          - destination: 0.0.0.0/0
            nexthop: 100.64.1.1
        - name: subnet2
          allocationRanges:
          - end: 100.64.2.36
            start: 100.64.2.1
          cidr: 100.64.2.0/24
          gateway: 100.64.2.1
          routes:
          - destination: 0.0.0.0/0
            nexthop: 100.64.2.1
      - name: bgpnet1 2
        dnsDomain: bgpnet1.example.com
        serviceNetwork: bgpnet1
        mtu: 1500
        subnets:
        - name: subnet0
          allocationRanges:
          - end: 100.65.0.36
            start: 100.65.0.1
          cidr: 100.65.0.0/24
          gateway: 100.65.0.1
          routes:
          - destination: 0.0.0.0/0
            nexthop: 100.65.0.1
        - name: subnet1
          allocationRanges:
          - end: 100.65.1.36
            start: 100.65.1.1
          cidr: 100.65.1.0/24
          gateway: 100.65.1.1
          routes:
          - destination: 0.0.0.0/0
            nexthop: 100.65.1.1
        - name: subnet2
          allocationRanges:
          - end: 100.65.2.36
            start: 100.65.2.1
          cidr: 100.65.2.0/24
          gateway: 100.65.2.1
          routes:
          - destination: 0.0.0.0/0
            nexthop: 100.65.2.1
    1
    The network that the data plane nodes use over eth2 to communicate with their first BGP peer.
    2
    The network that the data plane nodes use over eth3 to communicate with their second BGP peer.
  5. In the netconfig_bgp.yaml file, configure the bgpmainnet network, which provides the loopback interface IP address that the nodes use to communicate with each other:

      - name: bgpmainnet
        dnsDomain: bgpmainnet.example.com
        serviceNetwork: bgpmainnet
        mtu: 1500
        subnets:
        - name: subnet0
          allocationRanges:
          - end: 99.99.0.36
            start: 99.99.0.2
          cidr: 99.99.0.0/24
        - name: subnet1
          allocationRanges:
          - end: 99.99.1.36
            start: 99.99.1.2
          cidr: 99.99.1.0/24
        - name: subnet2
          allocationRanges:
          - end: 99.99.2.36
            start: 99.99.2.2
          cidr: 99.99.2.0/24
  6. Save the netconfig_bgp.yaml definition file.
  7. Create the data plane network:

    $ oc create -f netconfig_bgp.yaml -n openstack
  8. Create a BGPConfiguration CR file named bgpconfig.yml to announce the IP addresses of the pods over BGP:

    apiVersion: network.openstack.org/v1beta1
    kind: BGPConfiguration
    metadata:
      name: bgpconfiguration
      namespace: openstack
    spec: {}
  9. Create the BGPConfiguration CR in the cluster to generate the required FRR configuration for each pod:

    $ oc create -f bgpconfig.yml

Verification

  1. Verify that the data plane network is created:

    $ oc get netconfig/bgp-netconfig -n openstack

    If you see errors, check the underlying network attachment definitions and node network configuration policies:

    $ oc get network-attachment-definitions -n openstack
    $ oc get nncp
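
  2. Optionally, confirm that the BGPConfiguration CR exists and, if your environment uses the MetalLB FRR-K8s integration, that FRRConfiguration resources were generated for the pods. Treat the second command as an optional check because the FRRConfiguration resource type might not be present in every deployment:

    $ oc get bgpconfiguration -n openstack

    $ oc get frrconfiguration -A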