Chapter 3. Preparing networks for Red Hat OpenStack Services on OpenShift


To prepare for configuring and deploying your Red Hat OpenStack Services on OpenShift (RHOSO) environment, you must configure the Red Hat OpenShift Container Platform (RHOCP) networks on your RHOCP cluster.

Note

For information about how to add routes to your RHOCP network configuration, see Adding routes to the RHOCP networks in Customizing the Red Hat OpenStack Services on OpenShift deployment.

If you need a centralized gateway for connection to external networks, you can add OVN gateways to the control plane or to dedicated Networker nodes on the data plane.

For information on adding optional OVN gateways to the control plane, see Configuring OVN gateways for a Red Hat OpenStack Services on OpenShift deployment.

For information on adding Networker nodes to the data plane, see Configuring Networker nodes.

3.1. Default Red Hat OpenStack Services on OpenShift networks

The following physical data center networks are typically implemented for a Red Hat OpenStack Services on OpenShift (RHOSO) deployment:

  • Control plane network: used by the OpenStack Operator for Ansible SSH access to deploy and connect to the data plane nodes from the Red Hat OpenShift Container Platform (RHOCP) environment. This network is also used by data plane nodes for live migration of instances.
  • External network: (optional) used when required for your environment. For example, you might create an external network for any of the following purposes:

    • To provide virtual machine instances with Internet access.
    • To create flat provider networks that are separate from the control plane.
    • To configure VLAN provider networks on a separate bridge from the control plane.
    • To provide access to virtual machine instances with floating IPs on a network other than the control plane network.
  • Internal API network: used for internal communication between RHOSO components.
  • Storage network: used for block storage, RBD, NFS, FC, and iSCSI.
  • Tenant (project) network: used for data communication between virtual machine instances within the cloud deployment.
  • Octavia controller network: used to connect Load-balancing service (octavia) controllers running in the control plane.
  • Designate network: used internally by the DNS service (designate) to manage the DNS servers.
  • Designateext network: used to provide external access to the DNS service resolver and the DNS servers.
  • Storage Management network: (optional) used by storage components. For example, Red Hat Ceph Storage uses the Storage Management network in a hyperconverged infrastructure (HCI) environment as the cluster_network to replicate data.

    Note

    For more information on Red Hat Ceph Storage network configuration, see Ceph network configuration in the Red Hat Ceph Storage Configuration Guide.

The following table details the default networks used in a RHOSO deployment. If required, you can update the networks for your environment.

Note

By default, the control plane and external networks do not use VLANs. Networks that do not use VLANs must be placed on separate NICs. You can use a VLAN for the control plane network on new RHOSO deployments. You can also use the Native VLAN on a trunked interface as the non-VLAN network. For example, you can have the control plane and the internal API on one NIC, and the external network with no VLAN on a separate NIC.

Table 3.1. Default RHOSO networks

| Network name | CIDR | NetConfig allocationRange | MetalLB IPAddressPool range | net-attach-def ipam range | OCP worker nncp range |
|---|---|---|---|---|---|
| ctlplane | 192.168.122.0/24 | 192.168.122.100 - 192.168.122.250 | 192.168.122.80 - 192.168.122.90 | 192.168.122.30 - 192.168.122.70 | 192.168.122.10 - 192.168.122.20 |
| external | 10.0.0.0/24 | 10.0.0.100 - 10.0.0.250 | n/a | n/a | n/a |
| internalapi | 172.17.0.0/24 | 172.17.0.100 - 172.17.0.250 | 172.17.0.80 - 172.17.0.90 | 172.17.0.30 - 172.17.0.70 | 172.17.0.10 - 172.17.0.20 |
| storage | 172.18.0.0/24 | 172.18.0.100 - 172.18.0.250 | n/a | 172.18.0.30 - 172.18.0.70 | 172.18.0.10 - 172.18.0.20 |
| tenant | 172.19.0.0/24 | 172.19.0.100 - 172.19.0.250 | n/a | 172.19.0.30 - 172.19.0.70 | 172.19.0.10 - 172.19.0.20 |
| octavia | 172.23.0.0/24 | n/a | n/a | 172.23.0.30 - 172.23.0.70 | n/a |
| designate | 172.26.0.0/24 | n/a | n/a | 172.26.0.30 - 172.26.0.70 | 172.26.0.10 - 172.26.0.20 |
| designateext | 172.34.0.0/24 | n/a | 172.34.0.80 - 172.34.0.120 | 172.34.0.30 - 172.34.0.70 | 172.34.0.10 - 172.34.0.20 |
| storageMgmt | 172.20.0.0/24 | 172.20.0.100 - 172.20.0.250 | n/a | 172.20.0.30 - 172.20.0.70 | 172.20.0.10 - 172.20.0.20 |

The following table lists the networks that connect to the fabric through eth2 and eth3, which use different IP addresses for each zone and rack, and the global bgpmainnet loopback network that is used as the source address for traffic:

Table 3.2. Zone connectivity

| Network name | Zone 0 | Zone 1 | Zone 2 |
|---|---|---|---|
| BGP Net1 (eth2) | 100.64.0.0/24 | 100.64.1.0/24 | 100.64.2.0/24 |
| BGP Net2 (eth3) | 100.65.0.0/24 | 100.65.1.0/24 | 100.65.2.0/24 |
| Bgpmainnet (loopback) | 99.99.0.0/24 | 99.99.1.0/24 | 99.99.2.0/24 |

3.2. Preparing RHOCP for RHOSO networks

The Red Hat OpenStack Services on OpenShift (RHOSO) services run as a Red Hat OpenShift Container Platform (RHOCP) workload. You use the NMState Operator to connect the worker nodes to the required isolated networks. You use the MetalLB Operator to expose internal service endpoints on the isolated networks. The public service endpoints are exposed as RHOCP routes by default, because only routes are supported for public endpoints.

Important

The control plane interface name must be consistent across all nodes because network manifests reference the control plane interface name directly. If the control plane interface names are inconsistent, then the RHOSO environment fails to deploy. If the physical interface names are inconsistent on the nodes, you must create a Linux bond that configures a consistent alternative name for the physical interfaces that can be referenced by the other network manifests.
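
If you need to create such a bond, the following is a minimal sketch of an interface entry that you could add to the interfaces list of a NodeNetworkConfigurationPolicy CR. The bond name bond0, the active-backup mode, and the member interfaces eno1 and eno2 are placeholders for your environment:

    - name: bond0
      description: Bond that provides a consistent interface name across nodes
      type: bond
      state: up
      link-aggregation:
        mode: active-backup
        port:
        - eno1
        - eno2

You can then assign the control plane IP address to bond0 and reference it as the base-iface for the VLAN interfaces, in the same way that enp6s0 is used in the procedure that follows.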

Note

The examples in the following procedures use IPv4 addresses. You can use IPv6 addresses instead of IPv4 addresses. Dual-stack IPv4/IPv6 is not available. For information about how to configure IPv6 addresses, see the RHOCP Networking guide.

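For orientation only, the following is a minimal sketch of how the ipv6 stanza of an interface definition could look if you use IPv6 instead of IPv4 in the nncp examples that follow. The fd00:aaaa:0:1::10 address is a placeholder:

    ipv6:
      enabled: true
      dhcp: false
      autoconf: false
      address:
      - ip: fd00:aaaa:0:1::10
        prefix-length: 64
    ipv4:
      enabled: false
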
3.2.1. Preparing RHOCP with isolated network interfaces

Create a NodeNetworkConfigurationPolicy (nncp) CR to configure the interfaces for each isolated network on each worker node in the RHOCP cluster.

Procedure

  1. Create a NodeNetworkConfigurationPolicy (nncp) CR file on your workstation, for example, openstack-nncp.yaml.
  2. Retrieve the names of the worker nodes in the RHOCP cluster:

    $ oc get nodes -l node-role.kubernetes.io/worker -o jsonpath="{.items[*].metadata.name}"
  3. Discover the network configuration:

    $ oc get nns/<worker_node> -o yaml | more
    • Replace <worker_node> with the name of a worker node retrieved in step 2, for example, worker-1. Repeat this step for each worker node.
  4. In the nncp CR file, configure the interfaces for each isolated network on each worker node in the RHOCP cluster. For information about the default physical data center networks that must be configured with network isolation, see Default Red Hat OpenStack Services on OpenShift networks.

    In the following example, the nncp CR configures the enp6s0 interface for worker node 1, osp-enp6s0-worker-1, to use VLAN interfaces with IPv4 addresses for network isolation:

    apiVersion: nmstate.io/v1
    kind: NodeNetworkConfigurationPolicy
    metadata:
      name: osp-enp6s0-worker-1
    spec:
      desiredState:
        interfaces:
        - description: internalapi vlan interface
          ipv4:
            address:
            - ip: 172.17.0.10
              prefix-length: 24
            enabled: true
            dhcp: false
          ipv6:
            enabled: false
          name: internalapi
          state: up
          type: vlan
          vlan:
            base-iface: enp6s0
            id: 20
            reorder-headers: true
        - description: storage vlan interface
          ipv4:
            address:
            - ip: 172.18.0.10
              prefix-length: 24
            enabled: true
            dhcp: false
          ipv6:
            enabled: false
          name: storage
          state: up
          type: vlan
          vlan:
            base-iface: enp6s0
            id: 21
            reorder-headers: true
        - description: tenant vlan interface
          ipv4:
            address:
            - ip: 172.19.0.10
              prefix-length: 24
            enabled: true
            dhcp: false
          ipv6:
            enabled: false
          name: tenant
          state: up
          type: vlan
          vlan:
            base-iface: enp6s0
            id: 22
            reorder-headers: true
        - description: Configuring enp6s0
          ipv4:
            address:
            - ip: 192.168.122.10
              prefix-length: 24
            enabled: true
            dhcp: false
          ipv6:
            enabled: false
          mtu: 1500
          name: enp6s0
          state: up
          type: ethernet
        - description: octavia vlan interface
          name: octavia
          state: up
          type: vlan
          vlan:
            base-iface: enp6s0
            id: 24
            reorder-headers: true
        - bridge:
            options:
              stp:
                enabled: false
            port:
            - name: enp6s0.24
          description: Configuring bridge octbr
          mtu: 1500
          name: octbr
          state: up
          type: linux-bridge
        - description: designate vlan interface
          ipv4:
            address:
            - ip: 172.26.0.10
              prefix-length: "24"
            dhcp: false
            enabled: true
          ipv6:
            enabled: false
          mtu: 1500
          name: designate
          state: up
          type: vlan
          vlan:
            base-iface: enp7s0
            id: "25"
            reorder-headers: true
        - description: designate external vlan interface
          ipv4:
            address:
            - ip: 172.34.0.10
              prefix-length: "24"
            dhcp: false
            enabled: true
          ipv6:
            enabled: false
          mtu: 1500
          name: designateext
          state: up
          type: vlan
          vlan:
            base-iface: enp7s0
            id: "26"
            reorder-headers: true
      nodeSelector:
        kubernetes.io/hostname: worker-1
        node-role.kubernetes.io/worker: ""
  5. Create the nncp CR in the cluster:

    $ oc apply -f openstack-nncp.yaml
  6. Verify that the nncp CR is created:

    $ oc get nncp -w
    NAME                        STATUS        REASON
    osp-enp6s0-worker-1   Progressing   ConfigurationProgressing
    osp-enp6s0-worker-1   Progressing   ConfigurationProgressing
    osp-enp6s0-worker-1   Available     SuccessfullyConfigured
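
    Optionally, you can confirm that the VLAN interfaces exist on a worker node by querying its NodeNetworkState resource. The following check is a sketch that assumes jsonpath filter support in your oc client; replace worker-1 with your node name:

    $ oc get nns/worker-1 -o jsonpath='{.status.currentState.interfaces[?(@.type=="vlan")].name}'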

3.2.2. Attaching service pods to the isolated networks

Create a NetworkAttachmentDefinition (net-attach-def) custom resource (CR) for each isolated network to attach the service pods to the networks.

Procedure

  1. Create a NetworkAttachmentDefinition (net-attach-def) CR file on your workstation, for example, openstack-net-attach-def.yaml.
  2. In the NetworkAttachmentDefinition CR file, configure a NetworkAttachmentDefinition resource for each isolated network to attach a service deployment pod to the network. The following examples create a NetworkAttachmentDefinition resource for the following networks:

    • internalapi, storage, ctlplane, and tenant networks of type macvlan.
    • octavia, the load-balancing management network, of type bridge. This network attachment connects pods that manage load balancer virtual machines (amphorae) and the Open vSwitch pods that are managed by the OVN operator.
    • designate network used internally by the DNS service (designate) to manage the DNS servers.
    • designateext network used to provide external access to the DNS service resolver and the DNS servers.
    apiVersion: k8s.cni.cncf.io/v1
    kind: NetworkAttachmentDefinition
    metadata:
      name: internalapi
      namespace: openstack
    spec:
      config: |
        {
          "cniVersion": "0.3.1",
          "name": "internalapi",
          "type": "macvlan",
          "master": "internalapi",
          "ipam": {
            "type": "whereabouts",
            "range": "172.17.0.0/24",
            "range_start": "172.17.0.30",
            "range_end": "172.17.0.70"
          }
        }
    ---
    apiVersion: k8s.cni.cncf.io/v1
    kind: NetworkAttachmentDefinition
    metadata:
      name: ctlplane
      namespace: openstack
    spec:
      config: |
        {
          "cniVersion": "0.3.1",
          "name": "ctlplane",
          "type": "macvlan",
          "master": "enp6s0",
          "ipam": {
            "type": "whereabouts",
            "range": "192.168.122.0/24",
            "range_start": "192.168.122.30",
            "range_end": "192.168.122.70"
          }
        }
    ---
    apiVersion: k8s.cni.cncf.io/v1
    kind: NetworkAttachmentDefinition
    metadata:
      name: storage
      namespace: openstack
    spec:
      config: |
        {
          "cniVersion": "0.3.1",
          "name": "storage",
          "type": "macvlan",
          "master": "storage",
          "ipam": {
            "type": "whereabouts",
            "range": "172.18.0.0/24",
            "range_start": "172.18.0.30",
            "range_end": "172.18.0.70"
          }
        }
    ---
    apiVersion: k8s.cni.cncf.io/v1
    kind: NetworkAttachmentDefinition
    metadata:
      name: tenant
      namespace: openstack
    spec:
      config: |
        {
          "cniVersion": "0.3.1",
          "name": "tenant",
          "type": "macvlan",
          "master": "tenant",
          "ipam": {
            "type": "whereabouts",
            "range": "172.19.0.0/24",
            "range_start": "172.19.0.30",
            "range_end": "172.19.0.70"
          }
        }
    ---
    apiVersion: k8s.cni.cncf.io/v1
    kind: NetworkAttachmentDefinition
    metadata:
      labels:
        osp/net: octavia
      name: octavia
      namespace: openstack
    spec:
      config: |
        {
          "cniVersion": "0.3.1",
          "name": "octavia",
          "type": "bridge",
          "bridge": "octbr",
          "ipam": {
            "type": "whereabouts",
            "range": "172.23.0.0/24",
            "range_start": "172.23.0.30",
            "range_end": "172.23.0.70",
            "routes": [
               {
                 "dst": "172.24.0.0/16",
                 "gw" : "172.23.0.150"
               }
             ]
          }
        }
    ---
    apiVersion: k8s.cni.cncf.io/v1
    kind: NetworkAttachmentDefinition
    metadata:
      name: designate
      namespace: openstack
    spec:
      config: |
        {
          "cniVersion": "0.3.1",
          "name": "designate",
          "type": "macvlan",
          "master": "designate",
          "ipam": {
            "type": "whereabouts",
            "range": "172.26.0.0/16",
            "range_start": "172.26.0.30",
            "range_end": "172.26.0.70",
          }
        }
    ---
    apiVersion: k8s.cni.cncf.io/v1
    kind: NetworkAttachmentDefinition
    metadata:
      name: designateext
      namespace: openstack
    spec:
      config: |
        {
          "cniVersion": "0.3.1",
          "name": "designateext",
          "type": "macvlan",
          "master": "designateext",
          "ipam": {
            "type": "whereabouts",
            "range": "172.34.0.0/16",
            "range_start": "172.34.0.30",
            "range_end": "172.34.0.70",
          }
        }
    • metadata.namespace: The namespace where the services are deployed.
    • "master": The node interface name associated with the network, as defined in the nncp CR.
    • "ipam": The whereabouts CNI IPAM plug-in assigns IPs to the created pods from the range .30 - .70.
    • "range_start" - "range_end": The IP address pool range must not overlap with the MetalLB IPAddressPool range and the NetConfig allocationRange.
  3. Create the NetworkAttachmentDefinition CR in the cluster:

    $ oc apply -f openstack-net-attach-def.yaml
  4. Verify that the NetworkAttachmentDefinition CR is created:

    $ oc get net-attach-def -n openstack
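
    The net-attach-def resources are consumed through the k8s.v1.cni.cncf.io/networks pod annotation, which the RHOSO operators set on the service pods for you. The following sketch is for illustration only; it uses a hypothetical test pod to show the mechanism, and the pod name and image are placeholders:

    apiVersion: v1
    kind: Pod
    metadata:
      name: network-test
      namespace: openstack
      annotations:
        k8s.v1.cni.cncf.io/networks: internalapi,storage
    spec:
      containers:
      - name: test
        image: registry.access.redhat.com/ubi9/ubi-minimal
        command: ["sleep", "3600"]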

3.2.3. Preparing RHOCP for RHOSO network VIPs

The Red Hat OpenStack Services on OpenShift (RHOSO) services run as a Red Hat OpenShift Container Platform (RHOCP) workload. You must create an L2Advertisement resource to define how the Virtual IPs (VIPs) are announced, and an IPAddressPool resource to configure which IPs can be used as VIPs. In layer 2 mode, one node assumes the responsibility of advertising a service to the local network.

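MetalLB assigns the VIPs to Kubernetes Services of type LoadBalancer. The RHOSO operators create these Services for you when you deploy the control plane; the following sketch is for illustration only and shows how a hypothetical Service would request an address from the internalapi pool by using the MetalLB annotations:

apiVersion: v1
kind: Service
metadata:
  name: example-internalapi
  namespace: openstack
  annotations:
    metallb.universe.tf/address-pool: internalapi
    metallb.universe.tf/loadBalancerIPs: 172.17.0.80
spec:
  type: LoadBalancer
  selector:
    app: example
  ports:
  - port: 443
    targetPort: 8443
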
Procedure

  1. Create an IPAddressPool CR file on your workstation, for example, openstack-ipaddresspools.yaml.
  2. In the IPAddressPool CR file, configure an IPAddressPool resource on the isolated network to specify the IP address ranges over which MetalLB has authority:

    apiVersion: metallb.io/v1beta1
    kind: IPAddressPool
    metadata:
      namespace: metallb-system
      name: ctlplane
    spec:
      addresses:
        - 192.168.122.80-192.168.122.90
      autoAssign: true
      avoidBuggyIPs: false
    ---
    apiVersion: metallb.io/v1beta1
    kind: IPAddressPool
    metadata:
      name: internalapi
      namespace: metallb-system
    spec:
      addresses:
        - 172.17.0.80-172.17.0.90
      autoAssign: true
      avoidBuggyIPs: false
    ---
    apiVersion: metallb.io/v1beta1
    kind: IPAddressPool
    metadata:
      namespace: metallb-system
      name: storage
    spec:
      addresses:
        - 172.18.0.80-172.18.0.90
      autoAssign: true
      avoidBuggyIPs: false
    ---
    apiVersion: metallb.io/v1beta1
    kind: IPAddressPool
    metadata:
      namespace: metallb-system
      name: tenant
    spec:
      addresses:
        - 172.19.0.80-172.19.0.90
      autoAssign: true
      avoidBuggyIPs: false
    ---
    apiVersion: metallb.io/v1beta1
    kind: IPAddressPool
    metadata:
      namespace: metallb-system
      name: designateext
    spec:
      addresses:
        - 172.34.0.80-172.34.0.120
      autoAssign: true
      avoidBuggyIPs: false
    ---
    • spec.addresses: The IPAddressPool range must not overlap with the whereabouts IPAM range and the NetConfig allocationRange.

    For information about how to configure the other IPAddressPool resource parameters, see Configuring MetalLB address pools in the RHOCP Networking guide.

  3. Create the IPAddressPool CR in the cluster:

    $ oc apply -f openstack-ipaddresspools.yaml
  4. Verify that the IPAddressPool CR is created:

    $ oc describe -n metallb-system IPAddressPool
  5. Create a L2Advertisement CR file on your workstation, for example, openstack-l2advertisement.yaml.
  6. In the L2Advertisement CR file, configure L2Advertisement CRs to define which node advertises a service to the local network. Create one L2Advertisement resource for each network.

    In the following example, each L2Advertisement CR specifies that the VIPs requested from the network address pools are announced on the interface that is attached to the VLAN:

    apiVersion: metallb.io/v1beta1
    kind: L2Advertisement
    metadata:
      name: ctlplane
      namespace: metallb-system
    spec:
      ipAddressPools:
      - ctlplane
      interfaces:
      - enp6s0
      nodeSelectors:
      - matchLabels:
          node-role.kubernetes.io/worker: ""
    ---
    apiVersion: metallb.io/v1beta1
    kind: L2Advertisement
    metadata:
      name: internalapi
      namespace: metallb-system
    spec:
      ipAddressPools:
      - internalapi
      interfaces:
      - internalapi
      nodeSelectors:
      - matchLabels:
          node-role.kubernetes.io/worker: ""
    ---
    apiVersion: metallb.io/v1beta1
    kind: L2Advertisement
    metadata:
      name: storage
      namespace: metallb-system
    spec:
      ipAddressPools:
      - storage
      interfaces:
      - storage
      nodeSelectors:
      - matchLabels:
          node-role.kubernetes.io/worker: ""
    ---
    apiVersion: metallb.io/v1beta1
    kind: L2Advertisement
    metadata:
      name: tenant
      namespace: metallb-system
    spec:
      ipAddressPools:
      - tenant
      interfaces:
      - tenant
      nodeSelectors:
      - matchLabels:
          node-role.kubernetes.io/worker: ""
    ---
    apiVersion: metallb.io/v1beta1
    kind: L2Advertisement
    metadata:
      name: designateext
      namespace: metallb-system
    spec:
      ipAddressPools:
      - designateext
      interfaces:
      - designateext
      nodeSelectors:
      - matchLabels:
          node-role.kubernetes.io/worker: ""
    • spec.interfaces: The interface where the VIPs requested from the VLAN address pool are announced.

    For information about how to configure the other L2Advertisement resource parameters, see Configuring MetalLB with a L2 advertisement and label in the RHOCP Networking guide.

  7. Create the L2Advertisement CRs in the cluster:

    $ oc apply -f openstack-l2advertisement.yaml
  8. Verify that the L2Advertisement CRs are created:

    $ oc get -n metallb-system L2Advertisement
    NAME          IPADDRESSPOOLS    IPADDRESSPOOL SELECTORS   INTERFACES
    ctlplane      ["ctlplane"]                                ["enp6s0"]
    designateext  ["designateext"]                            ["designateext"]
    internalapi   ["internalapi"]                             ["internalapi"]
    storage       ["storage"]                                 ["storage"]
    tenant        ["tenant"]                                  ["tenant"]
  9. If your cluster has OVNKubernetes as the network back end, then you must enable global forwarding so that MetalLB can work on a secondary network interface.

    1. Check the network back end used by your cluster:

      $ oc get network.operator cluster --output=jsonpath='{.spec.defaultNetwork.type}'
    2. If the back end is OVNKubernetes, then run the following command to enable global IP forwarding:

      $ oc patch network.operator cluster -p '{"spec":{"defaultNetwork":{"ovnKubernetesConfig":{"gatewayConfig":{"ipForwarding": "Global"}}}}}' --type=merge
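
      To confirm that the change was applied, you can query the same field that the patch modifies; the command returns Global when forwarding is enabled:

      $ oc get network.operator cluster -o jsonpath='{.spec.defaultNetwork.ovnKubernetesConfig.gatewayConfig.ipForwarding}'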

3.3. Creating the data plane network

To create the data plane network, you define a NetConfig custom resource (CR) and specify all the subnets for the data plane networks. You must define at least one control plane network for your data plane. You can also define VLAN networks to create network isolation for composable networks, such as InternalAPI, Storage, and External. Each network definition must include the IP address assignment.

Tip

Use the following commands to view the NetConfig CRD definition and specification schema:

$ oc describe crd netconfig

$ oc explain netconfig.spec
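
You can also drill into nested fields with oc explain, for example to inspect the subnet schema used in the following procedure:

$ oc explain netconfig.spec.networks.subnets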

Procedure

  1. Create a file named openstack_netconfig.yaml on your workstation.
  2. Add the following configuration to openstack_netconfig.yaml to create the NetConfig CR:

    apiVersion: network.openstack.org/v1beta1
    kind: NetConfig
    metadata:
      name: openstacknetconfig
      namespace: openstack
  3. In the openstack_netconfig.yaml file, define the topology for each data plane network. To use the default Red Hat OpenStack Services on OpenShift (RHOSO) networks, you must define a specification for each network. For information about the default RHOSO networks, see Default Red Hat OpenStack Services on OpenShift networks. The following example creates isolated networks for the data plane:

    spec:
      networks:
      - name: CtlPlane
        dnsDomain: ctlplane.example.com
        subnets:
        - name: subnet1
          allocationRanges:
          - end: 192.168.122.120
            start: 192.168.122.100
          - end: 192.168.122.200
            start: 192.168.122.150
          cidr: 192.168.122.0/24
          gateway: 192.168.122.1
      - name: InternalApi
        dnsDomain: internalapi.example.com
        subnets:
        - name: subnet1
          allocationRanges:
          - end: 172.17.0.250
            start: 172.17.0.100
          excludeAddresses:
          - 172.17.0.10
          - 172.17.0.12
          cidr: 172.17.0.0/24
          vlan: 20
      - name: External
        dnsDomain: external.example.com
        subnets:
        - name: subnet1
          allocationRanges:
          - end: 10.0.0.250
            start: 10.0.0.100
          cidr: 10.0.0.0/24
          gateway: 10.0.0.1
      - name: Storage
        dnsDomain: storage.example.com
        subnets:
        - name: subnet1
          allocationRanges:
          - end: 172.18.0.250
            start: 172.18.0.100
          cidr: 172.18.0.0/24
          vlan: 21
      - name: Tenant
        dnsDomain: tenant.example.com
        subnets:
        - name: subnet1
          allocationRanges:
          - end: 172.19.0.250
            start: 172.19.0.100
          cidr: 172.19.0.0/24
          vlan: 22
    • spec.networks.name: The name of the network, for example, CtlPlane.
    • spec.networks.subnets: The IPv4 subnet specification.
    • spec.networks.subnets.name: The name of the subnet, for example, subnet1.
    • spec.networks.subnets.allocationRanges: The NetConfig allocationRange. The allocationRange must not overlap with the MetalLB IPAddressPool range or the net-attach-def IPAM range.
    • spec.networks.subnets.excludeAddresses: Optional: List of IP addresses from the allocation range that must not be used by data plane nodes.
    • spec.networks.subnets.vlan: The network VLAN. For information about the default RHOSO networks, see Default Red Hat OpenStack Services on OpenShift networks.
  4. Save the openstack_netconfig.yaml definition file.
  5. Create the data plane network:

    $ oc create -f openstack_netconfig.yaml -n openstack
  6. To verify that the data plane network is created, view the openstacknetconfig resource:

    $ oc get netconfig/openstacknetconfig -n openstack

    If you see errors, check the underlying network attachment definitions and node network configuration policies:

    $ oc get network-attachment-definitions -n openstack
    $ oc get nncp