Deploying multiple RHOSO environments on a single RHOCP cluster


Red Hat OpenStack Services on OpenShift 18.0

Deploying multiple Red Hat OpenStack Services on OpenShift environments on a single Red Hat OpenShift Container Platform cluster

OpenStack Documentation Team

Abstract

Learn how to deploy multiple Red Hat OpenStack Services on OpenShift (RHOSO) environments on a single Red Hat OpenShift Container Platform (RHOCP) cluster by using namespace separation.

Providing feedback on Red Hat documentation

We appreciate your feedback. Tell us how we can improve the documentation.

To provide documentation feedback for Red Hat OpenStack Services on OpenShift (RHOSO), create a Jira issue in the OSPRH Jira project.

Procedure

  1. Log in to the Red Hat Atlassian Jira.
  2. Click the following link to open a Create Issue page: Create issue
  3. Complete the Summary and Description fields. In the Description field, include the documentation URL, chapter or section number, and a detailed description of the issue.
  4. Click Create.
  5. Review the details of the issue you created.

You can deploy multiple independent Red Hat OpenStack Services on OpenShift (RHOSO) environments on a single Red Hat OpenShift Container Platform (RHOCP) cluster by using namespace separation. To deploy the RHOSO environments, you create an isolated namespace and isolated networks for each environment, and then use the procedures in Deploying Red Hat OpenStack Services on OpenShift to create the control plane and the data plane in each namespace.

Support
Red Hat supports up to five RHOSO environments on a single cluster with namespace separation, for example, environments for development, testing, staging, and production.
Limitations
  • Telemetry visualization is not available in more than one namespace.
  • Multiple zones are not supported. Therefore, verified architectures such as distributed compute nodes (DCN) and distributed zones are not supported.
Operator and OpenStack versions
All RHOSO deployments that are hosted on the same RHOCP cluster by using namespace separation run on the same version of RHOSO, because the OpenStack Operator custom resource definitions (CRDs) are global to the RHOCP cluster. However, each RHOSO deployment can run different versions of the services deployed by RHOSO, because the service containers are managed by the OpenStackVersion custom resource (CR) for each namespace.
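
For example, to check which service container versions a deployment is running, you can inspect the OpenStackVersion CR in that deployment's namespace. The following is a minimal sketch; the osp_ns_1 namespace name is an example:

$ oc get openstackversion -n osp_ns_1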
Updating multiple deployments on a single cluster

When a new version of Red Hat OpenStack Services on OpenShift (RHOSO) is released, you must update all the control planes that are hosted on the Red Hat OpenShift Container Platform (RHOCP) cluster to the new version by performing the minor update procedure on all namespaces.

Before you perform a minor update, ensure that all of your deployed environments are on the same version, as shown in the example after the following list. To avoid introducing issues into every environment at once, do not perform the minor update on all environments at the same time. Perform and test the minor update on your environments in the following order, to verify that the update performs as expected before you roll it out to your production environment:

  1. Development environment
  2. Testing environment
  3. Staging environment
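
To confirm that every environment reports the same version before you begin, you can list the OpenStackVersion CRs in all namespaces. The following is a minimal sketch; inspect each CR for its deployed and available versions:

$ oc get openstackversion --all-namespaces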

To create multiple independent RHOSO environments by using namespace separation, you must complete the following tasks:

  1. Plan the networking for each isolated RHOSO environment.
  2. Create the namespaces for each RHOSO environment.
  3. Create a Secret custom resource (CR) for each namespace to provide secure access to the RHOSO service pods in that namespace.
  4. Configure the NodeNetworkConfigurationPolicy CR for each namespace.
  5. Create NetworkAttachmentDefinition (net-attach-def) CRs for each namespace.
  6. Create IPAddressPool and L2Advertisement CRs for each namespace.
  7. Create NetConfig CRs for each namespace.
  8. Create OpenStackControlPlane CRs for each namespace.
  9. Create OpenStackDataPlaneNodeSet and OpenStackDataPlaneDeployment CRs for each namespace. For more information, see Creating the data plane in Deploying Red Hat OpenStack Services on OpenShift.

1.1. Prerequisites

  • An operational RHOCP cluster, version 4.18, with sufficient resources to accommodate the additional hosted control planes and the additional resource consumption. For the RHOCP system requirements, see Red Hat OpenShift Container Platform cluster requirements in Planning your deployment.
  • The oc command line tool is installed on your workstation.
  • You are logged in to the RHOCP cluster as a user with cluster-admin privileges.
  • The OpenStack Operator (openstack-operator) is installed on the RHOCP cluster. For more information, see Installing and preparing the OpenStack Operator.
  • The RHOCP cluster is prepared for the multiple RHOSO environments:

    • Optional: You have configured node selectors and labels on nodes to dedicate nodes to specific control plane and data plane pods for each RHOSO cloud namespace, as shown in the labeling sketch after this list. For more information, see Configuring Red Hat OpenShift Container Platform nodes for a Red Hat OpenStack Platform deployment.

      Note

      If you have not separated the pods for each namespace by using node selectors and labels, then the control plane and data plane pods for each namespace might be scheduled on the same RHOCP worker nodes.

    • You have created the storage class for the RHOCP cluster. All namespaces use this storage class and its underlying persistent volumes (PVs) by default. You can create separate storage classes and PVs for each namespace if required. For more information, see Creating a storage class.
  • You are using dedicated bare-metal resources for each RHOSO environment.
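
The following is a minimal node-labeling sketch for dedicating worker nodes to one RHOSO environment. The osp-environment=ns-1 label and the worker node names are hypothetical; choose a label key and value that suit your environment:

$ oc label node worker-1 osp-environment=ns-1
$ oc label node worker-2 osp-environment=ns-1

You can then reference the label in a node selector to schedule the control plane and data plane pods for that namespace on the labeled nodes.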

To plan the isolated RHOSO networks for each namespace, you must complete the following tasks:

  1. Review the minimum network requirements for a RHOSO environment. For more information about the RHOSO network requirements, see RHOCP network requirements and Planning your networks in Planning your deployment.
  2. Plan the network topology to support multiple RHOSO environments on the same RHOCP cluster, and plan the RHOSO networks for each namespace.
  3. Ensure that RHOSO-dedicated NICs are present on all RHOCP worker nodes, one for each namespace, to reinforce segregation between the RHOSO environments. These interfaces use VLANs for further network isolation.
  4. Ensure that there is connectivity between the namespace interfaces and the Compute nodes for that namespace.
  5. Ensure that VLAN IDs and subnets for each namespace do not overlap.
  6. Plan the networks for each RHOSO environment for each namespace. Each namespace must have unique network IP addresses for all networks; do not overlap IP addresses or ranges between the namespaces. An example plan for two namespaces follows this list. For more information about the required network values, see the "Default RHOSO networks" table in Default Red Hat OpenStack Services on OpenShift networks in Deploying Red Hat OpenStack Services on OpenShift.
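
For example, the internalapi network for two namespaces might be planned as follows, where the osp_ns_2 values are hypothetical:

Namespace   Network       VLAN ID   CIDR            net-attach-def ipam range
osp_ns_1    internalapi   20        172.17.0.0/24   172.17.0.30 - 172.17.0.70
osp_ns_2    internalapi   30        172.27.0.0/24   172.27.0.30 - 172.27.0.70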

To deploy multiple Red Hat OpenStack Services on OpenShift (RHOSO) environments on a Red Hat OpenShift Container Platform (RHOCP) cluster, create a namespace for each RHOSO environment and provide secure access to the RHOSO service pods for each RHOSO environment.

2.1. Creating the openstack namespace

You must create a namespace within your Red Hat OpenShift Container Platform (RHOCP) environment for the service pods of your Red Hat OpenStack Services on OpenShift (RHOSO) deployment. The service pods of each RHOSO deployment exist in their own namespace within the RHOCP environment.

Prerequisites

  • You are logged in to a workstation that has access to the RHOCP cluster as a user with cluster-admin privileges.

Procedure

  1. Create the openstack project for the deployed RHOSO environment:

    $ oc new-project openstack
  2. Ensure the openstack namespace is labeled to enable privileged pod creation by the OpenStack Operators:

    $ oc get namespace openstack -ojsonpath='{.metadata.labels}' | jq
    {
      "kubernetes.io/metadata.name": "openstack",
      "pod-security.kubernetes.io/enforce": "privileged",
      "security.openshift.io/scc.podSecurityLabelSync": "false"
    }

    If the security context constraint (SCC) is not "privileged", use the following commands to change it:

    $ oc label ns openstack security.openshift.io/scc.podSecurityLabelSync=false --overwrite
    $ oc label ns openstack pod-security.kubernetes.io/enforce=privileged --overwrite
  3. Optional: To remove the need to specify the namespace when executing commands on the openstack namespace, set the default namespace to openstack:

    $ oc project openstack
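
To prepare the namespace for an additional RHOSO environment, repeat this procedure with that environment's namespace name. The following is a minimal sketch, assuming a hypothetical second namespace named openstack2:

$ oc new-project openstack2
$ oc label ns openstack2 security.openshift.io/scc.podSecurityLabelSync=false --overwrite
$ oc label ns openstack2 pod-security.kubernetes.io/enforce=privileged --overwrite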

You must create a Secret custom resource (CR) to provide secure access to the Red Hat OpenStack Services on OpenShift (RHOSO) service pods. The following procedure creates a Secret CR with the required password formats for each service.

For an example Secret CR that generates the required passwords and fernet key for you, see Example Secret CR for secure access to the RHOSO service pods.

Warning

You cannot change a service password once the control plane is deployed. If a service password is changed in osp-secret after deploying the control plane, the service is reconfigured to use the new password but the password is not updated in the Identity service (keystone). This results in a service outage.

Prerequisites

  • You have installed python3-cryptography.

Procedure

  1. Create a Secret CR on your workstation, for example, openstack_service_secret.yaml.
  2. Add the following initial configuration to openstack_service_secret.yaml:

    apiVersion: v1
    data:
      AdminPassword: <base64_password>
      AodhPassword: <base64_password>
      BarbicanPassword: <base64_password>
      BarbicanSimpleCryptoKEK: <base64_fernet_key>
      CeilometerPassword: <base64_password>
      CinderPassword: <base64_password>
      DbRootPassword: <base64_password>
      DesignatePassword: <base64_password>
      GlancePassword: <base64_password>
      HeatAuthEncryptionKey: <base64_password>
      HeatPassword: <base64_password>
      IronicInspectorPassword: <base64_password>
      IronicPassword: <base64_password>
      ManilaPassword: <base64_password>
      MetadataSecret: <base64_password>
      NeutronPassword: <base64_password>
      NovaPassword: <base64_password>
      OctaviaPassword: <base64_password>
      PlacementPassword: <base64_password>
      SwiftPassword: <base64_password>
    kind: Secret
    metadata:
      name: osp-secret
      namespace: openstack
    type: Opaque
    • Replace <base64_password> with a 32-character key that is base64 encoded.

      Note

      The HeatAuthEncryptionKey password must be a 32-character key for Orchestration service (heat) encryption. If you increase the length of the passwords for all other services, ensure that the HeatAuthEncryptionKey password remains 32 characters long.

      You can use the following command to manually generate a base64 encoded password:

      $ echo -n <password> | base64

      Alternatively, if you are using a Linux workstation and you are generating the Secret CR by using a Bash command such as cat, you can replace <base64_password> with the following command to auto-generate random passwords for each service:

      $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
    • Replace the <base64_fernet_key> with a base64 encoded fernet key. You can use the following command to manually generate it:

      $(python3 -c "from cryptography.fernet import Fernet; print(Fernet.generate_key().decode('UTF-8'))" | base64)
  3. Create the Secret CR in the cluster:

    $ oc create -f openstack_service_secret.yaml -n openstack
  4. Verify that the Secret CR is created:

    $ oc describe secret osp-secret -n openstack

You must create a Secret custom resource (CR) file to provide secure access to the Red Hat OpenStack Services on OpenShift (RHOSO) service pods.

If you are using a Linux workstation, you can create a Secret CR file called openstack_service_secret.yaml by using the following Bash cat command that generates the required passwords and fernet key for you:

$ cat <<EOF > openstack_service_secret.yaml
apiVersion: v1
data:
  AdminPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  AodhPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  BarbicanPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  BarbicanSimpleCryptoKEK: $(python3 -c "from cryptography.fernet import Fernet; print(Fernet.generate_key().decode('UTF-8'))" | base64)
  CeilometerPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  CinderPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  DbRootPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  DesignatePassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  GlancePassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  HeatAuthEncryptionKey: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  HeatPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  IronicInspectorPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  IronicPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  ManilaPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  MetadataSecret: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  NeutronPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  NovaPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  OctaviaHeartbeatKey: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  OctaviaPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  PlacementPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  SwiftPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
kind: Secret
metadata:
  name: osp-secret
  namespace: openstack
type: Opaque
EOF

To prepare the Red Hat OpenShift Container Platform (RHOCP) cluster for your multiple Red Hat OpenStack Services on OpenShift (RHOSO) environments, you must configure the RHOCP networks on your RHOCP cluster.

The following physical data center networks are typically implemented for a Red Hat OpenStack Services on OpenShift (RHOSO) deployment:

  • Control plane network: used by the OpenStack Operator for Ansible SSH access to deploy and connect to the data plane nodes from the Red Hat OpenShift Container Platform (RHOCP) environment. This network is also used by data plane nodes for live migration of instances.
  • External network: (optional) used when required for your environment. For example, you might create an external network for any of the following purposes:

    • To provide virtual machine instances with Internet access.
    • To create flat provider networks that are separate from the control plane.
    • To configure VLAN provider networks on a separate bridge from the control plane.
    • To provide access to virtual machine instances with floating IPs on a network other than the control plane network.
  • Internal API network: used for internal communication between RHOSO components.
  • Storage network: used for block storage, RBD, NFS, FC, and iSCSI.
  • Tenant (project) network: used for data communication between virtual machine instances within the cloud deployment.
  • Octavia controller network: (optional) used to connect Load-balancing service (octavia) controllers running in the control plane.
  • Storage Management network: (optional) used by storage components. For example, Red Hat Ceph Storage uses the Storage Management network in a hyperconverged infrastructure (HCI) environment as the cluster_network to replicate data.

    Note

    For more information about Red Hat Ceph Storage network configuration, see "Ceph network configuration" in the Red Hat Ceph Storage Configuration Guide.

The following table details the default networks used in a RHOSO deployment. If required, you can update the networks for your environment.

Note

By default, the control plane and external networks do not use VLANs. Networks that do not use VLANs must be placed on separate NICs. You can use a VLAN for the control plane network on new RHOSO deployments. You can also use the native VLAN on a trunked interface as the non-VLAN network. For example, you can have the control plane and the internal API on one NIC, and the external network with no VLAN on a separate NIC.

Table 3.1. Default RHOSO networks

Network name   CIDR               NetConfig allocationRange           MetalLB IPAddressPool range       net-attach-def ipam range         OCP worker nncp range
ctlplane       192.168.122.0/24   192.168.122.100 - 192.168.122.250   192.168.122.80 - 192.168.122.90   192.168.122.30 - 192.168.122.70   192.168.122.10 - 192.168.122.20
external       10.0.0.0/24        10.0.0.100 - 10.0.0.250             n/a                               n/a                               n/a
internalapi    172.17.0.0/24      172.17.0.100 - 172.17.0.250         172.17.0.80 - 172.17.0.90         172.17.0.30 - 172.17.0.70         172.17.0.10 - 172.17.0.20
storage        172.18.0.0/24      172.18.0.100 - 172.18.0.250         n/a                               172.18.0.30 - 172.18.0.70         172.18.0.10 - 172.18.0.20
tenant         172.19.0.0/24      172.19.0.100 - 172.19.0.250         n/a                               172.19.0.30 - 172.19.0.70         172.19.0.10 - 172.19.0.20
octavia        172.23.0.0/24      n/a                                 n/a                               172.23.0.30 - 172.23.0.70         n/a
storageMgmt    172.20.0.0/24      172.20.0.100 - 172.20.0.250         n/a                               172.20.0.30 - 172.20.0.70         172.20.0.10 - 172.20.0.20

The following table specifies the networks that establish connectivity to the fabric through eth2 and eth3, with different IP addresses for each zone and rack, and a global bgpmainnet network that is used as the source for traffic:

Table 3.2. Zone connectivity

Network name            Zone 0          Zone 1          Zone 2
BGP Net1 (eth2)         100.64.0.0/24   100.64.1.0/24   100.64.2.0/24
BGP Net2 (eth3)         100.65.0.0/24   100.65.1.0/24   100.65.2.0/24
Bgpmainnet (loopback)   99.99.0.0/24    99.99.1.0/24    99.99.2.0/24

The Red Hat OpenStack Services on OpenShift (RHOSO) services run as a Red Hat OpenShift Container Platform (RHOCP) workload. You use the NMState Operator to connect the worker nodes to the required isolated networks. You use the MetalLB Operator to expose internal service endpoints on the isolated networks. By default, the public service endpoints are exposed as RHOCP routes.

Important

The control plane interface name must be consistent across all nodes because network manifests reference the control plane interface name directly. If the control plane interface names are inconsistent, then the RHOSO environment fails to deploy. If the physical interface names are inconsistent on the nodes, you must create a Linux bond that configures a consistent alternative name for the physical interfaces that can be referenced by the other network manifests.
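
The following is a minimal nncp sketch of such a bond, assuming a node whose physical control plane interface is named eno3. Repeat it for each node, substituting that node's interface name, so that every node exposes the same bond0 interface name for the other network manifests to reference:

apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: osp-bond0-worker-1 # hypothetical name; create one policy per node
spec:
  nodeSelector:
    kubernetes.io/hostname: worker-1
  desiredState:
    interfaces:
    - name: bond0 # consistent name referenced by the other network manifests
      type: bond
      state: up
      mtu: 1500
      link-aggregation:
        mode: active-backup
        port:
        - eno3 # this node's physical interface; substitute per node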

Note

The examples in the following procedures use IPv4 addresses. You can use IPv6 addresses instead of IPv4 addresses. Dual stack (IPv4 and IPv6) is available only on project (tenant) networks. For information about how to configure IPv6 addresses, see the RHOCP Networking guide.

The NodeNetworkConfigurationPolicy custom resource (CR) is a cluster-scoped network configuration resource, and is therefore not allocated to a namespace when created. Create a NodeNetworkConfigurationPolicy (nncp) CR for each namespace to configure the interfaces for the isolated networks of that namespace in the RHOCP cluster. The following procedure creates an nncp CR file for a single namespace. Repeat the procedure for each of the namespaces you created, using the nncp ranges you planned for that namespace.

Note

NodeNetworkConfigurationPolicy CRs are cluster-scoped resources, therefore you must ensure their names do not conflict with one another.

Procedure

  1. Create a NodeNetworkConfigurationPolicy (nncp) CR file on your workstation, for example, openstack-nncp.yaml.
  2. Retrieve the names of the worker nodes in the RHOCP cluster:

    $ oc get nodes -l node-role.kubernetes.io/worker -o jsonpath="{.items[*].metadata.name}"
  3. Discover the network configuration:

    $ oc get nns/<worker_node> -o yaml | more
    • Replace <worker_node> with the name of a worker node retrieved in step 2, for example, worker-1. Repeat this step for each worker node.
  4. In the nncp CR file, configure the interfaces for each isolated network for each namespace on each worker node in the RHOCP cluster.

    In the following example, the osp-enp6s0-worker-1 nncp CR configures VLAN interfaces with IPv4 addresses on the enp6s0 interface of the worker-1 worker node, to isolate the networks for a single namespace, osp_ns_1.

    apiVersion: nmstate.io/v1
    kind: NodeNetworkConfigurationPolicy
    metadata:
      name: osp-enp6s0-worker-1
    spec:
      desiredState:
        interfaces:
        - description: internalapi vlan interface osp_ns_1
          ipv4:
            address:
            - ip: 172.17.0.10
              prefix-length: 24
            enabled: true
            dhcp: false
          ipv6:
            enabled: false
          name: internalapi_1
          state: up
          type: vlan
          vlan:
            base-iface: enp6s0
            id: 20
            reorder-headers: true
        - description: storage vlan interface osp_ns_1
          ipv4:
            address:
            - ip: 172.18.0.10
              prefix-length: 24
            enabled: true
            dhcp: false
          ipv6:
            enabled: false
          name: storage_1
          state: up
          type: vlan
          vlan:
            base-iface: enp6s0
            id: 21
            reorder-headers: true
        - description: tenant vlan interface osp_ns_1
          ipv4:
            address:
            - ip: 172.19.0.10
              prefix-length: 24
            enabled: true
            dhcp: false
          ipv6:
            enabled: false
          name: tenant_1
          state: up
          type: vlan
          vlan:
            base-iface: enp6s0
            id: 22
            reorder-headers: true
        - description: ctlplane interface osp_ns_1
          mtu: 1500
          name: enp6s0
          state: up
          type: ethernet
        - bridge:
            options:
              stp:
                enabled: false
            port:
            - name: enp6s0
              vlan: {}
          description: linux-bridge over ctlplane interface osp_ns_1
          ipv4:
            address:
            - ip: 192.168.122.11
              prefix-length: 24
            dhcp: false
            enabled: true
          ipv6:
            enabled: false
          mtu: 1500
          name: ospbr_1
          state: up
          type: linux-bridge
        - description: octavia vlan interface osp_ns_1
          name: enp6s0.24
          state: up
          type: vlan
          vlan:
            base-iface: enp6s0
            id: 24
        - bridge:
            options:
              stp:
                enabled: false
            port:
            - name: enp6s0.24
          description: Configuring bridge octbr osp_ns_1
          mtu: 1500
          name: octbr_1
          state: up
          type: linux-bridge
      nodeSelector:
        kubernetes.io/hostname: worker-1
        node-role.kubernetes.io/worker: ""
  5. Create the nncp CR in the cluster:

    $ oc apply -f openstack-nncp.yaml
  6. Verify that the nncp CR is created:

    $ oc get nncp -w
    NAME                  STATUS        REASON
    osp-enp6s0-worker-1   Progressing   ConfigurationProgressing
    osp-enp6s0-worker-1   Progressing   ConfigurationProgressing
    osp-enp6s0-worker-1   Available     SuccessfullyConfigured

The NetworkAttachmentDefinition custom resource (CR) is a namespace-scoped resource. Create a NetworkAttachmentDefinition (net-attach-def) CR for each isolated network to attach the service pods to the networks. The following procedure creates a net-attach-def CR file for a single namespace. Repeat the procedure for each of the namespaces you created, using the ipam ranges you planned for that namespace.

Procedure

  1. Create a NetworkAttachmentDefinition (net-attach-def) CR file on your workstation, for example, openstack-net-attach-def.yaml.
  2. In the NetworkAttachmentDefinition CR file, configure a NetworkAttachmentDefinition resource for each isolated network to attach a service deployment pod to the network. The following examples create a NetworkAttachmentDefinition resource for the internalapi, storage, ctlplane, and tenant networks of type macvlan, and a NetworkAttachmentDefinition resource for octavia, the load-balancing management network, of type bridge:

    apiVersion: k8s.cni.cncf.io/v1
    kind: NetworkAttachmentDefinition
    metadata:
      name: internalapi
      namespace: osp_ns_1
    spec:
      config: |
        {
          "cniVersion": "0.3.1",
          "name": "internalapi",
          "type": "macvlan",
          "master": "internalapi_1",
          "ipam": {
            "type": "whereabouts",
            "range": "172.17.0.0/24",
            "range_start": "172.17.0.30",
            "range_end": "172.17.0.70"
          }
        }
    ---
    apiVersion: k8s.cni.cncf.io/v1
    kind: NetworkAttachmentDefinition
    metadata:
      name: ctlplane
      namespace: osp_ns_1
    spec:
      config: |
        {
          "cniVersion": "0.3.1",
          "name": "ctlplane",
          "type": "macvlan",
          "master": "ospbr_1",
          "ipam": {
            "type": "whereabouts",
            "range": "192.168.122.0/24",
            "range_start": "192.168.122.30",
            "range_end": "192.168.122.70"
          }
        }
    ---
    apiVersion: k8s.cni.cncf.io/v1
    kind: NetworkAttachmentDefinition
    metadata:
      name: storage
      namespace: osp_ns_1
    spec:
      config: |
        {
          "cniVersion": "0.3.1",
          "name": "storage",
          "type": "macvlan",
          "master": "storage_1",
          "ipam": {
            "type": "whereabouts",
            "range": "172.18.0.0/24",
            "range_start": "172.18.0.30",
            "range_end": "172.18.0.70"
          }
        }
    ---
    apiVersion: k8s.cni.cncf.io/v1
    kind: NetworkAttachmentDefinition
    metadata:
      name: tenant
      namespace: osp_ns_1
    spec:
      config: |
        {
          "cniVersion": "0.3.1",
          "name": "tenant",
          "type": "macvlan",
          "master": "tenant_1",
          "ipam": {
            "type": "whereabouts",
            "range": "172.19.0.0/24",
            "range_start": "172.19.0.30",
            "range_end": "172.19.0.70"
          }
        }
    ---
    apiVersion: k8s.cni.cncf.io/v1
    kind: NetworkAttachmentDefinition
    metadata:
      labels:
        osp/net: octavia
      name: octavia
      namespace: osp_ns_1
    spec:
      config: |
        {
          "cniVersion": "0.3.1",
          "name": "octavia",
          "type": "bridge",
          "bridge": "octbr_1",
          "ipam": {
            "type": "whereabouts",
            "range": "172.23.0.0/24",
            "range_start": "172.23.0.30",
            "range_end": "172.23.0.70",
            "routes": [
               {
                 "dst": "172.24.0.0/16",
                 "gw" : "172.23.0.150"
               }
             ]
          }
        }
    • metadata.namespace: The namespace where the services are deployed.
    • "master": The node interface name associated with the network, as defined in the nncp CR.
    • "ipam": The whereabouts CNI IPAM plug-in assigns IPs to the created pods from the range .30 - .70.
    • "range_start" - "range_end": The IP address pool range must not overlap with the MetalLB IPAddressPool range and the NetConfig allocationRange.
    • The octavia network attachment is required to connect pods that manage load balancer virtual machines (amphorae) and the Open vSwitch pods that are managed by the OVN operator.
  3. Create the NetworkAttachmentDefinition CR in the cluster:

    $ oc apply -f openstack-net-attach-def.yaml
  4. Verify that the NetworkAttachmentDefinition CR is created:

    $ oc get net-attach-def -n osp_ns_1

The IPAddressPool and L2Advertisement custom resources (CRs) are namespace-scoped resources that you must create in the metallb-system namespace for each of the RHOSO environments. You must create an L2Advertisement CR to define how the Virtual IPs (VIPs) are announced, and an IPAddressPool CR to configure which IPs can be used as VIPs. In Layer 2 mode, one node assumes the responsibility of advertising a service to the local network. The following procedure creates an L2Advertisement CR file and an IPAddressPool CR file for a single namespace. Repeat the procedure for each of the namespaces you created, using the MetalLB IPAddressPool ranges you planned for that namespace.

Note

IPAddressPool and L2Advertisement CRs are MetalLB resources that must exist in the metallb-system namespace. The resources for each namespace must use the same metallb-system namespace, therefore you must ensure their names do not conflict with one another.

Procedure

  1. Create an IPAddressPool CR file on your workstation, for example, ipaddresspools_osp_ns_1.yaml.
  2. In the IPAddressPool CR file, configure an IPAddressPool resource on the isolated network to specify the IP address ranges over which MetalLB has authority:

    apiVersion: metallb.io/v1beta1
    kind: IPAddressPool
    metadata:
      name: internalapi_1
      namespace: metallb-system
    spec:
      addresses:
        - 172.17.0.80-172.17.0.90
      autoAssign: true
      avoidBuggyIPs: false
    ---
    apiVersion: metallb.io/v1beta1
    kind: IPAddressPool
    metadata:
      namespace: metallb-system
      name: ctlplane_1
    spec:
      addresses:
        - 192.168.122.80-192.168.122.90
      autoAssign: true
      avoidBuggyIPs: false
    ---
    apiVersion: metallb.io/v1beta1
    kind: IPAddressPool
    metadata:
      namespace: metallb-system
      name: storage_1
    spec:
      addresses:
        - 172.18.0.80-172.18.0.90
      autoAssign: true
      avoidBuggyIPs: false
    ---
    apiVersion: metallb.io/v1beta1
    kind: IPAddressPool
    metadata:
      namespace: metallb-system
      name: tenant_1
    spec:
      addresses:
        - 172.19.0.80-172.19.0.90
      autoAssign: true
      avoidBuggyIPs: false
    • spec.addresses: The IPAddressPool range must not overlap with the whereabouts IPAM range and the NetConfig allocationRange.

    For information about how to configure the other IPAddressPool resource parameters, see Configuring MetalLB address pools in the RHOCP Networking guide.

  3. Create the IPAddressPool CR in the cluster:

    $ oc apply -f ipaddresspools_osp_ns_1.yaml
  4. Verify that the IPAddressPool CR is created:

    $ oc describe -n metallb-system IPAddressPool
  5. Create an L2Advertisement CR file on your workstation, for example, l2advertisement_osp_ns_1.yaml.
  6. In the L2Advertisement CR file, configure L2Advertisement CRs to define which node advertises a service to the local network. Create one L2Advertisement resource for each network.

    In the following example, each L2Advertisement CR specifies that the VIPs requested from the network address pools are announced on the interface that is attached to the VLAN:

    apiVersion: metallb.io/v1beta1
    kind: L2Advertisement
    metadata:
      name: internalapi_1
      namespace: metallb-system
    spec:
      ipAddressPools:
      - internalapi_1
      interfaces:
      - internalapi_1
      nodeSelectors:
      - matchLabels:
          node-role.kubernetes.io/worker: ""
    ---
    apiVersion: metallb.io/v1beta1
    kind: L2Advertisement
    metadata:
      name: ctlplane_1
      namespace: metallb-system
    spec:
      ipAddressPools:
      - ctlplane_1
      interfaces:
      - ospbr_1
      nodeSelectors:
      - matchLabels:
          node-role.kubernetes.io/worker: ""
    ---
    apiVersion: metallb.io/v1beta1
    kind: L2Advertisement
    metadata:
      name: storage_1
      namespace: metallb-system
    spec:
      ipAddressPools:
      - storage_1
      interfaces:
      - storage_1
      nodeSelectors:
      - matchLabels:
          node-role.kubernetes.io/worker: ""
    ---
    apiVersion: metallb.io/v1beta1
    kind: L2Advertisement
    metadata:
      name: tenant_1
      namespace: metallb-system
    spec:
      ipAddressPools:
      - tenant_1
      interfaces:
      - tenant_1
      nodeSelectors:
      - matchLabels:
          node-role.kubernetes.io/worker: ""
    • spec.interfaces: The interface where the VIPs requested from the VLAN address pool are announced.

    For information about how to configure the other L2Advertisement resource parameters, see Configuring MetalLB with a L2 advertisement and label in the RHOCP Networking guide.

  7. Create the L2Advertisement CRs in the cluster:

    $ oc apply -f l2advertisement_osp_ns_1.yaml
  8. Verify that the L2Advertisement CRs are created:

    $ oc get -n metallb-system L2Advertisement
    NAME            IPADDRESSPOOLS    IPADDRESSPOOL SELECTORS   INTERFACES
    ctlplane_1      ["ctlplane_1"]                              ["ospbr_1"]
    internalapi_1   ["internalapi_1"]                           ["internalapi_1"]
    storage_1       ["storage_1"]                               ["storage_1"]
    tenant_1        ["tenant_1"]                                ["tenant_1"]
  9. If your cluster has OVNKubernetes as the network back end, then you must enable global forwarding so that MetalLB can work on a secondary network interface.

    1. Check the network back end used by your cluster:

      $ oc get network.operator cluster --output=jsonpath='{.spec.defaultNetwork.type}'
    2. If the back end is OVNKubernetes, then run the following command to enable global IP forwarding:

      $ oc patch network.operator cluster -p '{"spec":{"defaultNetwork":{"ovnKubernetesConfig":{"gatewayConfig":{"ipForwarding": "Global"}}}}}' --type=merge
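
      You can confirm the change by reading the field back; the value must match the value set by the patch:

      $ oc get network.operator cluster --output=jsonpath='{.spec.defaultNetwork.ovnKubernetesConfig.gatewayConfig.ipForwarding}'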

To create the data plane network, you define a NetConfig custom resource (CR) and specify all the subnets for the data plane networks. You must define at least one control plane network for your data plane. You can also define VLAN networks to create network isolation for composable networks, such as internalapi, storage, and external. Each network definition must include the IP address assignment. The following procedure creates a NetConfig CR file for the osp_ns_1 namespace. Repeat the procedure for each of the namespaces you created, using the allocationRange values you planned for that namespace.

Procedure

  1. Create a file named netconfig_osp_ns_1.yaml on your workstation.
  2. Add the following configuration to netconfig_osp_ns_1.yaml to create the NetConfig CR:

    apiVersion: network.openstack.org/v1beta1
    kind: NetConfig
    metadata:
      name: openstacknetconfig
      namespace: osp_ns_1
  3. In the netconfig_osp_ns_1.yaml file, define the topology for each data plane network. The following example creates isolated networks for the data plane:

    spec:
      networks:
      - name: ctlplane
        dnsDomain: ctlplane_1.example.com
        subnets:
        - name: subnet1
          allocationRanges:
          - end: 192.168.122.120
            start: 192.168.122.100
          - end: 192.168.122.200
            start: 192.168.122.150
          cidr: 192.168.122.0/24
          gateway: 192.168.122.1
      - name: internalapi
        dnsDomain: internalapi_1.example.com
        subnets:
        - name: subnet1
          allocationRanges:
          - end: 172.17.0.250
            start: 172.17.0.100
          excludeAddresses:
          - 172.17.0.10
          - 172.17.0.12
          cidr: 172.17.0.0/24
          vlan: 20
      - name: external
        dnsDomain: external_1.example.com
        subnets:
        - name: subnet1
          allocationRanges:
          - end: 10.0.0.250
            start: 10.0.0.100
          cidr: 10.0.0.0/24
          gateway: 10.0.0.1
      - name: storage
        dnsDomain: storage_1.example.com
        subnets:
        - name: subnet1
          allocationRanges:
          - end: 172.18.0.250
            start: 172.18.0.100
          cidr: 172.18.0.0/24
          vlan: 21
      - name: tenant
        dnsDomain: tenant_1.example.com
        subnets:
        - name: subnet1
          allocationRanges:
          - end: 172.19.0.250
            start: 172.19.0.100
          cidr: 172.19.0.0/24
          vlan: 22
    • spec.networks.name: The name of the network, for example, ctlplane.
    • spec.networks.subnets: The IPv4 subnet specification.
    • spec.networks.subnets.name: The name of the subnet, for example, subnet1.
    • spec.networks.subnets.allocationRanges: The NetConfig allocationRange. The allocationRange must not overlap with the MetalLB IPAddressPool range and the IP address pool range.
    • spec.networks.subnets.excludeAddresses: An optional list of IP addresses from the allocation range that must not be used by data plane nodes.
    • spec.networks.subnets.vlan: The network VLAN. For information about the default RHOSO networks, see Default Red Hat OpenStack Services on OpenShift networks.
  4. Save the netconfig_osp_ns_1.yaml definition file.
  5. Create the data plane network:

    $ oc create -f netconfig_osp_ns_1.yaml -n osp_ns_1
  6. To verify that the data plane network is created, view the openstacknetconfig resource:

    $ oc get netconfig/openstacknetconfig -n osp_ns_1

    If you see errors, check the underlying network-attach-definition and node network configuration policies:

    $ oc get network-attachment-definitions -n osp_ns_1
    $ oc get nncp

You create the control plane and the data plane for each namespace by following the guidance in Deploying Red Hat OpenStack Services on OpenShift, updating the network configuration for each namespace.

4.1. Creating a control plane for a namespace

You create an OpenStackControlPlane custom resource (CR) for each namespace and configure the load balancers with the networks for that namespace.

Procedure

  1. Create an OpenStackControlPlane CR for the osp_ns_1 namespace by following the Creating the control plane procedure in Deploying Red Hat OpenStack Services on OpenShift.
  2. Locate the load balancer configuration for each service and update it with the networks for that namespace:

      <service>:
        ...
            override:
              service:
                internal:
                  metadata:
                    annotations:
                      metallb.universe.tf/address-pool: internalapi_1
                      metallb.universe.tf/allow-shared-ip: internalapi_1
                      metallb.universe.tf/loadBalancerIPs: 172.17.0.80
                  spec:
                    type: LoadBalancer
    • metallb.universe.tf/address-pool: The IPAddressPool from which an IP address is assigned to load balance the service.
    • metallb.universe.tf/allow-shared-ip: The sharing key that allows services to share a single IP address for load balancing. Services that specify the same allow-shared-ip value can share a VIP.

      Note

      You cannot share a VIP with services that use the same port, such as the multiple instances of rabbitmq.

    • metallb.universe.tf/loadBalancerIPs: Assigns a specific VIP from the IPAddressPool range for use when load balancing the service.
  3. Update the NIC mappings for the ovn service to ensure that the OVNController resources for the namespace are segregated from the other namespaces. Configure the nicMappings for the datacentre network to correspond to the dedicated interface for the namespace:

    ...
        ovn:
          enabled: true
          template:
            ovnController:
              nicMappings:
                datacentre: ospbr_1
            ovnDBCluster:
              ovndbcluster-nb:
  4. Update the control plane:

    $ oc apply -f osp_ns_1_control_plane.yaml -n osp_ns_1
  5. Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

    $ oc get openstackcontrolplane -n osp_ns_1
    NAME                 STATUS    MESSAGE
    control-plane-ns-1   Unknown   Setup started

    The OpenStackControlPlane resources are created when the status is "Setup complete".

    Tip

    Append the -w option to the end of the get command to track deployment progress.

  6. Verify that the service pods are running on the correct worker nodes.
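
    A minimal check, assuming the osp_ns_1 namespace; the NODE column in the output shows the worker node where each pod is scheduled:

    $ oc get pods -n osp_ns_1 -o wide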

4.2. Creating a data plane for a namespace

You create an OpenStackDataPlaneNodeSet custom resource (CR) and an OpenStackDataPlaneDeployment CR for each namespace.

Procedure

  1. Create an OpenStackDataPlaneNodeSet CR for the osp_ns_1 namespace by following the Creating the data plane procedures in Deploying Red Hat OpenStack Services on OpenShift.
  2. Locate the nodes section and update the hostName and the ansibleVars.fqdn_internal_api fields with unique names for each node that indicate the namespace the node belongs to:

      nodes:
        edpm-compute-0:
          hostName: edpm_1-compute-0
          networks:
          - name: ctlplane
            subnetName: subnet1
            defaultRoute: true
            fixedIP: 192.168.122.100
          - name: internalapi
            subnetName: subnet1
            fixedIP: 172.17.0.100
          - name: storage
            subnetName: subnet1
            fixedIP: 172.18.0.100
          - name: tenant
            subnetName: subnet1
            fixedIP: 172.19.0.100
          ansible:
            ansibleHost: 192.168.122.100
            ansibleUser: cloud-admin
            ansibleVars:
              fqdn_internal_api: edpm_1-compute-0.example.com
  3. Update the IP addresses of the networks for each node that is defined for the namespace:

      nodes:
        edpm-compute-0:
          ...
          networks:
          - name: ctlplane
            subnetName: subnet1
            defaultRoute: true
            fixedIP: 192.168.122.100
          - name: internalapi
            subnetName: subnet1
            fixedIP: 172.17.0.100
          - name: storage
            subnetName: subnet1
            fixedIP: 172.18.0.100
          - name: tenant
            subnetName: subnet1
            fixedIP: 172.19.0.100
          ansible:
            ansibleHost: 192.168.122.100
            ...
    • fixedIP: Set to the IP address for the network for the osp_ns_1 namespace.
  4. Save the updated OpenStackDataPlaneNodeSet CR definition file.
  5. Apply the updated OpenStackDataPlaneNodeSet CR configuration:

    $ oc apply -f osp_ns_1_data_plane.yaml -n osp_ns_1
  6. Verify that the data plane resource has been updated by confirming that the status is SetupReady:

    $ oc wait openstackdataplanenodeset data-plane-ns-1 -n osp_ns_1 --for condition=SetupReady --timeout=10m

    When the status is SetupReady, the command returns a condition met message; otherwise, it returns a timeout error.

  7. Create a file on your workstation to define the new OpenStackDataPlaneDeployment CR:

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneDeployment
    metadata:
      name: <node_set_deployment_name>
    • Replace <node_set_deployment_name> with the name of the OpenStackDataPlaneDeployment CR. The name must be unique, must consist of lower case alphanumeric characters, - (hyphen) or . (period), and must start and end with an alphanumeric character.
    Tip

    Give the definition file and the OpenStackDataPlaneDeployment CR unique and descriptive names that indicate the purpose of the modified node set.

  8. Add the OpenStackDataPlaneNodeSet CR that you modified:

    spec:
      nodeSets:
        - <nodeSet_name>
  9. Save the OpenStackDataPlaneDeployment CR deployment file.
  10. Deploy the modified OpenStackDataPlaneNodeSet CR:

    $ oc create -f osp_ns_1_data_plane_deploy.yaml -n osp_ns_1

    You can view the Ansible logs while the deployment executes:

    $ oc get pod -l app=openstackansibleee -w
    $ oc logs -l app=openstackansibleee -f --max-log-requests 10

    If the oc logs command returns an error similar to the following error, increase the --max-log-requests value:

    error: you are attempting to follow 19 log streams, but maximum allowed concurrency is 10, use --max-log-requests to increase the limit
  11. Verify that the modified OpenStackDataPlaneNodeSet CR is deployed:

    $ oc get openstackdataplanedeployment -n osp_ns_1
    NAME              STATUS   MESSAGE
    data-plane-ns-1   True     Setup Complete


    $ oc get openstackdataplanenodeset -n osp_ns_1
    NAME              STATUS   MESSAGE
    data-plane-ns-1   True     NodeSet Ready

    If the status indicates that the data plane has not been deployed, then troubleshoot the deployment. For information, see Troubleshooting the data plane creation and deployment in the Deploying Red Hat OpenStack Services on OpenShift guide.

Legal Notice

Copyright © Red Hat.
Except as otherwise noted below, the text of and illustrations in this documentation are licensed by Red Hat under the Creative Commons Attribution-Share Alike 3.0 Unported license. If you distribute this document or an adaptation of it, you must provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, the Red Hat logo, JBoss, Hibernate, and RHCE are trademarks or registered trademarks of Red Hat, Inc. or its subsidiaries in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
XFS is a trademark or registered trademark of Hewlett Packard Enterprise Development LP or its subsidiaries in the United States and other countries.
The OpenStack® Word Mark and OpenStack logo are trademarks or registered trademarks of the Linux Foundation, used under license.
All other trademarks are the property of their respective owners.