Deploying a RHOSO environment with distributed zones


Red Hat OpenStack Services on OpenShift 18.0

Deploying a Red Hat OpenStack Services on OpenShift environment with distributed zones

OpenStack Documentation Team

Abstract

Learn how to create a Red Hat OpenStack Services on OpenShift environment with distributed zones.

Providing feedback on Red Hat documentation

We appreciate your feedback. Tell us how we can improve the documentation.

To provide documentation feedback for Red Hat OpenStack Services on OpenShift (RHOSO), create a Jira issue in the OSPRH Jira project.

Procedure

  1. Log in to the Red Hat Atlassian Jira.
  2. Click the following link to open a Create Issue page: Create issue
  3. Complete the Summary and Description fields. In the Description field, include the documentation URL, chapter or section number, and a detailed description of the issue.
  4. Click Create.
  5. Review the details of the bug you created.

You can deploy the Red Hat OpenStack Services on OpenShift (RHOSO) environment across distributed zones. Distributed zones are failure domains that are located in distributed low-latency L3-connected racks, rows, rooms, and data centers. You can deploy the RHOSO control plane across multiple Red Hat OpenShift Container Platform (RHOCP) cluster nodes that are located in the distributed zones, and you can deploy the RHOSO data plane across the same distributed zones.

RHOSO distributed zones architecture

A RHOSO environment with distributed zones is built on a routed spine-leaf network topology. The topology of a distributed control plane environment includes three RHOCP zones. Each zone has at least one worker node that hosts the control plane services and one Compute node.

To create a RHOSO environment with distributed zones, you must complete the following tasks:

  1. Install OpenStack Operator (openstack-operator) on an operational RHOCP cluster.
  2. Provide secure access to the RHOSO services.
  3. Create and configure the control plane network for dynamic routing with border gateway protocol (BGP).
  4. Create and configure the data plane networks for dynamic routing with BGP.
  5. Create the distributed control plane for your environment.
  6. Create and configure the distributed data plane nodes.

You perform the control plane installation tasks and all data plane creation tasks on a workstation that has access to the RHOCP cluster.

Note

You cannot use the provisioning network in a routed spine-leaf network environment. You must configure provisioning to use the RHOCP machine network. The machine network is the network that the RHOCP cluster nodes use to communicate with each other, and it is also the subnet that includes the API and Ingress VIPs. You configure the machine network by specifying the IP address blocks for the nodes that form the cluster in the machineNetwork field of the RHOCP install-config.yaml file. For more details about the RHOCP machine network, see the RHOCP installation documentation.
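For example, the following install-config.yaml excerpt sets the machine network. The CIDR value is illustrative and must match the subnet that your cluster nodes use:

networking:
  machineNetwork:
  - cidr: 192.168.122.0/24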

To plan and prepare to deploy a distributed zone environment, you must understand the requirements and limitations for Red Hat OpenShift Container Platform (RHOCP) clusters that span multiple sites. For more information, see Guidance for Red Hat OpenShift Container Platform Clusters - Deployments Spanning Multiple Sites (Data Centers/Regions).

1.1.1. RHOCP requirements

Your RHOCP cluster must comply with the minimum RHOCP hardware, network, software, and storage requirements that are detailed in Planning your deployment. In addition, to host a distributed zone environment, your RHOCP cluster must comply with the following requirements:

  • The RHOCP cluster must not be a compact cluster.
  • Each zone requires a low-latency interconnect:

    • Etcd for RHOCP requires a round-trip time (RTT) of less than 15 ms. For a basic latency check, see the example after this list.
  • The network equipment must support the BGP protocol and be compatible with FRRouting (FRR).
  • The MetalLB Operator is configured to integrate with FRR-K8s. For more information, see Configuring the integration of MetalLB and FRR-K8s.
  • The following Operators are installed on the RHOCP cluster:

    • The Self Node Remediation (SNR) Operator. For information, see Using Self Node remediation in the Workload Availability for Red Hat OpenShift Remediation, fencing, and maintenance guide.
    • The Node Health Check Operator. For information, see Remediating Nodes with Node Health Checks in the Workload Availability for Red Hat OpenShift Remediation, fencing, and maintenance guide.
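For example, you can run a basic ICMP round-trip check between nodes in different zones. This is a minimal sanity check, not a substitute for proper latency monitoring; the node name and target IP address are illustrative:

$ oc debug node/worker-0 -- chroot /host ping -c 10 <remote_zone_node_ip>

The average RTT that the command reports must be well below the 15 ms etcd limit.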

1.1.2. Storage requirements

  • The RHOCP storage class is defined, and has access to persistent volumes of type ReadWriteOnce.

    Note

    If you use Logical Volume Manager (LVM) storage, the attached volume is not mounted on a new node in the event of a node failure. LVM storage provides only local volumes, and the volume remains assigned to the failed node. This prevents the SNR Operator from automatically rescheduling pods with LVM Storage (LVMS) PVCs. Therefore, if you use LVM storage, you must detach volumes after a non-graceful node shutdown. For more information, see Detach volumes after non-graceful node shutdown.

  • For Red Hat Ceph Storage, a redundant Red Hat Ceph Storage cluster is available in each zone.
  • For third-party storage, local and remote storage array access is configured.
Important

This configuration is Technology Preview when the following storage protocols are used with the Block Storage service, and therefore is not fully supported by Red Hat. It should be used only for testing, and should not be deployed in a production environment:

  • Fibre Channel
  • NFS

For more information, see Technology Preview.

Local access storage configuration

Storage services are co-located with their storage arrays in the same availability zone (AZ).

  • Each AZ contains its own storage array and dedicated storage network.
  • Service pods, such as cinder-volume or manila-share, are deployed on worker nodes within the same AZ as their target storage array.
  • Compute nodes must be on the same storage network to access local storage resources.

    AZ1 setup example

    • Storage array: 10.1.0.6.
    • Storage network: 10.1.0.0/24.
    • The manila-share pod is deployed on the AZ1 worker node with access to 10.1.0.0/24.
    • Compute nodes are connected to the same network for direct array access.
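For illustration, the following hedged OpenStackControlPlane excerpt shows how a Block Storage back end might later be pinned to AZ1. The back-end name and zone label are assumptions; the point of the sketch is the per-back-end nodeSelector and the backend_availability_zone option:

spec:
  cinder:
    template:
      cinderVolumes:
        az1-backend:
          nodeSelector:
            topology.kubernetes.io/zone: zone1
          customServiceConfig: |
            [az1-backend]
            volume_backend_name = az1-backend
            backend_availability_zone = az1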
Remote access storage configuration

Storage arrays in each zone must be accessible from worker nodes in other zones to enable cross-AZ operations, such as image management or volume operations.

Network implementation requirements

  • iSCSI: Configure IP routing between AZ storage networks to enable remote access.
  • Fibre Channel: Configure FC switch zoning to allow cross-AZ access and maintain the same local and remote access patterns as iSCSI.

Use cases

  • Image management example: The Image service (glance) pod in AZ1 requires access to cinder-volume services across AZs so that you can upload images to the local glance store in AZ1 and copy the images to remote glance stores in AZ2 or AZ3.

    Note

    When you use the Block Storage service as a back end for the Image service, volume creation from images can be optimized within each zone’s storage pool. The system uses back-end-assisted cloning instead of downloading image data, which significantly improves performance for boot-from-volume instances and volume creation. This optimization works when the image volume and destination volume are in the same storage pool. For cross-zone operations where volumes are created in different pools, the system uses the traditional download method. For more information, see Volume-from-image optimization with Block Storage back ends.

  • Volume operations example: Retype volumes between different AZs, as shown in the example after this list.
  • Cross-AZ share access example: Grant Compute service (nova) instances in AZ2 access to a Shared File Systems service (manila) share hosted in AZ1. Because network latency between AZs might impact storage performance, administrator policy determines whether to restrict access to the local AZ only for better performance or allow remote AZ access for greater flexibility.
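For example, you can request a cross-AZ retype from the command line. This is a sketch; the volume name and volume type are illustrative, and the volume type must map to a back end in the target AZ:

$ openstack volume set --type <az2_volume_type> --retype-policy on-demand <volume>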

Chapter 2. Installing and preparing the Operators

You install the Red Hat OpenStack Services on OpenShift (RHOSO) OpenStack Operator (openstack-operator) and create the RHOSO control plane on an operational Red Hat OpenShift Container Platform (RHOCP) cluster. You install the OpenStack Operator by using the RHOCP web console. You perform the control plane installation tasks and all data plane creation tasks on a workstation that has access to the RHOCP cluster.

2.1. Prerequisites

  • An operational RHOCP cluster, version 4.18. For the RHOCP system requirements, see Planning the infrastructure for distributed zones.
  • The oc command line tool is installed on your workstation.
  • You are logged in to the RHOCP cluster as a user with cluster-admin privileges.

You can use the Red Hat OpenShift Container Platform (RHOCP) web console to install the OpenStack Operator (openstack-operator) on your RHOCP cluster from the OperatorHub. After you install the Operator, you configure a single instance of the OpenStack Operator initialization resource, OpenStack, to start the OpenStack Operator on your cluster.

Procedure

  1. Log in to the RHOCP web console as a user with cluster-admin permissions.
  2. Select Operators → OperatorHub.
  3. In the Filter by keyword field, type OpenStack.
  4. Click the OpenStack Operator tile with the Red Hat source label.
  5. Read the information about the Operator and click Install.
  6. On the Install Operator page, select "Operator recommended Namespace: openstack-operators" from the Installed Namespace list.
  7. On the Install Operator page, select "Manual" from the Update approval list. For information about how to manually approve a pending Operator update, see Manually approving a pending Operator update in the RHOCP Operators guide.
  8. Click Install to make the Operator available to the openstack-operators namespace. The OpenStack Operator is installed when the Status is Succeeded.
  9. Click Create OpenStack to open the Create OpenStack page.
  10. On the Create OpenStack page, click Create to create an instance of the OpenStack Operator initialization resource. The OpenStack Operator is ready to use when the Status of the openstack instance is Conditions: Ready.
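Optionally, you can confirm the same state from the command line on your workstation:

$ oc get csv -n openstack-operators
$ oc get openstack openstack -n openstack-operators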

You can use the Red Hat OpenShift Container Platform (RHOCP) CLI (oc) to install the OpenStack Operator (openstack-operator) on your RHOCP cluster from the OperatorHub.

To install the OpenStack Operator by using the CLI, you create the openstack-operators namespace for the Red Hat OpenStack Services on OpenShift (RHOSO) service Operators. You then create the OperatorGroup and Subscription custom resources (CRs) within the namespace. After you install the Operator, you configure a single instance of the OpenStack Operator initialization resource, OpenStack, to start the OpenStack Operator on your cluster.

Procedure

  1. Create the openstack-operators namespace for the RHOSO service Operators:

    $ cat << EOF | oc apply -f -
    apiVersion: v1
    kind: Namespace
    metadata:
      name: openstack-operators
    spec:
      finalizers:
      - kubernetes
    EOF
  2. Create the OperatorGroup CR in the openstack-operators namespace:

    $ cat << EOF | oc apply -f -
    apiVersion: operators.coreos.com/v1
    kind: OperatorGroup
    metadata:
      name: openstack
      namespace: openstack-operators
    EOF
  3. Create the Subscription CR that subscribes to openstack-operator:

    $ cat << EOF | oc apply -f -
    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: openstack-operator
      namespace: openstack-operators
    spec:
      name: openstack-operator
      channel: stable-v1.0
      source: redhat-operators
      sourceNamespace: openshift-marketplace
      installPlanApproval: Manual
    EOF
  4. Wait for the install plan to be created, and retrieve its name:

    $ oc get installplan -n openstack-operators -o json | jq -r '.items[] | select(.spec.approval=="Manual" and .spec.approved==false) | .metadata.name' | head -n1
  5. Approve the install plan:

    $ oc patch installplan <install_plan_name> -n openstack-operators --type merge -p '{"spec":{"approved":true}}'
  6. Verify that the OpenStack Operator is installed:

    $ oc wait csv -n openstack-operators \
     -l operators.coreos.com/openstack-operator.openstack-operators="" \
     --for jsonpath='{.status.phase}'=Succeeded
  7. Create an instance of the openstack-operator:

    $ cat << EOF | oc apply -f -
    apiVersion: operator.openstack.org/v1beta1
    kind: OpenStack
    metadata:
      name: openstack
      namespace: openstack-operators
    EOF
  8. Confirm that the OpenStack Operator is deployed:

    $ oc wait openstack/openstack -n openstack-operators --for condition=Ready --timeout=500s

You install Red Hat OpenStack Services on OpenShift (RHOSO) on an operational Red Hat OpenShift Container Platform (RHOCP) cluster. To prepare for installing and deploying your RHOSO environment, you must configure the RHOCP worker nodes and the RHOCP networks on your RHOCP cluster.

Red Hat OpenStack Services on OpenShift (RHOSO) services run on Red Hat OpenShift Container Platform (RHOCP) worker nodes. By default, the OpenStack Operator deploys RHOSO services on any worker node. You can use node labels in your OpenStackControlPlane custom resource (CR) to specify which RHOCP nodes host the RHOSO services. By pinning some services to specific infrastructure nodes rather than running the services on all of your RHOCP worker nodes, you optimize the performance of your deployment.

You can create new labels for the RHOCP nodes, or you can use the existing labels, and then specify those labels in the OpenStackControlPlane CR by using the nodeSelector field. For example, the Block Storage service (cinder) has different requirements for each of its services:

  • The cinder-scheduler service is a very light service with low memory, disk, network, and CPU usage.
  • The cinder-api service has high network usage due to resource listing requests.
  • The cinder-volume service has high disk and network usage because many of its operations are in the data path, such as offline volume migration, and creating a volume from an image.
  • The cinder-backup service has high memory, network, and CPU requirements.

Therefore, you can pin the cinder-api, cinder-volume, and cinder-backup services to dedicated nodes and let the OpenStack Operator place the cinder-scheduler service on a node that has capacity.
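For illustration, the following hedged OpenStackControlPlane excerpt pins the cinder-api and cinder-volume services to labeled nodes. The label keys are assumptions; apply matching labels to your chosen nodes, for example with oc label node <node_name> openstack-cinder-api=true, before you reference them in a nodeSelector:

spec:
  cinder:
    template:
      cinderAPI:
        nodeSelector:
          openstack-cinder-api: "true"
      cinderVolumes:
        volume1:
          nodeSelector:
            openstack-cinder-volume: "true"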

Tip

Alternatively, you can create Topology CRs and use the topologyRef field in your OpenStackControlPlane CR to control service pod placement after your RHOCP cluster has been prepared. For more information, see Controlling service pod placement with Topology CRs.

3.2. Creating the openstack namespace

You must create a namespace within your Red Hat OpenShift Container Platform (RHOCP) environment for the service pods of your Red Hat OpenStack Services on OpenShift (RHOSO) deployment. The service pods of each RHOSO deployment exist in their own namespace within the RHOCP environment.

Prerequisites

  • You are logged on to a workstation that has access to the RHOCP cluster, as a user with cluster-admin privileges.

Procedure

  1. Create the openstack project for the deployed RHOSO environment:

    $ oc new-project openstack
  2. Ensure the openstack namespace is labeled to enable privileged pod creation by the OpenStack Operators:

    $ oc get namespace openstack -ojsonpath='{.metadata.labels}' | jq
    {
      "kubernetes.io/metadata.name": "openstack",
      "pod-security.kubernetes.io/enforce": "privileged",
      "security.openshift.io/scc.podSecurityLabelSync": "false"
    }

    If the security context constraint (SCC) is not "privileged", use the following commands to change it:

    $ oc label ns openstack security.openshift.io/scc.podSecurityLabelSync=false --overwrite
    $ oc label ns openstack pod-security.kubernetes.io/enforce=privileged --overwrite
  3. Optional: To remove the need to specify the namespace when executing commands on the openstack namespace, set the default namespace to openstack:

    $ oc project openstack

You must create a Secret custom resource (CR) to provide secure access to the Red Hat OpenStack Services on OpenShift (RHOSO) service pods. The following procedure creates a Secret CR with the required password formats for each service.

For an example Secret CR that generates the required passwords and fernet key for you, see Example Secret CR for secure access to the RHOSO service pods.

Warning

You cannot change a service password once the control plane is deployed. If a service password is changed in osp-secret after deploying the control plane, the service is reconfigured to use the new password but the password is not updated in the Identity service (keystone). This results in a service outage.

Prerequisites

  • You have installed python3-cryptography.

Procedure

  1. Create a Secret CR on your workstation, for example, openstack_service_secret.yaml.
  2. Add the following initial configuration to openstack_service_secret.yaml:

    apiVersion: v1
    data:
      AdminPassword: <base64_password>
      AodhPassword: <base64_password>
      BarbicanPassword: <base64_password>
      BarbicanSimpleCryptoKEK: <base64_fernet_key>
      CeilometerPassword: <base64_password>
      CinderPassword: <base64_password>
      DbRootPassword: <base64_password>
      DesignatePassword: <base64_password>
      GlancePassword: <base64_password>
      HeatAuthEncryptionKey: <base64_password>
      HeatPassword: <base64_password>
      IronicInspectorPassword: <base64_password>
      IronicPassword: <base64_password>
      ManilaPassword: <base64_password>
      MetadataSecret: <base64_password>
      NeutronPassword: <base64_password>
      NovaPassword: <base64_password>
      OctaviaPassword: <base64_password>
      PlacementPassword: <base64_password>
      SwiftPassword: <base64_password>
    kind: Secret
    metadata:
      name: osp-secret
      namespace: openstack
    type: Opaque
    • Replace <base64_password> with a 32-character key that is base64 encoded.

      Note

      The HeatAuthEncryptionKey password must be a 32-character key for Orchestration service (heat) encryption. If you increase the length of the passwords for all other services, ensure that the HeatAuthEncryptionKey password remains at length 32.

      You can use the following command to manually generate a base64 encoded password:

      $ echo -n <password> | base64

      Alternatively, if you are using a Linux workstation and you are generating the Secret CR by using a Bash command such as cat, you can replace <base64_password> with the following command to auto-generate random passwords for each service:

      $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
    • Replace the <base64_fernet_key> with a base64 encoded fernet key. You can use the following command to manually generate it:

      $(python3 -c "from cryptography.fernet import Fernet; print(Fernet.generate_key().decode('UTF-8'))" | base64)
  3. Create the Secret CR in the cluster:

    $ oc create -f openstack_service_secret.yaml -n openstack
  4. Verify that the Secret CR is created:

    $ oc describe secret osp-secret -n openstack

You must create a Secret custom resource (CR) file to provide secure access to the Red Hat OpenStack Services on OpenShift (RHOSO) service pods.

If you are using a Linux workstation, you can create a Secret CR file called openstack_service_secret.yaml by using the following Bash cat command that generates the required passwords and fernet key for you:

$ cat <<EOF > openstack_service_secret.yaml
apiVersion: v1
data:
  AdminPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  AodhPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  BarbicanPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  BarbicanSimpleCryptoKEK: $(python3 -c "from cryptography.fernet import Fernet; print(Fernet.generate_key().decode('UTF-8'))" | base64)
  CeilometerPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  CinderPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  DbRootPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  DesignatePassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  GlancePassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  HeatAuthEncryptionKey: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  HeatPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  IronicInspectorPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  IronicPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  ManilaPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  MetadataSecret: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  NeutronPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  NovaPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  OctaviaHeartbeatKey: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  OctaviaPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  PlacementPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  SwiftPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
kind: Secret
metadata:
  name: osp-secret
  namespace: openstack
type: Opaque
EOF

To prepare for configuring and deploying your Red Hat OpenStack Services on OpenShift (RHOSO) environment with distributed zones, you must configure the Red Hat OpenShift Container Platform (RHOCP) networks on your RHOCP cluster to use dynamic routing with border gateway protocol (BGP).

Note

The examples in the following procedures use IPv4 addresses. You can use IPv6 addresses instead of IPv4 addresses. Dual stack (IPv4 and IPv6) is available only on project (tenant) networks. For information about how to configure IPv6 addresses, see the following resources in the RHOCP Networking guide:

A typical Red Hat OpenStack Services on OpenShift (RHOSO) deployment features a variety of physical data center networks.

  • BGP networks:

    • bgpnet0 and bgpnet1: define the IP addresses that connect the data plane nodes to their neighboring top-of-rack and leaf routers.
    • bgpmainnet: defines the IP addresses that are configured on the data plane node loopback interface and advertised using BGP to communicate with the spines and leafs.
  • Control plane network: used by the OpenStack Operator for Ansible SSH access to deploy and connect to the data plane nodes from the Red Hat OpenShift Container Platform (RHOCP) environment. This network is also used by data plane nodes for live migration of instances.
  • Internal API network: used for internal communication between RHOSO components.
  • Octavia controller network: (optional) used to connect Load-balancing service (octavia) controllers running in the control plane.
  • Storage network: used for block storage, RBD, NFS, FC, and iSCSI.

    Note

    On RHOSO data plane nodes, you can host services on the bgpmainnet network that you would normally host on the isolated storage and storage management networks in a non-dynamic routing environment. However, you can still host services on the storage and storage management networks if you want to isolate their data plane storage traffic from the bgpmainnet network.

  • Storage Management network: (optional) used by storage components. For example, Red Hat Ceph Storage uses the Storage Management network in a hyperconverged infrastructure (HCI) environment as the cluster_network to replicate data.
  • Tenant (project) network: used for data communication between virtual machine instances within the cloud deployment.

    Note

    For more information about Red Hat Ceph Storage network configuration, see "Ceph network configuration" in the Red Hat Ceph Storage Configuration Guide:

The following table details the default networks used in a RHOSO deployment. If required, you can update the networks for your environment.

Note

To ensure that the ctlplane network can be accessed externally, the MetalLB IPAddressPool and NetworkAttachmentDefinition ipam ranges for the ctlplane network should be on a network that is advertised by BGP. In the OpenStackDataPlaneNodeSet custom resources (CRs), use the network from which the data plane nodes can be reached.

Table 4.1. Default RHOSO networks for BGP

Network name   CIDR               NetConfig allocationRange           MetalLB IPAddressPool range       net-attach-def ipam range
bgpmainnet     N/A                172.30.0.2 - 172.30.1.2             N/A                               N/A
bgpnet0        N/A                100.64.1.0 - 100.64.2.0             N/A                               N/A
bgpnet1        N/A                100.65.1.0 - 100.65.2.0             N/A                               N/A
ctlplane       192.168.122.0/24   192.168.122.100 - 192.168.122.250   192.168.122.80 - 192.168.122.90   192.168.122.30 - 192.168.122.70
internalapi    172.17.0.0/24      N/A                                 172.17.0.80 - 172.17.0.90         172.17.0.30 - 172.17.0.70
octavia        172.23.0.0/24      N/A                                 N/A                               172.23.0.30 - 172.23.0.70
storage        172.18.0.0/24      N/A                                 N/A                               172.18.0.30 - 172.18.0.70
storageMgmt    172.20.0.0/24      N/A                                 N/A                               172.20.0.30 - 172.20.0.70
tenant         172.19.0.0/24      172.19.0.100 - 172.19.0.250         N/A                               172.19.0.30 - 172.19.0.70

The topology of a distributed control plane environment includes three Red Hat OpenShift Container Platform (RHOCP) zones. Each zone has at least one worker node that hosts the control plane services and one Compute node. Each node has two network interfaces, eth2 and eth3, that are configured with the IP addresses for the subnets of the zone in which the node is located.

An additional IP address from the bgpmainnet network is configured on the loopback interface of each node. This is the IP address that the nodes use to communicate with each other. The two BGP NICs, eth2 and eth3, exist only to establish L2 connectivity within the boundaries of a zone. The bgpmainnet network is defined as 99.99.0.0/16 but has subnets for each zone.

The following table specifies the networks that establish connectivity to the fabric through eth2 and eth3, with different IP addresses per zone and rack, and the global bgpmainnet network that is used as the source for traffic.

Table 4.2. Zone connectivity - IPv4 addresses

Network name            Zone 0          Zone 1          Zone 2
BGP Net1 (eth2)         100.64.0.0/24   100.64.1.0/24   100.64.2.0/24
BGP Net2 (eth3)         100.65.0.0/24   100.65.1.0/24   100.65.2.0/24
Bgpmainnet (loopback)   99.99.0.0/24    99.99.1.0/24    99.99.2.0/24

Table 4.3. Zone connectivity - IPv6 addresses

Network name              Zone 0                              Zone 1                              Zone 2
BGP Net1 v6 (eth2)        2620:cf::100:64:0:0/112             2620:cf::100:64:1:0/112             2620:cf::100:64:2:0/112
BGP Net2 v6 (eth3)        2620:cf::100:65:0:0/112             2620:cf::100:65:1:0/112             2620:cf::100:65:2:0/112
Bgpmainnetv6 (loopback)   f00d:f00d:f00d:f00d:99:99:0:0/112   f00d:f00d:f00d:f00d:99:99:1:0/112   f00d:f00d:f00d:f00d:99:99:2:0/112

4.3. Preparing RHOCP for BGP networks on RHOSO

The Red Hat OpenStack Services on OpenShift (RHOSO) services run as a Red Hat OpenShift Container Platform (RHOCP) workload. You use the NMState Operator to connect the worker nodes to the required isolated networks. You use the MetalLB Operator to expose internal service endpoints on the isolated networks. By default, the public service endpoints are exposed as RHOCP routes.

Note

The examples in the following procedures use IPv4 addresses. You can use IPv6 addresses instead of IPv4 addresses. Dual stack (IPv4 and IPv6) is available only on project (tenant) networks. For information about how to configure IPv6 addresses, see the following resources in the RHOCP Networking guide:

In order for Red Hat OpenShift Container Platform (RHOCP) worker nodes to forward traffic based on the BGP advertisements they receive, you must disable the reverse path filters on the BGP interfaces of the RHOCP worker nodes that run RHOSO services.

Procedure

  1. Create a manifest file named tuned.yaml with content similar to the following:

    apiVersion: tuned.openshift.io/v1
    kind: Tuned
    metadata:
      name: default
      namespace: openshift-cluster-node-tuning-operator
    spec:
      profile:
      - data: |
          [main]
          summary=Optimize systems running OpenShift (provider specific parent profile)
          include=-provider-${f:exec:cat:/var/lib/ocp-tuned/provider},openshift
    
          [sysctl]
          net.ipv4.conf.enp8s0.rp_filter=0
          net.ipv4.conf.enp9s0.rp_filter=0
        name: openshift
      recommend:
      - match:
        - label: kubernetes.io/hostname
          value: worker-0
        - label: kubernetes.io/hostname
          value: worker-1
        - label: kubernetes.io/hostname
          value: worker-2
        - label: node-role.kubernetes.io/master
        operand:
          tunedConfig:
            reapply_sysctl: false
        priority: 15
        profile: openshift-no-reapply-sysctl
    status: {}
  2. Save the file and create the Tuned resource:

    $ oc create -f tuned.yaml
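Optionally, verify that the reverse path filter setting was applied on a node; the node and interface names are illustrative:

$ oc debug node/worker-0 -- chroot /host sysctl net.ipv4.conf.enp8s0.rp_filter

The command returns net.ipv4.conf.enp8s0.rp_filter = 0 when the profile is active.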

Create a NodeNetworkConfigurationPolicy (nncp) custom resource (CR) to configure the interfaces for each isolated network on each worker node in the Red Hat OpenShift Container Platform (RHOCP) cluster.

Procedure

  1. Create a NodeNetworkConfigurationPolicy (nncp) CR file on your workstation, for example, openstack-nncp-bgp.yaml.
  2. Retrieve the names of the worker nodes in the RHOCP cluster:

    $ oc get nodes -l node-role.kubernetes.io/worker -o jsonpath="{.items[*].metadata.name}"
  3. Discover the network configuration:

    $ oc get nns/<worker_node> -o yaml | more
    • Replace <worker_node> with the name of a worker node retrieved in step 2, for example, worker-1. Repeat this step for each worker node.
  4. In the nncp CR file, configure the interfaces for each isolated network on each worker node in the RHOCP cluster. For information about the default physical data center networks that must be configured with network isolation, see Default Red Hat OpenStack Services on OpenShift networks for BGP.

    In the following example, the nncp CR configures the multiple unconnected bridges that you use to map the Red Hat OpenStack Services on OpenShift (RHOSO) networks. You use BGP interfaces to peer with the network fabric and establish connectivity. The loopback interface is configured with the BGP network source address, 99.99.0.x. You can optionally dedicate a NIC to the ctlplane network.

    apiVersion: nmstate.io/v1
    kind: NodeNetworkConfigurationPolicy
    metadata:
      labels:
        osp/nncm-config-type: standard
      name: worker-0
      namespace: openstack
    spec:
      desiredState:
        dns-resolver:
          config:
            search: []
            server:
            - 192.168.122.1
        interfaces:
        - description: internalapi bridge
          mtu: 1500
          name: internalapi
          state: up
          type: linux-bridge
        - description: storage bridge
          mtu: 1500
          name: storage
          state: up
          type: linux-bridge
        - description: tenant bridge
          mtu: 1500
          name: tenant
          state: up
          type: linux-bridge
        - description: ctlplane bridge
          mtu: 1500
          name: ospbr
          state: up
          type: linux-bridge
        - description: BGP interface 1
          ipv4:
            address:
            - ip: 100.64.0.14
              prefix-length: 30
            dhcp: false
            enabled: true
          ipv6:
            enabled: false
          mtu: 1500
          name: enp8s0
          state: up
          type: ethernet
        - description: BGP interface 2
          ipv4:
            address:
            - ip: 100.65.0.14
              prefix-length: 30
            dhcp: false
            enabled: true
          ipv6:
            enabled: false
          mtu: 1500
          name: enp9s0
          state: up
          type: ethernet
        - description: loopback interface
          ipv4:
            address:
            - ip: 99.99.0.3
              prefix-length: 32
            dhcp: false
            enabled: true
          mtu: 65536
          name: lo
          state: up
        route-rules:
          config: []
        routes:
          config:
          - destination: 99.99.0.0/16
            next-hop-address: 100.64.0.13
            next-hop-interface: enp8s0
            weight: 200
          - destination: 99.99.0.0/16
            next-hop-address: 100.65.0.13
            next-hop-interface: enp9s0
            weight: 200
      nodeSelector:
        kubernetes.io/hostname: worker-0
        node-role.kubernetes.io/worker: ""
  5. Create the nncp CR in the cluster:

    $ oc apply -f openstack-nncp-bgp.yaml
  6. Verify that the nncp CR is created:

    $ oc get nncp -w
    Sample output
    NAME       STATUS        REASON
    worker-0   Progressing   ConfigurationProgressing
    worker-0   Progressing   ConfigurationProgressing
    worker-0   Available     SuccessfullyConfigured

Create a NetworkAttachmentDefinition (net-attach-def) custom resource (CR) for each isolated network to attach the service pods to the networks.

Procedure

  1. Create a NetworkAttachmentDefinition (net-attach-def) CR file on your workstation, for example, openstack-net-attach-def.yaml.
  2. In the NetworkAttachmentDefinition CR file, configure a NetworkAttachmentDefinition resource for each isolated network to attach a service deployment pod to the network. The following example creates a NetworkAttachmentDefinition resource that uses a type bridge interface with specific gateway configurations and additional options:

    apiVersion: k8s.cni.cncf.io/v1
    kind: NetworkAttachmentDefinition
    metadata:
      labels:
        osp/net: internalapi
        osp/net-attach-def-type: standard
      name: internalapi
      namespace: openstack
    spec:
      config: |
        {
          "cniVersion": "0.3.1",
          "name": "internalapi",
          "type": "bridge",
          "isDefaultGateway": true,
          "isGateway": true,
          "forceAddress": false,
          "ipMasq": true,
          "hairpinMode": true,
          "bridge": "internalapi",
          "ipam": {
            "type": "whereabouts",
            "range": "172.17.0.0/24",
            "range_start": "172.17.0.30",
            "range_end": "172.17.0.70",
            "gateway": "172.17.0.1"
          }
        }
    • metadata.namespace: The namespace where the services are deployed.
    • "name": The node interface name associated with the network, as defined in the nncp CR.
    • "ipMasq": An optional field that, when set to true, enables IP masquerading. If the gateway does not have an IP address, ipMasq has no effect. The default value is false. Set "ipMasq": true when the data plane nodes do not have the necessary routes, or when the data plane nodes do not have connection to the control plane network before Free Range Routing (FRR) is configured.
    • "ipam": The whereabouts CNI IPAM plug-in assigns IPs to the created pods from the range .30 - .70.
    • "range_start" - "range_end": The IP address pool range must not overlap with the MetalLB IPAddressPool range and the NetConfig allocationRange.
  3. Create the NetworkAttachmentDefinition CR in the cluster:

    $ oc apply -f openstack-net-attach-def.yaml
  4. Verify that the NetworkAttachmentDefinition CR is created:

    $ oc get net-attach-def -n openstack

You must create pairs of BGPPeer custom resources (CRs) to define which leaf switch connects to the eth2 and eth3 interfaces on each node. For example, worker-0 has two BGPPeer CRs, one for leaf-0 and one for leaf-1. For information about BGP peers, see Configuring a BGP peer (https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/ingress_and_load_balancing/load-balancing-with-metallb#nw-metallb-configure-bgppeer_configure-metallb-bgp-peers).

You must also create IPAddressPool CRs to define the network ranges to be advertised, and a BGPAdvertisement CR that defines how the BGPPeer CRs are announced and links the IPAddressPool CRs to the BGPPeer CRs that receive the advertisements.

Procedure

  1. Create a BGPPeer CR file on your workstation, for example, bgppeers.yaml.
  2. Configure the pairs of BGPPeer CRs for each node to peer with. The following example configures two BGPPeer CRs for the worker-0 node, one for leaf-0 and one for leaf-1:

    apiVersion: metallb.io/v1beta2
    kind: BGPPeer
    metadata:
      name: bgp-peer-node-0-0
      namespace: metallb-system
    spec:
      myASN: 64999
      nodeSelectors:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - worker-0
      password: r3dh4t1234
      peerASN: 64999
      peerAddress: 100.64.0.13
    ---
    apiVersion: metallb.io/v1beta2
    kind: BGPPeer
    metadata:
      name: bgp-peer-node-0-1
      namespace: metallb-system
    spec:
      myASN: 64999
      nodeSelectors:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - worker-0
      password: r3dh4t1234
      peerASN: 64999
      peerAddress: 100.65.0.13
  3. Create the BGPPeer CRs:

    $ oc create -f bgppeers.yaml
  4. Create an IPAddressPool CR file on your workstation, for example, ipaddresspools-bgp.yaml.
  5. In the IPAddressPool CR file, configure an IPAddressPool resource on each isolated network to specify the IP address ranges over which MetalLB has authority:

    apiVersion: metallb.io/v1beta1
    kind: IPAddressPool
    metadata:
      name: ctlplane
      namespace: metallb-system
    spec:
      addresses:
      - 192.168.125.80-192.168.125.90
    ---
    apiVersion: metallb.io/v1beta1
    kind: IPAddressPool
    metadata:
      name: internalapi
      namespace: metallb-system
    spec:
      addresses:
      - 172.17.0.80-172.17.0.90
    • spec.addresses: The IPAddressPool range must not overlap with the whereabouts IPAM range and the NetConfig allocationRange.

    For information about how to configure the other IPAddressPool resource parameters, see Configuring MetalLB address pools in the RHOCP Networking guide.

  6. Create the IPAddressPool CR in the cluster:

    $ oc apply -f ipaddresspools-bgp.yaml
  7. Verify that the IPAddressPool CR is created:

    $ oc describe -n metallb-system IPAddressPool
  8. Create a BGPAdvertisement CR file on your workstation, for example, bgpadvert.yaml.

    apiVersion: metallb.io/v1beta1
    kind: BGPAdvertisement
    metadata:
      name: bgpadvertisement
      namespace: metallb-system
    spec:
      ipAddressPools:
      - ctlplane
      - internalapi
      - storage
      - tenant
      peers:
      - bgp-peer-node-0-0
      - bgp-peer-node-0-1
      - bgp-peer-node-1-0
      - bgp-peer-node-1-1
      - bgp-peer-node-2-0
      - bgp-peer-node-2-1
      ...
    • peers: Lists all the BGPPeer CRs you defined for the peer IP addresses that each RHOCP node needs to communicate with.
  9. Create the BGPAdvertisement CR in the cluster:

    $ oc apply -f bgpadvert.yaml
  10. If your cluster has OVNKubernetes as the network back end, then you must enable global forwarding so that MetalLB can work on a secondary network interface.

    1. Check the network back end used by your cluster:

      $ oc get network.operator cluster --output=jsonpath='{.spec.defaultNetwork.type}'
    2. If the back end is OVNKubernetes, then run the following command to enable global IP forwarding:

      $ oc patch network.operator cluster -p '{"spec":{"defaultNetwork":{"ovnKubernetesConfig":{"gatewayConfig":{"ipForwarding": "Global"}}}}}' --type=merge
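Optionally, you can inspect the BGP session state from one of the FRR pods. The pod label and container name in the following commands are assumptions that depend on how MetalLB and FRR-K8s are deployed in your environment; adjust them to match your cluster:

$ oc get pods -n metallb-system -l app=frr-k8s
$ oc exec -n metallb-system <frr_k8s_pod> -c frr -- vtysh -c "show bgp summary"

Each configured peer should report an Established session.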

4.4. Creating the data plane network for BGP

To create the data plane network, you define a NetConfig custom resource (CR) and specify all the subnets for the data plane networks. You must define at least one control plane network for your data plane. Each network definition must include the IP address assignment.

Tip

Use the following commands to view the NetConfig CRD definition and specification schema:

$ oc describe crd netconfig

$ oc explain netconfig.spec

Procedure

  1. Create a file named netconfig_bgp.yaml on your workstation.
  2. Add the following configuration to netconfig_bgp.yaml to create the NetConfig CR:

    apiVersion: network.openstack.org/v1beta1
    kind: NetConfig
    metadata:
      name: bgp-netconfig
      namespace: openstack
  3. In the netconfig_bgp.yaml file, define the topology for each data plane network. To use the default Red Hat OpenStack Services on OpenShift (RHOSO) networks, you must define a specification for each network. For information about the default RHOSO networks, see Default Red Hat OpenStack Services on OpenShift networks for BGP. The following example creates isolated networks for the data plane:

    Note

    The examples provided in this step and in later steps include only IPv4 addresses. However, RHOSO also supports IPv6 addresses.

    apiVersion: network.openstack.org/v1beta1
    kind: NetConfig
    metadata:
      name: bgp-netconfig
      namespace: openstack
    spec:
      networks:
      - name: ctlplane
        dnsDomain: ctlplane.example.com
        serviceNetwork: ctlplane
        mtu: 1500
        subnets:
        - name: subnet1
          allocationRanges:
          - end: 192.168.122.120
            start: 192.168.122.100
          - end: 192.168.122.200
            start: 192.168.122.150
          cidr: 192.168.122.0/24
          gateway: 192.168.122.1
        - name: subnet2
          allocationRanges:
          - end: 192.168.123.120
            start: 192.168.123.100
          - end: 192.168.123.200
            start: 192.168.123.150
          cidr: 192.168.123.0/24
          gateway: 192.168.123.1
        - name: subnet3
          allocationRanges:
          - end: 192.168.124.120
            start: 192.168.124.100
          - end: 192.168.124.200
            start: 192.168.124.150
          cidr: 192.168.124.0/24
          gateway: 192.168.124.1
      - name: tenant
        dnsDomain: tenant.example.com
        mtu: 1500
        subnets:
        - name: subnet1
          allocationRanges:
          - end: 172.19.0.250
            start: 172.19.0.100
          cidr: 172.19.0.0/24
        vlan: 22
    • spec.networks.name: The name of the network, for example, ctlplane.
    • spec.networks.subnets: The IPv4 subnet specification.
    • spec.networks.subnets.name: The name of the subnet, for example, subnet1.
    • spec.networks.subnets.allocationRanges: The NetConfig allocationRange. The allocationRange must not overlap with the MetalLB IPAddressPool range and the IP address pool range.
    • spec.networks.subnets.vlan: The network VLAN. For information about the default RHOSO networks, see Default Red Hat OpenStack Services on OpenShift networks for BGP.
  4. In the netconfig_bgp.yaml file, define the network interfaces that establish connectivity within each zone. The following example defines two network interfaces, bgpnet0 for eth2 and bgpnet1 for eth3, with a subnet for each zone:

      - name: bgpnet0
        dnsDomain: bgpnet0.example.com
        serviceNetwork: bgpnet0
        mtu: 1500
        subnets:
        - name: subnet0
          allocationRanges:
          - end: 100.64.0.36
            start: 100.64.0.1
          cidr: 100.64.0.0/24
          gateway: 100.64.0.1
          routes:
          - destination: 0.0.0.0/0
            nexthop: 100.64.0.1
        - name: subnet1
          allocationRanges:
          - end: 100.64.1.36
            start: 100.64.1.1
          cidr: 100.64.1.0/24
          gateway: 100.64.1.1
          routes:
          - destination: 0.0.0.0/0
            nexthop: 100.64.1.1
        - name: subnet2
          allocationRanges:
          - end: 100.64.2.36
            start: 100.64.2.1
          cidr: 100.64.2.0/24
          gateway: 100.64.2.1
          routes:
          - destination: 0.0.0.0/0
            nexthop: 100.64.2.1
      - name: bgpnet1
        dnsDomain: bgpnet1.example.com
        serviceNetwork: bgpnet1
        mtu: 1500
        subnets:
        - name: subnet0
          allocationRanges:
          - end: 100.65.0.36
            start: 100.65.0.1
          cidr: 100.65.0.0/24
          gateway: 100.65.0.1
          routes:
          - destination: 0.0.0.0/0
            nexthop: 100.65.0.1
        - name: subnet1
          allocationRanges:
          - end: 100.65.1.36
            start: 100.65.1.1
          cidr: 100.65.1.0/24
          gateway: 100.65.1.1
          routes:
          - destination: 0.0.0.0/0
            nexthop: 100.65.1.1
        - name: subnet2
          allocationRanges:
          - end: 100.65.2.36
            start: 100.65.2.1
          cidr: 100.65.2.0/24
          gateway: 100.65.2.1
          routes:
          - destination: 0.0.0.0/0
            nexthop: 100.65.2.1
    • name: bgpnet0: The network used by the data plane node to communicate with its BGP peer.
    • name: bgpnet1: The network used by the data plane node to communicate with its BGP peer.
  5. In the netconfig_bgp.yaml file, configure the IP address of the loopback interface, bgpmainnet, used by each node to communicate with each other:

      - name: bgpmainnet
        dnsDomain: bgpmainnet.example.com
        serviceNetwork: bgpmainnet
        mtu: 1500
        subnets:
        - name: subnet0
          allocationRanges:
          - end: 99.99.0.36
            start: 99.99.0.2
          cidr: 99.99.0.0/24
        - name: subnet1
          allocationRanges:
          - end: 99.99.1.36
            start: 99.99.1.2
          cidr: 99.99.1.0/24
        - name: subnet2
          allocationRanges:
          - end: 99.99.2.36
            start: 99.99.2.2
          cidr: 99.99.2.0/24
  6. Save the netconfig_bgp.yaml definition file.
  7. Create the data plane network:

    $ oc create -f netconfig_bgp.yaml -n openstack
  8. Create a BGPConfiguration CR file named bgpconfig.yml to announce the IP addresses of the pods over BGP:

    apiVersion: network.openstack.org/v1beta1
    kind: BGPConfiguration
    metadata:
      name: bgpconfiguration
      namespace: openstack
    spec: {}
  9. Create the BGPConfiguration CR to create the required FRR configurations for each pod:

    $ oc create -f bgpconfig.yml

Verification

  1. Verify that the data plane network is created:

    $ oc get netconfig/bgp-netconfig -n openstack

    If you see errors, check the underlying network-attach-definition and node network configuration policies:

    $ oc get network-attachment-definitions -n openstack
    $ oc get nncp

To prepare your Red Hat OpenStack Services on OpenShift (RHOSO) environment for Red Hat Ceph Storage or third-party storage in distributed zones, you must create Secret custom resources (CRs) to store the storage credentials for each zone.

You must create Secret custom resources (CRs) to store the Red Hat Ceph Storage cluster credentials for each zone in your deployment. The default zone, Zone 1, needs access to the Red Hat Ceph Storage clusters in all zones. The other zones need access to their local Red Hat Ceph Storage cluster and the Red Hat Ceph Storage cluster in the default zone. Create one Secret CR for each zone.

Procedure

  1. Create a Secret CR for the default zone, Zone 1, named ceph-conf-files-az1 that contains the Red Hat Ceph Storage cluster keyrings and conf files for all the zones:

    $ oc create secret generic ceph-conf-files-az1 \
    --from-file=az1.client.openstack.keyring \
    --from-file=az1.conf \
    --from-file=az2.client.openstack.keyring \
    --from-file=az2.conf \
    --from-file=az3.client.openstack.keyring \
    --from-file=az3.conf -n openstack
  2. Create a Secret CR for Zone 2 named ceph-conf-files-az2 that contains the Red Hat Ceph Storage cluster keyrings and conf files for Zone 2 and for the default zone:

    $ oc create secret generic ceph-conf-files-az2 \
    --from-file=az1.client.openstack.keyring \
    --from-file=az1.conf \
    --from-file=az2.client.openstack.keyring \
    --from-file=az2.conf -n openstack
  3. Create a Secret CR for Zone 3 named ceph-conf-files-az3 that contains the Red Hat Ceph Storage cluster keyrings and conf files for Zone 3 and for the default zone:

    $ oc create secret generic ceph-conf-files-az3 \
    --from-file=az1.client.openstack.keyring \
    --from-file=az1.conf \
    --from-file=az3.client.openstack.keyring \
    --from-file=az3.conf -n openstack
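Optionally, verify that each zone Secret exists and contains the expected keyring and conf files:

$ oc describe secret ceph-conf-files-az1 -n openstack
$ oc describe secret ceph-conf-files-az2 -n openstack
$ oc describe secret ceph-conf-files-az3 -n openstack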

You must create Secret custom resources (CRs) to store the third-party storage credentials for each zone in your deployment.

Procedure

  1. Create a Secret CR to store the credentials to connect to the storage service. The following example is for NetApp iSCSI:

    apiVersion: v1
    kind: Secret
    metadata:
      labels:
        component: cinder-volume
        service: cinder
      name: cinder-volume-ontap-secrets-az1
      namespace: openstack
    stringData:
      cinder-volume-ontap-secrets-az1: |
        [ontap-az1]
        netapp_login = <login>
        netapp_password = <password>
        netapp_vserver = <vserver>
        netapp_pool_name_search_pattern = <pool>
  2. Create a separate Secret CR for each zone:

    $ oc create -f cinder-volume-ontap-secrets-az1.yaml
    $ oc create -f cinder-volume-ontap-secrets-az2.yaml
    $ oc create -f cinder-volume-ontap-secrets-az3.yaml

You must create Secret custom resources (CRs) to store the third-party storage credentials for each zone in your deployment. For more information about configuring secrets for the Shared File Systems service (manila), see Creating the server connection secret in Configuring persistent storage.

Procedure

  1. Create a Secret CR to store the credentials to connect to the storage service. The following example is for NetApp NFS:

    apiVersion: v1
    kind: Secret
    metadata:
      name: osp-secret-manila-az1
      namespace: openstack
    stringData:
      netapp-secrets.conf: |
        [nfs_az1]
        netapp_server_hostname = <hostname>
        netapp_login = <login>
        netapp_password = <password>
        netapp_vserver = <vserver>
  2. Create a separate Secret CR for each zone:

    $ oc create -f osp-secret-manila-az1.yaml
    $ oc create -f osp-secret-manila-az2.yaml
    $ oc create -f osp-secret-manila-az3.yaml

Chapter 6. Creating the distributed control plane

The Red Hat OpenStack Services on OpenShift (RHOSO) control plane contains the RHOSO services that manage the cloud. The RHOSO services run as a Red Hat OpenShift Container Platform (RHOCP) workload.

To create a distributed control plane, you must complete the following tasks:

  1. Define the distributed zones by labelling each of the RHOCP nodes with zone names.
  2. Create a Topology custom resource (CR) for each zone in the openstack namespace.
  3. Create a Topology CR that spreads pods across the zones.
  4. Create the distributed control plane with services spread across the zones.

6.1. Prerequisites

  • The OpenStack Operator (openstack-operator) is installed. For more information, see Installing and preparing the Operators.
  • The RHOCP cluster is prepared for RHOSO networks. For more information, see Preparing networks for Red Hat OpenStack Services on OpenShift with distributed zones.
  • The RHOCP cluster is not configured with any network policies that prevent communication between the openstack-operators namespace and the control plane namespace (default openstack). Use the following command to check the existing network policies on the cluster:

    $ oc get networkpolicy -n openstack

    This command returns the message "No resources found in openstack namespace" when there are no network policies. If this command returns a list of network policies, then check that they do not prevent communication between the openstack-operators namespace and the control plane namespace. For more information about network policies, see Network security in the RHOCP Networking guide.

  • You are logged on to a workstation that has access to the RHOCP cluster, as a user with cluster-admin privileges.

6.2. Defining the distributed zones

Define the distributed zones by labelling each of the Red Hat OpenShift Container Platform (RHOCP) nodes with zone names.

Procedure

  1. Add a label to the Node object of the RHOCP node that identifies the zone in which the node is located:

    $ oc label nodes <node_name> topology.kubernetes.io/zone=<zone_name> --overwrite
    • Replace <node_name> with the name of the node, for example, worker-0.
    • Replace <zone_name> with the name of the zone, for example, zone1.
  2. Repeat step 1 for each RHOCP node in your RHOSO environment.
  3. Optional: Review the node zone assignments:

    $ oc get nodes -o json | jq -r '.items[].metadata.labels'

6.3. Creating Topology CRs for a specific zone

Create a Topology custom resource (CR) for each zone in the openstack namespace. You can schedule pods to run in one of these zones. The following procedure creates an example Topology CR. Repeat the procedure to create a Topology CR for every zone in your environment.

Procedure

  1. Create a file on your workstation that defines a Topology CR for a zone, for example, topology_zone1.yaml:

    apiVersion: topology.openstack.org/v1beta1
    kind: Topology
    metadata:
      name: <topology_name>
      namespace: openstack
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: topology.kubernetes.io/zone
                    operator: In
                    values:
                      - <zone_name>
    • Replace <topology_name> with the name for this Topology CR, for example, zone1-node-affinity. The name must be unique, must contain only lowercase alphanumeric characters and - (hyphens) or . (periods), and must start and end with an alphanumeric character.
    • Replace <zone_name> with the name of the zone that is associated with this Topology CR, for example, zone1.
  2. If you have multiple host nodes in each zone, you can configure host anti-affinity to spread the service pods across hosts within the same zone:

    spec:
      ...
      topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: kubernetes.io/hostname
        whenUnsatisfiable: DoNotSchedule
        matchLabelKeys:
          - pod-template-hash
          - controller-revision-hash
    • whenUnsatisfiable: Specifies how the scheduler handles a pod if it does not satisfy the spread constraint:

      • DoNotSchedule: Instructs the scheduler not to schedule the pod. This is the default behavior. To ensure the deployment has high availability (HA), set the HA services rabbitmq and galera to DoNotSchedule.
      • ScheduleAnyway: Instructs the scheduler to schedule the pod in any location, but to give higher precedence to topologies that minimize the skew. If you set HA services to ScheduleAnyway, then when the spread constraint cannot be satisfied, the pod is placed in a different zone. You must then move the pod manually to the correct zone once the zone is operational. For more information about how to manually move pods, see Controlling pod placement onto nodes (scheduling) in RHOCP Nodes.
    • matchLabelKeys: Specifies the label keys to use to group the pods that the affinity rules are applied to, to ensure that the affinity rules are applied only to pods from the same statefulset or deployment resource when scheduling. The matchLabelKeys field enables the resource to be updated with new pods and the spread constraint rules to be applied to only the new set of pods.
  3. Create the Topology CR:

    $ oc create -f topology_zone1.yaml
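Optionally, after you repeat this procedure for every zone, confirm that all the Topology CRs are present:

$ oc get topology -n openstack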

Create a Topology custom resource (CR) that spreads the control plane service pods across the zones. The following procedure creates an example Topology CR. For information about how to control pod placement, see Controlling pod placement onto nodes (scheduling) in the RHOCP Nodes guide.

Procedure

  1. Create a file on your workstation that defines a Topology CR that spreads the control plane service pods across the zones, for example, spread_pods.yaml:

    apiVersion: topology.openstack.org/v1beta1
    kind: Topology
    metadata:
      name: spread-pods
      namespace: openstack
    spec:
      topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: topology.kubernetes.io/zone
        whenUnsatisfiable: DoNotSchedule
        matchLabelKeys:
          - pod-template-hash
          - controller-revision-hash
    • whenUnsatisfiable: Specifies how to deal with a pod if it does not satisfy the spread constraint:

      • DoNotSchedule: Instructs the scheduler not to schedule the pod. This is the default behavior. To ensure the deployment has high availability (HA), set the HA services rabbitmq and galera to DoNotSchedule.
      • ScheduleAnyway: Instructs the scheduler to schedule the pod in any location, but to give higher precedence to topologies that minimize the skew. If you set HA services to ScheduleAnyway, then when the spread constraint cannot be satisfied, the pod is placed in a different zone. You must then move the pod manually to the correct zone once the zone is operational. For more information about how to manually move pods, see Controlling pod placement onto nodes (scheduling) in RHOCP Nodes.
    • matchLabelKeys: Specifies the label keys to use to group the pods that the affinity rules are applied to. Use this field to ensure that the affinity rules are applied only to pods from the same statefulset or deployment resource when scheduling. The matchLabelKeys field enables the resource to be updated with new pods and the spread constraint rules to be applied to only the new set of pods.
  2. If you have multiple host nodes in each zone, you can configure host anti-affinity to spread the service pods across hosts within the same zone:

    spec:
      topologySpreadConstraints:
      ...
      - maxSkew: 1
        topologyKey: kubernetes.io/hostname
        whenUnsatisfiable: DoNotSchedule
        matchLabelKeys:
          - pod-template-hash
          - controller-revision-hash
  3. Create the Topology CR:

    $ oc create -f spread_pods.yaml
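
    Optionally, list the Topology CRs to confirm that they were created:

    $ oc get topology -n openstack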

6.5. Creating the distributed control plane

Define an OpenStackControlPlane custom resource (CR) to perform the following tasks:

  • Create the distributed control plane.
  • Enable the Red Hat OpenStack Services on OpenShift (RHOSO) services.

The following procedure creates a control plane with the service pods spread across all zones by default. The storage services override the default service pod placement to schedule the cinderVolumes, cinderBackup, glanceAPIs, and manilaShares service pods in specific zones. You can use this control plane to troubleshoot issues and test the environment before you add all the customizations that you require, because you can add service customizations to a deployed environment. For more information about how to customize your control plane after deployment, see the Customizing the Red Hat OpenStack Services on OpenShift deployment guide.

Note
  • The following service examples use IP addresses from the default RHOSO MetalLB IPAddressPool range for the loadBalancerIPs field. Update the loadBalancerIPs field with an IP address from the MetalLB IPAddressPool range that you created in Preparing RHOCP for RHOSO network VIPs for BGP.
  • You can place the pods for each service in a specific zone by using the topologyRef field. If not specified, the pods are automatically distributed evenly across the zones.

Prerequisites

  • You have created the Topology CRs that control service pod placement.
  • You have created the Secret CR that provides secure access to the RHOSO service pods.

Procedure

  1. Create a file on your workstation named distributed_control_plane.yaml to define the OpenStackControlPlane CR:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    metadata:
      name: distributed-control-plane
      namespace: openstack
  2. Specify that the service pods, when created, are spread across the zones in your distributed zone environment:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    metadata:
      name: distributed-control-plane
      namespace: openstack
    spec:
      topologyRef:
        name: spread-pods
  3. Specify the Secret CR you created to provide secure access to the RHOSO service pods in Providing secure access to the Red Hat OpenStack Services on OpenShift services:

    spec:
      ...
      secret: osp-secret
  4. Specify the storageClass you created for your Red Hat OpenShift Container Platform (RHOCP) cluster storage back end:

    spec:
      ...
      storageClass: <RHOCP_storage_class>
    • Replace <RHOCP_storage_class> with the storage class you created for your RHOCP cluster storage back end. For information about storage classes, see Creating a storage class.
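
      For example, to list the storage classes that are available on your cluster:

      $ oc get storageclass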
  5. If you are using Red Hat Ceph Storage, define the zones and services that require access to the Red Hat Ceph Storage secret:

     extraMounts:
      - extraVol:
        - extraVolType: Ceph
          mounts:
          - mountPath: /etc/ceph
            name: ceph-az1
            readOnly: true
          propagation:
          - az1
          - CinderBackup
          volumes:
          - name: ceph-az1
            projected:
              sources:
              - secret:
                  name: ceph-conf-files-az1
        - extraVolType: Ceph
          mounts:
          - mountPath: /etc/ceph
            name: ceph-az2
            readOnly: true
          propagation:
          - az2
          volumes:
          - name: ceph-az2
            projected:
              sources:
              - secret:
                  name: ceph-conf-files-az2
        - extraVolType: Ceph
          mounts:
          - mountPath: /etc/ceph
            name: ceph-az3
            readOnly: true
          propagation:
          - az3
          volumes:
          - name: ceph-az3
            projected:
              sources:
              - secret:
                  name: ceph-conf-files-az3
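
     Ensure that the Secret CRs that you reference, for example ceph-conf-files-az1, exist in the openstack namespace before you create the control plane:

      $ oc get secret ceph-conf-files-az1 ceph-conf-files-az2 ceph-conf-files-az3 -n openstack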
  6. Configure the Block Storage service (cinder).

    • If you are using the Block Storage service with Red Hat Ceph Storage, add the following configuration:

        cinder:
          template:
            customServiceConfig: |
              [DEFAULT]
              storage_availability_zone = az1,az2,az3
            databaseInstance: openstack
            secret: osp-secret
            cinderAPI:
              replicas: 3
              override:
                service:
                  internal:
                    metadata:
                      annotations:
                        metallb.universe.tf/address-pool: internalapi
                        metallb.universe.tf/allow-shared-ip: internalapi
                        metallb.universe.tf/loadBalancerIPs: 172.17.0.80
                    spec:
                      type: LoadBalancer
            cinderScheduler:
              replicas: 1
            cinderBackup:
              customServiceConfig: |
                [DEFAULT]
                backup_driver = cinder.backup.drivers.ceph.CephBackupDriver
                backup_ceph_conf = /etc/ceph/az1.conf
                backup_ceph_pool = backups
                backup_ceph_user = openstack
              networkAttachments:
              - storage
              replicas: 1
              topologyRef:
                name: zone1-node-affinity
            cinderVolumes:
              az1:
                customServiceConfig: |
                  [DEFAULT]
                  enabled_backends = ceph
                  glance_api_servers = https://glance-az1-internal.openstack.svc:9292
                  [ceph]
                  volume_backend_name = ceph
                  volume_driver = cinder.volume.drivers.rbd.RBDDriver
                  rbd_ceph_conf = /etc/ceph/az1.conf
                  rbd_user = openstack
                  rbd_pool = volumes
                  rbd_flatten_volume_from_snapshot = False
                  rbd_secret_uuid = 7fd06e2a-a3e4-49fa-980c-1ed0865d2886
                  rbd_cluster_name = az1
                  backend_availability_zone = az1
                topologyRef:
                  name: zone1-node-affinity
                networkAttachments:
                - storage
                replicas: 1
              az2:
                customServiceConfig: |
                  [DEFAULT]
                  enabled_backends = ceph
                  glance_api_servers = https://glance-az2-internal.openstack.svc:9292
                  [ceph]
                  volume_backend_name = ceph
                  volume_driver = cinder.volume.drivers.rbd.RBDDriver
                  rbd_ceph_conf = /etc/ceph/az2.conf
                  rbd_user = openstack
                  rbd_pool = volumes
                  rbd_flatten_volume_from_snapshot = False
                  rbd_secret_uuid = d4596afb-43f2-4e5d-a204-bc38a3485dc3
                  rbd_cluster_name = az2
                  backend_availability_zone = az2
                topologyRef:
                  name: zone2-node-affinity
                networkAttachments:
                - storage
                replicas: 1
              az3:
                customServiceConfig: |
                  [DEFAULT]
                  enabled_backends = ceph
                  glance_api_servers = https://glance-az3-internal.openstack.svc:9292
                  [ceph]
                  volume_backend_name = ceph
                  volume_driver = cinder.volume.drivers.rbd.RBDDriver
                  rbd_ceph_conf = /etc/ceph/az3.conf
                  rbd_user = openstack
                  rbd_pool = volumes
                  rbd_flatten_volume_from_snapshot = False
                  rbd_secret_uuid = 1c348616-c493-4369-91f2-a55e4f404fbe
                  rbd_cluster_name = az3
                  backend_availability_zone = az3
                topologyRef:
                  name: zone3-node-affinity
                networkAttachments:
                - storage
                replicas: 1
    • If you are using the Block Storage service with third-party storage, add the following configuration. In this example, you configure cinderVolumes in three availability zones (AZs) with NetApp:

      cinder:
        apiOverride:
          route:
            haproxy.router.openshift.io/timeout: 60s
        template:
          customServiceConfig: |
            [DEFAULT]
            storage_availability_zone = az1
          cinderAPI:
            override:
              service:
                internal:
                  metadata:
                    annotations:
                      metallb.universe.tf/address-pool: internalapi
                      metallb.universe.tf/allow-shared-ip: internalapi
                      metallb.universe.tf/loadBalancerIPs: 172.17.0.80
                  spec:
                    type: LoadBalancer
            replicas: 3
          cinderBackup:
            customServiceConfig: |
              [DEFAULT]
              # The backup service is disabled in this example (replicas: 0); no backup driver is configured.
            networkAttachments:
            - storage
            replicas: 0
            topologyRef:
              name: azone-node-affinity
          cinderScheduler:
            replicas: 1
          cinderVolumes:
            ontap-iscsi-az1:
              customServiceConfig: |
                [DEFAULT]
                glance_api_servers = https://glance-az1-internal.openstack.svc:9292
                [ontap-az1]
                backend_availability_zone = az1
                volume_backend_name=ontap-az1
                volume_driver=cinder.volume.drivers.netapp.common.NetAppDriver
                netapp_server_hostname=10.0.0.5
                netapp_server_port=80
                netapp_storage_protocol=iscsi
                netapp_storage_family=ontap_cluster
                consistencygroup_support=True
              customServiceConfigSecrets:
              - cinder-volume-secrets-az1
              topologyRef:
                name: azone-node-affinity
            ontap-iscsi-az2:
              customServiceConfig: |
                [DEFAULT]
                glance_api_servers = https://glance-az2-internal.openstack.svc:9292
                [ontap-az2]
                backend_availability_zone = az2
                volume_backend_name=ontap-az2
                volume_driver=cinder.volume.drivers.netapp.common.NetAppDriver
                netapp_server_hostname=10.0.0.6
                netapp_server_port=80
                netapp_storage_protocol=iscsi
                netapp_storage_family=ontap_cluster
                consistencygroup_support=True
              customServiceConfigSecrets:
              - cinder-volume-secrets-az2
              topologyRef:
                name: bzone-node-affinity
            ontap-iscsi-az3:
              customServiceConfig: |
                [DEFAULT]
                glance_api_servers = https://glance-az3-internal.openstack.svc:9292
                [ontap-az3]
                backend_availability_zone = az3
                volume_backend_name=ontap-az3
                volume_driver=cinder.volume.drivers.netapp.common.NetAppDriver
                netapp_server_hostname=10.0.0.7
                netapp_server_port=80
                netapp_storage_protocol=iscsi
                netapp_storage_family=ontap_cluster
                consistencygroup_support=True
              customServiceConfigSecrets:
              - cinder-volume-secrets-az3
              topologyRef:
                name: czone-node-affinity
          databaseInstance: openstack
          preserveJobs: false
          secret: osp-secret
    • In this example, the Block Storage backup service under cinderBackup is disabled by setting replicas to 0. If you require the Block Storage backup service, complete the following steps, as shown in the sketch after this list:

      • Increase the replica count.
      • Set the topologyRef to run the backup service in the relevant AZ.
      • Set the customServiceConfig for the appropriate back end as described in Storage back ends for backups in Configuring persistent storage.
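
      A minimal sketch of an enabled backup service, assuming a Red Hat Ceph Storage back end in az1 as in the earlier Red Hat Ceph Storage example; adjust the driver settings for your back end:

          cinderBackup:
            customServiceConfig: |
              [DEFAULT]
              backup_driver = cinder.backup.drivers.ceph.CephBackupDriver
              backup_ceph_conf = /etc/ceph/az1.conf
              backup_ceph_pool = backups
              backup_ceph_user = openstack
            networkAttachments:
            - storage
            replicas: 1
            topologyRef:
              name: azone-node-affinity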
  7. Configure the Compute service (nova):

      nova:
        template:
          apiServiceTemplate:
            replicas: 3
            override:
              service:
                internal:
                  metadata:
                    annotations:
                      metallb.universe.tf/address-pool: internalapi
                      metallb.universe.tf/allow-shared-ip: internalapi
                      metallb.universe.tf/loadBalancerIPs: 172.17.0.80
                  spec:
                    type: LoadBalancer
          metadataServiceTemplate:
            replicas: 3
            override:
              service:
                metadata:
                  annotations:
                    metallb.universe.tf/address-pool: internalapi
                    metallb.universe.tf/allow-shared-ip: internalapi
                    metallb.universe.tf/loadBalancerIPs: 172.17.0.80
                spec:
                  type: LoadBalancer
          schedulerServiceTemplate:
            replicas: 3
          cellTemplates:
            cell0:
              cellDatabaseAccount: nova-cell0
              cellDatabaseInstance: openstack
              cellMessageBusInstance: rabbitmq
              hasAPIAccess: true
            cell1:
              cellDatabaseAccount: nova-cell1
              cellDatabaseInstance: openstack-cell1
              cellMessageBusInstance: rabbitmq-cell1
              conductorServiceTemplate:
                replicas: 1
              noVNCProxyServiceTemplate:
                enabled: true
                networkAttachments:
                - ctlplane
              hasAPIAccess: true
          secret: osp-secret
  8. Configure the DNS service for the data plane:

      dns:
        template:
          options:
          - key: server
            values:
            - <IP address for DNS server reachable from dnsmasq pod>
          override:
            service:
              metadata:
                annotations:
                  metallb.universe.tf/address-pool: ctlplane
                  metallb.universe.tf/allow-shared-ip: ctlplane
                  metallb.universe.tf/loadBalancerIPs: 192.168.122.80
              spec:
                type: LoadBalancer
          replicas: 2
    • options: Defines the dnsmasq configuration by using key-value pairs. In this example, one key-value pair is defined because only one DNS server is configured to forward requests to.
    • key: Specifies the dnsmasq parameter to customize for the deployed dnsmasq instance. Set to one of the following valid values:

      • server
      • rev-server
      • srv-host
      • txt-record
      • ptr-record
      • rebind-domain-ok
      • naptr-record
      • cname
      • host-record
      • caa-record
      • dns-rr
      • auth-zone
      • synth-domain
      • no-negcache
      • local
    • values: Specifies the value for the DNS server reachable from the dnsmasq pod on the RHOCP cluster network. You can specify a generic DNS server as the value, for example, 1.1.1.1, or a DNS server for a specific domain, for example, /google.com/8.8.8.8.

      Note

      This DNS service, dnsmasq, provides DNS services for nodes on the RHOSO data plane. dnsmasq is different from the RHOSO DNS service (designate) that provides DNS as a service for cloud tenants.
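
      For example, a sketch of an options list that forwards general requests to one DNS server and requests for a hypothetical example.com domain to another; both IP addresses are placeholders:

        options:
        - key: server
          values:
          - 192.0.2.1
        - key: server
          values:
          - /example.com/192.0.2.53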

  9. Configure the Identity service (keystone):

      keystone:
        template:
          override:
            service:
              internal:
                metadata:
                  annotations:
                    metallb.universe.tf/address-pool: internalapi
                    metallb.universe.tf/allow-shared-ip: internalapi
                    metallb.universe.tf/loadBalancerIPs: 172.17.0.80
                spec:
                  type: LoadBalancer
          databaseInstance: openstack
          secret: osp-secret
          replicas: 3
  10. Configure the Image service (glance).

    • If you are using the Image service with Red Hat Ceph Storage, configure the following:

        glance:
          template:
            databaseInstance: openstack
            glanceAPIs:
              az1:
                customServiceConfig: |
                  [DEFAULT]
                  enabled_import_methods = [web-download,copy-image,glance-direct]
                  enabled_backends = az1:rbd
                  [glance_store]
                  default_backend = az1
                  [az1]
                  rbd_store_ceph_conf = /etc/ceph/az1.conf
                  store_description = "az1 RBD backend"
                  rbd_store_pool = images
                  rbd_store_user = openstack
                  rbd_thin_provisioning = True
                networkAttachments:
                - storage
                override:
                  service:
                    internal:
                      metadata:
                        annotations:
                          metallb.universe.tf/address-pool: internalapi
                          metallb.universe.tf/allow-shared-ip: internalapi
                          metallb.universe.tf/loadBalancerIPs: 172.17.0.81
                      spec:
                        type: LoadBalancer
                replicas: 2
                topologyRef:
                  name: zone1-node-affinity
                type: edge
              az2:
                customServiceConfig: |
                  [DEFAULT]
                  enabled_import_methods = [web-download,copy-image,glance-direct]
                  enabled_backends = az1:rbd,az2:rbd
                  [glance_store]
                  default_backend = az2
                  [az2]
                  rbd_store_ceph_conf = /etc/ceph/az2.conf
                  store_description = "az2 RBD backend"
                  rbd_store_pool = images
                  rbd_store_user = openstack
                  rbd_thin_provisioning = True
                  [az1]
                  rbd_store_ceph_conf = /etc/ceph/az1.conf
                  store_description = "az1 RBD backend"
                  rbd_store_pool = images
                  rbd_store_user = openstack
                  rbd_thin_provisioning = True
                networkAttachments:
                - storage
                override:
                  service:
                    internal:
                      metadata:
                        annotations:
                          metallb.universe.tf/address-pool: internalapi
                          metallb.universe.tf/allow-shared-ip: internalapi
                          metallb.universe.tf/loadBalancerIPs: 172.17.0.82
                      spec:
                        type: LoadBalancer
                replicas: 2
                topologyRef:
                  name: zone2-node-affinity
                type: edge
              az3:
                customServiceConfig: |
                  [DEFAULT]
                  enabled_import_methods = [web-download,copy-image,glance-direct]
                  enabled_backends = az1:rbd,az3:rbd
                  [glance_store]
                  default_backend = az3
                  [az3]
                  rbd_store_ceph_conf = /etc/ceph/az3.conf
                  store_description = "az3 RBD backend"
                  rbd_store_pool = images
                  rbd_store_user = openstack
                  rbd_thin_provisioning = True
                  [az1]
                  rbd_store_ceph_conf = /etc/ceph/az1.conf
                  store_description = "az1 RBD backend"
                  rbd_store_pool = images
                  rbd_store_user = openstack
                  rbd_thin_provisioning = True
                networkAttachments:
                - storage
                override:
                  service:
                    internal:
                      metadata:
                        annotations:
                          metallb.universe.tf/address-pool: internalapi
                          metallb.universe.tf/allow-shared-ip: internalapi
                          metallb.universe.tf/loadBalancerIPs: 172.17.0.83
                      spec:
                        type: LoadBalancer
                replicas: 2
                topologyRef:
                  name: zone3-node-affinity
                type: edge
              default:
                customServiceConfig: |
                  [DEFAULT]
                  enabled_import_methods = [web-download,copy-image,glance-direct]
                  enabled_backends = az1:rbd,az2:rbd,az3:rbd
                  [glance_store]
                  default_backend = az1
                  [az1]
                  rbd_store_ceph_conf = /etc/ceph/az1.conf
                  store_description = "az1 RBD backend"
                  rbd_store_pool = images
                  rbd_store_user = openstack
                  rbd_thin_provisioning = True
                  [az2]
                  rbd_store_ceph_conf = /etc/ceph/az2.conf
                  store_description = "az2 RBD backend"
                  rbd_store_pool = images
                  rbd_store_user = openstack
                  rbd_thin_provisioning = True
                  [az3]
                  rbd_store_ceph_conf = /etc/ceph/az3.conf
                  store_description = "az3 RBD backend"
                  rbd_store_pool = images
                  rbd_store_user = openstack
                  rbd_thin_provisioning = True
                networkAttachments:
                - storage
                override:
                  service:
                    internal:
                      metadata:
                        annotations:
                          metallb.universe.tf/address-pool: internalapi
                          metallb.universe.tf/allow-shared-ip: internalapi
                          metallb.universe.tf/loadBalancerIPs: 172.17.0.80
                      spec:
                        type: LoadBalancer
                replicas: 3
                type: split
            keystoneEndpoint: default
            storage:
              storageClass: local-storage
              storageRequest: 10G
          uniquePodNames: true
    • If you are using the Image service with third-party storage, configure the following. In this example, you have three AZs and you use the Block Storage service as a back end for the Image service. You create a separate glanceAPI server set for each AZ, which uses multistore to write to more than one Block Storage service back end. If you configure multistore to use the Block Storage service as its back end, refer to step 25 after you create the distributed control plane, because you must create volume types for the cinder-volume services after the control plane is created.

      This example uses Block Storage service iSCSI to store Image service images. Because iSCSI can support multipath, the multipath options cinder_use_multipath and cinder_do_extend_attached are enabled. If the storage protocol that you are using or your environment does not support multipath, do not use these options.

            glanceAPIs:
              az1:
                customServiceConfig: |
                  [DEFAULT]
                  enabled_backends = az1:cinder
                  enabled_import_methods = [web-download,copy-image,glance-direct]
                  [glance_store]
                  default_backend = az1
                  [az1]
                  store_description = AZ1 NetApp iscsi cinder backend
                  cinder_store_auth_address = {{ .KeystoneInternalURL }}
                  cinder_store_user_name = {{ .ServiceUser }}
                  cinder_store_password = {{ .ServicePassword }}
                  cinder_store_project_name = service
                  cinder_catalog_info = volumev3::internalURL
                  cinder_use_multipath = true
                  cinder_do_extend_attached = true
                  cinder_volume_type = glance-ontap-az1
                networkAttachments:
                - storage
                override:
                  service:
                    internal:
                      metadata:
                        annotations:
                          metallb.universe.tf/address-pool: internalapi
                          metallb.universe.tf/allow-shared-ip: internalapi
                          metallb.universe.tf/loadBalancerIPs: 172.17.0.81
                      spec:
                        type: LoadBalancer
                replicas: 2
                topologyRef:
                  name: azone-node-affinity
                type: edge
              az2:
                customServiceConfig: |
                  [DEFAULT]
                  enabled_backends = az1:cinder,az2:cinder
                  [glance_store]
                  default_backend = az2
                  [az2]
                  store_description = AZ2 NetApp iscsi cinder backend
                  cinder_store_auth_address = {{ .KeystoneInternalURL }}
                  cinder_store_user_name = {{ .ServiceUser }}
                  cinder_store_password = {{ .ServicePassword }}
                  cinder_store_project_name = service
                  cinder_catalog_info = volumev3::internalURL
                  cinder_use_multipath = true
                  cinder_do_extend_attached = true
                  cinder_volume_type = glance-ontap-az2
                  [az1]
                  store_description = AZ1 NetApp iscsi cinder backend
                  cinder_store_auth_address = {{ .KeystoneInternalURL }}
                  cinder_store_user_name = {{ .ServiceUser }}
                  cinder_store_password = {{ .ServicePassword }}
                  cinder_store_project_name = service
                  cinder_catalog_info = volumev3::internalURL
                  cinder_use_multipath = true
                  cinder_do_extend_attached = true
                  cinder_volume_type = glance-ontap-az1
                networkAttachments:
                - storage
                override:
                  service:
                    internal:
                      metadata:
                        annotations:
                          metallb.universe.tf/address-pool: internalapi
                          metallb.universe.tf/allow-shared-ip: internalapi
                          metallb.universe.tf/loadBalancerIPs: 172.17.0.82
                      spec:
                        type: LoadBalancer
                replicas: 2
                topologyRef:
                  name: bzone-node-affinity
                type: edge
              az3:
                customServiceConfig: |
                  [DEFAULT]
                  enabled_backends = az1:cinder,az3:cinder
                  [glance_store]
                  default_backend = az3
                  [az3]
                  store_description = AZ3 NetApp iscsi cinder backend
                  cinder_store_auth_address = {{ .KeystoneInternalURL }}
                  cinder_store_user_name = {{ .ServiceUser }}
                  cinder_store_password = {{ .ServicePassword }}
                  cinder_store_project_name = service
                  cinder_catalog_info = volumev3::internalURL
                  cinder_use_multipath = true
                  cinder_do_extend_attached = true
                  cinder_volume_type = glance-ontap-az3
                  [az1]
                  store_description = AZ1 NetApp iscsi cinder backend
                  cinder_store_auth_address = {{ .KeystoneInternalURL }}
                  cinder_store_user_name = {{ .ServiceUser }}
                  cinder_store_password = {{ .ServicePassword }}
                  cinder_store_project_name = service
                  cinder_catalog_info = volumev3::internalURL
                  cinder_use_multipath = true
                  cinder_do_extend_attached = true
                  cinder_volume_type = glance-ontap-az1
                networkAttachments:
                - storage
                override:
                  service:
                    internal:
                      metadata:
                        annotations:
                          metallb.universe.tf/address-pool: internalapi
                          metallb.universe.tf/allow-shared-ip: internalapi
                          metallb.universe.tf/loadBalancerIPs: 172.17.0.83
                      spec:
                        type: LoadBalancer
                replicas: 2
                topologyRef:
                  name: czone-node-affinity
                type: edge
              default:
                customServiceConfig: |
                  [DEFAULT]
                  enabled_backends = az1:cinder,az2:cinder,az3:cinder
                  [glance_store]
                  default_backend = az1
                  [az1]
                  store_description = AZ1 NetApp iscsi cinder backend
                  cinder_store_auth_address = {{ .KeystoneInternalURL }}
                  cinder_store_user_name = {{ .ServiceUser }}
                  cinder_store_password = {{ .ServicePassword }}
                  cinder_store_project_name = service
                  cinder_catalog_info = volumev3::internalURL
                  cinder_use_multipath = true
                  cinder_do_extend_attached = true
                  cinder_volume_type = glance-ontap-az1
                  [az2]
                  store_description = AZ2 NetApp iscsi cinder backend
                  cinder_store_auth_address = {{ .KeystoneInternalURL }}
                  cinder_store_user_name = {{ .ServiceUser }}
                  cinder_store_password = {{ .ServicePassword }}
                  cinder_store_project_name = service
                  cinder_catalog_info = volumev3::internalURL
                  cinder_use_multipath = true
                  cinder_do_extend_attached = true
                  cinder_volume_type = glance-ontap-az2
                  [az3]
                  store_description = AZ3 NetApp iscsi cinder backend
                  cinder_store_auth_address = {{ .KeystoneInternalURL }}
                  cinder_store_user_name = {{ .ServiceUser }}
                  cinder_store_password = {{ .ServicePassword }}
                  cinder_store_project_name = service
                  cinder_catalog_info = volumev3::internalURL
                  cinder_use_multipath = true
                  cinder_do_extend_attached = true
                  cinder_volume_type = glance-ontap-az3
  11. Configure the Key Management service (barbican):

      barbican:
        template:
          databaseInstance: openstack
          secret: osp-secret
          barbicanAPI:
            replicas: 3
            override:
              service:
                internal:
                  metadata:
                    annotations:
                      metallb.universe.tf/address-pool: internalapi
                      metallb.universe.tf/allow-shared-ip: internalapi
                      metallb.universe.tf/loadBalancerIPs: 172.17.0.80
                  spec:
                    type: LoadBalancer
          barbicanWorker:
            replicas: 3
          barbicanKeystoneListener:
            replicas: 1
  12. Configure the Networking service (neutron):

      neutron:
        template:
          customServiceConfig: |
            [DEFAULT]
            vlan_transparent = true
            debug = true
            [ovs]
            igmp_snooping_enable = true
          databaseInstance: openstack
          networkAttachments:
          - internalapi
          override:
            service:
              internal:
                metadata:
                  annotations:
                    metallb.universe.tf/address-pool: internalapi
                    metallb.universe.tf/allow-shared-ip: internalapi
                    metallb.universe.tf/loadBalancerIPs: 172.17.0.80
                spec:
                  type: LoadBalancer
          replicas: 3
          secret: osp-secret
  13. Ensure that the Object Storage service (swift) is disabled:

      swift:
        enabled: false
  14. Configure OVN:

      ovn:
        template:
          ovnController:
            external-ids:
              enable-chassis-as-gateway: false
            networkAttachment: tenant
            nicMappings:
              datacentre: ospbr
          ovnDBCluster:
            ovndbcluster-nb:
              dbType: NB
              networkAttachment: internalapi
              replicas: 3
              storageRequest: 10G
            ovndbcluster-sb:
              dbType: SB
              networkAttachment: internalapi
              replicas: 3
              storageRequest: 10G
          ovnNorthd:
            networkAttachment: internalapi
  15. Configure the Placement service (placement):

      placement:
        template:
          override:
            service:
              internal:
                metadata:
                  annotations:
                    metallb.universe.tf/address-pool: internalapi
                    metallb.universe.tf/allow-shared-ip: internalapi
                    metallb.universe.tf/loadBalancerIPs: 172.17.0.80
                spec:
                  type: LoadBalancer
          databaseInstance: openstack
          replicas: 3
          secret: osp-secret
  16. Configure the Telemetry service (ceilometer, prometheus):

      telemetry:
        enabled: true
        template:
          metricStorage:
            enabled: true
            dashboardsEnabled: true
            dataplaneNetwork: ctlplane
            networkAttachments:
              - ctlplane
            monitoringStack:
              alertingEnabled: true
              scrapeInterval: 30s
              storage:
                strategy: persistent
                retention: 24h
                persistent:
                  pvcStorageRequest: 20G
          autoscaling:
            enabled: false
            aodh:
              databaseAccount: aodh
              databaseInstance: openstack
              passwordSelector:
                aodhService: AodhPassword
              rabbitMqClusterName: rabbitmq
              serviceUser: aodh
              secret: osp-secret
            heatInstance: heat
          ceilometer:
            template:
              passwordSelector:
                service: CeilometerPassword
              secret: osp-secret
              serviceUser: ceilometer
          logging:
            enabled: false
    • metricStorage.dataplaneNetwork: Defines the network that you use to scrape dataplane node_exporter endpoints.
    • metricStorage.networkAttachments: Lists the networks that each service pod is attached to, specified by NetworkAttachmentDefinition resource names. You configure a NIC for the service for each network attachment that you specify. If you do not configure the isolated networks that each service pod is attached to, the default pod network is used. You must create a networkAttachment that matches the network that you specify as the dataplaneNetwork so that Prometheus can scrape data from the data plane nodes.
    • autoscaling: You must have the autoscaling field present, even if autoscaling is disabled. For more information about autoscaling, see Autoscaling for Instances.
  17. Configure the Shared File Systems service (manila).

    • If you are using the Shared File Systems service with Red Hat Ceph Storage, add the following configuration:

        manila:
          enabled: true
          template:
            manilaAPI:
              customServiceConfig: |
                [DEFAULT]
                enabled_share_protocols=nfs,cephfs
              networkAttachments:
              - internalapi
              override:
                service:
                  internal:
                    metadata:
                      annotations:
                        metallb.universe.tf/address-pool: internalapi
                        metallb.universe.tf/allow-shared-ip: internalapi
                        metallb.universe.tf/loadBalancerIPs: 172.17.0.80
                    spec:
                      type: LoadBalancer
              replicas: 3
            manilaScheduler:
              replicas: 3
            manilaShares:
              az1:
                customServiceConfig: |
                  [DEFAULT]
                  enabled_share_backends = cephfs_az1
                  enabled_share_protocols = cephfs
                  [cephfs_az1]
                  driver_handles_share_servers = False
                  share_backend_name = cephfs_az1
                  share_driver = manila.share.drivers.cephfs.driver.CephFSDriver
                  cephfs_conf_path = /etc/ceph/az1.conf
                  cephfs_cluster_name = az1
                  cephfs_auth_id=openstack
                  cephfs_volume_mode = 0755
                  cephfs_protocol_helper_type = CEPHFS
                  backend_availability_zone = az1
                networkAttachments:
                - storage
                replicas: 1
                topologyRef:
                  name: zone1-node-affinity
              az2:
                customServiceConfig: |
                  [DEFAULT]
                  enabled_share_backends = cephfs_az2
                  enabled_share_protocols = cephfs
                  [cephfs_az2]
                  driver_handles_share_servers = False
                  share_backend_name = cephfs_az2
                  share_driver = manila.share.drivers.cephfs.driver.CephFSDriver
                  cephfs_conf_path = /etc/ceph/az2.conf
                  cephfs_cluster_name = az2
                  cephfs_auth_id=openstack
                  cephfs_volume_mode = 0755
                  cephfs_protocol_helper_type = CEPHFS
                  backend_availability_zone = az2
                networkAttachments:
                - storage
                replicas: 1
                topologyRef:
                  name: zone2-node-affinity
              az3:
                customServiceConfig: |
                  [DEFAULT]
                  enabled_share_backends = cephfs_az3
                  enabled_share_protocols = cephfs
                  [cephfs_az3]
                  driver_handles_share_servers = False
                  share_backend_name = cephfs_az3
                  share_driver = manila.share.drivers.cephfs.driver.CephFSDriver
                  cephfs_conf_path = /etc/ceph/az3.conf
                  cephfs_cluster_name = az3
                  cephfs_auth_id=openstack
                  cephfs_volume_mode = 0755
                  cephfs_protocol_helper_type = CEPHFS
                  backend_availability_zone = az3
                networkAttachments:
                - storage
                replicas: 1
                topologyRef:
                  name: zone3-node-affinity
    • If you are using the Shared File Systems service with third-party storage, configure the following. In this example, you configure the Shared File Systems service in three AZs with NetApp.

      In the following example, driver_handles_share_servers is set to True. If your back end does not support handling share servers, set the driver_handles_share_servers field to False. For more information, see Verifying the distributed control plane and Configuring the Shared File Systems service (manila) in Configuring persistent storage.

        manila:
          apiOverride:
            route:
              haproxy.router.openshift.io/timeout: 60s
          enabled: true
          template:
            manilaAPI:
              customServiceConfig: |
                [DEFAULT]
                storage_availability_zone = az1,az2,az3
                default_share_type = nfs-multiaz
                enabled_share_protocols=nfs
                debug = true
              networkAttachments:
              - internalapi
              override:
                service:
                  internal:
                    metadata:
                      annotations:
                        metallb.universe.tf/address-pool: internalapi
                        metallb.universe.tf/allow-shared-ip: internalapi
                        metallb.universe.tf/loadBalancerIPs: 172.17.0.80
                    spec:
                      type: LoadBalancer
              replicas: 3
            manilaScheduler:
              replicas: 3
            manilaShares:
              az1:
                customServiceConfig: |
                  [DEFAULT]
                  enabled_share_backends = nfs_az1
                  enabled_share_protocols = nfs
                  [nfs_az1]
                  driver_handles_share_servers = True
                  share_backend_name = nfs_az
                  backend_availability_zone = az1
                  share_driver=manila.share.drivers.netapp.common.NetAppDriver
                  netapp_storage_family=ontap_cluster
                  netapp_transport_type=http
                customServiceConfigSecrets:
                - osp-secret-manila-az1
                networkAttachments:
                - storage
                replicas: 1
                topologyRef:
                  name: azone-node-affinity
              az2:
                customServiceConfig: |
                  [DEFAULT]
                  enabled_share_backends = nfs_az2
                  enabled_share_protocols = nfs
                  [nfs_az2]
                  driver_handles_share_servers = True
                  share_backend_name = nfs_az
                  backend_availability_zone = az2
                  share_driver=manila.share.drivers.netapp.common.NetAppDriver
                  netapp_storage_family=ontap_cluster
                  netapp_transport_type=http
                customServiceConfigSecrets:
                - osp-secret-manila-az2
                networkAttachments:
                - storage
                replicas: 1
                topologyRef:
                  name: bzone-node-affinity
              az3:
                customServiceConfig: |
                  [DEFAULT]
                  enabled_share_backends = nfs_az3
                  enabled_share_protocols = nfs
                  [nfs_az3]
                  driver_handles_share_servers = True
                  share_backend_name = nfs_az
                  backend_availability_zone = az3
                  share_driver=manila.share.drivers.netapp.common.NetAppDriver
                  netapp_storage_family=ontap_cluster
                  netapp_transport_type=http
                customServiceConfigSecrets:
                - osp-secret-manila-az3
                networkAttachments:
                - storage
                replicas: 1
                topologyRef:
                  name: czone-node-affinity
            preserveJobs: false
  18. Disable the Load-balancing service (octavia) because it is not currently supported when enable-chassis-as-gateway is disabled in the OVN service. For more information, see OSPRH-10766.

      octavia:
        enabled: false
  19. Configure the Orchestration service (heat):

      heat:
        cnfAPIOverride:
          route: {}
        enabled: true
        template:
          databaseInstance: openstack
          heatAPI:
            override:
              service:
                internal:
                  metadata:
                    annotations:
                      metallb.universe.tf/address-pool: internalapi
                      metallb.universe.tf/allow-shared-ip: internalapi
                      metallb.universe.tf/loadBalancerIPs: 172.17.0.80
                  spec:
                    type: LoadBalancer
            replicas: 3
          heatEngine:
            override:
              service:
                internal:
                  metadata:
                    annotations:
                      metallb.universe.tf/address-pool: internalapi
                      metallb.universe.tf/allow-shared-ip: internalapi
                      metallb.universe.tf/loadBalancerIPs: 172.17.0.80
                  spec:
                    type: LoadBalancer
            replicas: 3
          secret: osp-secret
  20. Configure the Dashboard service (horizon):

      horizon:
        enabled: true
        template:
          replicas: 2
          secret: osp-secret
  21. Add the following service configurations to implement high availability (HA):

    • A MariaDB Galera cluster for use by all RHOSO services (openstack), and a MariaDB Galera cluster for use by the Compute service for cell1 (openstack-cell1):

        galera:
          enabled: true
          templates:
            openstack:
              storageRequest: 5000M
              secret: osp-secret
              replicas: 3
            openstack-cell1:
              storageRequest: 5000M
              secret: osp-secret
              replicas: 3
    • A single memcached cluster that contains three memcached servers:

        memcached:
          templates:
            memcached:
              replicas: 3
    • A RabbitMQ cluster for use by all RHOSO services (rabbitmq), and a RabbitMQ cluster for use by the Compute service for cell1 (rabbitmq-cell1):

        rabbitmq:
          templates:
            rabbitmq:
              replicas: 3
              override:
                service:
                  metadata:
                    annotations:
                      metallb.universe.tf/address-pool: internalapi
                      metallb.universe.tf/loadBalancerIPs: 172.17.0.85
                  spec:
                    type: LoadBalancer
            rabbitmq-cell1:
              replicas: 3
              override:
                service:
                  metadata:
                    annotations:
                      metallb.universe.tf/address-pool: internalapi
                      metallb.universe.tf/loadBalancerIPs: 172.17.0.86
                  spec:
                    type: LoadBalancer
      Note

      You cannot configure multiple RabbitMQ instances on the same virtual IP (VIP) address because all RabbitMQ instances use the same port. If you need to expose multiple RabbitMQ instances to the same network, then you must use distinct IP addresses.

  22. Create the control plane:

    $ oc create -f distributed_control_plane.yaml -n openstack
  23. Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

    $ oc get openstackcontrolplane -n openstack
    NAME                        STATUS    MESSAGE
    distributed-control-plane   Unknown   Setup started

    The OpenStackControlPlane resources are created when the status is "Setup complete".

    Tip

    Append the -w option to the end of the get command to track deployment progress.

    Note

    Creating the control plane also creates an OpenStackClient pod that you can access through a remote shell (rsh) to run OpenStack CLI commands.

    $ oc rsh -n openstack openstackclient
  24. Optional: Confirm that the control plane is deployed by reviewing the pods in the openstack namespace:

    $ oc get pods -n openstack

    The control plane is deployed when all the pods are either completed or running.

  25. If you are using the Image service with third-party storage, and you configured multistore to use the Block Storage service as its back end, each glance store specifies a unique cinder_volume_type. Create the volume types with the --private option to prevent them from being exposed to cloud users. To allow the Image service to access these volume types, set the --project for each type to the service project ID.

    1. Open a remote shell connection to the OpenStackClient pod:

      $ oc rsh -n openstack openstackclient
    2. Identify the service project ID:

      sh-5.1$ openstack project list
      +----------------------------------+---------+
      | ID                               | Name    |
      +----------------------------------+---------+
      | 439f0ee839144b4c8640b9153a596a30 | admin   |
      | 7a8946c6ec7c4d0488c592f0306eaa35 | service |
      +----------------------------------+---------+
      sh-5.1$
    3. Store the service user project ID for the Image service in a shell variable:

      $ SERVICE_PROJECT_ID=$(openstack project show service -c id -f value)
    4. Create a volume type for each of the three cinder-volume services, one in each AZ, and set the project ID and availability zone on each type. Use the spec key RESKEY:availability_zones to pass the AZ to the Block Storage service even though the Image service only passes the type when it requests a volume:

      $ openstack volume type create --private --project "$SERVICE_PROJECT_ID" --property "RESKEY:availability_zones=az1" glance-ontap-az1
      $ openstack volume type create --private --project "$SERVICE_PROJECT_ID" --property "RESKEY:availability_zones=az2" glance-ontap-az2
      $ openstack volume type create --private --project "$SERVICE_PROJECT_ID" --property "RESKEY:availability_zones=az3" glance-ontap-az3
    5. Exit the openstackclient pod:

      $ exit

6.6. Verifying the distributed control plane

Verify the creation of the distributed control plane.

Procedure

  1. Verify that the service pods are running in the correct AZs based on their worker node:

    $ oc get pods -o wide -l service=<service_name>
    • Replace <service_name> with cinder, glance, or manila to verify the service pods for the specific service.

      For example, if the worker-6 node is located in AZ3, then the output from the following command confirms that the cinder-dc29b-volume-az3-0 pod is running correctly in AZ3:

      $ oc get pods -o wide -l service=cinder
      NAME                        READY   STATUS    RESTARTS   AGE     IP               NODE       NOMINATED NODE   READINESS GATES
      cinder-dc29b-api-0          2/2     Running   0          4m24s   192.172.41.120   worker-7   <none>           <none>
      cinder-dc29b-api-1          2/2     Running   0          4m48s   192.172.28.60    worker-5   <none>           <none>
      cinder-dc29b-api-2          2/2     Running   0          5m5s    192.172.24.237   worker-0   <none>           <none>
      cinder-dc29b-backup-0       2/2     Running   0          5m9s    192.172.24.234   worker-0   <none>           <none>
      cinder-dc29b-scheduler-0    2/2     Running   0          5m5s    192.172.41.118   worker-7   <none>           <none>
      cinder-dc29b-volume-az1-0   2/2     Running   0          5m9s    192.172.32.150   worker-2   <none>           <none>
      cinder-dc29b-volume-az2-0   2/2     Running   0          5m9s    192.172.16.33    worker-3   <none>           <none>
      cinder-dc29b-volume-az3-0   2/2     Running   0          5m8s    192.172.21.170   worker-6   <none>           <none>
  2. Open a remote shell connection to the openstackclient pod:

    $ oc rsh -n openstack openstackclient
  3. Confirm that the internal service endpoints are registered with each service:

    $ openstack endpoint list -c 'Service Name' -c Interface -c URL --service glance
    +--------------+-----------+-------------------------------------------------------------------------+
    | Service Name | Interface | URL                                                                     |
    +--------------+-----------+-------------------------------------------------------------------------+
    | glance       | internal  | https://glance-internal.openstack.svc                                   |
    | glance       | public    | https://glance-default-public-openstack.apps.ostest.test.metalkube.org |
    +--------------+-----------+-------------------------------------------------------------------------+
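
    Repeat the check for the other services; for example, for the Block Storage service, whose service name is typically cinderv3:

    $ openstack endpoint list -c 'Service Name' -c Interface -c URL --service cinderv3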
  4. If you are using the Block Storage service (cinder) with Red Hat Ceph Storage, confirm the cinder-volume service is running in each availability zone (AZ):

    $ openstack volume service list
    +------------------+--------------------------------+------+---------+-------+----------------------------+
    | Binary           | Host                           | Zone | Status  | State | Updated At                 |
    +------------------+--------------------------------+------+---------+-------+----------------------------+
    | cinder-scheduler | cinder-dc29b-scheduler-0       | az1  | enabled | up    | 2025-05-21T14:07:11.000000 |
    | cinder-backup    | cinder-dc29b-backup-0          | az1  | enabled | up    | 2025-05-21T14:07:18.000000 |
    | cinder-volume    | cinder-dc29b-volume-az1-0@ceph | az1  | enabled | up    | 2025-05-21T14:07:13.000000 |
    | cinder-volume    | cinder-dc29b-volume-az2-0@ceph | az2  | enabled | up    | 2025-05-21T14:07:16.000000 |
    | cinder-volume    | cinder-dc29b-volume-az3-0@ceph | az3  | enabled | up    | 2025-05-21T14:07:15.000000 |
    +------------------+--------------------------------+------+---------+-------+----------------------------+
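
    You can also list the Block Storage availability zones directly:

    $ openstack availability zone list --volume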
  5. If you are using the Block Storage service with third-party storage, verify the following.

    1. Confirm you can create volumes by AZ:

      $ openstack volume create --size 1 --availability-zone az1 vol_az1
      $ openstack volume create --size 1 --availability-zone az2 vol_az2
      $ openstack volume create --size 1 --availability-zone az3 vol_az3
    2. As an administrator, confirm you can create a volume in a specific AZ by passing only the type:

      $ openstack volume create --size 1 --type glance-ontap-az1 vol_az1_by_type
      $ openstack volume create --size 1 --type glance-ontap-az2 vol_az2_by_type
      $ openstack volume create --size 1 --type glance-ontap-az3 vol_az3_by_type
    3. Verify that the volumes that you created with each glance volume type are available, and then delete them.
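
      For example:

      $ openstack volume list -c Name -c Status
      $ openstack volume delete vol_az1 vol_az2 vol_az3 vol_az1_by_type vol_az2_by_type vol_az3_by_type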
  6. If you are using the Image service (glance) with third-party storage, verify the following to test image creation per AZ.

    1. Observe the available stores:

      $ glance stores-info
      +----------+----------------------------------------------------------------------------------+
      | Property | Value                                                                            |
      +----------+----------------------------------------------------------------------------------+
      | stores   | [{"id": "az1", "description": "AZ1 NetApp iscsi cinder backend", "default":      |
      |          | "true"}, {"id": "az2", "description": "AZ2 NetApp iscsi cinder backend"}, {"id": |
      |          | "az3", "description": "AZ3 NetApp iscsi cinder backend"}]                        |
      +----------+----------------------------------------------------------------------------------+
    2. Create an image without passing any store-related parameters to check the result:

      sh-5.1$ openstack image create --disk-format qcow2 --container-format bare --public --file ./cirros-0.5.2-x86_64-disk.img cirros-default
      +------------------+----------------------------------------------------------------------------------------------------------------------------------------------------+
      | Field            | Value                                                                                                                                              |
      +------------------+----------------------------------------------------------------------------------------------------------------------------------------------------+
      | container_format | bare                                                                                                                                               |
      | created_at       | 2025-06-18T22:55:21Z                                                                                                                               |
      | disk_format      | qcow2                                                                                                                                              |
      | file             | /v2/images/10fcfc90-e178-4776-ac76-853a7082844b/file                                                                                               |
      | id               | 10fcfc90-e178-4776-ac76-853a7082844b                                                                                                               |
      | min_disk         | 0                                                                                                                                                  |
      | min_ram          | 0                                                                                                                                                  |
      | name             | cirros-default                                                                                                                                     |
      | owner            | 439f0ee839144b4c8640b9153a596a30                                                                                                                   |
      | properties       | os_hidden='False', owner_specified.openstack.md5='', owner_specified.openstack.object='images/cirros-default', owner_specified.openstack.sha256='' |
      | protected        | False                                                                                                                                              |
      | schema           | /v2/schemas/image                                                                                                                                  |
      | status           | queued                                                                                                                                             |
      | tags             |                                                                                                                                                    |
      | updated_at       | 2025-06-18T22:55:21Z                                                                                                                               |
      | visibility       | public                                                                                                                                             |
      +------------------+----------------------------------------------------------------------------------------------------------------------------------------------------+
      sh-5.1$
      • The image is created in the az1 store because az1 is the default store:

        sh-5.1$ openstack image show 10fcfc90-e178-4776-ac76-853a7082844b | grep stores
        | properties       | os_hash_algo='sha512', os_hash_value='6b813aa46bb90b4da216a4d19376593fa3f4fc7e617f03a92b7fe11e9a3981cbe8f0959dbebe36225e5f53dc4492341a4863cac4ed1ee0909f3fc78ef9c3e869', os_hidden='False', owner_specified.openstack.md5='', owner_specified.openstack.object='images/cirros-default', owner_specified.openstack.sha256='', stores='az1' |
        sh-5.1$
    3. Create an image and pass a parameter so that it goes directly into store az1:

      $ glance image-create --disk-format raw --container-format bare --name cirros-test --file cirros-0.5.2-x86_64-disk.img --store az1
      +------------------+----------------------------------------------------------------------------------+
      | Property         | Value                                                                            |
      +------------------+----------------------------------------------------------------------------------+
      | checksum         | b874c39491a2377b8490f5f1e89761a4                                                 |
      | container_format | bare                                                                             |
      | created_at       | 2025-06-18T22:47:10Z                                                             |
      | disk_format      | raw                                                                              |
      | id               | 5073f3f0-e4ac-4cca-9883-fecade29a1f3                                             |
      | min_disk         | 0                                                                                |
      | min_ram          | 0                                                                                |
      | name             | cirros-test                                                                      |
      | os_hash_algo     | sha512                                                                           |
      | os_hash_value    | 6b813aa46bb90b4da216a4d19376593fa3f4fc7e617f03a92b7fe11e9a3981cbe8f0959dbebe3622 |
      |                  | 5e5f53dc4492341a4863cac4ed1ee0909f3fc78ef9c3e869                                 |
      | os_hidden        | False                                                                            |
      | owner            | 439f0ee839144b4c8640b9153a596a30                                                 |
      | protected        | False                                                                            |
      | size             | 16300544                                                                         |
      | status           | active                                                                           |
      | stores           | az1                                                                              |
      | tags             | []                                                                               |
      | updated_at       | 2025-06-18T22:48:00Z                                                             |
      | virtual_size     | 16300544                                                                         |
      | visibility       | shared                                                                           |
      +------------------+----------------------------------------------------------------------------------+
    4. Import the image to az2:

      $ glance image-import 5073f3f0-e4ac-4cca-9883-fecade29a1f3 --stores az2 --import-method copy-image
      +-----------------------+----------------------------------------------------------------------------------+
      | Property              | Value                                                                            |
      +-----------------------+----------------------------------------------------------------------------------+
      | checksum              | b874c39491a2377b8490f5f1e89761a4                                                 |
      | container_format      | bare                                                                             |
      | created_at            | 2025-06-18T22:47:10Z                                                             |
      | disk_format           | raw                                                                              |
      | id                    | 5073f3f0-e4ac-4cca-9883-fecade29a1f3                                             |
      | min_disk              | 0                                                                                |
      | min_ram               | 0                                                                                |
      | name                  | cirros-test                                                                      |
      | os_glance_import_task | 7a141f7f-409a-4ea9-b66f-0658a83e907b                                             |
      | os_hash_algo          | sha512                                                                           |
      | os_hash_value         | 6b813aa46bb90b4da216a4d19376593fa3f4fc7e617f03a92b7fe11e9a3981cbe8f0959dbebe3622 |
      |                       | 5e5f53dc4492341a4863cac4ed1ee0909f3fc78ef9c3e869                                 |
      | os_hidden             | False                                                                            |
      | owner                 | 439f0ee839144b4c8640b9153a596a30                                                 |
      | protected             | False                                                                            |
      | size                  | 16300544                                                                         |
      | status                | active                                                                           |
      | stores                | az1                                                                              |
      | tags                  | []                                                                               |
      | updated_at            | 2025-06-18T22:48:00Z                                                             |
      | virtual_size          | 16300544                                                                         |
      | visibility            | shared                                                                           |
      +-----------------------+----------------------------------------------------------------------------------+
    5. Observe that the image is in the stores for az1 and az2:

      sh-5.1$ openstack image show 5073f3f0-e4ac-4cca-9883-fecade29a1f3 | grep stores
      | properties       | os_glance_failed_import='', os_glance_importing_to_stores='', os_hash_algo='sha512', os_hash_value='6b813aa46bb90b4da216a4d19376593fa3f4fc7e617f03a92b7fe11e9a3981cbe8f0959dbebe36225e5f53dc4492341a4863cac4ed1ee0909f3fc78ef9c3e869', os_hidden='False', stores='az1,az2' |
      sh-5.1$
    6. Observe the Block Storage service volumes in az1 and az2 that back the image. The volume names include the image ID from the previous step:

      sh-5.1$ openstack volume list --all | grep 5073f3f0-e4ac-4cca-9883-fecade29a1f3
      | 713f25b2-accc-4f4f-b5ba-8c71e2fac3aa | image-5073f3f0-e4ac-4cca-9883-fecade29a1f3 | available |    1 |                                 |
      | b1917ec2-2d8e-42cc-b58c-55db48f1e054 | image-5073f3f0-e4ac-4cca-9883-fecade29a1f3 | available |    1 |                                 |
      sh-5.1$
    7. Confirm the AZ, type, and host of each volume that backs the image:

      sh-5.1$ openstack volume show b1917ec2-2d8e-42cc-b58c-55db48f1e054 -c type -c availability_zone -c os-vol-host-attr:host
      +-----------------------+---------------------------------------------------------------+
      | Field                 | Value                                                         |
      +-----------------------+---------------------------------------------------------------+
      | availability_zone     | az1                                                           |
      | os-vol-host-attr:host | cinder-cbca0-volume-ontap-iscsi-az1-0@ontap#cinder_iscsi_pool |
      | type                  | glance-ontap-az1                                              |
      +-----------------------+---------------------------------------------------------------+
      sh-5.1$ openstack volume show 713f25b2-accc-4f4f-b5ba-8c71e2fac3aa -c type -c availability_zone -c os-vol-host-attr:host
      +-----------------------+---------------------------------------------------------------+
      | Field                 | Value                                                         |
      +-----------------------+---------------------------------------------------------------+
      | availability_zone     | az2                                                           |
      | os-vol-host-attr:host | cinder-cbca0-volume-ontap-iscsi-az2-0@ontap#cinder_iscsi_pool |
      | type                  | glance-ontap-az2                                              |
      +-----------------------+---------------------------------------------------------------+
      sh-5.1$
  7. If you are using the Shared File Systems service (manila) with Red Hat Ceph Storage, confirm the manila-share service is running in each AZ:

    $ openstack share service list
    +----+------------------+----------------------+------+---------+-------+----------------------------+
    | ID | Binary           | Host                 | Zone | Status  | State | Updated At                 |
    +----+------------------+----------------------+------+---------+-------+----------------------------+
    |  1 | manila-scheduler | hostgroup            | nova | enabled | up    | 2025-05-21T14:10:08.498066 |
    |  4 | manila-share     | hostgroup@cephfs_az3 | az3  | enabled | up    | 2025-05-21T14:10:04.973221 |
    |  7 | manila-share     | hostgroup@cephfs_az1 | az1  | enabled | up    | 2025-05-21T14:10:05.228611 |
    | 10 | manila-share     | hostgroup@cephfs_az2 | az2  | enabled | up    | 2025-05-21T14:10:04.724407 |
    +----+------------------+----------------------+------+---------+-------+----------------------------+
  8. If you are using the Shared File Systems service with third-party storage and DHSS is disabled, confirm the following.

    1. As an administrator, create a share type:

      $ openstack share type create nfs-multiaz False --extra-specs share_backend_name=nfs_az
    2. Observe the share type:

      $ openstack share type list
      +--------------------------------------+--------------+------------+------------+--------------------------------------+-----------------------------+-------------+
      | ID                                   | Name         | Visibility | Is Default | Required Extra Specs                 | Optional Extra Specs        | Description |
      +--------------------------------------+--------------+------------+------------+--------------------------------------+-----------------------------+-------------+
      | a63ecc00-33f5-4bd8-9d93-a1fb31f2fe79 | nfs-multiaz  | public     | False      | driver_handles_share_servers : False | share_backend_name : nfs_az | None        |
      +--------------------------------------+--------------+------------+------------+--------------------------------------+-----------------------------+-------------+
    3. You can run the following commands as a non-admin user. Create a share in each AZ:

      $ openstack share create nfs 1 --name nfsaz1 --availability-zone az1
      $ openstack share create nfs 1 --name nfsaz2 --availability-zone az2
      $ openstack share create nfs 1 --name nfsaz3 --availability-zone az3
      • You do not need to pass the --share-type nfs-multiaz option to the openstack share create command because of the following configuration in the manila template in the OpenStackControlPlane CR:

              manilaAPI:
                customServiceConfig: |
                  [DEFAULT]
                  storage_availability_zone = az1,az2,az3
                  default_share_type = nfs-multiaz

        If the default_share_type field is not set, pass the --share-type nfs-multiaz option to the openstack share create command, as shown in the following example.
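
        A minimal sketch, using the share from this procedure:

        $ openstack share create nfs 1 --name nfsaz1 --availability-zone az1 --share-type nfs-multiaz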

    4. Observe the shares and their AZs:

      $ openstack share list
      +--------------------------------------+--------+------+-------------+-----------+-----------+-----------------+-------------------------------+-------------------+
      | ID                                   | Name   | Size | Share Proto | Status    | Is Public | Share Type Name | Host                          | Availability Zone |
      +--------------------------------------+--------+------+-------------+-----------+-----------+-----------------+-------------------------------+-------------------+
      | ef1fa319-4661-4d18-880b-a24b5e708234 | nfsaz1 |    1 | NFS         | available | False     | nfs-multiaz     | hostgroup@nfs_az1#n2_nvme_15T | az1               |
      | f91e1e59-e101-4e1f-8b92-fb49edd706cf | nfsaz2 |    1 | NFS         | available | False     | nfs-multiaz     | hostgroup@nfs_az2#n2_nvme_15T | az2               |
      | 09c9f709-1cf6-4279-ba39-8c80669b118a | nfsaz3 |    1 | NFS         | available | False     | nfs-multiaz     | hostgroup@nfs_az3#n2_nvme_15T | az3               |
      +--------------------------------------+--------+------+-------------+-----------+-----------+-----------------+-------------------------------+-------------------+
    5. Get the NFS paths of a share:

      Example:

      $ openstack share show nfsaz1 | grep path
      |                                       | path = 10.0.0.42:/share_9afcc8c4_2a09_4d05_8cdc_f28eb58a3dff
    6. Grant access to the appropriate IP range so that Compute service (nova) instances can access the share:

      $ openstack share access create nfsaz1 ip 10.0.0.0/24 --access-level rw
    7. After you deploy the Compute service, you can mount the share from an instance by using a command like the following:

      Example:

      $ mount -t nfs 10.0.0.42:/share_<UUID> /mnt/share
  9. If you are using the Shared File Systems service with third-party storage and DHSS is enabled, confirm the following.

    1. As an OpenStack administrator, create a share type:

      $ openstack share type create nfs-multiaz true --extra-specs share_backend_name='nfs_az'
    2. As a project administrator, create a network and subnet for the Shared File Systems service to use:

      $ openstack network create --project rhoso --provider-network-type vlan manila_net
      $ openstack subnet create --network manila_net --subnet-range <subnet> --dns-nameserver <dns> manila-subnetaz1
    3. As a project user, create the share network by using the values from the previous step and define a share for each AZ:

      $ openstack share network create --name share_net --neutron-net-id <net_id> --neutron-subnet-id <sub_net_id>
      $ openstack share create nfs 1 --name nfsaz1 --share-network share_net --availability-zone az1
      $ openstack share create nfs 1 --name nfsaz2 --share-network share_net --availability-zone az2
      $ openstack share create nfs 1 --name nfsaz3 --share-network share_net --availability-zone az3
      • In this example, the share network does not have an AZ set. It is a default share network that spans all AZs.
      • You can create Networking service (neutron) networks and subnets for each AZ and limit share creation in AZs to their corresponding networks and subnets. To isolate share creation, create multiple share network subnets in the Shared File Systems service, each with its own Networking service network, subnet, and AZ.
      • You do not need to pass the --share-type nfs-multiaz option to the openstack share create command because of the following configuration in the manila template in the OpenStackControlPlane CR:

              manilaAPI:
                customServiceConfig: |
                  [DEFAULT]
                  storage_availability_zone = az1,az2,az3
                  default_share_type = nfs-multiaz

        If the default_share_type field is not set, pass the --share-type nfs-multiaz option to the openstack share create command.

    4. Grant access to the appropriate IP range so that Compute service instances can access the share:

      $ openstack share access create nfsaz1 ip 10.0.0.0/24 --access-level rw
    5. After you deploy the Compute service, you can mount the share from an instance by using a command like the following:

      Example:

      $ mount -t nfs 10.0.0.42:/share_<UUID> /mnt/share
  10. Exit the OpenStackClient pod:

    $ exit


Chapter 7. Creating the distributed data plane

The Red Hat OpenStack Services on OpenShift (RHOSO) data plane consists of RHEL 9.4 or 9.6 nodes. Use the OpenStackDataPlaneNodeSet custom resource definition (CRD) to create the custom resources (CRs) that define the nodes and the layout of the data plane. An OpenStackDataPlaneNodeSet CR is a logical grouping of nodes of a similar type. A data plane typically consists of multiple OpenStackDataPlaneNodeSet CRs to define groups of nodes with different configurations and roles.

To create and deploy a distributed data plane, you must perform the following tasks:

  1. Create a Secret CR for each node set for Ansible to use to execute commands on the data plane nodes.
  2. Create a custom OpenStackDataPlaneService CR for each set of Compute nodes.
  3. Create the OpenStackDataPlaneNodeSet CRs that define the nodes and layout of the data plane.
  4. Create the OpenStackDataPlaneDeployment CR that triggers the Ansible execution that deploys and configures the software for the specified list of OpenStackDataPlaneNodeSet CRs.

The following procedure creates a node set for each zone with pre-provisioned nodes. You can use the procedures to create sets of Compute nodes or Networker nodes.

Note

Red Hat OpenStack Services on OpenShift (RHOSO) supports external deployments of Red Hat Ceph Storage 7 and 8. Configuration examples that reference Red Hat Ceph Storage use Red Hat Ceph Storage 7 information. If you are using a later version of Red Hat Ceph Storage, adjust the configuration examples accordingly.

7.1. Prerequisites

  • A functional distributed control plane, created with the OpenStack Operator. For more information, see Creating the distributed control plane.
  • You are logged on to a workstation that has access to the Red Hat OpenShift Container Platform (RHOCP) cluster as a user with cluster-admin privileges.
  • You have used your own tooling to install the operating system on all the nodes you are adding to the data plane.

7.2. Creating the data plane secrets

You must create the Secret custom resources (CRs) that the data plane requires to be able to operate. The Secret CRs are used by the data plane nodes to secure access between nodes, to register the node operating systems with the Red Hat Customer Portal, to enable node repositories, and to provide Compute nodes with access to libvirt.

To enable secure access between nodes, you must generate two SSH keys and create an SSH key Secret CR for each key:

  • An SSH key to enable Ansible to manage the RHEL nodes on the data plane. Ansible executes commands with this user and key. You can create an SSH key for each OpenStackDataPlaneNodeSet CR in your data plane.
  • An SSH key to enable migration of instances between Compute nodes.

Prerequisites

  • Pre-provisioned nodes are configured with an SSH public key in the $HOME/.ssh/authorized_keys file for a user with passwordless sudo privileges. For more information, see Managing sudo access in the RHEL Configuring basic system settings guide.

Procedure

  1. For unprovisioned nodes, create the SSH key pair for Ansible:

    $ ssh-keygen -f <key_file_name> -N "" -t rsa -b 4096
    • Replace <key_file_name> with the name to use for the key pair.
  2. Create the Secret CR for Ansible and apply it to the cluster:

    $ oc create secret generic dataplane-ansible-ssh-private-key-secret \
    --save-config \
    --dry-run=client \
    --from-file=ssh-privatekey=<key_file_name> \
    --from-file=ssh-publickey=<key_file_name>.pub \
    [--from-file=authorized_keys=<key_file_name>.pub] -n openstack \
    -o yaml | oc apply -f -
    • Replace <key_file_name> with the name and location of your SSH key pair file.
    • Optional: Only include the --from-file=authorized_keys option for bare-metal nodes that must be provisioned when creating the data plane.
  3. If you are creating Compute nodes, create a secret for migration.

    1. Create the SSH key pair for instance migration:

      $ ssh-keygen -f ./nova-migration-ssh-key -t ecdsa-sha2-nistp521 -N ''
    2. Create the Secret CR for migration and apply it to the cluster:

      $ oc create secret generic nova-migration-ssh-key \
      --save-config \
      --from-file=ssh-privatekey=nova-migration-ssh-key \
      --from-file=ssh-publickey=nova-migration-ssh-key.pub \
      -n openstack \
      -o yaml | oc apply -f -
  4. For nodes that have not been registered to the Red Hat Customer Portal, create the Secret CR for subscription-manager credentials to register the nodes:

    $ oc create secret generic subscription-manager \
    --from-literal rhc_auth='{"login": {"username": "<subscription_manager_username>", "password": "<subscription_manager_password>"}}'
    • Replace <subscription_manager_username> with the username you set for subscription-manager.
    • Replace <subscription_manager_password> with the password you set for subscription-manager.
  5. Create a Secret CR that contains the Red Hat registry credentials:

    $ oc create secret generic redhat-registry --from-literal edpm_container_registry_logins='{"registry.redhat.io": {"<username>": "<password>"}}'
    • Replace <username> and <password> with your Red Hat registry username and password credentials.

      For information about how to create your registry service account, see the Knowledge Base article Creating Registry Service Accounts.

  6. If you are creating Compute nodes, create a secret for libvirt.

    1. Create a file on your workstation named secret_libvirt.yaml to define the libvirt secret:

      apiVersion: v1
      kind: Secret
      metadata:
        name: libvirt-secret
        namespace: openstack
      type: Opaque
      data:
        LibvirtPassword: <base64_password>
      • Replace <base64_password> with a base64-encoded string with maximum length 63 characters. You can use the following command to generate a base64-encoded password:

        $ echo -n <password> | base64
        Tip

        If you do not want to base64-encode the password, you can use the stringData field instead of the data field to set the password in plain text.
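
        A minimal sketch of the stringData variant, assuming a plain-text password:

        apiVersion: v1
        kind: Secret
        metadata:
          name: libvirt-secret
          namespace: openstack
        type: Opaque
        stringData:
          LibvirtPassword: <password>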

    2. Create the Secret CR:

      $ oc apply -f secret_libvirt.yaml -n openstack
  7. Verify that the Secret CRs are created:

    $ oc describe secret dataplane-ansible-ssh-private-key-secret
    $ oc describe secret nova-migration-ssh-key
    $ oc describe secret subscription-manager
    $ oc describe secret redhat-registry
    $ oc describe secret libvirt-secret

7.3. Creating the custom nova service

To create the custom services required for the Compute node sets, complete the following tasks:

  1. Create the ConfigMap custom resources (CRs) to configure the nodes.
  2. Create a custom service for the node set that runs the playbook for the service.
  3. Include the ConfigMap CRs in the custom service.

You must create a unique ConfigMap and custom Compute service for each availability zone (AZ).

Repeat the following procedure for each AZ.

Procedure

  1. Create a file named nova-extra-config-az1.yaml on your workstation to define the ConfigMap CR for the Compute node set for the AZ.
  2. Define a new configuration file to apply to the Compute nodes in the node set for the AZ that adds the Image service (glance) endpoint for the local AZ and sets the cross_az_attach field to false.

    • If you are using Red Hat Ceph Storage, include RBD options:

      Example:

      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: nova-extra-config-az1
      data:
        03-ceph-nova.conf: |
          [libvirt]
          images_type = rbd
          images_rbd_pool = vms
          images_rbd_ceph_conf = /etc/ceph/az1.conf
          images_rbd_glance_store_name = az1
          images_rbd_glance_copy_poll_interval = 15
          images_rbd_glance_copy_timeout = 600
          rbd_user = openstack
          rbd_secret_uuid = $FSID
          hw_disk_discard = unmap
          [glance]
          endpoint_override = https://glance-az1-internal.openstack.svc:9292
          valid_interfaces = internal
          [cinder]
          cross_az_attach = False
          catalog_info = volumev3:cinderv3:internalURL
    • If you are using a third-party storage device with multipath, then set volume_use_multipath to True:

      Example:

      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: nova-extra-config-az1
      data:
        25-nova-extra.conf: |
          [libvirt]
          volume_use_multipath = True
          [glance]
          endpoint_override = https://glance-az1-internal.openstack.svc:9292
          valid_interfaces = internal
          [cinder]
          cross_az_attach = False
          catalog_info = volumev3:cinderv3:internalURL
    • data.<filename>: The file name must follow the naming convention of ##-<name>-nova.conf. Files are evaluated by the Compute service alphabetically. A filename that starts with 01 will be evaluated by the Compute service before a filename that starts with 02. When the same configuration option occurs in multiple files, the configuration option is set to the last value read.
    • rbd_secret_uuid: The $FSID value should contain the actual FSID as described in Obtaining the Red Hat Ceph Storage file system identifier in Configuring persistent storage. The FSID itself does not need to be considered secret.

    When the service is deployed, it adds the configuration to the /etc/<service>/<service>.conf.d/ directory in the service container. For example, for a Compute feature, the configuration file is added to /etc/nova/nova.conf.d/ in the nova_compute container.

    For more information on creating ConfigMap objects, see Creating and using config maps in the RHOCP Nodes guide.

  3. Create the ConfigMap CR:

    $ oc create -f nova-extra-config-az1.yaml
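
    Optionally, confirm that the ConfigMap was created:

    $ oc get configmap nova-extra-config-az1 -n openstack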
  4. Create a file named nova-custom-az1.yaml on your workstation to define the OpenStackDataPlaneService CR for the Compute node set for the AZ:

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneService
    metadata:
      name: nova-custom-az1
    spec:
      addCertMounts: false
      caCerts: combined-ca-bundle
      edpmServiceType: nova
      playbook: osp.edpm.nova
      tlsCerts:
        default:
          contents:
          - dnsnames
          - ips
          edpmRoleServiceName: nova
          issuer: osp-rootca-issuer-internal
          networks:
          - ctlplane
  5. Add the ConfigMap CR to the custom service:

      dataSources:
      - configMapRef:
          name: nova-extra-config-az1
      - secretRef:
          name: nova-migration-ssh-key
  6. Specify the Secret CR for the cell that the node set that runs this service connects to:

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneService
    metadata:
      name: <nodeset>-service
    spec:
      ...
      dataSources:
        - configMapRef:
            name: feature-configmap
        - secretRef:
            name: nova-migration-ssh-key
        - secretRef:
            name: nova-cell1-compute-config
  7. Create the custom service:

    $ oc create -f nova-custom-az1.yaml
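
    Optionally, verify that the custom service was created:

    $ oc get openstackdataplaneservice nova-custom-az1 -n openstack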

7.4. Creating the OpenStackDataPlaneNodeSet CRs

Define an OpenStackDataPlaneNodeSet custom resource (CR) for the pre-provisioned data plane nodes for each zone in your deployment. You can define as many node sets as necessary for your deployment. Each node can be included in only one OpenStackDataPlaneNodeSet CR. You use the nodeTemplate field to configure the common properties to apply to all nodes in an OpenStackDataPlaneNodeSet CR, and the nodes field for node-specific properties. Node-specific configurations override the inherited values from the nodeTemplate.

The following procedure creates a node set for Zone 1. Repeat the procedure to create the required node sets for Compute nodes and Networker nodes for each zone.

Procedure

  1. Create a file on your workstation to define the OpenStackDataPlaneNodeSet CR for the node set for Zone 1, for example, compute_node_set_zone1.yaml:

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneNodeSet
    metadata:
      name: compute-node-set-zone1
      namespace: openstack
    spec:
      env:
      - name: ANSIBLE_FORCE_COLOR
        value: "True"
      - name: ANSIBLE_TIMEOUT
        value: "60"
      - name: ANSIBLE_SSH_TIMEOUT
        value: "60"
      - name: ANSIBLE_SSH_RETRIES
        value: "60"
    • metadata.name: The OpenStackDataPlaneNodeSet CR name must be unique, contain only lower case alphanumeric characters and - (hyphens) or . (periods), start and end with an alphanumeric character, and have a maximum length of 53 characters. Update the name in this example to a name that reflects the nodes in the set.
    • spec.env: An optional field that lists the environment variables to pass to the pod.
  2. Add the list of services to execute for this node set:

    • Specify the following services for a set of Compute nodes, replacing the nova service with the custom service you created for the node set:

        services:
        - download-cache
        - bootstrap
        - configure-network
        - validate-network
        - install-os
        - configure-os
        - frr
        - ssh-known-hosts
        - run-os
        - reboot-os
        - install-certs
        - ceph-client
        - ovn
        - neutron-metadata
        - libvirt
        - nova-custom-az1
    • Specify the following services for a set of Networker nodes:

        services:
        - download-cache
        - bootstrap
        - configure-network
        - validate-network
        - install-os
        - configure-os
        - frr
        - ssh-known-hosts
        - run-os
        - reboot-os
        - install-certs
        - ovn
        - neutron-metadata
        - ovn-bgp-agent
  3. Connect the data plane to the control plane network:

    spec:
      ...
      networkAttachments:
      - ctlplane
  4. Specify that the nodes in this set are pre-provisioned:

      preProvisioned: true
  5. Add the SSH key secret that you created to enable Ansible to connect to the data plane nodes:

      nodeTemplate:
        ansibleSSHPrivateKeySecret: <secret-key>
    • Replace <secret-key> with the name of the SSH key Secret CR you created for this node set in Creating the data plane secrets, for example, dataplane-ansible-ssh-private-key-secret.
  6. If you are creating a set of Compute nodes, enable Compute service access to the Red Hat Ceph Storage secret:

      nodeTemplate:
        ...
        extraMounts:
        - extraVolType: Ceph
          mounts:
          - mountPath: /etc/ceph
            name: ceph
            readOnly: true
          volumes:
          - name: ceph
            secret:
              secretName: ceph-conf-files-az1
    Note

    Networker nodes do not require access to the Red Hat Ceph Storage secret.

  7. Specify the management network:

      nodeTemplate:
        ...
        managementNetwork: ctlplane
  8. Specify the Secret CRs used to source the usernames and passwords to register the operating system of your nodes and to enable repositories. The following example demonstrates how to register your nodes to Red Hat Content Delivery Network (CDN). For information about how to register your nodes with Red Hat Satellite 6.13, see Managing Hosts.

      nodeTemplate:
        ...
        ansible:
          ansibleUser: cloud-admin
          ansiblePort: 22
          ansibleVarsFrom:
            - secretRef:
                name: subscription-manager
            - secretRef:
                name: redhat-registry
          ansibleVars:
            rhc_release: 9.4
            rhc_repositories:
                - {name: "*", state: disabled}
                - {name: "rhel-9-for-x86_64-baseos-eus-rpms", state: enabled}
                - {name: "rhel-9-for-x86_64-appstream-eus-rpms", state: enabled}
                - {name: "rhel-9-for-x86_64-highavailability-eus-rpms", state: enabled}
                - {name: "fast-datapath-for-rhel-9-x86_64-rpms", state: enabled}
                - {name: "rhoso-18.0-for-rhel-9-x86_64-rpms", state: enabled}
                - {name: "rhceph-7-tools-for-rhel-9-x86_64-rpms", state: enabled}
            edpm_bootstrap_release_version_package: []
  9. Configure the networks that the OVN BGP agent uses to communicate with FRRouting (FRR) on the data plane, and the password to authenticate with the BGP peer:

      nodeTemplate:
        ...
        ansible:
          ...
          ansibleVars:
            ...
            edpm_frr_bfd: false
            edpm_frr_bgp_ipv4_src_network: bgpmainnet
            edpm_frr_bgp_ipv6_src_network: bgpmainnetv6
            edpm_frr_bgp_neighbor_password: f00barZ
  10. If you are creating a set of Networker nodes, enable the edpm_enable_chassis_gw field:

        ansible:
          ...
          ansibleVars:
            ...
            edpm_enable_chassis_gw: true
  11. Add the network configuration template to apply to your nodes:

            edpm_network_config_hide_sensitive_logs: false
            edpm_network_config_os_net_config_mappings:
              edpm-z1-compute-0:
                nic1: 6a:fe:54:3f:8a:02
              edpm-z1-compute-1:
                nic1: 6b:fe:54:3f:8a:02
            neutron_physical_bridge_name: br-ex
            neutron_public_interface_name: eth1
            edpm_network_config_template: |
              ---
              {% set mtu_list = [ctlplane_mtu] %}
              {% for network in nodeset_networks %}
              {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}
              {%- endfor %}
              {% set min_viable_mtu = mtu_list | max %}
              network_config:
              - type: ovs_bridge
                name: {{ neutron_physical_bridge_name }}
                use_dhcp: false
              - type: interface
                name: nic1
                use_dhcp: true
                defroute: false
              - type: interface
                name: nic2
                use_dhcp: false
                defroute: false
                dns_servers: {{ ctlplane_dns_nameservers }}
                domain: {{ dns_search_domains }}
                addresses:
                  - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_cidr }}
              - type: interface
                name: nic3
                use_dhcp: false
                addresses:
                - ip_netmask: {{ lookup('vars', 'bgpnet0_ip') }}/30
              - type: interface
                name: nic4
                use_dhcp: false
                addresses:
                - ip_netmask: {{ lookup('vars', 'bgpnet1_ip') }}/30
              - type: interface
                name: lo
                addresses:
                - ip_netmask: {{ lookup('vars', 'bgpmainnet_ip') }}/32
                - ip_netmask: {{ lookup('vars', 'bgpmainnetv6_ip') }}/128
                - ip_netmask: {{ lookup('vars', 'internalapi_ip') }}/32
                - ip_netmask: {{ lookup('vars', 'storage_ip') }}/32
                - ip_netmask: {{ lookup('vars', 'tenant_ip') }}/32
    • nic1: Update to the MAC address assigned to the NIC to use for network configuration on the Compute node.
    • ip_netmask: {{ lookup('vars', 'storage_ip') }}/32: Not required for sets of Networker nodes.
  12. Disable validations that are not required when using BGP:

            edpm_nodes_validation_validate_controllers_icmp: false
            edpm_nodes_validation_validate_gateway_icmp: false
  13. Configure the OVN BGP Agent to not expose tenant networks:

            edpm_ovn_bgp_agent_expose_tenant_networks: false
  14. Configure OVN to establish tunnels over BGP by using the bgpmainnet network instead of the local tenant network:

            edpm_ovn_encap_ip: "{{ lookup('vars', 'bgpmainnet_ip') }}"
  15. Define each node in this node set:

      nodes:
        edpm-z1-compute-0:
          hostName: edpm-z1-compute-0
          networks:
          - name: ctlplane
            subnetName: subnet1
            defaultRoute: true
            fixedIP: 192.168.122.100
          - name: internalapi
            subnetName: subnet1
          - name: storage
            subnetName: subnet1
          - name: tenant
            subnetName: subnet1
          - name: Bgpnet0
            subnetName: subnet1
            fixedIP: 100.64.0.2
          - name: Bgpnet1
            subnetName: subnet1
            fixedIP: 100.65.0.2
          - name: Bgpmainnet
            subnetName: subnet1
            fixedIP: 99.99.0.2
          - name: BgpmainnetV6
            subnetName: subnet0
            fixedIP: f00d:f00d:f00d:f00d:f00d:f00d:f00d:0012
          ansible:
            ansibleHost: 192.168.122.100
            ansibleVars:
              edpm_frr_bgp_peers:
              - 100.64.0.1
              - 100.65.0.1
              edpm_ovn_bgp_agent_local_ovn_peer_ips:
              - 100.64.0.1
              - 100.65.0.1
        edpm-z1-compute-1:
          hostName: edpm-z1-compute-1
          networks:
          - name: ctlplane
            subnetName: subnet1
            defaultRoute: true
            fixedIP: 192.168.122.101
          - name: internalapi
            subnetName: subnet1
          - name: storage
            subnetName: subnet1
          - name: tenant
            subnetName: subnet1
          - name: Bgpnet0
            subnetName: subnet1
            fixedIP: 100.64.0.6
          - name: Bgpnet1
            subnetName: subnet1
            fixedIP: 100.65.0.6
          - name: Bgpmainnet
            subnetName: subnet1
            fixedIP: 99.99.0.3
          - name: BgpmainnetV6
            subnetName: subnet0
            fixedIP: f00d:f00d:f00d:f00d:f00d:f00d:f00d:0013
        edpm-z1-compute-2:
          hostName: edpm-z1-compute-2
          networks:
          - name: ctlplane
            subnetName: subnet1
            defaultRoute: true
            fixedIP: 192.168.122.102
          - name: internalapi
            subnetName: subnet1
          - name: storage
            subnetName: subnet1
          - name: tenant
            subnetName: subnet1
          - name: Bgpnet0
            subnetName: subnet1
            fixedIP: 100.64.0.10
          - name: Bgpnet1
            subnetName: subnet1
            fixedIP: 100.65.0.10
          - name: Bgpmainnet
            subnetName: subnet1
            fixedIP: 99.99.0.4
          - name: BgpmainnetV6
            subnetName: subnet0
            fixedIP: f00d:f00d:f00d:f00d:f00d:f00d:f00d:0014
    • nodes.<node_ref>: The node definition reference, for example, edpm-compute-0. Each node in the node set must have a node definition.
    • networks: Defines the IPAM and the DNS records for the node.
    • networks.fixedIP: Specifies a predictable IP address for the network. The IP address must be in the allocation range defined for the network in the NetConfig CR.
    • networks (storage): The storage network is not required for sets of Networker nodes.
    • ansible.ansibleVars: Node-specific Ansible variables that customize the node.

      Note
      • Nodes defined within the nodes section can configure the same Ansible variables that are configured in the nodeTemplate section. Where an Ansible variable is configured for both a specific node and within the nodeTemplate section, the node-specific values override those from the nodeTemplate section.
      • You do not need to replicate all the nodeTemplate Ansible variables for a node to override the default and set some node-specific values. You only need to configure the Ansible variables you want to override for the node.
      • Many ansibleVars include edpm in the name, which stands for "External Data Plane Management".
  16. Save the definition file.
  17. Create the node set for Zone 1:

    $ oc create --save-config -f compute_node_set_zone1.yaml -n openstack
  18. Verify that the resources have been created by confirming that the status is SetupReady:

    $ oc wait openstackdataplanenodeset compute-node-set-zone1 --for condition=SetupReady --timeout=10m

    When the status is SetupReady, the command returns a condition met message; otherwise, it returns a timeout error.

  19. Verify that the Secret resource was created for the node set:

    $ oc get secret | grep compute-node-set-zone1
    dataplanenodeset-compute-node-set-zone1   Opaque   1   3m50s
  20. Verify the services were created:

    $ oc get openstackdataplaneservice -n openstack
    NAME                AGE
    bootstrap           46m
    ceph-client         46m
    ceph-hci-pre        46m
    configure-network   46m
    configure-os        46m
    ...

7.5. Deploying the data plane

You use the OpenStackDataPlaneDeployment custom resource definition (CRD) to configure the services on the data plane nodes and deploy the data plane. You control the execution of Ansible on the data plane by creating OpenStackDataPlaneDeployment custom resources (CRs). Each OpenStackDataPlaneDeployment CR models a single Ansible execution. Create an OpenStackDataPlaneDeployment CR to deploy each of your OpenStackDataPlaneNodeSet CRs.

Note

When the OpenStackDataPlaneDeployment CR successfully completes execution, it does not automatically execute Ansible again, even if the OpenStackDataPlaneDeployment or related OpenStackDataPlaneNodeSet resources are changed. To start another Ansible execution, you must create another OpenStackDataPlaneDeployment CR. Remove any failed OpenStackDataPlaneDeployment CRs in your environment before creating a new one so that the new OpenStackDataPlaneDeployment CR can run Ansible with an updated Secret.
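
For example, you can remove a failed deployment with a standard delete command before creating the replacement CR:

$ oc delete openstackdataplanedeployment <failed_deployment_name> -n openstack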

Procedure

  1. Create a file on your workstation named openstack_data_plane_deploy.yaml to define the OpenStackDataPlaneDeployment CR:

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneDeployment
    metadata:
      name: data-plane-deploy
      namespace: openstack
    • metadata.name: The OpenStackDataPlaneDeployment CR name must be unique, must consist of lower case alphanumeric characters, - (hyphen) or . (period), and must start and end with an alphanumeric character. Update the name in this example to a name that reflects the node sets in the deployment.
  2. Add all the OpenStackDataPlaneNodeSet CRs that you want to deploy:

    spec:
      nodeSets:
        - compute-node-set-zone1
        - <nodeSet_name>
        - ...
        - <nodeSet_name>
    • Replace <nodeSet_name> with the names of the OpenStackDataPlaneNodeSet CRs that you want to include in your data plane deployment.
  3. Save the openstack_data_plane_deploy.yaml deployment file.
  4. Deploy the data plane:

    $ oc create -f openstack_data_plane_deploy.yaml -n openstack

    You can view the Ansible logs while the deployment executes:

    $ oc get pod -l app=openstackansibleee -w
    $ oc logs -l app=openstackansibleee -f --max-log-requests 10

    If the oc logs command returns an error similar to the following error, increase the --max-log-requests value:

    error: you are attempting to follow 19 log streams, but maximum allowed concurrency is 10, use --max-log-requests to increase the limit
  5. Verify that the data plane is deployed:

    $ oc wait openstackdataplanedeployment data-plane-deploy --for=condition=Ready --timeout=<timeout_value>
    $ oc wait openstackdataplanenodeset compute-node-set-zone1 --for=condition=Ready --timeout=<timeout_value>
    • Replace <timeout_value> with the number of minutes that you want the command to wait for completion of the task. For example, if you want the command to wait 60 minutes, use the value 60m. If the Ready condition is not met for the OpenStackDataPlaneDeployment or OpenStackDataPlaneNodeSet CR in this time frame, the command returns a timeout error. Use a value that is appropriate to the size of your deployment. Give larger deployments more time to complete deployment tasks.

      For information about the data plane conditions and states, see Data plane conditions and states in Deploying Red Hat OpenStack Services on OpenShift.

  6. Map the Compute nodes to the Compute cell that they are connected to:

    $ oc rsh nova-cell0-conductor-0 nova-manage cell_v2 discover_hosts --verbose

    If you did not create additional cells, this command maps the Compute nodes to cell1.
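
    You can also list the hosts that are mapped to each cell:

    $ oc rsh nova-cell0-conductor-0 nova-manage cell_v2 list_hosts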

  7. Access the remote shell for the openstackclient pod and verify that the deployed Compute nodes are visible on the control plane:

    $ oc rsh -n openstack openstackclient
    $ openstack hypervisor list

    If some Compute nodes are missing from the hypervisor list, retry the previous step. If the Compute nodes are still missing from the list, check the status and health of the nova-compute services on the deployed data plane nodes.

  8. Verify that the hypervisor hostname is a fully qualified domain name (FQDN):

    $ hostname -f

    If the hypervisor hostname is not an FQDN, for example, if it was registered as a short name instead, contact Red Hat Support.

7.6. Verifying the data plane with third-party storage

If you are using third-party storage, verify that the Compute service is using the third-party storage for the Image service (glance) and Block Storage service (cinder) in each availability zone (AZ).

Procedure

  1. Verify that the Compute nodes in each AZ are grouped into their own host aggregates. In this example, you have Compute nodes in all three AZs, and you have put them into their own aggregates by using a script like the following:

    Example:

    function make_aggregate() {
        OS="oc rsh openstackclient openstack"
        AZ=az$1
        RACK=r$1
        # Create a host aggregate and expose it as an availability zone
        $OS aggregate create $AZ
        $OS aggregate set --zone $AZ $AZ
        # Add the three Compute nodes in this rack to the aggregate
        for I in $(seq 0 2); do
            $OS aggregate add host $AZ ${RACK}-compute-${I}.ctlplane.example.com
        done
        $OS compute service list -c Host -c Zone
    }
    
    make_aggregate 1
    make_aggregate 2
    make_aggregate 3
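
    To confirm the aggregates and their availability zones, you can run, for example:

    $ oc rsh openstackclient openstack aggregate list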
  2. Open a remote shell connection to the openstackclient pod:

    $ oc rsh -n openstack openstackclient
  3. Create an ephemeral instance in AZ1 by using the image created earlier:

    $ openstack server create --flavor c1 --image 3ae0d8d8-18f5-452e-a58e-63fb7aba308a --nic net-id=private --availability-zone az1 vm-az1
  4. Observe the instance:

    sh-5.1$ openstack server list
    +--------------------------------------+--------+--------+----------------------+------------------+--------+
    | ID                                   | Name   | Status | Networks             | Image            | Flavor |
    +--------------------------------------+--------+--------+----------------------+------------------+--------+
    | c1f8b608-41d4-422a-ab89-82762353a784 | vm-az1 | ACTIVE | private=192.168.0.21 | cirros-priv-type | c1     |
    +--------------------------------------+--------+--------+----------------------+------------------+--------+
  5. Create a volume from an image in AZ2 by using the image that was imported into AZ2 earlier:

    $ openstack volume create --size 8 vm_root_az2 --image 3ae0d8d8-18f5-452e-a58e-63fb7aba308a --availability-zone az2
  6. Observe the volume:

    sh-5.1$ openstack volume list
    +--------------------------------------+-------------+-----------+------+-------------+
    | ID                                   | Name        | Status    | Size | Attached to |
    +--------------------------------------+-------------+-----------+------+-------------+
    | 119958b1-0fca-4412-81d1-1d9b856ce85a | vm_root_az2 | available |    8 |             |
    +--------------------------------------+-------------+-----------+------+-------------+
  7. Create an instance in AZ2 with the Block Storage service volume from the previous step as the root volume:

    $ openstack server create --flavor c1 --volume 119958b1-0fca-4412-81d1-1d9b856ce85a --nic net-id=private --availability-zone az2 vm-az2
  8. Observe the instances:

    sh-5.1$ openstack server list
    +--------------------------------------+--------+--------+-----------------------+--------------------------+--------+
    | ID                                   | Name   | Status | Networks              | Image                    | Flavor |
    +--------------------------------------+--------+--------+-----------------------+--------------------------+--------+
    | 54e1a625-08d1-4df5-838c-c8ea676d2e06 | vm-az2 | ACTIVE | private=192.168.0.138 | N/A (booted from volume) | c1     |
    | c1f8b608-41d4-422a-ab89-82762353a784 | vm-az1 | ACTIVE | private=192.168.0.21  | cirros-priv-type         | c1     |
    +--------------------------------------+--------+--------+-----------------------+--------------------------+--------+
    sh-5.1$
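
    You can also confirm the AZ of an instance. For example:

    sh-5.1$ openstack server show vm-az2 -c OS-EXT-AZ:availability_zone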
  9. Exit the openstackclient pod:

    $ exit

7.7. Data plane conditions and states

Each data plane resource has a series of conditions within its status subresource that indicates the overall state of the resource, including its deployment progress.

For an OpenStackDataPlaneNodeSet, until an OpenStackDataPlaneDeployment has been started and finished successfully, the Ready condition is False. When the deployment succeeds, the Ready condition is set to True. A subsequent deployment sets the Ready condition to False until the deployment succeeds, when the Ready condition is set to True.
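
You can inspect these conditions directly. For example, the following commands print the conditions of the node set created in this chapter and wait for it to become Ready:

$ oc get openstackdataplanenodeset compute-node-set-zone1 -n openstack -o jsonpath='{.status.conditions}'
$ oc wait openstackdataplanenodeset compute-node-set-zone1 --for=condition=Ready --timeout=60m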

Table 7.1. OpenStackDataPlaneNodeSet CR conditions
Condition | Description

Ready

  • "True": The OpenStackDataPlaneNodeSet CR is successfully deployed.
  • "False": The deployment is not yet requested or has failed, or there are other failed conditions.

SetupReady

"True": All setup tasks for a resource are complete. Setup tasks include verifying the SSH key secret, verifying other fields on the resource, and creating the Ansible inventory for each resource. Each service-specific condition is set to "True" when that service completes deployment. You can check the service conditions to see which services have completed their deployment, or which services failed.

DeploymentReady

"True": The NodeSet has been successfully deployed.

InputReady

"True": The required inputs are available and ready.

NodeSetDNSDataReady

"True": DNSData resources are ready.

NodeSetIPReservationReady

"True": The IPSet resources are ready.

NodeSetBaremetalProvisionReady

"True": Bare-metal nodes are provisioned and ready.

Table 7.2. OpenStackDataPlaneNodeSet status fields
Status field | Description

Deployed

  • "True": The OpenStackDataPlaneNodeSet CR is successfully deployed.
  • "False": The deployment is not yet requested or has failed, or there are other failed conditions.

DNSClusterAddresses

 

CtlplaneSearchDomain

 
Table 7.3. OpenStackDataPlaneDeployment CR conditions
Condition | Description

Ready

  • "True": The data plane is successfully deployed.
  • "False": The data plane deployment failed, or there are other failed conditions.

DeploymentReady

"True": The data plane is successfully deployed.

InputReady

"True": The required inputs are available and ready.

<NodeSet> Deployment Ready

"True": The deployment has succeeded for the named NodeSet, indicating all services for the NodeSet have succeeded.

<NodeSet> <Service> Deployment Ready

"True": The deployment has succeeded for the named NodeSet and Service. Each <NodeSet> <Service> Deployment Ready specific condition is set to "True" as that service completes successfully for the named NodeSet. Once all services are complete for a NodeSet, the <NodeSet> Deployment Ready condition is set to "True". The service conditions indicate which services have completed their deployment, or which services failed and for which NodeSets.

Table 7.4. OpenStackDataPlaneDeployment status fields
Status field | Description

Deployed

  • "True": The data plane is successfully deployed. All Services for all NodeSets have succeeded.
  • "False": The deployment is not yet requested or has failed, or there are other failed conditions.

Table 7.5. OpenStackDataPlaneService CR conditions
Condition | Description

Ready

"True": The service has been created and is ready for use. "False": The service has failed to be created.

7.8. Troubleshooting data plane creation and deployment

To troubleshoot a deployment when services are not deploying or operating correctly, you can check the job condition message for the service, and you can check the logs for a node set.

7.8.1. Checking the job condition message for a service

Each data plane deployment in the environment has associated services. Each of these services has a job condition message that matches the current status of the AnsibleEE job executing for that service. You can use this information to troubleshoot deployments when services are not deploying or operating correctly.

Procedure

  1. Determine the name and status of all deployments:

    $ oc get openstackdataplanedeployment

    The following example output shows a deployment in progress:

    $ oc get openstackdataplanedeployment
    
    NAME                   NODESETS             STATUS   MESSAGE
    edpm-compute   ["openstack-edpm-ipam"]   False    Deployment in progress
  2. Retrieve and inspect Ansible execution jobs.

    The Kubernetes jobs are labelled with the name of the OpenStackDataPlaneDeployment. You can list jobs for each OpenStackDataPlaneDeployment by using the label:

     $ oc get job -l openstackdataplanedeployment=edpm-compute
     NAME                                                 STATUS     COMPLETIONS   DURATION   AGE
     bootstrap-edpm-compute-openstack-edpm-ipam           Complete   1/1           78s        25h
     configure-network-edpm-compute-openstack-edpm-ipam   Complete   1/1           37s        25h
     configure-os-edpm-compute-openstack-edpm-ipam        Complete   1/1           66s        25h
     download-cache-edpm-compute-openstack-edpm-ipam      Complete   1/1           64s        25h
     install-certs-edpm-compute-openstack-edpm-ipam       Complete   1/1           46s        25h
     install-os-edpm-compute-openstack-edpm-ipam          Complete   1/1           57s        25h
     libvirt-edpm-compute-openstack-edpm-ipam             Complete   1/1           2m37s      25h
     neutron-metadata-edpm-compute-openstack-edpm-ipam    Complete   1/1           61s        25h
     nova-edpm-compute-openstack-edpm-ipam                Complete   1/1           3m20s      25h
     ovn-edpm-compute-openstack-edpm-ipam                 Complete   1/1           78s        25h
     run-os-edpm-compute-openstack-edpm-ipam              Complete   1/1           33s        25h
     ssh-known-hosts-edpm-compute                         Complete   1/1           19s        25h
     telemetry-edpm-compute-openstack-edpm-ipam           Complete   1/1           2m5s       25h
     validate-network-edpm-compute-openstack-edpm-ipam    Complete   1/1           16s        25h

    You can check the logs of a job by using oc logs -f job/<job_name>. For example, to check the logs from the configure-network job:

     $ oc logs -f job/configure-network-edpm-compute-openstack-edpm-ipam | tail -n2
     PLAY RECAP *********************************************************************
     edpm-compute-0             : ok=22   changed=0    unreachable=0    failed=0    skipped=17   rescued=0    ignored=0

7.8.1.1. Job condition messages

AnsibleEE jobs have an associated condition message that indicates the current state of the service job. This condition message is displayed in the MESSAGE field of the oc get job <job_name> command output. Jobs return one of the following conditions when queried:

  • Job not started: The job has not started.
  • Job not found: The job could not be found.
  • Job is running: The job is currently running.
  • Job complete: The job execution is complete.
  • Job error occurred <error_message>: The job stopped executing unexpectedly. The <error_message> is replaced with a specific error message.

To further investigate a service that is displaying a particular job condition message, view its logs by using the command oc logs job/<service>. For example, to view the logs for the repo-setup-openstack-edpm service, use the command oc logs job/repo-setup-openstack-edpm.
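If you want only the condition message rather than the full job output, you can extract it with a JSONPath query. The following is a minimal sketch, assuming one of the job names from the listing in the previous procedure; note that the message can be empty for jobs that completed successfully:

    $ oc get job nova-edpm-compute-openstack-edpm-ipam \
        -o jsonpath='{.status.conditions[*].message}{"\n"}'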

7.8.2. Checking the logs for a node set

You can access the logs for a node set to check for deployment issues.

Procedure

  1. Retrieve pods with the OpenStackAnsibleEE label:

    $ oc get pods -l app=openstackansibleee
    NAME                                   READY   STATUS              RESTARTS   AGE
    configure-network-edpm-compute-j6r4l   0/1     Completed           0          3m36s
    validate-network-edpm-compute-6g7n9    0/1     Pending             0          0s
    validate-network-edpm-compute-6g7n9    0/1     ContainerCreating   0          11s
    validate-network-edpm-compute-6g7n9    1/1     Running             0          13s
  2. Open a remote shell in the pod that you want to check:

    1. For a pod that is running:

      $ oc rsh validate-network-edpm-compute-6g7n9
    2. For a pod that is not running:

      $ oc debug configure-network-edpm-compute-j6r4l
  3. List the directories in the /runner/artifacts mount:

    $ ls /runner/artifacts
    configure-network-edpm-compute
    validate-network-edpm-compute
  4. View the stdout for the required artifact:

    $ cat /runner/artifacts/configure-network-edpm-compute/stdout
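As an alternative to opening a shell in the pod, you can often read the same Ansible output from the pod logs, because the AnsibleEE container also writes its output to stdout. The following is a minimal sketch using the completed pod from step 1:

    $ oc logs configure-network-edpm-compute-j6r4l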

Legal Notice

Copyright © Red Hat.
Except as otherwise noted below, the text of and illustrations in this documentation are licensed by Red Hat under the Creative Commons Attribution-Share Alike 3.0 Unported license. If you distribute this document or an adaptation of it, you must provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, the Red Hat logo, JBoss, Hibernate, and RHCE are trademarks or registered trademarks of Red Hat, Inc. or its subsidiaries in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
XFS is a trademark or registered trademark of Hewlett Packard Enterprise Development LP or its subsidiaries in the United States and other countries.
The OpenStack® Word Mark and OpenStack logo are trademarks or registered trademarks of the Linux Foundation, used under license.
All other trademarks are the property of their respective owners.